NASA Astrophysics Data System (ADS)
Joyce, M.; Ramirez, P.; Boustani, M.; Mattmann, C. A.; Khudikyan, S.; McGibbney, L. J.; Whitehall, K. D.
2014-12-01
Apache Open Climate Workbench (OCW; https://climate.apache.org/) is a Top-Level Project at the Apache Software Foundation that aims to provide a suite of tools for performing climate science evaluations using model outputs from a multitude of different sources (ESGF, CORDEX, U.S. NCA, NARCCAP) with remote sensing data from NASA, NOAA, and other agencies. Apache OCW is the second NASA project to become a Top-Level Project at the Apache Software Foundation. It grew out of the Jet Propulsion Laboratory's (JPL) Regional Climate Model Evaluation System (RCMES) project, a collaboration between JPL and the University of California, Los Angeles' Joint Institute for Regional Earth System Science and Engineering (JIFRESSE). Apache OCW provides scientists and developers with tools for data manipulation, metrics for dataset comparisons, and a visualization suite. In addition to a powerful low-level API, Apache OCW also supports a web application for quick, browser-controlled evaluations, a command line application for local evaluations, and a virtual machine for isolated experimentation with minimal setup. This talk will look at the difficulties and successes of moving a closed community research project out into the wild world of open source. We'll explore the growing pains Apache OCW went through to become a Top-Level Project at the Apache Software Foundation as well as the benefits gained by opening up development to the broader climate and computer science communities.
A Tour of Big Data, Open Source Data Management Technologies from the Apache Software Foundation
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2012-12-01
The Apache Software Foundation, a non-profit foundation charged with dissemination of open source software for the public good, provides a suite of data management technologies for distributed archiving, data ingestion, data dissemination, processing, triage and a host of other functionalities that are becoming critical in the Big Data regime. Apache is the world's largest open source software organization, boasting over 3000 developers from around the world all contributing to some of the most pervasive technologies in use today, from the HTTPD web server that powers a majority of Internet web sites to the Hadoop technology that is now projected to be an industry worth over $1 billion. Apache data management technologies are emerging as de facto off-the-shelf components for searching, distributing, processing and archiving key science data sets, from geophysical, space and planetary data all the way to biomedicine. In this talk, I will give a virtual tour of the Apache Software Foundation, its meritocracy and governance structure, and also its key big data technologies that organizations can take advantage of today and use to save cost, schedule, and resources in implementing their Big Data needs. I'll illustrate the Apache technologies in the context of several national priority projects, including the U.S. National Climate Assessment (NCA) and the International Square Kilometre Array (SKA) project, that are stretching the boundaries of volume, velocity, complexity, and other key Big Data dimensions.
Frameworks Coordinate Scientific Data Management
NASA Technical Reports Server (NTRS)
2012-01-01
Jet Propulsion Laboratory computer scientists developed a unique software framework to help NASA manage its massive amounts of science data. Through a partnership with the Apache Software Foundation of Forest Hill, Maryland, the technology is now available as an open-source solution and is in use by cancer researchers and pediatric hospitals.
NASA Astrophysics Data System (ADS)
Mattmann, Chris
2014-04-01
In this era of exascale instruments for astronomy, we must develop next-generation capabilities to handle the unprecedented data volume and velocity that these ground-based sensors and observatories will deliver. Integrating scientific algorithms stewarded by scientific groups unobtrusively and rapidly; intelligently selecting data movement technologies; making use of cloud computing for storage and processing; and automatically extracting text, metadata and science from any type of file are all needed capabilities in this exciting time. Our group at NASA JPL has promoted the use of open source data management technologies available from the Apache Software Foundation (ASF) in pursuit of constructing next generation data management and processing systems for astronomical instruments including the Expanded Very Large Array (EVLA) in Socorro, NM and the Atacama Large Millimetre/Submillimetre Array (ALMA), as well as for the KAT-7 project led by SKA South Africa as a precursor to the full MeerKAT telescope. In addition we are currently funded by the National Science Foundation in the US to work with MIT Haystack Observatory and the University of Cambridge in the UK to construct a Radio Array of Portable Interferometric Devices (RAPID) that will undoubtedly draw from the rich technology advances underway. NASA JPL is investing in a strategic initiative for Big Data that is pulling in these capabilities and technologies for astronomical instruments and also for Earth science remote sensing. In this talk I will describe the above collaborative efforts underway and point to solutions in open source from the Apache Software Foundation that can be deployed and used today and that are already bringing our teams and projects benefits. I will describe how others can take advantage of our experience and point towards future application and contribution of these tools.
Developer Initiation and Social Interactions in OSS: A Case Study of the Apache Software Foundation
2014-08-01
public interaction with the Apache Pluto community is on the mailing list in August 2006: Hello all, I'am John from the University [...], we are...developing the Prototype for the JSR 286. I hope that we can discuss the code [...] we have made and then develop new code for Pluto together [...], referring...to his and some of his fellow students' intentions to contribute to Pluto. John gets the attention of Pluto committers and is immediately welcomed as
Chełkowski, Tadeusz; Gloor, Peter; Jemielniak, Dariusz
2016-01-01
While researchers are becoming increasingly interested in studying the OSS phenomenon, there are still few studies analyzing larger samples of projects to investigate the structure of activities among OSS developers. The significant amount of information gathered in publicly available open-source software repositories and mailing-list archives offers an opportunity to analyze project structures and participant involvement. In this article, using commit data from 263 Apache project repositories (nearly all of them), we show that although OSS development is often described as collaborative, it in fact predominantly relies on radically solitary input and individual, non-collaborative contributions. We also show, in the first published study of this magnitude, that the engagement of contributors follows a power-law distribution.
2016-01-01
While researchers are becoming increasingly interested in studying the OSS phenomenon, there are still few studies analyzing larger samples of projects to investigate the structure of activities among OSS developers. The significant amount of information gathered in publicly available open-source software repositories and mailing-list archives offers an opportunity to analyze project structures and participant involvement. In this article, using commit data from 263 Apache project repositories (nearly all of them), we show that although OSS development is often described as collaborative, it in fact predominantly relies on radically solitary input and individual, non-collaborative contributions. We also show, in the first published study of this magnitude, that the engagement of contributors follows a power-law distribution. PMID:27096157
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Ronald C.
Bioinformatics researchers are increasingly confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date.
Science Gateways, Scientific Workflows and Open Community Software
NASA Astrophysics Data System (ADS)
Pierce, M. E.; Marru, S.
2014-12-01
Science gateways and scientific workflows occupy different ends of the spectrum of user-focused cyberinfrastructure. Gateways, sometimes called science portals, provide a way of enabling large numbers of users to take advantage of advanced computing resources (supercomputers, advanced storage systems, science clouds) by providing Web and desktop interfaces and supporting services. Scientific workflows, at the other end of the spectrum, support advanced usage of cyberinfrastructure, enabling "power users" to undertake computational experiments that are not easily done through the usual mechanisms (managing simulations across multiple sites, for example). Despite these different target communities, gateways and workflows share many similarities and can potentially be accommodated by the same software system. For example, pipelines to process InSAR imagery sets or to datamine GPS time series data are workflows. The results and the ability to make downstream products may be made available through a gateway, and power users may want to provide their own custom pipelines. In this abstract, we discuss our efforts to build an open source software system, Apache Airavata, that can accommodate both gateway and workflow use cases. Our approach is general, and we have applied the software to problems in a number of scientific domains. In this talk, we discuss our applications to usage scenarios specific to earth science, focusing on earthquake physics examples drawn from the QuakSim.org and GeoGateway.org efforts. We also examine the role of the Apache Software Foundation's open community model as a way to build up common community codes that do not depend upon a single "owner" to sustain them. Pushing beyond open source software, we also see the need to provide gateways and workflow systems as cloud services. These services centralize operations, provide well-defined programming interfaces, scale elastically, and have global-scale fault tolerance. We discuss our work providing Apache Airavata as a hosted service to provide these features.
2012-12-21
material data and other key information in a UIMA environment. In the course of this project, the tools and methods developed were used to extract and...Architecture (UIMA) library from the Apache Software Foundation. Using this architecture, a given document is run through several "annotators" to...material taxonomy developed for the XSB, Inc. Coherent View™ database. In order to integrate this technology into the Java-based UIMA annotation
Experiences with the ALICE Mesos infrastructure
NASA Astrophysics Data System (ADS)
Berzano, D.; Eulisse, G.; Grigoraş, C.; Napoli, K.
2017-10-01
Apache Mesos is a resource management system for large data centres, initially developed by UC Berkeley and now maintained under the Apache Foundation umbrella. It is widely used in industry by companies like Apple, Twitter, and Airbnb, and it is known to scale to tens of thousands of nodes. Together with other tools of its ecosystem, such as Mesosphere Marathon or Metronome, it provides an end-to-end solution for datacenter operations and a unified way to exploit large distributed systems. We present the experience of the ALICE Experiment Offline & Computing group in deploying and using the Apache Mesos ecosystem in production for a variety of tasks on a small 500-core cluster, using hybrid OpenStack and bare metal resources. We will initially introduce the architecture of our setup and its operation; we will then describe the tasks it performs, including release building and QA, release validation, and simple Monte Carlo production. We will show how we developed Mesos-enabled components (called “Mesos Frameworks”) to carry out ALICE-specific needs. In particular, we will illustrate our effort to integrate Work Queue, a lightweight batch processing engine developed by the University of Notre Dame, which ALICE uses to orchestrate release validation. Finally, we will give an outlook on how to use Mesos as resource manager for DDS, a software deployment system developed by GSI which will be the foundation of the system deployment for ALICE's next-generation Online-Offline (O2) system.
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
2010-01-01
Background: Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. Description: An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Conclusions: Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms. PMID:21210976
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics.
Taylor, Ronald C
2010-12-21
Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms.
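For readers unfamiliar with the MapReduce style the three records above describe, the following is a minimal, illustrative Hadoop job in Java that counts fixed-length subsequences (k-mers) across a set of sequencing reads. It is not taken from the paper; the class names, the one-read-per-line input format, and the k-mer length are assumptions made for this sketch.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Counts how often each fixed-length subsequence (k-mer) occurs in a set of reads. */
public class KmerCount {

  public static class KmerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final int K = 8;                     // illustrative k-mer length
    private final IntWritable one = new IntWritable(1);
    private final Text kmer = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String read = value.toString().trim();            // one read per input line (assumption)
      for (int i = 0; i + K <= read.length(); i++) {
        kmer.set(read.substring(i, i + K));
        context.write(kmer, one);                       // emit (k-mer, 1)
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();      // aggregate counts per k-mer
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "kmer-count");
    job.setJarByClass(KmerCount.class);
    job.setMapperClass(KmerMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same mapper/reducer split, emitting small key/value pairs in parallel and then aggregating per key, is the pattern that the surveyed bioinformatics tools build on.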
Building a Snow Data System on the Apache OODT Open Technology Stack
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Painter, T. H.; Mattmann, C. A.; Hart, A. F.; Ramirez, P.; Zimdars, P.; Bryant, A. C.; Snow Data System Team
2011-12-01
Snow cover and its melt dominate regional climate and hydrology in many of the world's mountainous regions. One-sixth of Earth's population depends on snow- or glacier-melt for water resources. Operationally, seasonal forecasts of snowmelt-generated streamflow are leveraged through empirical relations based on past snowmelt periods. These historical data show that climate is changing, but the changes reduce the reliability of the empirical relations. Therefore optimal future management of snowmelt derived water resources will require explicit physical models driven by remotely sensed snow property data. Toward this goal, the Snow Optics Laboratory at the Jet Propulsion Laboratory has initiated a near real-time processing pipeline to generate and publish post-processed snow data products within a few hours of satellite acquisition. To solve this challenge, a Scientific Data Management and Processing System was required and the JPL Team leveraged an open-source project called Object Oriented Data Technology (OODT). OODT was developed within NASA's Jet Propulsion Laboratory across the last 10 years. OODT has supported various scientific data management and processing projects, providing solutions in the Earth, Planetary, and Medical science fields. It became apparent that the project needed to be opened to a larger audience to foster and promote growth and adoption. OODT was open-sourced at the Apache Software Foundation in November 2010 and has a growing community of users and committers that are constantly improving the software. Leveraging OODT, the JPL Snow Data System (SnowDS) Team was able to install and configure a core Data Management System (DMS) that would download MODIS raw data files and archive the products in a local repository for post processing. The team has since built an online data portal, and an algorithm-processing pipeline using the Apache OODT software as the foundation. We will present the working SnowDS system with its core remote sensing components: the MODIS Snow Covered Area and Grain size model (MODSCAG) and the MODIS Dust Radiative Forcing in Snow (MOD-DRFS). These products will be delivered in near real time to water managers and the broader cryosphere and climate community beginning in Winter 2012. We will then present the challenges and opportunities we see in the future as the SnowDS matures and contributions are made back to the OODT project.
Building a Snow Data Management System using Open Source Software (and IDL)
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.
2012-12-01
At NASA's Jet Propulsion Laboratory, free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main points:
- The design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline)
- The challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned)
- Code changes
- Software-license-related challenges
- Storage requirements
- System evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps)
- Road map for the next 6 months (including how easily we re-used the SnowDS code base to support the Airborne Snow Observatory Mission)
Software in use and their software licenses:
- IDL - used for pre- and post-processing of data. Licensed under a proprietary software license held by Excelis.
- Apache OODT - used for data management and workflow processing. Licensed under the Apache License Version 2.
- GDAL - geospatial data processing library, currently used for data re-projection. Licensed under the X/MIT license.
- GeoServer - WMS server. Licensed under the General Public License Version 2.0.
- Leaflet.js - JavaScript web mapping library. Licensed under the Berkeley Software Distribution License.
- Python - glue code and miscellaneous data processing support. Licensed under the Python Software Foundation License.
- Perl - script wrapper for running the SCAG algorithm. Licensed under the General Public License Version 3.
- PHP - front-end web application programming. Licensed under the PHP License Version 3.01.
The Apache OODT Project: An Introduction
NASA Astrophysics Data System (ADS)
Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.
2012-12-01
Apache OODT is a science data system framework, developed over the past decade with hundreds of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components: those that deal with information integration from existing science data repositories and archives, which themselves have already-in-use business processes and models for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access and retrieval. The other suite of services and components within Apache OODT handles population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration, are all part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.
Constructing Flexible, Configurable, ETL Pipelines for the Analysis of "Big Data" with Apache OODT
NASA Astrophysics Data System (ADS)
Hart, A. F.; Mattmann, C. A.; Ramirez, P.; Verma, R.; Zimdars, P. A.; Park, S.; Estrada, A.; Sumarlidason, A.; Gil, Y.; Ratnakar, V.; Krum, D.; Phan, T.; Meena, A.
2013-12-01
A plethora of open source technologies for manipulating, transforming, querying, and visualizing 'big data' have blossomed and matured in the last few years, driven in large part by recognition of the tremendous value that can be derived by leveraging data mining and visualization techniques on large data sets. One facet of many of these tools is that input data must often be prepared into a particular format (e.g.: JSON, CSV), or loaded into a particular storage technology (e.g.: HDFS) before analysis can take place. This process, commonly known as Extract-Transform-Load, or ETL, often involves multiple well-defined steps that must be executed in a particular order, and the approach taken for a particular data set is generally sensitive to the quantity and quality of the input data, as well as the structure and complexity of the desired output. When working with very large, heterogeneous, unstructured or semi-structured data sets, automating the ETL process and monitoring its progress becomes increasingly important. Apache Object Oriented Data Technology (OODT) provides a suite of complementary data management components called the Process Control System (PCS) that can be connected together to form flexible ETL pipelines as well as browser-based user interfaces for monitoring and control of ongoing operations. The lightweight, metadata driven middleware layer can be wrapped around custom ETL workflow steps, which themselves can be implemented in any language. Once configured, it facilitates communication between workflow steps and supports execution of ETL pipelines across a distributed cluster of compute resources. As participants in a DARPA-funded effort to develop open source tools for large-scale data analysis, we utilized Apache OODT to rapidly construct custom ETL pipelines for a variety of very large data sets to prepare them for analysis and visualization applications. We feel that OODT, which is free and open source software available through the Apache Software Foundation, is particularly well suited to developing and managing arbitrary large-scale ETL processes both for the simplicity and flexibility of its wrapper framework, as well as the detailed provenance information it exposes throughout the process. Our experience using OODT to manage processing of large-scale data sets in domains as diverse as radio astronomy, life sciences, and social network analysis demonstrates the flexibility of the framework, and the range of potential applications to a broad array of big data ETL challenges.
Agentless Cloud-Wide Monitoring of Virtual Disk State
2015-10-01
packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many
Information Flow Integrity for Systems of Independently-Developed Components
2015-06-22
We also examined three programs (Apache, MySQL, and PHP) in detail to evaluate the efficacy of using the provided package test suites to generate...method are just as effective as hooks that were manually placed over the course of years while greatly reducing the burden on programmers. ”Leveraging...to validate optimizations of real-world, mature applications: the Apache software suite, the Mozilla Suite, and the MySQL database. ”Validating Library
NASA Technical Reports Server (NTRS)
Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen;
2012-01-01
For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.
Large-scale virtual screening on public cloud resources with Apache Spark.
Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola
2017-01-01
Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against roughly 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
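Spark-VS itself is written in Scala (see the GitHub link above); the sketch below only illustrates, in Java and under stated assumptions, the MapReduce-style pattern the abstract describes: score each library compound independently, then gather the best candidates. The scoring function is a stand-in for invoking real docking software, and the one-compound-per-line input format and path are assumptions.

```java
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

/** MapReduce-style distribution of per-compound docking scores with Spark. */
public class DockingSketch {

  // Placeholder for invoking an external docking program on one compound;
  // a real pipeline would run the docking software and parse its score.
  static double scoreCompound(String compound) {
    return (compound.hashCode() & 0x7fffffff) / (double) Integer.MAX_VALUE;
  }

  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("docking-sketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // One compound per line (e.g. a SMILES string); the path is an assumption.
    JavaRDD<String> library = sc.textFile(args[0]);

    // "Map" step: score every compound independently and in parallel.
    JavaPairRDD<Double, String> scored =
        library.mapToPair(c -> new Tuple2<>(scoreCompound(c), c));

    // "Reduce" step: collect only the best-scoring candidates to the driver.
    List<Tuple2<Double, String>> top = scored.sortByKey(false).take(100);
    top.forEach(t -> System.out.println(t._2() + "\t" + t._1()));

    sc.stop();
  }
}
```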
Cwik, Mary F; Tingey, Lauren; Maschino, Alexandra; Goklish, Novalene; Larzelere-Hinton, Francene; Walkup, John; Barlow, Allison
2016-12-01
We evaluated the impact of a comprehensive, multitiered youth suicide prevention program among the White Mountain Apache of Arizona since its implementation in 2006. Using data from the tribally mandated Celebrating Life surveillance system, we compared the rates, numbers, and characteristics of suicide deaths and attempts from 2007 to 2012 with those from 2001 to 2006. The overall Apache suicide death rates dropped from 40.0 to 24.7 per 100 000 (38.3% decrease), and the rate among those aged 15 to 24 years dropped from 128.5 to 99.0 per 100 000 (23.0% decrease). The annual number of attempts also dropped from 75 (in 2007) to 35 individuals (in 2012). National rates remained relatively stable during this time, at 10 to 13 per 100 000. Although national rates remained stable or increased slightly, the overall Apache suicide death rates dropped following the suicide prevention program. The community surveillance system served a critical role in providing a foundation for prevention programming and evaluation.
NASA's Earth Imagery Service as Open Source Software
NASA Astrophysics Data System (ADS)
De Cesare, C.; Alarcon, C.; Huang, T.; Roberts, J. T.; Rodriguez, J.; Cechini, M. F.; Boller, R. A.; Baynes, K.
2016-12-01
The NASA Global Imagery Browse Service (GIBS) is a software system that provides access to an archive of historical and near-real-time Earth imagery from NASA-supported satellite instruments. The imagery itself is open data, and is accessible via standards such as the Open Geospatial Consortium (OGC)'s Web Map Tile Service (WMTS) protocol. GIBS includes three core software projects: The Imagery Exchange (TIE), OnEarth, and the Meta Raster Format (MRF) project. These projects are developed using a variety of open source software, including: Apache HTTPD, GDAL, Mapserver, Grails, Zookeeper, Eclipse, Maven, git, and Apache Commons. TIE has recently been released for open source, and is now available on GitHub. OnEarth, MRF, and their sub-projects have been on GitHub since 2014, and the MRF project in particular receives many external contributions from the community. Our software has been successful beyond the scope of GIBS: the PO.DAAC State of the Ocean and COVERAGE visualization projects reuse components from OnEarth. The MRF source code has recently been incorporated into GDAL, which is a core library in many widely-used GIS software such as QGIS and GeoServer. This presentation will describe the challenges faced in incorporating open software and open data into GIBS, and also showcase GIBS as a platform on which scientists and the general public can build their own applications.
2012-10-01
higher Java v5, Apache Struts v2, Hibernate v2, C3PO, SQL*Net client / JDBC; Database Server: Oracle 10.0.2; Desktop Client: Internet Explorer...for mobile Smartphones - A Java-based framework utilizing Apache Struts on the server - Relational database to handle data storage requirements B...technologies are as follows: Technology Use Requirements Java Application Provides the backend application software to drive the PHR-A 7 BEA Web
Open Source Clinical NLP - More than Any Single System.
Masanz, James; Pakhomov, Serguei V; Xu, Hua; Wu, Stephen T; Chute, Christopher G; Liu, Hongfang
2014-01-01
The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP's mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice.
Optimizing CMS build infrastructure via Apache Mesos
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad
2015-12-01
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
NASA Astrophysics Data System (ADS)
Gallagher, J. H. R.; Potter, N.; Evans, B. J. K.
2016-12-01
OPeNDAP, in conjunction with the Australian National University, documented the installation process needed to add authentication to OPeNDAP-enabled data servers (Hyrax, TDS, etc.) and examined 13 OPeNDAP clients to determine how best to add authentication using LDAP, Shibboleth and OAuth2 (we used NASA's URS). We settled on a server configuration (architecture) that uses the Apache web server and a collection of open-source modules to perform the authentication and authorization actions. This is not the only way to accomplish those goals, but using Apache represents a good balance: it provides the needed functionality, it leverages existing work that has been well vetted, and it includes support for a wide variety of web services, including those that depend on a servlet engine such as Tomcat (which both Hyrax and TDS do). Our work shows how LDAP, OAuth2 and Shibboleth can all be accommodated using this readily available software stack. Also important is that the Apache software is very widely used and is fairly robust - extremely important for security software components. In order to make use of a server requiring authentication, clients must support the authentication process. Because HTTP has included authentication for well over a decade, and because HTTP/HTTPS can be used by simply linking programs with a library, both the LDAP and OAuth2/URS authentication schemes have almost universal support within the OPeNDAP client base. The clients, i.e. the HTTP client libraries they employ, understand how to submit the credentials to the correct server when confronted by an HTTP/S Unauthorized (401) response. Interestingly, OAuth2 can achieve its SSO objectives while relying entirely on normative HTTP transport. All 13 of the clients examined worked. The situation with Shibboleth is different. While Shibboleth does use HTTP, it also requires the client to either scrape a web page or support the SAML 2.0 ECP profile, which, for programmatic clients, means using SOAP messages. Since working with SOAP is outside the scope of HTTP, support for Shibboleth must be added explicitly into the client software. Some of the potential burden of enabling OPeNDAP clients to work with Shibboleth may be mitigated by getting both the NetCDF-C and NetCDF-Java libraries to use the Shibboleth ECP profile. If done, this would get 9 of the 13 clients we examined working.
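As an illustration of the client-side behaviour described above, an HTTP client answering a 401 Unauthorized challenge by supplying credentials, here is a minimal Java sketch using the JDK's standard Authenticator hook. The URL and credentials are placeholders; Basic/Digest challenges are handled directly by the library, while the OAuth2/URS flow additionally involves redirects.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.PasswordAuthentication;
import java.net.URL;

/** Fetches a protected OPeNDAP-style resource, answering the server's 401 challenge. */
public class AuthenticatedFetch {
  public static void main(String[] args) throws Exception {
    // Register credentials; the JDK re-issues the request with an Authorization
    // header whenever the server answers 401 Unauthorized.
    Authenticator.setDefault(new Authenticator() {
      @Override
      protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("demo_user", "demo_password".toCharArray());
      }
    });

    // Placeholder URL; a real deployment would point at an Apache-fronted Hyrax or TDS.
    URL url = new URL("https://example.org/opendap/dataset.nc.dds");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    System.out.println("HTTP status: " + conn.getResponseCode());

    try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
```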
Zhou, Lianjie; Chen, Nengcheng; Chen, Zeqiang
2017-01-01
The efficient data access of streaming vehicle data is the foundation of analyzing, using and mining vehicle data in smart cities, which is an approach to understanding traffic environments. However, the number of vehicles in urban cities has grown rapidly, reaching hundreds of thousands in number. Accessing the mass streaming data of vehicles is hard and takes a long time due to limited computation capability and outdated access modes. We propose an efficient streaming spatio-temporal data access based on Apache Storm (ESDAS) to achieve real-time streaming data access and data cleaning. As a popular streaming data processing tool, Apache Storm can be applied to streaming mass data access and real-time data cleaning. By designing the spout/bolt workflow of the topology in ESDAS and by developing the speeding bolt and other bolts, Apache Storm can achieve this aim. In our experiments, Taiyuan BeiDou bus location data is selected as the mass spatio-temporal data source. In the experiments, the data access results with different bolts are shown in map form, and the filtered buses’ aggregation forms are different. In terms of performance evaluation, the consumption time in ESDAS for ten thousand records per second for the speeding bolt is approximately 300 milliseconds, and that for MongoDB is approximately 1300 milliseconds. The efficiency of ESDAS is approximately three times higher than that of MongoDB. PMID:28394287
Zhou, Lianjie; Chen, Nengcheng; Chen, Zeqiang
2017-04-10
The efficient data access of streaming vehicle data is the foundation of analyzing, using and mining vehicle data in smart cities, which is an approach to understanding traffic environments. However, the number of vehicles in urban cities has grown rapidly, reaching hundreds of thousands in number. Accessing the mass streaming data of vehicles is hard and takes a long time due to limited computation capability and outdated access modes. We propose an efficient streaming spatio-temporal data access based on Apache Storm (ESDAS) to achieve real-time streaming data access and data cleaning. As a popular streaming data processing tool, Apache Storm can be applied to streaming mass data access and real-time data cleaning. By designing the spout/bolt workflow of the topology in ESDAS and by developing the speeding bolt and other bolts, Apache Storm can achieve this aim. In our experiments, Taiyuan BeiDou bus location data is selected as the mass spatio-temporal data source. In the experiments, the data access results with different bolts are shown in map form, and the filtered buses' aggregation forms are different. In terms of performance evaluation, the consumption time in ESDAS for ten thousand records per second for the speeding bolt is approximately 300 milliseconds, and that for MongoDB is approximately 1300 milliseconds. The efficiency of ESDAS is approximately three times higher than that of MongoDB.
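To make the spout/bolt vocabulary in the two records above concrete, here is a small, self-contained topology sketch written against the Apache Storm 2.x Java API. The spout, its synthetic bus records, and the "speeding" threshold are illustrative stand-ins, not the ESDAS implementation.

```java
import java.util.Map;
import java.util.Random;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

/** A toy spout/bolt topology: a spout emits bus position records, a bolt flags speeding buses. */
public class BusTopologySketch {

  /** Stands in for a receiver of streaming BeiDou/GPS records; here it emits random samples. */
  public static class BusLocationSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final Random random = new Random();

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
      this.collector = collector;
    }

    @Override
    public void nextTuple() {
      // busId, longitude, latitude, speed (km/h) -- all synthetic values.
      collector.emit(new Values("bus-" + random.nextInt(100),
          112.5 + random.nextDouble(), 37.8 + random.nextDouble(),
          random.nextDouble() * 90.0));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("busId", "lon", "lat", "speed"));
    }
  }

  /** A "speeding bolt" in the spirit of the abstract: filters records above a threshold. */
  public static class SpeedingBolt extends BaseBasicBolt {
    private static final double LIMIT_KMH = 60.0;

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
      if (input.getDoubleByField("speed") > LIMIT_KMH) {
        collector.emit(new Values(input.getStringByField("busId"),
            input.getDoubleByField("speed")));
      }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("busId", "speed"));
    }
  }

  public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("bus-spout", new BusLocationSpout(), 1);
    builder.setBolt("speeding-bolt", new SpeedingBolt(), 2).shuffleGrouping("bus-spout");

    LocalCluster cluster = new LocalCluster();           // local test run
    cluster.submitTopology("esdas-sketch", new Config(), builder.createTopology());
    Thread.sleep(10_000);                                // let the topology run briefly
    cluster.shutdown();
  }
}
```

In a production deployment the topology would be submitted to a Storm cluster rather than a LocalCluster, and the spout would consume the real location stream.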
Methods, Knowledge Support, and Experimental Tools for Modeling
2006-10-01
open source software entities: the PostgreSQL relational database management system (http://www.postgres.org), the Apache web server (http...past. The revision control system allows the program to capture disagreements, and allows users to explore the history of such disagreements by
Application of Open Source Software by the Lunar Mapping and Modeling Project
NASA Astrophysics Data System (ADS)
Ramirez, P.; Goodale, C. E.; Bui, B.; Chang, G.; Kim, R. M.; Law, E.; Malhotra, S.; Rodriguez, L.; Sadaqathullah, S.; Mattmann, C. A.; Crichton, D. J.
2011-12-01
The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is responsible for the development of an information system to support lunar exploration, decision analysis, and release of lunar data to the public. The data available through the lunar portal is predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). This project has created a gold source of data, models, and tools for lunar explorers to exercise and incorporate into their activities. At the Jet Propulsion Laboratory (JPL), we focused on engineering and building the infrastructure to support cataloging, archiving, accessing, and delivery of lunar data. We decided to use a RESTful service-oriented architecture to enable us to abstract from the underlying technology choices and focus on interfaces to be used internally and externally. This decision allowed us to leverage several open source software components and integrate them by either writing a thin REST service layer or relying on the API they provided; the approach chosen was dependent on the targeted consumer of a given interface. We will discuss our varying experience using open source products, namely Apache OODT, Oracle Berkeley DB XML, Apache Solr, and Oracle OpenSSO (now named OpenAM). Apache OODT, developed at NASA's Jet Propulsion Laboratory and recently migrated over to Apache, provided the means for ingestion and cataloguing of products within the infrastructure. Its usage was based upon team experience with the project and past benefit received on other projects internal and external to JPL. Berkeley DB XML, distributed by Oracle for both commercial and open source use, was the storage technology chosen for our metadata. This decision was in part based on our use of Federal Geographic Data Committee (FGDC) metadata, which is expressed in XML, and the desire to keep it in its native form and exploit other technologies built on top of XML. Apache Solr, an open source search engine, was used to drive our search interface and as a way to store references to metadata and data exposed via REST endpoints. As was the case with Apache OODT, there was team experience with this component that helped drive this choice. Lastly, OpenSSO, an open source single sign-on service, was used to secure and provide access constraints to our REST-based services. For this product there was little past experience, but given our service-based approach it seemed to be a natural fit. Given our exposure to open source, we will discuss the tradeoffs and benefits received by the choices made. Moreover, we will dive into the context of how the software packages were used and the impact their design and extensibility had on the construction of the infrastructure. Finally, we will compare our experiences across these open source solutions and the attributes that can vary the impression one will get. This comprehensive account of our endeavor should aid others in their assessment and use of open source.
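As a hedged illustration of the Solr usage mentioned above (storing references to metadata and data exposed via REST endpoints, and driving a search interface), the following SolrJ sketch indexes one such reference and queries it back. The Solr URL, core name, document fields, and product identifier are all hypothetical, not taken from LMMP.

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

/** Indexes a reference to a lunar data product and searches it back via Solr. */
public class SolrReferenceSketch {
  public static void main(String[] args) throws Exception {
    // Core name, URL, and field names are illustrative, not from the LMMP system.
    try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/products").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "LRO-NAC-0001");
      doc.addField("title", "LRO Narrow Angle Camera mosaic");
      doc.addField("data_url", "https://example.org/rest/products/LRO-NAC-0001"); // REST endpoint reference
      solr.add(doc);
      solr.commit();

      // Free-text query of the kind a browse/search interface would issue.
      SolrQuery query = new SolrQuery("title:mosaic");
      query.setRows(10);
      QueryResponse response = solr.query(query);
      for (SolrDocument hit : response.getResults()) {
        System.out.println(hit.getFieldValue("id") + " -> " + hit.getFieldValue("data_url"));
      }
    }
  }
}
```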
Optimizing CMS build infrastructure via Apache Mesos
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...
2015-12-23
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
Optimizing CMS build infrastructure via Apache Mesos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
2011-08-01
dominates the global mobile application market and mobile computing software ecosystems. But overall, OA systems are not necessarily excluded from...License 3.0 (OSL) Corel Transactional License (CTL) The licenses were chosen to represent a variety of kinds of licenses, and include one...proprietary (CTL), three academic (Apache, BSD, MIT), and six reciprocal licenses (CPL, EPL, GPL, LGPL, MPL, OSL) that take varying approaches in
Open Source Clinical NLP – More than Any Single System
Masanz, James; Pakhomov, Serguei V.; Xu, Hua; Wu, Stephen T.; Chute, Christopher G.; Liu, Hongfang
2014-01-01
The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP’s mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice. PMID:25954581
Satellite Imagery Production and Processing Using Apache Hadoop
NASA Astrophysics Data System (ADS)
Hill, D. V.; Werpy, J.
2011-12-01
The United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center Land Science Research and Development (LSRD) project has devised a method to fulfill its processing needs for Essential Climate Variable (ECV) production from the Landsat archive using Apache Hadoop. Apache Hadoop is the distributed processing technology at the heart of many large-scale processing solutions implemented at well-known companies such as Yahoo, Amazon, and Facebook. It is a proven framework and can be used to process petabytes of data on thousands of processors concurrently. It is a natural fit for producing satellite imagery and requires only a few simple modifications to serve the needs of science data processing. This presentation provides an invaluable learning opportunity and should be heard by anyone doing large-scale image processing today. The session will cover a description of the problem space, evaluation of alternatives, feature set overview, configuration of Hadoop for satellite image processing, real-world performance results, tuning recommendations and finally challenges and ongoing activities. It will also present how the LSRD project built a 102-core processing cluster with no financial hardware investment and achieved ten times the initial daily throughput requirements with a full time staff of only one engineer. Satellite Imagery Production and Processing Using Apache Hadoop is presented by David V. Hill, Principal Software Architect for USGS LSRD.
High-Performance Tiled WMS and KML Web Server
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2007-01-01
This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
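The module serves standard OGC WMS GetMap requests aligned to a fixed tile grid; the sketch below simply constructs such a request in Java. Only the WMS 1.1.1 parameter names are standard; the endpoint, layer name, and grid spacing are placeholders.

```java
import java.net.URLEncoder;

/** Builds a standard WMS 1.1.1 GetMap request aligned to a fixed tile grid. */
public class WmsTileRequest {
  public static void main(String[] args) throws Exception {
    String endpoint = "https://example.org/wms";   // placeholder server endpoint
    String layer = "global_mosaic";                // placeholder layer name
    int tileSize = 512;                            // pixel size of one tile on the request grid
    double west = -180.0, south = -90.0;
    double tileDeg = 45.0;                         // degrees of longitude/latitude per tile

    // Request the tile whose lower-left corner is (west, south).
    String bbox = west + "," + south + "," + (west + tileDeg) + "," + (south + tileDeg);
    String url = endpoint
        + "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
        + "&LAYERS=" + URLEncoder.encode(layer, "UTF-8")
        + "&STYLES=&SRS=EPSG:4326"
        + "&BBOX=" + bbox
        + "&WIDTH=" + tileSize + "&HEIGHT=" + tileSize
        + "&FORMAT=image/jpeg";
    System.out.println(url);
  }
}
```

A viewer or virtual-planet client issues many such requests, one per visible tile, which is why pre-tiled datasets and a fixed request grid keep the server fast.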
Manned/Unmanned Common Architecture Program (MCAP) net centric flight tests
NASA Astrophysics Data System (ADS)
Johnson, Dale
2009-04-01
Properly architected avionics systems can reduce the costs of periodic functional improvements, maintenance, and obsolescence. With this in mind, the U.S. Army Aviation Applied Technology Directorate (AATD) initiated the Manned/Unmanned Common Architecture Program (MCAP) in 2003 to develop an affordable, high-performance embedded mission processing architecture for potential application to multiple aviation platforms. MCAP analyzed Army helicopter and unmanned air vehicle (UAV) missions, identified supporting subsystems, surveyed advanced hardware and software technologies, and defined computational infrastructure technical requirements. The project selected a set of modular open systems standards and market-driven commercial-off-the-shelf (COTS) electronics and software, and developed experimental mission processors, network architectures, and software infrastructures supporting the integration of new capabilities, interoperability, and life cycle cost reductions. MCAP integrated the new mission processing architecture into an AH-64D Apache Longbow and participated in Future Combat Systems (FCS) network-centric operations field experiments in 2006 and 2007 at White Sands Missile Range (WSMR), New Mexico and at the Nevada Test and Training Range (NTTR) in 2008. The MCAP Apache also participated in PM C4ISR On-the-Move (OTM) Capstone Experiments 2007 (E07) and 2008 (E08) at Ft. Dix, NJ and conducted Mesa, Arizona local area flight tests in December 2005, February 2006, and June 2008.
Boubela, Roland N.; Kalcher, Klaudius; Huf, Wolfgang; Našel, Christian; Moser, Ewald
2016-01-01
Technologies for scalable analysis of very large datasets have emerged in the domain of internet computing, but are still rarely used in neuroimaging despite the existence of data and research questions in need of efficient computation tools especially in fMRI. In this work, we present software tools for the application of Apache Spark and Graphics Processing Units (GPUs) to neuroimaging datasets, in particular providing distributed file input for 4D NIfTI fMRI datasets in Scala for use in an Apache Spark environment. Examples for using this Big Data platform in graph analysis of fMRI datasets are shown to illustrate how processing pipelines employing it can be developed. With more tools for the convenient integration of neuroimaging file formats and typical processing steps, big data technologies could find wider endorsement in the community, leading to a range of potentially useful applications especially in view of the current collaborative creation of a wealth of large data repositories including thousands of individual fMRI datasets. PMID:26778951
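The tools described above provide distributed NIfTI input in Scala; purely as an illustration of the underlying idea, loading whole binary volumes as Spark records and processing them in parallel, here is a hedged Java sketch using Spark's binaryFiles input. Real NIfTI decoding and the downstream graph analysis are stubbed out, and the input path is an assumption.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.input.PortableDataStream;

/** Loads a directory of binary volumes in parallel and reports their sizes. */
public class VolumeIngestSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("volume-ingest-sketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Each record is (file path, stream over the whole file); the path is an assumption.
    JavaPairRDD<String, PortableDataStream> volumes = sc.binaryFiles(args[0]);

    // Stand-in for real NIfTI decoding: just measure each file. A real pipeline
    // would parse the 4D volume and hand voxel time series to correlation or
    // graph-analysis steps, possibly offloaded to GPUs as in the paper.
    JavaPairRDD<String, Integer> sizes = volumes.mapValues(stream -> stream.toArray().length);

    sizes.collect().forEach(t -> System.out.println(t._1() + ": " + t._2() + " bytes"));
    sc.stop();
  }
}
```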
Teaching Undergraduate Software Engineering Using Open Source Development Tools
2012-01-01
ware. Some example appliances are: a LAMP stack, Redmine, MySQL database, Moodle, Tomcat on Apache, and Bugzilla. Some of the important features...Ada, C, C++, PHP, Python, etc., and also supports a wide range of SDKs such as Google's Android SDK and the Google Web Toolkit SDK. Additionally
Integrating the Apache Big Data Stack with HPC for Big Data
NASA Astrophysics Data System (ADS)
Fox, G. C.; Qiu, J.; Jha, S.
2014-12-01
There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data intensive computing, even though commercial clouds devote far more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data intensive applications and to deduce the needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache Big Data Stack that is well used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL (Scalable Parallel Interoperable Data Analytics Library), built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas including Polar Science.
Scalable Automated Model Search
2014-05-20
machines. Categories and Subject Descriptors: Big Data [Distributed Computing]: Large scale optimization. 1. INTRODUCTION Modern scientific and...from Continuum Analytics [1], and Apache Spark 0.8.1. Additionally, we made use of Hadoop 1.0.4 configured on local disks as our data store for the large...Borkar et al. Hyracks: A flexible and extensible foundation for data-intensive computing. In ICDE, 2011. [16] J. Canny and H. Zhao. Big data
Crawling The Web for Libre: Selecting, Integrating, Extending and Releasing Open Source Software
NASA Astrophysics Data System (ADS)
Truslove, I.; Duerr, R. E.; Wilcox, H.; Savoie, M.; Lopez, L.; Brandt, M.
2012-12-01
Libre is a project developed by the National Snow and Ice Data Center (NSIDC). Libre is devoted to liberating science data from its traditional constraints of publication, location, and findability. Libre embraces and builds on the notion of making knowledge freely available, and both Creative Commons licensed content and Open Source Software are crucial building blocks for, as well as required deliverable outcomes of the project. One important aspect of the Libre project is to discover cryospheric data published on the internet without prior knowledge of the location or even existence of that data. Inspired by well-known search engines and their underlying web crawling technologies, Libre has explored tools and technologies required to build a search engine tailored to allow users to easily discover geospatial data related to the polar regions. After careful consideration, the Libre team decided to base its web crawling work on the Apache Nutch project (http://nutch.apache.org). Nutch is "an open source web-search software project" written in Java, with good documentation, a significant user base, and an active development community. Nutch was installed and configured to search for the types of data of interest, and the team created plugins to customize the default Nutch behavior to better find and categorize these data feeds. This presentation recounts the Libre team's experiences selecting, using, and extending Nutch, and working with the Nutch user and developer community. We will outline the technical and organizational challenges faced in order to release the project's software as Open Source, and detail the steps actually taken. We distill these experiences into a set of heuristics and recommendations for using, contributing to, and releasing Open Source Software.
Finding geospatial pattern of unstructured data by clustering routes
NASA Astrophysics Data System (ADS)
Boustani, M.; Mattmann, C. A.; Ramirez, P.; Burke, W.
2016-12-01
Today the majority of data generated has a geospatial context to it, either in attribute form as a latitude or longitude, as the name of a location, or cross-referenceable using other means such as an external gazetteer or location service. Our research is interested in exploiting geospatial location and context in unstructured data such as that found on the web in HTML pages, images, videos, documents, and other areas, and in structured information repositories found on intranets, in scientific environments, and otherwise. We are working together on the DARPA MEMEX project to exploit open source software tools such as the Lucene Geo Gazetteer, Apache Tika, Apache Lucene, and Apache OpenNLP, to automatically extract, and make meaning out of, geospatial information. In particular, we are interested in unstructured descriptors, e.g., a phone number or a named entity, and the ability to automatically learn geospatial paths related to these descriptors. For example, a particular phone number may represent an entity that travels on a monthly basis, according to easily identifiable and sometimes more difficult to track patterns. We will present a set of automatic techniques to extract descriptors, and then to geospatially infer their paths across unstructured data.
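As a hedged illustration of the kind of extraction step described above (not the MEMEX pipeline itself), the following Python sketch uses the tika-python bindings for Apache Tika to pull text out of a document and then picks out two simple descriptors: phone numbers via a regular expression and place names via a toy gazetteer. The input file name, the phone-number pattern and the two-entry gazetteer are assumptions made for the example.

```python
# A minimal sketch of descriptor extraction from unstructured documents.
# Assumes the tika-python bindings (and a Java runtime) are installed;
# the file path and the tiny gazetteer below are hypothetical.
import re
from tika import parser  # tika-python starts a local Tika server on demand

PHONE_RE = re.compile(r"\+?\d[\d\-\s()]{7,}\d")
GAZETTEER = {  # toy stand-in for the Lucene Geo Gazetteer
    "Pasadena": (34.1478, -118.1445),
    "Los Angeles": (34.0522, -118.2437),
}

def extract_descriptors(path):
    """Return phone numbers and resolvable place names found in one document."""
    text = parser.from_file(path).get("content") or ""
    phones = set(PHONE_RE.findall(text))
    places = {name: GAZETTEER[name] for name in GAZETTEER if name in text}
    return phones, places

if __name__ == "__main__":
    phones, places = extract_descriptors("example.html")  # hypothetical input
    print("phones:", phones)
    print("places:", places)
```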
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2013-12-01
A wave of open source big data analytic infrastructure is currently reshaping government, the private sector, and academia. Projects are consuming, adapting, and contributing back to various ecosystems of software, e.g., the Apache Hadoop project and its ecosystem of related efforts including Hive, HBase, Pig, Oozie, Ambari, Knox, Tez and Yarn, to name a few; the Berkeley AMPLab stack, which includes Spark, Shark, Mesos, Tachyon, BlinkDB, MLBase, and other emerging efforts; MapR and its related stack of technologies; and offerings from commercial companies building products around these tools, e.g., the Hortonworks Data Platform (HDP), Cloudera's CDH project, etc. Though the technologies all offer different capabilities, including low-latency/in-memory support versus record-oriented file I/O, high availability, and support for the MapReduce programming paradigm or other dataflow/workflow constructs, there is a common thread that binds these products: they are all released under an open source license, e.g., Apache2, MIT, BSD, GPL/LGPL, etc.; all thrive in various ecosystems, such as Apache or the Berkeley AMPLab; all are developed collaboratively; and all provide plug-in architecture models and methodologies for allowing others to contribute and participate via various community models. This talk will cover the open source and governance aspects of the aforementioned Big Data ecosystems and point out the differences, subtleties, and implications of those differences. The discussion will be by example, using several national deployments and Big Data initiatives stemming from the Administration, including DARPA's XDATA program, NASA's CMAC program, and NSF's EarthCube and geosciences Big Data projects. Lessons learned from these efforts in terms of the open source aspects of these technologies will help guide the AGU community in their use, deployment and understanding.
Teaching a laboratory-intensive online introductory electronics course*
NASA Astrophysics Data System (ADS)
Markes, Mark
2008-03-01
Most current online courses provide little or no hands-on laboratory content. This talk will describe the development of, and initial experiences with presenting, an introductory online electronics course with significant hands-on laboratory content. The course is delivered using a Linux-based Apache web server, a Darwin Streaming Server, a SMART Board interactive whiteboard, SMART Notebook software and a video camcorder. The laboratory uses primarily the Global Specialties PB-505 trainer and a Tenma 20 MHz oscilloscope that are provided to the students for the duration of the course and then returned. Testing is performed using the Blackboard course management software.
A Scalable, Open Source Platform for Data Processing, Archiving and Dissemination
2016-01-01
Object Oriented Data Technology (OODT) big data toolkit developed by NASA and the Workflow INstance Generation and Selection (WINGS) scientific work...to several challenge big data problems and demonstrated the utility of OODT-WINGS in addressing them. Specific demonstrated analyses address i...source software, Apache, Object Oriented Data Technology, OODT, semantic workflows, WINGS, big data, workflow management
The WLCG Messaging Service and its Future
NASA Astrophysics Data System (ADS)
Cons, Lionel; Paladin, Massimo
2012-12-01
Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host the Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiments dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in numbers of applications supported, brokers and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures will support the growth of the WLCG messaging service.
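As a hedged sketch of how a client might interact with an ActiveMQ broker of the kind described above, the following Python example uses the third-party stomp.py library to subscribe to a destination and publish a test message over STOMP. The broker host, credentials and destination name are assumptions, and the listener callback signature shown is the one used by recent stomp.py releases.

```python
# A minimal STOMP client sketch against an ActiveMQ-style broker.
# Host, credentials and destination are hypothetical placeholders.
import time
import stomp

class MonitoringListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # Each monitoring record arrives as the body of a STOMP frame.
        print("received:", frame.body)

conn = stomp.Connection([("mq.example.org", 61613)])  # hypothetical broker
conn.set_listener("monitor", MonitoringListener())
conn.connect("guest", "guest", wait=True)
conn.subscribe(destination="/topic/grid.monitoring", id=1, ack="auto")

# Publish one test message, then wait briefly for it to come back.
conn.send(destination="/topic/grid.monitoring", body="sam-probe: ok")
time.sleep(2)
conn.disconnect()
```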
Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine
NASA Astrophysics Data System (ADS)
Boehm, J.; Liu, K.; Alis, C.
2016-06-01
In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
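The sketch below illustrates the ingestion pattern described above rather than the authors' implementation: a PySpark driver distributes a list of LAS tiles across the cluster, and each task opens its file with the single-CPU laspy reader. It assumes PySpark and laspy (2.x API) are installed on every worker node and that the hypothetical /data/pointclouds directory is visible at the same path from all nodes.

```python
# A minimal sketch of distributing point cloud ingestion with a map function.
# Paths are hypothetical; laspy 2.x and PySpark are assumed on all workers.
import glob
import laspy
from pyspark.sql import SparkSession

def read_tile(path):
    """Read one LAS tile and return (path, point count, bounding box)."""
    las = laspy.read(path)              # single-CPU reader runs inside the task
    return (path, len(las.points),
            (float(las.x.min()), float(las.y.min()),
             float(las.x.max()), float(las.y.max())))

spark = SparkSession.builder.appName("pointcloud-ingest").getOrCreate()
tiles = glob.glob("/data/pointclouds/*.las")           # hypothetical location
summaries = (spark.sparkContext
             .parallelize(tiles, numSlices=len(tiles))  # one task per file
             .map(read_tile)
             .collect())
for path, n, bbox in summaries:
    print(path, n, bbox)
spark.stop()
```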
Wiewiórka, Marek S; Messina, Antonio; Pacholewska, Alicja; Maffioletti, Sergio; Gawrysiak, Piotr; Okoniewski, Michał J
2014-09-15
Many time-consuming analyses of next-generation sequencing data can be addressed with modern cloud computing. Apache Hadoop-based solutions have become popular in genomics because of their scalability in a cloud infrastructure. So far, most of these tools have been used for batch data processing rather than interactive data querying. The SparkSeq software has been created to take advantage of a new MapReduce framework, Apache Spark, for next-generation sequencing data. SparkSeq is a general-purpose, flexible and easily extendable library for genomic cloud computing. It can be used to build genomic analysis pipelines in Scala and run them in an interactive way. SparkSeq opens up the possibility of customized ad hoc secondary analyses and iterative machine learning algorithms. This article demonstrates its scalability and overall fast performance by running analyses of sequencing datasets. Tests of SparkSeq also prove that the use of cache and the HDFS block size can be tuned for optimal performance on multiple worker nodes. Available under the open source Apache 2.0 license: https://bitbucket.org/mwiewiorka/sparkseq/.
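SparkSeq itself is a Scala library; purely as a hedged illustration of the same interactive-query idea, the PySpark sketch below counts aligned reads per contig with pysam inside each worker task. It assumes an indexed BAM file readable at the same (hypothetical) path from every node, for example on a shared mount.

```python
# Not SparkSeq: a small PySpark + pysam sketch of interactive queries over
# aligned reads. The BAM path is hypothetical and must be indexed (.bai).
import pysam
from pyspark.sql import SparkSession

BAM = "/data/sample.sorted.bam"           # hypothetical indexed BAM

def reads_per_contig(contig):
    with pysam.AlignmentFile(BAM, "rb") as bam:
        return contig, bam.count(contig)   # uses the BAM index

spark = SparkSession.builder.appName("sparkseq-sketch").getOrCreate()
with pysam.AlignmentFile(BAM, "rb") as bam:
    contigs = list(bam.references)

counts = (spark.sparkContext
          .parallelize(contigs)
          .map(reads_per_contig)
          .collect())
for contig, n in sorted(counts):
    print(contig, n)
spark.stop()
```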
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Zabolotskikh, I B; Musaeva, T S; Denisova, E A
2012-01-01
The aim was to estimate the efficiency of the APACHE II, APACHE III, SAPS II, SAPS 3 and SOFA scales in obstetric patients with severe sepsis. A retrospective analysis of the medical records of 186 pregnant women with pulmonary sepsis, 40 women with urosepsis and 66 puerperas with abdominal sepsis was performed. The mean age of the women was 26.7 (22.4-34.5) years. In the puerperas with abdominal sepsis, the APACHE II, APACHE III, SAPS II, SAPS 3 and SOFA scales all showed good calibration; however, high resolution was observed only for APACHE III, SAPS 3 and SOFA (AUROC 0.95, 0.93 and 0.92, respectively). The APACHE III and SOFA scales provided a qualitative prognosis in pregnant women with urosepsis; the resolution of these scales considerably exceeded that of APACHE II, SAPS II and SAPS 3 (AUROC 0.73, 0.74 and 0.79, respectively). In pregnant women with pulmonary sepsis, the APACHE II scale was inapplicable because of a lack of calibration (χ2 = 13.1; p < 0.01), and the other scales (APACHE III, SAPS II, SAPS 3, SOFA) showed insufficient resolution (AUROC < 0.9). Assessment of the prognostic abilities of the scoring scales showed that APACHE III, SAPS 3 and SOFA can be used for mortality prognosis in puerperas with abdominal sepsis; in pregnant women with urosepsis, only APACHE III and SOFA; and in pulmonary sepsis, SAPS 3 and APACHE III only when additional clinical information is available.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-26
... Mountain Apache Tribe of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde... Tribe of the Fort Apache Reservation, Arizona; and Yavapai-Apache Nation of the Camp Verde Indian...-Apache Nation of the Camp Verde Indian Reservation, Arizona. Other credible lines of evidence, including...
Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E
2015-11-01
The value of a teaching case repository in radiology training programs is immense. The allocation of resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant and cost-effective solutions are desirable. We chose the Medical Imaging Resource Center (MIRC) to build our teaching file; it is offered by RSNA for free. For the hardware, we chose the Raspberry Pi, developed by the Raspberry Pi Foundation: a small single-board computer designed as a low-cost computer for schools and also used in other projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operating system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySQL (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of the software was smooth. The Raspberry Pi was able to handle very well the task of hosting the teaching file repository for our division. Uptime was logged at 100%, and loading times were similar to other MIRC sites available online. We set up two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost single-board computer (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.
A Security-façade Library for Virtual-observatory Software
NASA Astrophysics Data System (ADS)
Rixon, G.
2009-09-01
The security-façade library implements, for Java, IVOA's security standards. It supports the authentication mechanisms for SOAP and REST web-services, the sign-on mechanisms (with MyProxy, AstroGrid Accounts protocol or local credential-caches), the delegation protocol, and RFC3820-enabled HTTPS for Apache Tomcat. Using the façade, a developer who is not a security specialist can easily add access control to a virtual-observatory service and call secured services from an application. The library has been an internal part of AstroGrid software for some time and it is now offered for use by other developers.
Sloan Digital Sky Survey IV: Mapping the Milky Way, nearby galaxies, and the distant universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela
Here, we describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ~ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ~ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.
Sloan Digital Sky Survey IV: Mapping the Milky Way, nearby galaxies, and the distant universe
Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela; ...
2017-06-29
Here, we describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ~ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ~ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.
Sloan Digital Sky Survey IV: Mapping the Milky Way, Nearby Galaxies, and the Distant Universe
NASA Astrophysics Data System (ADS)
Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela; Albareti, Franco D.; Allende Prieto, Carlos; Almeida, Andres; Alonso-García, Javier; Anders, Friedrich; Anderson, Scott F.; Andrews, Brett; Aquino-Ortíz, Erik; Aragón-Salamanca, Alfonso; Argudo-Fernández, Maria; Armengaud, Eric; Aubourg, Eric; Avila-Reese, Vladimir; Badenes, Carles; Bailey, Stephen; Barger, Kathleen A.; Barrera-Ballesteros, Jorge; Bartosz, Curtis; Bates, Dominic; Baumgarten, Falk; Bautista, Julian; Beaton, Rachael; Beers, Timothy C.; Belfiore, Francesco; Bender, Chad F.; Berlind, Andreas A.; Bernardi, Mariangela; Beutler, Florian; Bird, Jonathan C.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blomqvist, Michael; Bolton, Adam S.; Boquien, Médéric; Borissova, Jura; van den Bosch, Remco; Bovy, Jo; Brandt, William N.; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Burgasser, Adam J.; Burtin, Etienne; Busca, Nicolás G.; Cappellari, Michele; Delgado Carigi, Maria Leticia; Carlberg, Joleen K.; Carnero Rosell, Aurelio; Carrera, Ricardo; Chanover, Nancy J.; Cherinka, Brian; Cheung, Edmond; Gómez Maqueo Chew, Yilen; Chiappini, Cristina; Doohyun Choi, Peter; Chojnowski, Drew; Chuang, Chia-Hsun; Chung, Haeun; Cirolini, Rafael Fernando; Clerc, Nicolas; Cohen, Roger E.; Comparat, Johan; da Costa, Luiz; Cousinou, Marie-Claude; Covey, Kevin; Crane, Jeffrey D.; Croft, Rupert A. C.; Cruz-Gonzalez, Irene; Garrido Cuadra, Daniel; Cunha, Katia; Damke, Guillermo J.; Darling, Jeremy; Davies, Roger; Dawson, Kyle; de la Macorra, Axel; Dell'Agli, Flavia; De Lee, Nathan; Delubac, Timothée; Di Mille, Francesco; Diamond-Stanic, Aleks; Cano-Díaz, Mariana; Donor, John; Downes, Juan José; Drory, Niv; du Mas des Bourboux, Hélion; Duckworth, Christopher J.; Dwelly, Tom; Dyer, Jamie; Ebelke, Garrett; Eigenbrot, Arthur D.; Eisenstein, Daniel J.; Emsellem, Eric; Eracleous, Mike; Escoffier, Stephanie; Evans, Michael L.; Fan, Xiaohui; Fernández-Alvar, Emma; Fernandez-Trincado, J. G.; Feuillet, Diane K.; Finoguenov, Alexis; Fleming, Scott W.; Font-Ribera, Andreu; Fredrickson, Alexander; Freischlad, Gordon; Frinchaboy, Peter M.; Fuentes, Carla E.; Galbany, Lluís; Garcia-Dias, R.; García-Hernández, D. 
A.; Gaulme, Patrick; Geisler, Doug; Gelfand, Joseph D.; Gil-Marín, Héctor; Gillespie, Bruce A.; Goddard, Daniel; Gonzalez-Perez, Violeta; Grabowski, Kathleen; Green, Paul J.; Grier, Catherine J.; Gunn, James E.; Guo, Hong; Guy, Julien; Hagen, Alex; Hahn, ChangHoon; Hall, Matthew; Harding, Paul; Hasselquist, Sten; Hawley, Suzanne L.; Hearty, Fred; Gonzalez Hernández, Jonay I.; Ho, Shirley; Hogg, David W.; Holley-Bockelmann, Kelly; Holtzman, Jon A.; Holzer, Parker H.; Huehnerhoff, Joseph; Hutchinson, Timothy A.; Hwang, Ho Seong; Ibarra-Medel, Héctor J.; da Silva Ilha, Gabriele; Ivans, Inese I.; Ivory, KeShawn; Jackson, Kelly; Jensen, Trey W.; Johnson, Jennifer A.; Jones, Amy; Jönsson, Henrik; Jullo, Eric; Kamble, Vikrant; Kinemuchi, Karen; Kirkby, David; Kitaura, Francisco-Shu; Klaene, Mark; Knapp, Gillian R.; Kneib, Jean-Paul; Kollmeier, Juna A.; Lacerna, Ivan; Lane, Richard R.; Lang, Dustin; Law, David R.; Lazarz, Daniel; Lee, Youngbae; Le Goff, Jean-Marc; Liang, Fu-Heng; Li, Cheng; Li, Hongyu; Lian, Jianhui; Lima, Marcos; Lin, Lihwai; Lin, Yen-Ting; Bertran de Lis, Sara; Liu, Chao; de Icaza Lizaola, Miguel Angel C.; Long, Dan; Lucatello, Sara; Lundgren, Britt; MacDonald, Nicholas K.; Deconto Machado, Alice; MacLeod, Chelsea L.; Mahadevan, Suvrath; Geimba Maia, Marcio Antonio; Maiolino, Roberto; Majewski, Steven R.; Malanushenko, Elena; Malanushenko, Viktor; Manchado, Arturo; Mao, Shude; Maraston, Claudia; Marques-Chaves, Rui; Masseron, Thomas; Masters, Karen L.; McBride, Cameron K.; McDermid, Richard M.; McGrath, Brianne; McGreer, Ian D.; Medina Peña, Nicolás; Melendez, Matthew; Merloni, Andrea; Merrifield, Michael R.; Meszaros, Szabolcs; Meza, Andres; Minchev, Ivan; Minniti, Dante; Miyaji, Takamitsu; More, Surhud; Mulchaey, John; Müller-Sánchez, Francisco; Muna, Demitri; Munoz, Ricardo R.; Myers, Adam D.; Nair, Preethi; Nandra, Kirpal; Correa do Nascimento, Janaina; Negrete, Alenka; Ness, Melissa; Newman, Jeffrey A.; Nichol, Robert C.; Nidever, David L.; Nitschelm, Christian; Ntelis, Pierros; O'Connell, Julia E.; Oelkers, Ryan J.; Oravetz, Audrey; Oravetz, Daniel; Pace, Zach; Padilla, Nelson; Palanque-Delabrouille, Nathalie; Alonso Palicio, Pedro; Pan, Kaike; Parejko, John K.; Parikh, Taniya; Pâris, Isabelle; Park, Changbom; Patten, Alim Y.; Peirani, Sebastien; Pellejero-Ibanez, Marcos; Penny, Samantha; Percival, Will J.; Perez-Fournon, Ismael; Petitjean, Patrick; Pieri, Matthew M.; Pinsonneault, Marc; Pisani, Alice; Poleski, Radosław; Prada, Francisco; Prakash, Abhishek; Queiroz, Anna Bárbara de Andrade; Raddick, M. Jordan; Raichoor, Anand; Barboza Rembold, Sandro; Richstein, Hannah; Riffel, Rogemar A.; Riffel, Rogério; Rix, Hans-Walter; Robin, Annie C.; Rockosi, Constance M.; Rodríguez-Torres, Sergio; Roman-Lopes, A.; Román-Zúñiga, Carlos; Rosado, Margarita; Ross, Ashley J.; Rossi, Graziano; Ruan, John; Ruggeri, Rossana; Rykoff, Eli S.; Salazar-Albornoz, Salvador; Salvato, Mara; Sánchez, Ariel G.; Aguado, D. S.; Sánchez-Gallego, José R.; Santana, Felipe A.; Santiago, Basílio Xavier; Sayres, Conor; Schiavon, Ricardo P.; da Silva Schimoia, Jaderson; Schlafly, Edward F.; Schlegel, David J.; Schneider, Donald P.; Schultheis, Mathias; Schuster, William J.; Schwope, Axel; Seo, Hee-Jong; Shao, Zhengyi; Shen, Shiyin; Shetrone, Matthew; Shull, Michael; Simon, Joshua D.; Skinner, Danielle; Skrutskie, M. 
F.; Slosar, Anže; Smith, Verne V.; Sobeck, Jennifer S.; Sobreira, Flavia; Somers, Garrett; Souto, Diogo; Stark, David V.; Stassun, Keivan; Stauffer, Fritz; Steinmetz, Matthias; Storchi-Bergmann, Thaisa; Streblyanska, Alina; Stringfellow, Guy S.; Suárez, Genaro; Sun, Jing; Suzuki, Nao; Szigeti, Laszlo; Taghizadeh-Popp, Manuchehr; Tang, Baitian; Tao, Charling; Tayar, Jamie; Tembe, Mita; Teske, Johanna; Thakar, Aniruddha R.; Thomas, Daniel; Thompson, Benjamin A.; Tinker, Jeremy L.; Tissera, Patricia; Tojeiro, Rita; Hernandez Toledo, Hector; de la Torre, Sylvain; Tremonti, Christy; Troup, Nicholas W.; Valenzuela, Octavio; Martinez Valpuesta, Inma; Vargas-González, Jaime; Vargas-Magaña, Mariana; Vazquez, Jose Alberto; Villanova, Sandro; Vivek, M.; Vogt, Nicole; Wake, David; Walterbos, Rene; Wang, Yuting; Weaver, Benjamin Alan; Weijmans, Anne-Marie; Weinberg, David H.; Westfall, Kyle B.; Whelan, David G.; Wild, Vivienne; Wilson, John; Wood-Vasey, W. M.; Wylezalek, Dominika; Xiao, Ting; Yan, Renbin; Yang, Meng; Ybarra, Jason E.; Yèche, Christophe; Zakamska, Nadia; Zamora, Olga; Zarrouk, Pauline; Zasowski, Gail; Zhang, Kai; Zhao, Gong-Bo; Zheng, Zheng; Zheng, Zheng; Zhou, Xu; Zhou, Zhi-Min; Zhu, Guangtun B.; Zoccali, Manuela; Zou, Hu
2017-07-01
We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ~ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ~ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-24
... Apache Tribe of the Fort Apache Reservation, Arizona; and the Yavapai-Apache Nation of the Camp Verde... of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation...
DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data.
Putri, Fadhilah Kurnia; Song, Giltae; Kwon, Joonho; Rao, Praveen
2017-09-25
One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query (DISPAQ), which efficiently identifies profitable areas by exploiting the Apache Software Foundation's Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose a local Z-Skyline optimization, which reduces the number of dominance tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data.
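The Z-order space filling curve mentioned above can be illustrated with a small, self-contained sketch: nearby grid cells receive nearby Morton codes, which is what allows candidate areas to be pruned in code order. The grid resolution and the example coordinates below are arbitrary choices for illustration, not values taken from the DISPAQ paper.

```python
# A small sketch of Z-order (Morton) encoding for latitude/longitude cells.
def interleave(x, y, bits=16):
    """Interleave the low `bits` bits of x and y into one Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_code(lat, lon, bits=16):
    """Map a (lat, lon) pair onto a Morton code over a 2^bits x 2^bits grid."""
    cells = (1 << bits) - 1
    x = int((lon + 180.0) / 360.0 * cells)   # discretize longitude
    y = int((lat + 90.0) / 180.0 * cells)    # discretize latitude
    return interleave(x, y, bits)

# Two nearby pickup areas get close Morton codes; a distant one does not.
print(morton_code(40.7580, -73.9855))   # illustrative point in Manhattan
print(morton_code(40.7614, -73.9776))   # a few blocks away
print(morton_code(34.0522, -118.2437))  # Los Angeles, far away in code space
```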
Reactive Aggregate Model Protecting Against Real-Time Threats
2014-09-01
on the underlying functionality of three core components. • MS SQL server 2008 backend database. • Microsoft IIS running on Windows server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access
Portable Map-Reduce Utility for MIT SuperCloud Environment
2015-09-17
Reuther, A. Rosa, C. Yee, “Driving Big Data With Big Compute,” IEEE HPEC, Sep 10-12, 2012, Waltham, MA. [6] Apache Hadoop 1.2.1 Documentation: HDFS... big data architecture, which is designed to address these challenges, is made of the computing resources, scheduler, central storage file system...databases, analytics software and web interfaces [1]. These components are common to many big data and supercomputing systems. The platform is
Using a Foundational Ontology for Reengineering a Software Enterprise Ontology
NASA Astrophysics Data System (ADS)
Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo
The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reusing knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundational Ontology (UFO) as a basis.
MiniWall Tool for Analyzing CFD and Wind Tunnel Large Data Sets
NASA Technical Reports Server (NTRS)
Schuh, Michael J.; Melton, John E.; Stremel, Paul M.
2017-01-01
It is challenging to review and assimilate large data sets created by Computational Fluid Dynamics (CFD) simulations and wind tunnel tests. Over the past 10 years, NASA Ames Research Center has developed and refined a software tool dubbed the MiniWall to increase productivity in reviewing and understanding large CFD-generated data sets. Under the recent NASA ERA project, the application of the tool expanded to enable rapid comparison of experimental and computational data. The MiniWall software is browser based so that it runs on any computer or device that can display a web page. It can also be used remotely and securely by using web server software such as the Apache HTTP server. The MiniWall software has recently been rewritten and enhanced to make it even easier for analysts to review large data sets and extract knowledge and understanding from these data sets. This paper describes the MiniWall software and demonstrates how the different features are used to review and assimilate large data sets.
MiniWall Tool for Analyzing CFD and Wind Tunnel Large Data Sets
NASA Technical Reports Server (NTRS)
Schuh, Michael J.; Melton, John E.; Stremel, Paul M.
2017-01-01
It is challenging to review and assimilate large data sets created by Computational Fluid Dynamics (CFD) simulations and wind tunnel tests. Over the past 10 years, NASA Ames Research Center has developed and refined a software tool dubbed the "MiniWall" to increase productivity in reviewing and understanding large CFD-generated data sets. Under the recent NASA ERA project, the application of the tool expanded to enable rapid comparison of experimental and computational data. The MiniWall software is browser based so that it runs on any computer or device that can display a web page. It can also be used remotely and securely by using web server software such as the Apache HTTP Server. The MiniWall software has recently been rewritten and enhanced to make it even easier for analysts to review large data sets and extract knowledge and understanding from these data sets. This paper describes the MiniWall software and demonstrates how the different features are used to review and assimilate large data sets.
A Study of E+A Galaxies Through SDSS-MaNGA Integral Field Spectroscopy
NASA Astrophysics Data System (ADS)
Wally, Muhammad; Weaver, Olivia A.; Anderson, Miguel Ricardo; Liu, Allen; Falcone, Julia; Wallack, Nicole Lisa; James, Olivia; Liu, Charles
2017-01-01
We outline the selection process and analysis of sixteen E+A galaxies observed by the Mapping Nearby Galaxies at the Apache Point Observatory (MaNGA) survey as a part of the fourth generation of the Sloan Digital Sky Survey (SDSS-IV). We present their Integral field spectroscopy and analyze their spatial distribution of stellar ages, metallicities and other stellar population properties. We can potentially study the variation in these properties as a function of redshift. This work was supported by the Alfred P. Sloan Foundation via the SDSS-IV Faculty and Student Team (FAST) initiative, ARC Agreement #SSP483 to the CUNY College of Staten Island. This work was also supported by grants to The American Museum of Natural History, and the CUNY College of Staten Island through The National Science Foundation.
DeNovoGUI: An Open Source Graphical User Interface for de Novo Sequencing of Tandem Mass Spectra
2013-01-01
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com. PMID:24295440
DeNovoGUI: an open source graphical user interface for de novo sequencing of tandem mass spectra.
Muth, Thilo; Weilnböck, Lisa; Rapp, Erdmann; Huber, Christian G; Martens, Lennart; Vaudel, Marc; Barsnes, Harald
2014-02-07
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com .
The Western Apache home: landscape management and failing ecosystems
Seth Pilsk; Jeanette C. Cassa
2005-01-01
The traditional Western Apache home lies largely within the Madrean Archipelago. The natural resources of the region make up the basis of the Apache home and culture. Profound landscape changes in the region have occurred over the past 150 years. A survey of traditional Western Apache place names documents many of these changes. An analysis of the history and Apache...
High Speed White Dwarf Asteroseismology with the Herty Hall Cluster
NASA Astrophysics Data System (ADS)
Gray, Aaron; Kim, A.
2012-01-01
Asteroseismology is the process of using observed oscillations of stars to infer their interior structure. In high speed asteroseismology, we accomplish this by quickly computing hundreds of thousands of models to match the observed period spectra. Each model takes five to ten seconds to run on a single processor. Therefore, we use a cluster of sixteen Dell workstations with dual-core processors. The computers use the Ubuntu operating system and Apache Hadoop software to manage workloads.
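One common way to farm out such a model grid on a Hadoop cluster is a Hadoop Streaming mapper that reads one parameter set per line and emits a goodness-of-fit value; the job would then be launched with the standard streaming jar (hadoop jar hadoop-streaming*.jar -mapper mapper.py ...). The sketch below is only a hedged illustration of that pattern: the run_model function and the observed periods are placeholders, not the group's actual pulsation code.

```python
# mapper.py - a Hadoop Streaming mapper sketch for a model grid search.
# run_model and OBSERVED_PERIODS are hypothetical stand-ins.
import sys

OBSERVED_PERIODS = [215.2, 271.0, 304.6]      # seconds; illustrative values

def run_model(mass, teff, ml_param):
    """Placeholder for the real pulsation code: return predicted periods."""
    return [p * (1.0 + 0.001 * (teff - 12000) / 1000.0) for p in OBSERVED_PERIODS]

def chi_square(predicted, observed):
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

for line in sys.stdin:                          # "mass teff ml_param" per line
    mass, teff, ml = map(float, line.split())
    fit = chi_square(run_model(mass, teff, ml), OBSERVED_PERIODS)
    print(f"{mass}\t{teff}\t{ml}\t{fit:.4f}")   # tab-separated key/value output
```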
2015-06-01
Hadoop Distributed File System (HDFS) without any integration with Accumulo-based Knowledge Stores based on OWL/RDF. 4. Cloud Based The Apache Software...BTW, 7(12), pp. 227–241. Godin, A. & Akins, D. (2014). Extending DCGS-N naval tactical clouds from in-storage to in-memory for the integrated fires...VISUALIZATIONS: A TOOL TO ACHIEVE OPTIMIZED OPERATIONAL DECISION MAKING AND DATA INTEGRATION by Paul C. Hudson Jeffrey A. Rzasa June 2015 Thesis
Venkataraman, Ramesh; Gopichandran, Vijayaprasad; Ranganathan, Lakshmi; Rajagopal, Senthilkumar; Abraham, Babu K; Ramakrishnan, Nagarajan
2018-01-01
Background: Mortality prediction in the Intensive Care Unit (ICU) setting is complex, and there are several scoring systems utilized for this process. The Acute Physiology and Chronic Health Evaluation (APACHE) II has been the most widely used scoring system, although the more recent APACHE IV is considered an updated and advanced prediction model. However, these two systems may not give similar mortality predictions. Objectives: The aim of this study is to compare the mortality prediction ability of APACHE II and APACHE IV scoring systems among patients admitted to a tertiary care ICU. Methods: In this prospective longitudinal observational study, APACHE II and APACHE IV scores of ICU patients were computed using an online calculator. The outcome of the ICU admissions for all the patients was collected as discharged or deceased. The data were analyzed to compare the discrimination and calibration of the mortality prediction ability of the two scores. Results: Out of the 1670 patients' data analyzed, the area under the receiver operating characteristic of APACHE II score was 0.906 (95% confidence interval [CI] – 0.890–0.992), and APACHE IV score was 0.881 (95% CI – 0.862–0.890). The mean predicted mortality rate of the study population as given by the APACHE II scoring system was 44.8 ± 26.7 and as given by APACHE IV scoring system was 29.1 ± 28.5. The observed mortality rate was 22.4%. Conclusions: The APACHE II and IV scoring systems have comparable discrimination ability, but the calibration of APACHE IV seems to be better than that of APACHE II. There is a need to recalibrate the scales with weights derived from the Indian population. PMID:29910542
Venkataraman, Ramesh; Gopichandran, Vijayaprasad; Ranganathan, Lakshmi; Rajagopal, Senthilkumar; Abraham, Babu K; Ramakrishnan, Nagarajan
2018-05-01
Mortality prediction in the Intensive Care Unit (ICU) setting is complex, and there are several scoring systems utilized for this process. The Acute Physiology and Chronic Health Evaluation (APACHE) II has been the most widely used scoring system, although the more recent APACHE IV is considered an updated and advanced prediction model. However, these two systems may not give similar mortality predictions. The aim of this study is to compare the mortality prediction ability of APACHE II and APACHE IV scoring systems among patients admitted to a tertiary care ICU. In this prospective longitudinal observational study, APACHE II and APACHE IV scores of ICU patients were computed using an online calculator. The outcome of the ICU admissions for all the patients was collected as discharged or deceased. The data were analyzed to compare the discrimination and calibration of the mortality prediction ability of the two scores. Out of the 1670 patients' data analyzed, the area under the receiver operating characteristic of APACHE II score was 0.906 (95% confidence interval [CI] - 0.890-0.992), and APACHE IV score was 0.881 (95% CI - 0.862-0.890). The mean predicted mortality rate of the study population as given by the APACHE II scoring system was 44.8 ± 26.7 and as given by APACHE IV scoring system was 29.1 ± 28.5. The observed mortality rate was 22.4%. The APACHE II and IV scoring systems have comparable discrimination ability, but the calibration of APACHE IV seems to be better than that of APACHE II. There is a need to recalibrate the scales with weights derived from the Indian population.
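The discrimination comparison described above is typically done by computing the area under the ROC curve for each score against the observed outcomes. The hedged sketch below shows that step with scikit-learn; the per-patient predicted probabilities and outcomes are simulated placeholders, not the study's data.

```python
# A minimal sketch of comparing the discrimination of two severity scores.
# The arrays below are simulated, hypothetical data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
died = rng.integers(0, 2, size=500)                        # observed outcome (0/1)
apache2_prob = np.clip(died * 0.5 + rng.random(500) * 0.6, 0, 1)
apache4_prob = np.clip(died * 0.4 + rng.random(500) * 0.6, 0, 1)

for name, probs in [("APACHE II", apache2_prob), ("APACHE IV", apache4_prob)]:
    auc = roc_auc_score(died, probs)
    print(f"{name}: AUC = {auc:.3f}, "
          f"mean predicted mortality = {100 * probs.mean():.1f}%")
print(f"observed mortality = {100 * died.mean():.1f}%")
```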
Scalable and fail-safe deployment of the ATLAS Distributed Data Management system Rucio
NASA Astrophysics Data System (ADS)
Lassnig, M.; Vigne, R.; Beermann, T.; Barisits, M.; Garonne, V.; Serfon, C.
2015-12-01
This contribution details the deployment of Rucio, the ATLAS Distributed Data Management system. The main complication is that Rucio interacts with a wide variety of external services, and connects globally distributed data centres under different technological and administrative control, at an unprecedented data volume. It is therefore not possible to create a duplicate instance of Rucio for testing or integration. Every software upgrade or configuration change is thus potentially disruptive and requires fail-safe software and automatic error recovery. Rucio uses a three-layer scaling and mitigation strategy based on quasi-realtime monitoring. This strategy mainly employs independent stateless services, automatic failover, and service migration. The technologies used for deployment and mitigation include OpenStack, Puppet, Graphite, HAProxy and Apache. In this contribution, the interplay between these components, their deployment, software mitigation, and the monitoring strategy are discussed.
Teacher's Guide to SERAPHIM Software I. Chemistry: Experimental Foundations.
ERIC Educational Resources Information Center
Bogner, Donna J.
Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the first in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: Experimental Foundations." Program suggestions are…
CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 9, September 2007
2007-09-01
underlying application framework, e.g., Java Enterprise Edition or .NET. This increases the risk that consumer Web services not based on the same...weaknesses and vulnerabilities that are targeted by attackers and malicious code. For example, Apache Axis 2 enables a Java developer to simply...load his/her Java objects into the Axis SOAP engine. At runtime, it is the SOAP engine that determines which incoming SOAP request messages should be
The need for scientific software engineering in the pharmaceutical industry
NASA Astrophysics Data System (ADS)
Luty, Brock; Rose, Peter W.
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
The need for scientific software engineering in the pharmaceutical industry.
Luty, Brock; Rose, Peter W
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
Evaluation of Apache Hadoop for parallel data analysis with ROOT
NASA Astrophysics Data System (ADS)
Lehrack, S.; Duckeck, G.; Ebke, J.
2014-06-01
The Apache Hadoop software is a Java-based framework for distributed processing of large data sets across clusters of computers, using the Hadoop Distributed File System (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives for distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as an execution back-end. The focus of the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data quickly and reliably with MapReduce. In the evaluation of HDFS, read/write data rates from the local Hadoop cluster were measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses were used and event rates were compared to PROOF.
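A simple version of the HDFS throughput check described above is to time a put and a get of one large file through the standard hadoop fs command line tool. The hedged sketch below assumes a working Hadoop client configuration on the node where it runs and a pre-existing local test file; the file names are hypothetical.

```python
# A minimal sketch of measuring HDFS write/read rates via the hadoop fs CLI.
# LOCAL must already exist; paths are hypothetical placeholders.
import os
import subprocess
import time

LOCAL = "/tmp/testfile.bin"            # hypothetical large local test file
HDFS = "/user/analysis/testfile.bin"

def timed(cmd):
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start

size_mb = os.path.getsize(LOCAL) / 1e6
subprocess.run(["hadoop", "fs", "-rm", "-f", HDFS], check=False)  # clean up old copy

t_write = timed(["hadoop", "fs", "-put", LOCAL, HDFS])
t_read = timed(["hadoop", "fs", "-get", HDFS, "/tmp/testfile.readback"])

print(f"write: {size_mb / t_write:.1f} MB/s, read: {size_mb / t_read:.1f} MB/s")
```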
CosmoQuest Transient Tracker: Opensource Photometry & Astrometry software
NASA Astrophysics Data System (ADS)
Myers, Joseph L.; Lehan, Cory; Gay, Pamela; Richardson, Matthew; CosmoQuest Team
2018-01-01
CosmoQuest is moving from online citizen science to observational astronomy with the creation of Transient Tracker. This open source software is designed to identify asteroids and other transient/variable objects in image sets. Transient Tracker's features in final form will include: astrometric and photometric solutions, identification of moving/transient objects, identification of variable objects, and lightcurve analysis. In this poster we present our initial v0.1 release and seek community input. This software builds on the existing NIH-funded ImageJ libraries. Creation of this suite of open source image manipulation routines is led by Wayne Rasband, and the routines are released primarily under the MIT license. In this release, we are building on these libraries to add source identification for point / point-like sources, and to do astrometry. Our materials are released under the Apache 2.0 license on GitHub (http://github.com/CosmoQuestTeam) and documentation can be found at http://cosmoquest.org/TransientTracker.
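Transient Tracker itself builds on the Java ImageJ libraries; the snippet below is only a hedged Python illustration of the first step it automates, detecting point-like sources in an image frame, using the photutils DAOStarFinder. The synthetic star field stands in for a real FITS frame.

```python
# A sketch of point-source detection on a synthetic image with photutils.
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

rng = np.random.default_rng(1)
image = rng.normal(100.0, 5.0, size=(256, 256))       # sky background + noise
yy, xx = np.mgrid[0:256, 0:256]
for x0, y0 in [(60, 80), (150, 40), (200, 210)]:      # three synthetic stars
    image += 500.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 2.0 ** 2))

mean, median, std = sigma_clipped_stats(image, sigma=3.0)
finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)
sources = finder(image - median)                       # astropy Table of detections
print(sources["xcentroid", "ycentroid", "flux"])
```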
Project Management Software for Distributed Industrial Companies
NASA Astrophysics Data System (ADS)
Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.
This paper gives an overview of the development of a new software solution for project management, intended mainly for use in an industrial environment. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to the development of appropriate tools for tracking, storing and analyzing information about the project, and for delivering it in time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training and only basic computer skills needed for operators.
Software support for SBGN maps: SBGN-ML and LibSBGN.
van Iersel, Martijn P; Villéger, Alice C; Czauderna, Tobias; Boyd, Sarah E; Bergmann, Frank T; Luna, Augustin; Demir, Emek; Sorokin, Anatoly; Dogrusoz, Ugur; Matsuoka, Yukiko; Funahashi, Akira; Aladjem, Mirit I; Mi, Huaiyu; Moodie, Stuart L; Kitano, Hiroaki; Le Novère, Nicolas; Schreiber, Falk
2012-08-01
LibSBGN is a software library for reading, writing and manipulating Systems Biology Graphical Notation (SBGN) maps stored using the recently developed SBGN-ML file format. The library (available in C++ and Java) makes it easy for developers to add SBGN support to their tools, whereas the file format facilitates the exchange of maps between compatible software applications. The library also supports validation of maps, which simplifies the task of ensuring compliance with the detailed SBGN specifications. With this effort we hope to increase the adoption of SBGN in bioinformatics tools, ultimately enabling more researchers to visualize biological knowledge in a precise and unambiguous manner. Milestone 2 was released in December 2011. Source code, example files and binaries are freely available under the terms of either the LGPL v2.1+ or Apache v2.0 open source licenses from http://libsbgn.sourceforge.net. sbgn-libsbgn@lists.sourceforge.net.
CometQuest: A Rosetta Adventure
NASA Technical Reports Server (NTRS)
Leon, Nancy J.; Fisher, Diane K.; Novati, Alexander; Chmielewski, Artur B.; Fitzpatrick, Austin J.; Angrum, Andrea
2012-01-01
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2012-01-01
This software is a higher-performance implementation of tiled WMS, with integral support for KML and time-varying data. This software is compliant with the Open Geospatial WMS standard, and supports KML natively as a WMS return type, including support for the time attribute. Regionated KML wrappers are generated that match the existing tiled WMS dataset. PNG and JPEG formats are supported, and the software is implemented as an Apache 2.0 module that supports a threading execution model that is capable of supporting very high request rates. The module intercepts and responds to WMS requests that match certain patterns and returns the existing tiles. If a KML format that matches an existing pyramid and tile dataset is requested, regionated KML is generated and returned to the requesting application. In addition, KML requests that do not match the existing tile datasets generate a KML response that includes the corresponding JPG WMS request, effectively adding KML support to a backing WMS server.
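From the client side, the behaviour described above amounts to issuing WMS GetMap requests whose bounding boxes line up with tiles in a fixed pyramid, so the tiled server can answer them directly from existing tiles. The sketch below builds such a request in Python; the endpoint, layer name and tiling scheme are hypothetical.

```python
# A minimal sketch of a tile-aligned WMS GetMap request.
# Endpoint, layer and tile size are hypothetical placeholders.
import requests

WMS = "https://example.org/wms"           # hypothetical tiled WMS endpoint
TILE_DEG = 0.5                            # tile size in degrees at this level

def tile_bbox(lon, lat, tile_deg=TILE_DEG):
    """Snap a point to the lower-left corner of its tile and return its bbox."""
    west = (lon // tile_deg) * tile_deg
    south = (lat // tile_deg) * tile_deg
    return west, south, west + tile_deg, south + tile_deg

bbox = tile_bbox(-118.24, 34.05)
params = {
    "SERVICE": "WMS", "REQUEST": "GetMap", "VERSION": "1.1.1",
    "LAYERS": "global_mosaic", "STYLES": "",
    "SRS": "EPSG:4326", "FORMAT": "image/jpeg",
    "WIDTH": 512, "HEIGHT": 512,
    "BBOX": ",".join(str(v) for v in bbox),
}
response = requests.get(WMS, params=params, timeout=30)
with open("tile.jpg", "wb") as f:
    f.write(response.content)
```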
Geologic influences on Apache trout habitat in the White Mountains of Arizona
Jonathan W. Long; Alvin L. Medina
2006-01-01
Geologic variation has important influences on habitat quality for species of concern, but it can be difficult to evaluate due to subtle variations, complex terminology, and inadequate maps. To better understand habitat of the Apache trout (Oncorhynchus apache or O. gilae apache Miller), a threatened endemic species of the White...
Curriculum Program for the Apache Language.
ERIC Educational Resources Information Center
Whiteriver Public Schools, AZ.
These curriculum materials from the Whiteriver (Arizona) Elementary School consist of--(1) an English-Apache word list of some of the most commonly used words in Apache, 29p.; (2) a list of enclitics with approximate or suggested meanings and illustrations of usage, 5 p.; (3) an illustrated chart of Apache vowels and consonants, various written…
The Jicarilla Apaches. A Study in Survival.
ERIC Educational Resources Information Center
Gunnerson, Dolores A.
Focusing on the ultimate fate of the Cuartelejo and/or Paloma Apaches known in archaeological terms as the Dismal River people of the Central Plains, this book is divided into 2 parts. The early Apache (1525-1700) and the Jicarilla Apache (1700-1800) tribes are studied in terms of their: persistent cultural survival, social/political adaptability,…
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB size dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
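A small NumPy-only sketch of the randomized CX idea (not the authors' Spark or C implementations) is given below: approximate column leverage scores are obtained from a randomized SVD, columns are sampled with those probabilities, and the matrix is re-expressed in terms of the chosen columns. The rank, oversampling and column counts are illustrative choices.

```python
# A single-node sketch of randomized CX column selection via leverage scores.
import numpy as np

def randomized_cx(A, rank=5, n_cols=10, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Randomized range finder followed by a small SVD gives A ~ U S Vt.
    Y = A @ rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(Y)
    _, _, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    V_k = Vt[:rank].T                             # n x rank right singular vectors
    lev = (V_k ** 2).sum(axis=1)                  # approximate leverage scores
    p = lev / lev.sum()
    cols = rng.choice(n, size=n_cols, replace=False, p=p)
    C = A[:, cols]                                # actual columns of A
    X = np.linalg.pinv(C) @ A                     # A ~ C X
    return C, X, cols

A = np.random.default_rng(1).standard_normal((200, 80))
C, X, cols = randomized_cx(A)
print("selected columns:", sorted(cols))
print("relative error:", np.linalg.norm(A - C @ X) / np.linalg.norm(A))
```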
Implementation and performance test of cloud platform based on Hadoop
NASA Astrophysics Data System (ADS)
Xu, Jingxian; Guo, Jianhong; Ren, Chunlan
2018-01-01
Hadoop, an open source project of the Apache Software Foundation, is a distributed computing framework that deals with large amounts of data and has been widely used in the Internet industry. It is therefore meaningful to study how to build a Hadoop platform and how to test its performance. This paper presents a method for implementing a Hadoop platform together with a method for testing the platform's performance. Experimental results show that the proposed performance testing method is effective and can measure the performance of the Hadoop platform.
Gajic, Ognjen; Afessa, Bekele
2012-01-01
Background: There are few comparisons among the most recent versions of the major adult ICU prognostic systems (APACHE [Acute Physiology and Chronic Health Evaluation] IV, Simplified Acute Physiology Score [SAPS] 3, Mortality Probability Model [MPM]0III). Only MPM0III includes resuscitation status as a predictor. Methods: We assessed the discrimination, calibration, and overall performance of the models in 2,596 patients in three ICUs at our tertiary referral center in 2006. For APACHE and SAPS, the analyses were repeated with and without inclusion of resuscitation status as a predictor variable. Results: Of the 2,596 patients studied, 283 (10.9%) died before hospital discharge. The areas under the curve (95% CI) of the models for prediction of hospital mortality were 0.868 (0.854-0.880), 0.861 (0.847-0.874), 0.801 (0.785-0.816), and 0.721 (0.704-0.738) for APACHE III, APACHE IV, SAPS 3, and MPM0III, respectively. The Hosmer-Lemeshow statistics for the models were 33.7, 31.0, 36.6, and 21.8 for APACHE III, APACHE IV, SAPS 3, and MPM0III, respectively. Each of the Hosmer-Lemeshow statistics generated P values < .05, indicating poor calibration. Brier scores for the models were 0.0771, 0.0749, 0.0890, and 0.0932, respectively. There were no significant differences between the discriminative ability or the calibration of APACHE or SAPS with and without “do not resuscitate” status. Conclusions: APACHE III and IV had similar discriminatory capability and both were better than SAPS 3, which was better than MPM0III. The calibrations of the models studied were poor. Overall, models with more predictor variables performed better than those with fewer. The addition of resuscitation status did not improve APACHE III or IV or SAPS 3 prediction. PMID:22499827
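The calibration and overall-performance measures named above (the Hosmer-Lemeshow statistic and the Brier score) can be sketched in a few lines of Python; the version below uses simulated predicted probabilities and outcomes, not the study's data, and a simple decile-based grouping.

```python
# A hedged sketch of Brier score and decile Hosmer-Lemeshow calibration checks.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Decile Hosmer-Lemeshow chi-square and its p-value."""
    order = np.argsort(p)
    chi_sq = 0.0
    for idx in np.array_split(order, groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        chi_sq += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return chi_sq, chi2.sf(chi_sq, groups - 2)

rng = np.random.default_rng(2)
p = rng.uniform(0.01, 0.9, size=2000)            # simulated predicted mortality risk
y = rng.binomial(1, p)                           # simulated observed outcomes

brier = np.mean((p - y) ** 2)
hl, p_value = hosmer_lemeshow(y, p)
print(f"Brier score = {brier:.4f}, Hosmer-Lemeshow chi2 = {hl:.1f} (p = {p_value:.2f})")
```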
Su, Yingying; Wang, Miao; Liu, Yifei; Ye, Hong; Gao, Daiquan; Chen, Weibi; Zhang, Yunzhou; Zhang, Yan
2014-12-01
This study aimed to construct and assess a module-modified Acute Physiology and Chronic Health Evaluation (MM-APACHE) II model, based on the disease-category-modified APACHE (DCM-APACHE) II model, for predicting mortality more accurately in neuro-intensive care units (N-ICUs). In total, 1686 patients entered this prospective study. Acute Physiology and Chronic Health Evaluation (APACHE) II scores of all patients on admission and worst 24-, 48-, and 72-hour scores were obtained. Neurological diagnosis on admission was classified into five categories: cerebral infarction, intracranial hemorrhage, neurological infection, spinal neuromuscular (SNM) disease, and other neurological diseases. The APACHE II scores of cerebral infarction, intracranial hemorrhage, and neurological infection patients were used for building the MM-APACHE II model. There were 1386 cases of cerebral infarction, intracranial hemorrhage, and neurological infection. Logistic regression showed that the 72-hour APACHE II score (Wald = 173.04, P < 0.001) and disease classification (Wald = 12.51, P = 0.02) were important in forecasting hospital mortality. The MM-APACHE II model, built on the variables of the 72-hour APACHE II score and disease category, had good discrimination (area under the receiver operating characteristic curve (AU-ROC) = 0.830) and calibration (χ2 = 12.518, P = 0.20), and was better than the Knaus APACHE II model (AU-ROC = 0.778). The APACHE II severity of disease classification system cannot provide an accurate prognosis for all kinds of diseases. An MM-APACHE II model can accurately predict hospital mortality for cerebral infarction, intracranial hemorrhage, and neurological infection patients in the N-ICU.
MzJava: An open source library for mass spectrometry data processing.
Horlacher, Oliver; Nikitin, Frederic; Alocci, Davide; Mariethoz, Julien; Müller, Markus; Lisacek, Frederique
2015-11-03
Mass spectrometry (MS) is a widely used and evolving technique for the high-throughput identification of molecules in biological samples. The need for sharing and reuse of code among bioinformaticians working with MS data prompted the design and implementation of MzJava, an open-source Java Application Programming Interface (API) for MS-related data processing. MzJava provides data structures and algorithms for representing and processing mass spectra and their associated biological molecules, such as metabolites, glycans and peptides. MzJava includes functionality to perform mass calculation, peak processing (e.g. centroiding, filtering, transforming), spectrum alignment and clustering, protein digestion, fragmentation of peptides and glycans as well as scoring functions for spectrum-spectrum and peptide/glycan-spectrum matches. For data import and export, MzJava implements readers and writers for commonly used data formats. Many classes include support for the Hadoop MapReduce (hadoop.apache.org) and Apache Spark (spark.apache.org) cluster-computing frameworks. The library has been developed applying best practices of software engineering. To ensure that MzJava contains code that is correct and easy to use, the library's API was carefully designed and thoroughly tested. MzJava is an open-source project distributed under the AGPL v3.0 licence. MzJava requires Java 1.7 or higher. Binaries, source code and documentation can be downloaded from http://mzjava.expasy.org and https://bitbucket.org/sib-pig/mzjava. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
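MzJava itself is a Java API, so the sketch below is not its interface; it is only a small Python illustration, under simplified assumptions, of the kind of peak-centroiding step the abstract mentions: profile-mode intensities are grouped around local maxima and reduced to intensity-weighted centroid m/z values.

```python
# Toy centroiding of a profile-mode spectrum (illustration only, not MzJava's API).
import numpy as np

def centroid(mz, intensity, window=2):
    """Reduce profile peaks to intensity-weighted centroids around local maxima."""
    peaks = []
    for i in range(window, len(mz) - window):
        neighborhood = intensity[i - window:i + window + 1]
        if intensity[i] == neighborhood.max() and intensity[i] > 0:
            weights = neighborhood
            peaks.append((np.average(mz[i - window:i + window + 1], weights=weights),
                          weights.sum()))
    return peaks

mz = np.linspace(100.0, 100.5, 50)
intensity = np.exp(-((mz - 100.25) ** 2) / 0.001) * 1e4  # one synthetic Gaussian peak
print(centroid(mz, intensity))
```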
ScriptingRT: A Software Library for Collecting Response Latencies in Online Studies of Cognition
Schubert, Thomas W.; Murteira, Carla; Collins, Elizabeth C.; Lopes, Diniz
2013-01-01
ScriptingRT is a new open source tool to collect response latencies in online studies of human cognition. ScriptingRT studies run as Flash applets in enabled browsers. ScriptingRT provides the building blocks of response latency studies, which are then combined with generic Apache Flex programming. Six studies evaluate the performance of ScriptingRT empirically. Studies 1–3 use specialized hardware to measure variance of response time measurement and stimulus presentation timing. Studies 4–6 implement a Stroop paradigm and run it both online and in the laboratory, comparing ScriptingRT to other response latency software. Altogether, the studies show that Flash programs developed in ScriptingRT show a small lag and an increased variance in response latencies. However, this did not significantly influence measured effects: the Stroop effect was reliably replicated in all studies, and the effects found did not depend on the software used. We conclude that ScriptingRT can be used to test response latency effects online. PMID:23805326
Cameo: A Python Library for Computer Aided Metabolic Engineering and Optimization of Cell Factories.
Cardoso, João G R; Jensen, Kristian; Lieven, Christian; Lærke Hansen, Anne Sofie; Galkina, Svetlana; Beber, Moritz; Özdemir, Emre; Herrgård, Markus J; Redestig, Henning; Sonnenschein, Nikolaus
2018-04-20
Computational systems biology methods enable rational design of cell factories on a genome-scale and thus accelerate the engineering of cells for the production of valuable chemicals and proteins. Unfortunately, the majority of these methods' implementations are either not published, rely on proprietary software, or do not provide documented interfaces, which has precluded their mainstream adoption in the field. In this work we present cameo, a platform-independent software that enables in silico design of cell factories and targets both experienced modelers as well as users new to the field. It is written in Python and implements state-of-the-art methods for enumerating and prioritizing knockout, knock-in, overexpression, and down-regulation strategies and combinations thereof. Cameo is an open source software project and is freely available under the Apache License 2.0. A dedicated Web site including documentation, examples, and installation instructions can be found at http://cameo.bio . Users can also give cameo a try at http://try.cameo.bio .
Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard
2018-06-01
Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image analysis workflows for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks .
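As a small, hedged illustration of the sort of workflow these notebooks document (the file paths are placeholders and the filter choice is arbitrary, not taken from the paper), SimpleITK's Python interface lets a short script read an image, segment it, and hand the result to NumPy for further analysis:

```python
# Minimal SimpleITK sketch: read, smooth, threshold, inspect as a NumPy array.
# "input.nii.gz" is a placeholder path, not data from the paper.
import SimpleITK as sitk

image = sitk.ReadImage("input.nii.gz")            # any ITK-supported format
smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)
mask = sitk.OtsuThreshold(smoothed, 0, 1)         # background=0, foreground=1

array = sitk.GetArrayFromImage(mask)              # z, y, x ordering
print("foreground voxels:", int(array.sum()))
sitk.WriteImage(mask, "mask.nii.gz")
```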
77 FR 51475 - Safety Zone; Apache Pier Labor Day Fireworks; Myrtle Beach, SC
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-24
...-AA00 Safety Zone; Apache Pier Labor Day Fireworks; Myrtle Beach, SC AGENCY: Coast Guard, DHS. ACTION... Atlantic Ocean in the vicinity of Apache Pier in Myrtle Beach, SC, during the Labor Day fireworks... [[Page 51476
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 3 2010-07-01 2010-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 3 2013-07-01 2013-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 3 2014-07-01 2014-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 3 2011-07-01 2011-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 3 2012-07-01 2012-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
Engine's Running, But Where's the Fuel?
NASA Astrophysics Data System (ADS)
2006-01-01
Astronomers have found a relatively tiny galaxy whose black-hole-powered "central engine" is pouring out energy at a rate equal to that of much larger galaxies, and they're wondering how it manages to do so. The astronomers used the National Science Foundation's Very Large Array (VLA) radio telescope and optical telescopes at the Apache Point Observatory to study a galaxy dubbed J170902+641728, more than a billion light-years from Earth. [Image: The Very Large Array. Credit: NRAO/AUI/NSF] "This thing looks like a quasar in VLA images, but quasars come in big galaxies, not little ones like this," said Neal Miller, an astronomer with the National Radio Astronomy Observatory. In visible-light images, the galaxy is lost in the glare from the bright central engine, but those images place strong limits on the galaxy's size, Miller explained. Miller and Kurt Anderson of New Mexico State University presented their findings to the American Astronomical Society's meeting in Washington, DC. Most galaxies have black holes at their centers. The black hole, a concentration of mass whose gravity is so strong that not even light can escape it, can draw material into itself from the surrounding galaxy. If the black hole has gas or stars to "eat," that process generates large amounts of energy as the infalling gas is compressed and heated to high temperatures. This usually is seen in young galaxies, massive galaxies, or in galaxies that have experienced close encounters with companions, stirring up the material and sending it close enough to the black hole to be gobbled up. The black hole in J170902+641728 is about a million times more massive than the Sun, the astronomers say. Their images show that the galaxy can be no larger than about 2,000 light-years across. Our Milky Way Galaxy is about 100,000 light-years across. "There are other galaxies that are likely to be the same size as this one that have black holes of similar mass. However, their black holes are quiet -- they're not putting out the large amounts of energy we see in this one. We're left to wonder just why this one is so active," Miller said. Answering that question may help astronomers better understand how galaxies and their central black holes are formed. "This galaxy is a rare find -- a tiny galaxy that is still building up the mass of its black hole. It's exciting to find an object that can help us understand this important aspect of galaxy evolution," Miller said. J170902+641728 is part of a cluster of galaxies that the scientists have studied with the VLA, with the 3.5-meter telescope at Apache Point Observatory, and with the Sloan Digital Sky Survey telescope at Apache Point. All these telescopes are in New Mexico. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. Apache Point Observatory is a facility of the Astrophysical Research Consortium, which also manages the Sloan Digital Sky Survey.
Construction of Database for Pulsating Variable Stars
NASA Astrophysics Data System (ADS)
Chen, B. Q.; Yang, M.; Jiang, B. W.
2011-07-01
A database for the pulsating variable stars is constructed for Chinese astronomers to study the variable stars conveniently. The database includes about 230000 variable stars in the Galactic bulge, LMC and SMC observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects at present. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and the light curve in the database through the right ascension and declination of the object. More data will be incorporated into the database.
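The abstract describes a LAMP stack whose web page queries photometry by right ascension and declination. A minimal sketch of such a positional query (with an entirely hypothetical table layout, since the paper does not give its schema, and sqlite3 standing in for the MySQL/PHP stack only to keep the example self-contained) could look like this:

```python
# Hypothetical schema and row: a box search around (ra, dec), as the web page might issue.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE variables (
    star_id TEXT, ra REAL, dec REAL, period_days REAL, source TEXT)""")
conn.execute("INSERT INTO variables VALUES ('OGLE-LMC-CEP-0001', 80.47, -68.78, 3.12, 'OGLE')")

def box_search(ra, dec, radius_deg=0.1):
    """Return variable stars inside a simple RA/Dec box around the requested position."""
    cur = conn.execute(
        "SELECT star_id, ra, dec, period_days, source FROM variables "
        "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra - radius_deg, ra + radius_deg, dec - radius_deg, dec + radius_deg))
    return cur.fetchall()

print(box_search(80.5, -68.8))
```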
Biology and distribution of Lutzomyia apache as it relates to VSV
USDA-ARS?s Scientific Manuscript database
Phlebotomine sand flies are vectors of bacteria, parasites, and viruses. Lutzomyia apache was incriminated as a vector of vesicular stomatitis viruses(VSV)due to overlapping ranges of the sand fly and outbreaks of VSV. I report on newly discovered populations of L. apache in Wyoming from Albany and ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... DEPARTMENT OF THE INTERIOR National Park Service [NPS-WASO-NAGPRA-12186; 2200-1100-665] Notice of Inventory Completion: U.S. Department of Agriculture, Forest Service, Apache-Sitgreaves National Forests.... ACTION: Notice. SUMMARY: The U.S. Department of Agriculture (USDA), Forest Service, Apache-Sitgreaves...
75 FR 57290 - Notice of Inventory Completion: University of Colorado Museum, Boulder, CO
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-20
...; Winnemucca Indian Colony of Nevada; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona... of Oklahoma; Susanville Indian Rancheria, California; and Yavapai-Apache Nation of the Camp Verde...; Winnemucca Indian Colony of Nevada; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona...
KinderApache Song and Dance Project.
ERIC Educational Resources Information Center
Shanklin, M. Trevor; Paciotto, Carla; Prater, Greg
This paper describes activities and evaluation of the KinderApache Song and Dance Project, piloted in a kindergarten class in Cedar Creek (Arizona) on the White Mountain Apache Reservation. Introducing Native-language song and dance in kindergarten could help foster a sense of community and cultural pride and greater awareness of traditional…
75 FR 68607 - BP Canada Energy Marketing Corp. Apache Corporation; Notice for Temporary Waivers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RP11-1479-000] BP Canada Energy Marketing Corp. Apache Corporation; Notice for Temporary Waivers November 1, 2010. Take notice that on October 29, 2010, BP Canada Energy Marketing Corp. and Apache Corporation filed with the...
Escape from Albuquerque: An Apache Memorate.
ERIC Educational Resources Information Center
Greenfeld, Philip J.
2001-01-01
Clarence Hawkins, a White Mountain Apache, escaped from the Albuquerque Indian School around 1920. His 300-mile trip home, made with two other boys, exemplifies the reaction of many Indian youths to the American government's plans for cultural assimilation. The tale is told in the form of traditional Apache narrative. (TD)
ERIC Educational Resources Information Center
Arnold, Adele R.
Among the Native Americans, few tribes were as warlike as the Apaches of the Southwest. The courage and ferocity of Apache warriors like Geronimo, Cochise, Victorio, and Mangas Coloradas is legendary. Based on a true story, this book is about an Apache boy who was captured by an enemy tribe and sold to a white man. Carlos Gentile, a photographer…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott... Tribe of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian... Camp Verde Indian Reservation, Arizona; Yavapai-Prescott Tribe of the Yavapai Reservation, Arizona; and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott... of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation... Camp Verde Indian Reservation, Arizona; Yavapai-Prescott Tribe of the Yavapai Reservation, Arizona; and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott...; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; and Yavapai-Prescott Tribe of the... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott...
Go-Gii-Ya [A Jicarilla Apache Religious Celebration].
ERIC Educational Resources Information Center
Pesata, Levi; And Others
Developed by utilizing only Jicarilla Apache people as resources to preserve the authenticity of the material and information, this booklet presents information on the Jicarilla Apache celebration of "Go-gii-ya". "Go-gii-ya" is a religious feast and ceremony held annually over a three-day period which climaxes on the fifteenth…
Evidence of sexually dimorphic introgression in Pinaleno Mountain Apache trout
Porath, M.T.; Nielsen, J.L.
2003-01-01
The high-elevation headwater streams of the Pinaleno Mountains support small populations of threatened Apache trout Oncorhynchus apache that were stocked following the chemical removal of nonnative salmonids in the 1960s. A fisheries survey to assess population composition, growth, and size structure confirmed angler reports of infrequent occurrences of Oncorhynchus spp. exhibiting the external morphological characteristics of both Apache trout and rainbow trout O. mykiss. Nonlethal tissue samples were collected from 50 individuals in the headwaters of each stream. Mitochondrial DNA (mtDNA) sequencing and amplification of nuclear microsatellite loci were used to determine the levels of genetic introgression by rainbow trout in Apache trout populations at these locations. Sexually dimorphic introgression from the spawning of male rainbow trout with female Apache trout was detected using mtDNA and microsatellites. Estimates of the degree of hybridization based on three microsatellite loci were 10-88%. The use of nonlethal DNA genetic analyses can supplement information obtained from standard survey methods and be useful in assessing the relative importance of small and sensitive populations with a history of nonnative introductions.
Determination of habitat requirements for Apache Trout
Petre, Sally J.; Bonar, Scott A.
2017-01-01
The Apache Trout Oncorhynchus apache, a salmonid endemic to east-central Arizona, is currently listed as threatened under the U.S. Endangered Species Act. Establishing and maintaining recovery streams for Apache Trout and other endemic species requires determination of their specific habitat requirements. We built upon previous studies of Apache Trout habitat by defining both stream-specific and generalized optimal and suitable ranges of habitat criteria in three streams located in the White Mountains of Arizona. Habitat criteria were measured at the time thought to be most limiting to juvenile and adult life stages, the summer base flow period. Based on the combined results from three streams, we found that Apache Trout use relatively deep (optimal range = 0.15–0.32 m; suitable range = 0.032–0.470 m) pools with slow stream velocities (suitable range = 0.00–0.22 m/s), gravel or smaller substrate (suitable range = 0.13–2.0 [Wentworth scale]), overhead cover (suitable range = 26–88%), and instream cover (large woody debris and undercut banks were occupied at higher rates than other instream cover types). Fish were captured at cool to moderate temperatures (suitable range = 10.4–21.1°C) in streams with relatively low maximum seasonal temperatures (optimal range = 20.1–22.9°C; suitable range = 17.1–25.9°C). Multiple logistic regression generally confirmed the importance of these variables for predicting the presence of Apache Trout. All measured variables except mean velocity were significant predictors in our model. Understanding habitat needs is necessary in managing for persistence, recolonization, and recruitment of Apache Trout. Management strategies such as fencing areas to restrict ungulate use and grazing and planting native riparian vegetation might favor Apache Trout persistence and recolonization by providing overhead cover and large woody debris to form pools and instream cover, shading streams and lowering temperatures.
Johnson, Z. P.; Eady, R. D.; Ahmad, S. F.; Agravat, S.; Morris, T; Else, J; Lank, S. M.; Wiseman, R. W.; O’Connor, D. H.; Penedo, M. C. T.; Larsen, C. P.
2012-01-01
Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo. PMID:22080300
Johnson, Z P; Eady, R D; Ahmad, S F; Agravat, S; Morris, T; Else, J; Lank, S M; Wiseman, R W; O'Connor, D H; Penedo, M C T; Larsen, C P; Kean, L S
2012-04-01
Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo , user name: imsdemo7@gmail.com and password: imsdemo.
JUDE: An Ultraviolet Imaging Telescope pipeline
NASA Astrophysics Data System (ADS)
Murthy, J.; Rahna, P. T.; Sutaria, F.; Safonova, M.; Gudennavar, S. B.; Bubbly, S. G.
2017-07-01
The Ultraviolet Imaging Telescope (UVIT) was launched as part of the multi-wavelength Indian AstroSat mission on 28 September, 2015 into a low Earth orbit. A 6-month performance verification (PV) phase ended in March 2016, and the instrument is now in the general observing phase. UVIT operates in three channels: visible, near-ultraviolet (NUV) and far-ultraviolet (FUV), each with a choice of broad and narrow band filters, and has NUV and FUV gratings for low-resolution spectroscopy. We have written a software package (JUDE) to convert the Level 1 data from UVIT into scientifically useful photon lists and images. The routines are written in the GNU Data Language (GDL) and are compatible with the IDL software package. We use these programs in our own scientific work, and will continue to update the programs as we gain better understanding of the UVIT instrument and its performance. We have released JUDE under an Apache License.
A Photographic Essay of Apache Chiefs and Warriors, Volume 2-Part B.
ERIC Educational Resources Information Center
Barkan, Gerald; Jacobs, Ben
As part of a series designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay describing forts, Indian agents, and Apache chiefs, warriors, and scouts of the 19th century. Accompanying each picture is a brief historical-biographical narrative. Focus is on Apache resistance to the reservation.…
ERIC Educational Resources Information Center
Hammond, Vanessa Lea; Watson, P. J.; O'Leary, Brian J.; Cothran, D. Lisa
2009-01-01
Hopelessness is central to prominent mental health problems within American Indian (AI) communities. Apaches living on a reservation in Arizona responded to diverse expressions of hope along with Hopelessness, Personal Self-Esteem, and Collective Self-Esteem scales. An Apache Hopefulness Scale expressed five themes of hope and correlated…
ERIC Educational Resources Information Center
Cwik, Mary F.; Barlow, Allison; Tingey, Lauren; Larzelere-Hinton, Francene; Goklish, Novalene; Walkup, John T.
2011-01-01
Objective: To describe characteristics and correlates of nonsuicidal self-injury (NSSI) among the White Mountain Apache Tribe. NSSI has not been studied before in American Indian samples despite associated risks for suicide, which disproportionately affect American Indian youth. Method: Apache case managers collected data through a tribally…
ERIC Educational Resources Information Center
Guidera, Stan; MacPherson, D. Scot
2008-01-01
This paper presents the results of a study that was conducted to identify and document student perceptions of the effectiveness of computer modeling software introduced in a design foundations course that had previously utilized only conventional manually-produced representation techniques. Rather than attempt to utilize a production-oriented CAD…
Khwannimit, Bodin; Bhurayanontachai, Rungsun; Vattanavanit, Veerapong
2017-06-01
Recently, the Sepsis Severity Score (SSS) was constructed to predict mortality in sepsis patients. The aim of this study was to compare performance of the SSS with the Acute Physiology and Chronic Health Evaluation (APACHE) II-IV, Simplified Acute Physiology Score (SAPS) II, and SAPS 3 scores in predicting hospital outcome in sepsis patients. A retrospective analysis was conducted in the medical intensive care unit of a tertiary university hospital. A total of 913 patients were enrolled; 476 of these patients (52.1%) had septic shock. The median SSS was 80 (range 20-137). The SSS presented good discrimination with an area under the receiver operating characteristic curve (AUC) of 0.892. However, the AUC of the SSS did not differ significantly from that of APACHE II (P = 0.07), SAPS II (P = 0.06), and SAPS 3 (P = 0.11). The APACHE IV score showed the best discrimination, with an AUC of 0.948, and the best overall performance, with a Brier score of 0.096. The AUC of the APACHE IV score was statistically greater than that of the SSS, APACHE II, SAPS II, and SAPS 3 (P < 0.0001 for all) and APACHE III (P = 0.0002). The calibration of all scores was poor, with Hosmer-Lemeshow goodness-of-fit H test P < 0.05. The SSS provided as good discrimination as the APACHE II, SAPS II, and SAPS 3 scores. However, the APACHE IV score had the best discrimination and overall performance in our sepsis patients. The SSS needs to be adapted and modified with new parameters to improve its performance.
ERIC Educational Resources Information Center
Gonzalez-Santin, Edwin, Comp.
This curriculum manual provides 8 days of training for child protective services (CPS) personnel (social workers and administrators) working in the White Mountain Apache tribal community. Each of the first seven units in the manual contains a brief description of contents, course objectives, time required, key concepts, possible discussion topics,…
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface to simplify distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications, accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as the web server, PHP as the server-side scripting language, and OpenPBS as the queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.
Huang, Hailiang; Tata, Sandeep; Prill, Robert J
2013-01-01
Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp
ERIC Educational Resources Information Center
Axelrod, Melissa; de Garcia, Jule Gomez; Lachler, Jordan
2003-01-01
Reports on the progress of a project to produce a dictionary of the Jicarilla Apache language. Jicarilla, an Eastern Apachean language is spoken on the Jicarilla Apache reservation in Northern New Mexico. The project has revealed much about the role of literacy in language standardization and in speaker empowerment. Suggests that many parallels…
A Photographic Essay of Apache Children in Early Times, Volume 2-Part C.
ERIC Educational Resources Information Center
Thompson, Doris; Jacobs, Ben
As part of a series of guides designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay on life of the Apache child from 1880 to the early 20th century. Each of the 12 photographs is accompanied by an historical narrative which describes one or more cultural aspects of Apache childhood.…
Assessment Environment for Complex Systems Software Guide
NASA Technical Reports Server (NTRS)
2013-01-01
This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.
Software Process Automation: Interviews, Survey, and Workshop Results.
1997-10-01
NASA Astrophysics Data System (ADS)
Chugh, Saryu; Arivu Selvan, K.; Nadesh, RK
2017-11-01
Numerous harmful factors affect the working of the human body, such as hypertension, smoking, obesity, and inappropriate medication use, and these lead to many different diseases such as diabetes, thyroid disorders, strokes, and coronary disease. Poor environmental conditions are also a contributing cause of coronary disease. Apache Spark is built for workloads that require gathering and processing large amounts of data, and it is well suited to analyzing data-intensive applications because it is fast, relying on built-in in-memory processing. Apache Spark runs in a distributed environment and partitions the data into batches, giving a high rate of throughput. The use of data mining techniques in the diagnosis of coronary disease has been examined thoroughly, showing acceptable levels of precision. Decision trees, neural networks, and the gradient boosting algorithm are among the Apache Spark capabilities that help in analyzing the collected information.
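As a hedged sketch of the kind of Spark-based mining pipeline the abstract alludes to (the CSV path and feature column names are assumptions, not taken from the paper, and the columns are assumed numeric), a gradient-boosted-tree classifier for heart disease records could be assembled with PySpark's ML library:

```python
# Sketch only: file path and feature columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("heart-disease-sketch").getOrCreate()
df = spark.read.csv("heart.csv", header=True, inferSchema=True)  # binary label column: "disease"

features = ["age", "blood_pressure", "cholesterol", "bmi", "smoker"]
assembled = VectorAssembler(inputCols=features, outputCol="features").transform(df)

train, test = assembled.randomSplit([0.8, 0.2], seed=42)
model = GBTClassifier(labelCol="disease", featuresCol="features", maxIter=50).fit(train)

auc = BinaryClassificationEvaluator(labelCol="disease").evaluate(model.transform(test))
print("Test AUC:", auc)
spark.stop()
```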
ERIC Educational Resources Information Center
Ove, Robert S.; Stockel, H. Henrietta
In 1948, a young and naive Robert Ove arrived at Whitetail, on the Mescalero Apache Reservation, to teach at the Bureau of Indian Affairs day school. Living there were the Chiricahua Apaches--descendants of Geronimo and the survivors of nearly 30 years of incarceration by the U.S. government. With help from Indian historian H. Henrietta Stockel,…
Nakhoda, Shazia; Zimrin, Ann B; Baer, Maria R; Law, Jennie Y
2017-04-01
Hypertriglyceridemic (HTG) pancreatitis carries significant morbidity and mortality and often requires intensive care unit (ICU) admission. Therapeutic plasma exchange (TPE) rapidly lowers serum triglyceride (TG) levels. However, evidence supporting TPE for HTG pancreatitis is lacking. Ten patients admitted to the ICU for HTG pancreatitis underwent TPE at our institution from 2005-2015. We retrospectively calculated the Acute Physiology and Chronic Health Evaluation II (APACHE II) score at the time of initial TPE and again after the final TPE session to assess the impact of triglyceride apheresis on morbidity and mortality associated with HTG pancreatitis. All 10 patients had rapid reduction in TG level after TPE, but only 5 had improvement in their APACHE II score. The median APACHE II score decreased from 19 to 17 after TPE, corresponding to an 8% and 9% decrease in median predicted non-operative and post-operative mortality, respectively. The APACHE II score did not differ statistically before and after TPE implementation in our patient group (p=0.39). TPE is a clinically useful tool to rapidly lower TG levels, but its impact on mortality of HTG pancreatitis as assessed by the APACHE II score remains uncertain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Use of APACHE II and SAPS II to predict mortality for hemorrhagic and ischemic stroke patients.
Moon, Byeong Hoo; Park, Sang Kyu; Jang, Dong Kyu; Jang, Kyoung Sool; Kim, Jong Tae; Han, Yong Min
2015-01-01
We studied the applicability of the Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in patients admitted to the intensive care unit (ICU) with acute stroke and compared the results with the Glasgow Coma Scale (GCS) and National Institutes of Health Stroke Scale (NIHSS). We also conducted a comparative study of accuracy for predicting hemorrhagic and ischemic stroke mortality. Between January 2011 and December 2012, ischemic or hemorrhagic stroke patients admitted to the ICU were included in the study. APACHE II and SAPS II-predicted mortalities were compared using a calibration curve, the Hosmer-Lemeshow goodness-of-fit test, and the receiver operating characteristic (ROC) curve, and the results were compared with the GCS and NIHSS. Overall 498 patients were included in this study. The observed mortality was 26.3%, whereas APACHE II and SAPS II-predicted mortalities were 35.12% and 35.34%, respectively. The mean GCS and NIHSS scores were 9.43 and 21.63, respectively. The calibration curve was close to the line of perfect prediction. The ROC curve showed a slightly better prediction of mortality for APACHE II in hemorrhagic stroke patients and SAPS II in ischemic stroke patients. The GCS and NIHSS were inferior in predicting mortality in both patient groups. Although both the APACHE II and SAPS II systems can be used to measure performance in the neurosurgical ICU setting, the accuracy of APACHE II in hemorrhagic stroke patients and SAPS II in ischemic stroke patients was superior. Copyright © 2014 Elsevier Ltd. All rights reserved.
Donahoe, Laura; McDonald, Ellen; Kho, Michelle E; Maclennan, Margaret; Stratford, Paul W; Cook, Deborah J
2009-01-01
Given their clinical, research, and administrative purposes, scores on the Acute Physiology and Chronic Health Evaluation (APACHE) II should be reliable, whether calculated by health care personnel or a clinical information system. To determine reliability of APACHE II scores calculated by a clinical information system and by health care personnel before and after a multifaceted quality improvement intervention. APACHE II scores of 37 consecutive patients admitted to a closed, 15-bed, university-affiliated intensive care unit were collected by a research coordinator, a database clerk, and a clinical information system. After a quality improvement intervention focused on health care personnel and the clinical information system, the same methods were used to collect data on 32 consecutive patients. The research coordinator and the clerk did not know each other's scores or the information system's score. The data analyst did not know the source of the scores until analysis was complete. APACHE II scores obtained by the clerk and the research coordinator were highly reliable (intraclass correlation coefficient, 0.88 before vs 0.80 after intervention; P = .25). No significant changes were detected after the intervention; however, compared with scores of the research coordinator, the overall reliability of APACHE II scores calculated by the clinical information system improved (intraclass correlation coefficient, 0.24 before intervention vs 0.91 after intervention, P < .001). After completion of a quality improvement intervention, health care personnel and a computerized clinical information system calculated sufficiently reliable APACHE II scores for clinical, research, and administrative purposes.
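Reliability in this study is summarized with the intraclass correlation coefficient between two sources of APACHE II scores. A small sketch of that computation (synthetic two-rater scores, using the pingouin package rather than whatever software the study actually used) is:

```python
# Synthetic two-rater reliability check; not the study's data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
true_score = rng.integers(5, 35, size=37)   # 37 patients, as in the pre-intervention sample

long = pd.DataFrame({
    "patient": np.tile(np.arange(37), 2),
    "rater": ["clerk"] * 37 + ["information_system"] * 37,
    "apache2": np.concatenate([true_score + rng.integers(-2, 3, 37),
                               true_score + rng.integers(-2, 3, 37)]),
})

icc = pg.intraclass_corr(data=long, targets="patient", raters="rater", ratings="apache2")
print(icc[["Type", "ICC", "CI95%"]])
```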
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
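First-level customization of this kind refits a logistic model in which the covariate is the logit of the original model's predicted probability, so that the intercept and slope are tuned to the local case-mix. A small sketch of that step, with synthetic probabilities standing in for real APACHE output (not APACHE's actual equations or the study's data), is:

```python
# First-level customization sketch: recalibrate predicted probabilities on local data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
p_apache = rng.uniform(0.02, 0.8, size=2795)           # original model's predicted risks
died = rng.binomial(1, np.clip(p_apache * 0.8, 0, 1))  # local outcomes, different case-mix

logit = np.log(p_apache / (1 - p_apache))
X = sm.add_constant(logit)
fit = sm.Logit(died, X).fit(disp=False)                # new intercept and slope

p_customized = fit.predict(X)                          # recalibrated probabilities
print(fit.params)                                      # [intercept, slope]
```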
Research on construction settlement of different soft foundation under vacuum preloading condition
NASA Astrophysics Data System (ADS)
Bin, LI; Changquan, YIN
2017-11-01
Vacuum preloading, rigid foundations, raft foundations, and piled raft foundations are the forms most commonly used in soft foundation treatment. PLAXIS is a large geotechnical finite element software package that can simulate the influence of different foundation forms. After vacuum preloading treatment the foundation settlement is reduced by 80%; with a raft foundation the settlement is reduced by 60%; and with a piled raft foundation the settlement is reduced by 40%. It is suggested that vacuum preloading be used to treat the building foundation; if time is limited, the piled raft foundation is a better choice of foundation form than the others.
Observations of the larval stages of Diceroprocta apache Davis (Homoptera: Tibicinidae)
Ellingson, A.R.; Andersen, D.C.; Kondratieff, B.C.
2002-01-01
Diceroprocta apache Davis is a locally abundant cicada in the riparian woodlands of the southwestern United States. While its ecological importance has often been hypothesized, very little is known of its specific life history. This paper presents preliminary information on life history of D. apache from larvae collected in the field at seasonal intervals as well as a smaller number of reared specimens. Morphological development of the fore-femoral comb closely parallels growth through distinct size classes. The data indicate the presence of five larval instars in D. apache. Development times from greenhouse-reared specimens suggest a 3-4 year life span and overlapping broods were present in the field. Sex ratios among pre-emergent larvae suggest the asynchronous emergence of sexes.
A Metadata Management Framework for Collaborative Review of Science Data Products
NASA Astrophysics Data System (ADS)
Hart, A. F.; Cinquini, L.; Mattmann, C. A.; Thompson, D. R.; Wagstaff, K.; Zimdars, P. A.; Jones, D. L.; Lazio, J.; Preston, R. A.
2012-12-01
Data volumes generated by modern scientific instruments often preclude archiving the complete observational record. To compensate, science teams have developed a variety of "triage" techniques for identifying data of potential scientific interest and marking it for prioritized processing or permanent storage. This may involve multiple stages of filtering with both automated and manual components operating at different timescales. A promising approach exploits a fast, fully automated first stage followed by a more reliable offline manual review of candidate events. This hybrid approach permits a 24-hour rapid real-time response while also preserving the high accuracy of manual review. To support this type of second-level validation effort, we have developed a metadata-driven framework for the collaborative review of candidate data products. The framework consists of a metadata processing pipeline and a browser-based user interface that together provide a configurable mechanism for reviewing data products via the web, and capturing the full stack of associated metadata in a robust, searchable archive. Our system heavily leverages software from the Apache Object Oriented Data Technology (OODT) project, an open source data integration framework that facilitates the construction of scalable data systems and places a heavy emphasis on the utilization of metadata to coordinate processing activities. OODT provides a suite of core data management components for file management and metadata cataloging that form the foundation for this effort. The system has been deployed at JPL in support of the V-FASTR experiment [1], a software-based radio transient detection experiment that operates commensally at the Very Long Baseline Array (VLBA), and has a science team that is geographically distributed across several countries. Daily review of automatically flagged data is a shared responsibility for the team, and is essential to keep the project within its resource constraints. We describe the development of the platform using open source software, and discuss our experience deploying the system operationally. [1] R. B. Wayth, W. F. Brisken, A. T. Deller, W. A. Majid, D. R. Thompson, S. J. Tingay, and K. L. Wagstaff, "V-FASTR: The VLBA Fast Radio Transients Experiment," The Astrophysical Journal, vol. 735, no. 2, p. 97, 2011. Acknowledgement: This effort was supported by the Jet Propulsion Laboratory, managed by the California Institute of Technology under a contract with the National Aeronautics and Space Administration.
ProteoWizard: open source software for rapid proteomics tools development.
Kessner, Darren; Chambers, Matt; Burke, Robert; Agus, David; Mallick, Parag
2008-11-01
The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library contains readers and writers of the mzML data format, which has been written using modern C++ techniques and design principles and supports a variety of platforms with native compilers. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge, at http://proteowizard.sourceforge.net. This website also provides code examples, and documentation. It is our hope the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged.
Software Component Technologies and Space Applications
NASA Technical Reports Server (NTRS)
Batory, Don
1995-01-01
In the near future, software systems will be more reconfigurable than hardware. This will be possible through the advent of software component technologies which have been prototyped in universities and research labs. In this paper, we outline the foundations for those technologies and suggest how they might impact software for space applications.
CMS Analysis and Data Reduction with Apache Spark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutsche, Oliver; Canali, Luca; Cremer, Illia
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover, these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping in reducing the cost of ownership for the end-users. In this talk, we are presenting studies of using Apache Spark for end user data analysis. We are studying the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end-analysis up to the publication plot. Studying the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We are presenting the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. Studying the second thrust, we are presenting studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
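The data-reduction thrust described above (filtering and transforming centrally produced datasets into compact ntuples) maps naturally onto Spark DataFrame operations. A hedged sketch, with invented column names and paths rather than CMS's actual data model or the facility's real job, might look like:

```python
# Sketch of a Spark-based reduction step: column and path names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reduction-sketch").getOrCreate()

events = spark.read.parquet("hdfs:///data/events/")           # centrally produced input
selected = (events
            .filter(F.col("n_muons") >= 2)                     # event selection cuts
            .filter(F.col("missing_et") > 100.0)
            .select("run", "event", "muon_pt", "missing_et"))  # keep only analysis columns

selected.write.mode("overwrite").parquet("hdfs:///data/ntuples/dark_matter/")
spark.stop()
```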
Update on Astrometric Follow-Up at Apache Point Observatory by Adler Planetarium
NASA Astrophysics Data System (ADS)
Nault, Kristie A.; Brucker, Melissa; Hammergren, Mark
2016-10-01
We began our NEO astrometric follow-up and characterization program in 2014 Q4 using about 500 hours of observing time per year with the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory (APO). Our observing is split into 2 hour blocks approximately every other night for astrometry (this poster) and several half-nights per month for spectroscopy (see poster by M. Hammergren et al.) and light curve studies. For astrometry, we use the ARC Telescope Imaging Camera (ARCTIC) with an SDSS r filter, in 2 hour observing blocks centered around midnight. ARCTIC has a magnitude limit of V~23 in 60s, and we target 20 NEOs per session. ARCTIC has a FOV 1.57 times larger and a readout time half as long as the previous imager, SPIcam, which we used from 2014 Q4 through 2015 Q3. Targets are selected primarily from the Minor Planet Center's (MPC) NEO Confirmation Page (NEOCP), and NEA Observation Planning Aid; we also refer to JPL's What's Observable page, the Spaceguard Priority List and Faint NEOs List, and requests from other observers. To quickly adapt to changing weather and seeing conditions, we create faint, midrange, and bright target lists. Detected NEOs are measured with Astrometrica and internal software, and the astrometry is reported to the MPC. As of June 19, 2016, we have targeted 2264 NEOs, 1955 with provisional designations, 1582 of which were detected. We began observing NEOCP asteroids on January 30, 2016, and have targeted 309, 207 of which were detected. In addition, we serendipitously observed 281 moving objects, 201 of which were identified as previously known objects. This work is based on observations obtained with the Apache Point Observatory 3.5m telescope, which is owned and operated by the Astrophysical Research Consortium. We gratefully acknowledge support from NASA NEOO award NNX14AL17G and thank the University of Chicago Department of Astronomy and Astrophysics for observing time in 2014.
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF) making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
Natural language processing: an introduction.
Nadkarni, Prakash M; Ohno-Machado, Lucila; Chapman, Wendy W
2011-01-01
To provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design. This tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art. We describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.
Natural language processing: an introduction
Ohno-Machado, Lucila; Chapman, Wendy W
2011-01-01
Objectives To provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design. Target audience This tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art. Scope We describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field. PMID:21846786
Growth and survival of Apache Trout under static and fluctuating temperature regimes
Recsetar, Matthew S.; Bonar, Scott A.; Feuerbacher, Olin
2014-01-01
Increasing stream temperatures have important implications for arid-region fishes. Little is known about effects of high water temperatures that fluctuate over extended periods on Apache Trout Oncorhynchus gilae apache, a federally threatened species of southwestern USA streams. We compared survival and growth of juvenile Apache Trout held for 30 d in static temperatures (16, 19, 22, 25, and 28°C) and fluctuating diel temperatures (±3°C from 16, 19, 22 and 25°C midpoints and ±6°C from 19°C and 22°C midpoints). Lethal temperature for 50% (LT50) of the Apache Trout under static temperatures (mean [SD] = 22.8 [0.6]°C) was similar to that of ±3°C diel temperature fluctuations (23.1 [0.1]°C). Mean LT50 for the midpoint of the ±6°C fluctuations could not be calculated because survival in the two treatments (19 ± 6°C and 22 ± 6°C) was not below 50%; however, it probably was also between 22°C and 25°C because the upper limb of a ±6°C fluctuation on a 25°C midpoint is above critical thermal maximum for Apache Trout (28.5–30.4°C). Growth decreased as temperatures approached the LT50. Apache Trout can survive short-term exposure to water temperatures with daily maxima that remain below 25°C and midpoint diel temperatures below 22°C. However, median summer stream temperatures must remain below 19°C for best growth and even lower if daily fluctuations are high (≥12°C).
2001-09-01
100 miles southwest of Melrose AFR near Ruidoso, New Mexico. The Jicarilla Apache Reservation is 195 miles northwest of the range. The Comanche Tribe...of the MOAs near Ruidoso, New Mexico. The Jicarilla Apache Reservation is about 150 miles northwest of the MOAs; and the Comanche Reservation is in...and Comanche. The Mescalero Apache Reservation is located approximately 25 miles south of VRs-100/125 near Ruidoso, New Mexico. The Jicarilla
Almog, Yaniv; Perl, Yael; Novack, Victor; Galante, Ori; Klein, Moti; Pencina, Michael J.; Douvdevani, Amos
2014-01-01
Aim The aim of the current study is to assess the mortality prediction accuracy of circulating cell-free DNA (CFD) level at admission, measured by a new simplified method. Materials and Methods CFD levels were measured by a direct fluorescence assay in severe sepsis patients on intensive care unit (ICU) admission. In-hospital and/or twenty-eight day all-cause mortality was the primary outcome. Results Of 108 patients with a median APACHE II of 20, 32.4% died in hospital or by 28 days. CFD levels were higher in decedents: median 3469.0 vs. 1659 ng/ml, p<0.001. In a multivariable model, APACHE II score and CFD (quartiles) were significantly associated with mortality: odds ratios of 1.05 (p = 0.049) and 2.57 per quartile (p<0.001), respectively. The C-statistic was 0.79 for the CFD model and 0.68 for APACHE II. Integrated discrimination improvement (IDI) analyses showed that the CFD and CFD+APACHE II score models had better discriminatory ability than the APACHE II score alone. Conclusions CFD level assessed by a new, simple fluorometric assay is an accurate predictor of acute mortality among ICU patients with severe sepsis. Comparison of CFD to APACHE II score and procalcitonin (PCT) suggests that CFD has the potential to improve clinical decision making. PMID:24955978
Study on the Accident-causing of Foundation Pit Engineering
NASA Astrophysics Data System (ADS)
Shuicheng, Tian; Xinyue, Zhang; Pengfei, Yang; Longgang, Chen
2018-05-01
With the development of high-rise buildings and underground space, a large number of foundation pit projects have been undertaken. Frequent foundation pit accidents cause great losses to society, so reducing the frequency of such accidents has become one of the most urgent problems to be solved. Analysing the influencing factors of foundation pit engineering accidents and studying their causes is therefore of great significance for improving the safety management of foundation pit engineering and reducing the accident rate. Firstly, based on a literature review and questionnaires, this paper selected construction management, survey, design, construction, supervision and monitoring as research factors, and used the AHP and DEMATEL methods to analyze the weights of the influencing factors and screen indicators, in order to determine the final index system of foundation pit accident causes. Secondly, SPSS 21.0 software was used to test the reliability and validity of the collected questionnaire data, and AMOS 7.0 software was used to fit, evaluate, and interpret the structural model. Finally, the paper analysed the influencing factors of foundation pit engineering accidents, and corresponding management countermeasures and suggestions were put forward.
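As context for the AHP weighting step mentioned above, factor weights are conventionally derived from a pairwise comparison matrix via its principal eigenvector. The matrix values below are invented for illustration and are not taken from the study.

    # Illustrative AHP weighting: principal eigenvector of a pairwise comparison matrix.
    import numpy as np

    # Hypothetical 1-9 scale comparisons among three factors
    # (e.g. construction management, survey, design); not the paper's data.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(np.real(vals))
    w = np.real(vecs[:, k])
    weights = w / w.sum()                                  # normalized priority weights
    ci = (np.real(vals[k]) - len(A)) / (len(A) - 1)        # consistency index
    print(weights, ci)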
Vasilyeva, I V; Shvirev, S L; Arseniev, S B; Zarubina, T V
2013-01-01
The aim of the present study is to assess the feasibility and validity of automated calculation of the prognostic scales ISS-RTS-TRISS, PRISM, APACHE II and PTS for decision support when treating children with severe mechanical trauma. The mentioned scales are used in the Hospital Information System (HIS) MEDIALOG. The retrospective study was conducted using clinical and physiological data collected at admission and during the first 24 hours of hospitalization in 166 patients. The PRISM, APACHE II and ISS-RTS-TRISS scales were used for calculating the severity of injury and for predicting death outcomes; the PTS scale was used for evaluating the severity index only. Our research has shown that ISS-RTS-TRISS has excellent discrimination ability, the PRISM and APACHE II prognostic scales have acceptable discrimination ability, and all of them have significant calibration ability. The PTS scale has acceptable discrimination ability. It has been shown that automated calculation of the ISS-RTS-TRISS, PRISM, APACHE II and PTS scales is useful for assessing outcomes in children with severe mechanical trauma.
NASA Astrophysics Data System (ADS)
Kaur, Jagreet; Singh Mann, Kulwinder, Dr.
2018-01-01
AI in healthcare needs to deliver real, actionable, individualized insights in real time to patients and doctors in support of treatment decisions. A patient-centred platform is needed for integrating EHR data, patient data, prescriptions, monitoring, and clinical research data. This paper proposes a generic architecture for an AI-based healthcare analytics platform built on open-source technologies: Apache Beam, Apache Flink, Apache Spark, Apache NiFi, Kafka, Tachyon, GlusterFS, and the NoSQL stores Elasticsearch and Cassandra. The paper shows the importance of applying AI-based predictive and prescriptive analytics techniques in the health sector. The system will be able to extract useful knowledge that helps in decision making and medical monitoring in real time through intelligent process analysis and big data processing.
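A minimal sketch of the streaming-ingest pattern such an architecture relies on, using the kafka-python client. The topic name, broker address, message schema, and alert rule are assumptions for illustration, not details from the paper.

    # Sketch: consume vital-sign messages from Kafka and flag simple threshold alerts.
    # Topic, broker, and JSON schema are hypothetical.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "patient-vitals",                       # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    for msg in consumer:
        vitals = msg.value                      # e.g. {"patient_id": "p1", "hr": 132}
        if vitals.get("hr", 0) > 120:
            print(f"ALERT: tachycardia for patient {vitals['patient_id']}")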
NASA Astrophysics Data System (ADS)
Painter, T.; Mattmann, C. A.; Brodzik, M.; Bryant, A. C.; Goodale, C. E.; Hart, A. F.; Ramirez, P.; Rittger, K. E.; Seidel, F. C.; Zimdars, P. A.
2012-12-01
The response of the cryosphere to climate forcings largely determines Earth's climate sensitivity. However, our understanding of the strength of the simulated snow albedo feedback varies by a factor of three in the GCMs used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, mainly caused by uncertainties in snow extent and the albedo of snow-covered areas from imprecise remote sensing retrievals. Additionally, the Western US and other regions of the globe depend predominantly on snowmelt for their water supply to agriculture, industry and cities, hydroelectric power, and recreation, against rising demand from increasing population. In the mountains of the Upper Colorado River Basin, dust radiative forcing in snow shortens snow cover duration by 3-7 weeks. Extended to the entire upper basin, the 5-fold increase in dust load since the late-1800s results in a 3-week earlier peak runoff and a 5% annual loss of total runoff. The remotely sensed dynamics of snow cover duration and melt however have not been factored into hydrological modeling, operational forecasting, and policymaking. To address these deficiencies in our understanding of snow properties, we have developed and validated a suite of MODIS snow products that provide accurate fractional snow covered area and radiative forcing of dust and carbonaceous aerosols in snow. The MODIS Snow Covered Area and Grain size (MODSCAG) and MODIS Dust Radiative Forcing in Snow (MODDRFS) algorithms, developed and transferred from imaging spectroscopy techniques, leverage the complete MODIS surface reflectance spectrum. The two most critical properties for understanding snowmelt runoff and timing are the spatial and temporal distributions of snow water equivalent (SWE) and snow albedo. We have created the Airborne Snow Observatory (ASO), an imaging spectrometer and scanning LiDAR system, to quantify SWE and snow albedo, generate unprecedented knowledge of snow properties, and provide complete, robust inputs to water management models and systems of the future. In the push to better understand the physical and ecological processes of snowmelt and how they influence regional to global hydrologic and climatic cycles, these technologies and retrievals provide markedly improved detail. We have implemented a science computing facility anchored upon the open source Apache OODT data processing framework. Apache OODT provides adaptable, rapid, and effective workflow technologies that we leverage to execute 10s of thousands of MOD-DRFS and MODSCAG jobs in the Western US, Alaska, and High Asia, critical regions where snowmelt and runoff must be more accurately and precisely identified. Apache OODT also provides us data dissemination capabilities built upon the popular, open source WebDAV protocol that allow our system to disseminate over 20 TB of MOD-DRFS and MODSCAG to the decision making community. Our latest endeavor involves building out Apache OODT to support Geospatial exploration of our data, including providing a Leaflet.js based Map, Geoserver backed protocols, and seamless integration with our Apache OODT system. This framework provides the foundation for the ASO data system.
Construction of the Database for Pulsating Variable Stars
NASA Astrophysics Data System (ADS)
Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei
2012-01-01
A database for pulsating variable stars is constructed to facilitate the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, LMC and SMC observed over a roughly 10 yr period by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided for searching the photometric data and light curves in the database through the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.
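To illustrate the kind of coordinate lookup the search page performs, a box search around a given right ascension and declination can be expressed as a simple SQL query. The table and column names below are assumed for the example (the actual archive uses MySQL behind PHP; SQLite is used here only to keep the sketch self-contained).

    # Illustrative positional box search against a hypothetical "stars" table.
    import sqlite3

    conn = sqlite3.connect("variables.db")     # hypothetical local copy of the catalog
    ra, dec, radius = 81.25, -69.75, 0.05      # degrees

    rows = conn.execute(
        """SELECT star_id, ra, dec, period, mean_mag
           FROM stars
           WHERE ra  BETWEEN ? AND ?
             AND dec BETWEEN ? AND ?""",
        (ra - radius, ra + radius, dec - radius, dec + radius),
    ).fetchall()

    for r in rows:
        print(r)
    conn.close()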
T-Check in Technologies for Interoperability: Web Services and Security--Single Sign-On
2007-12-01
following tools: • Apache Tomcat 6.0—a Java Servlet container to host the Web services and a simple Web client application [Apache 2007a] • Apache Axis...Eclipse. Eclipse – an open development platform. http://www.eclipse.org/ (2007) [Hunter 2001] Hunter, Jason. Java Servlet Programming, 2nd Edition...Citation SAML 1.1 Java Toolkit SAML Ping Identity’s SAML-1.1 implementation [SourceID 2006] OpenSAML SAML An open source implementation of SAML 1.1
2009-12-01
forward-looking infrared FOV field-of-view HDU helmet display unit HMD helmet-mounted display IHADSS Integrated Helmet and Display...monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display ( HMD ) in the British Army’s Apache AH Mk 1 attack helicopter has any...Integrated Helmet and Display Sighting System, IHADSS, Helmet-mounted display, HMD , Apache helicopter, Visual performance UNCLAS UNCLAS UNCLAS SAR 96
Apache sharply expands western Egypt acreage position
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-02-10
Apache Corp. became Egypt's second largest acreage holder with acquisition of Mobil Corp.'s nonoperating interests in three western desert exploration concessions covering a combined 7.7 million gross acres. Apache assumed a 50% contractor interest in the Repsol SA operated East Bahariya concession, a 33% contractor interest in the Repsol operated West Mediterranean Block 1 concession, and a 24% contractor interest in the Royal Dutch/Shell operated Northeast Abu Gharadig concession. The concessions carry a total drilling obligation of 11 wells over the next 3 years.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-31
...NMFS received an application from Apache Alaska Corporation (Apache) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to a proposed 3D seismic survey in Cook Inlet, Alaska, between March 1, 2014, and December 31, 2014. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS requests comments on its proposal to issue an IHA to Apache to take, by Level B harassment only, five species of marine mammals during the specified activity.
Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System
2004-09-01
docs.us.dell.com/support/edocs/systems/pe1650/ en /it/index.htm> (20 August 2004) “HOWTO: Installing Web Services with Linux /Tomcat/Apache/Struts...configured as host machines with VMware and VNC running on a Linux RedHat 9 Kernel. An Apache-Tomcat web server was configured as the external interface to...1650, dual processor, blade servers were configured as host machines with VMware and VNC running on a Linux RedHat 9 Kernel. An Apache-Tomcat web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, TW; Adelberger, Eric G.; Battat, J.
2008-01-01
A next-generation lunar laser ranging apparatus using the 3.5 m telescope at the Apache Point Observatory in southern New Mexico has begun science operation. APOLLO (the Apache Point Observatory Lunar Laser-ranging Operation) has achieved one-millimeter range precision to the Moon, which should lead to approximately one-order-of-magnitude improvements in the precision of several tests of fundamental properties of gravity. We briefly motivate the scientific goals, and then give a detailed discussion of the APOLLO instrumentation.
Popova, A Yu; Kuzkin, B P; Demina, Yu V; Dubyansky, V M; Kulichenko, A N; Maletskaya, O V; Shayakhmetov, O Kh; Semenko, O V; Nazarenko, Yu V; Agapitov, D S; Mezentsev, V M; Kharchenko, T V; Efremenko, D V; Oroby, V G; Klindukhov, V P; Grechanaya, T V; Nikolaevich, P N; Tesheva, S Ch; Rafeenko, G K
2015-01-01
To improve sanitary and epidemiological surveillance at the Olympic Games, a GIS system was developed for monitoring facilities and situations in the Sochi region. The system is based on the ArcGIS 10.2 server software package, with web objects, the Apache web server, and software developed in Java. During its operation the following tasks were solved: stratification of the Olympic region by individual and aggregate epidemiological risk of OCI of various etiologies; ranking of epidemiologically important facilities by their sanitary and hygienic conditions; and monitoring of infectious diseases (in real time, according to the preliminary diagnosis). GIS monitoring has shown its effectiveness: information received from various sources was concentrated on a single portal and was available in real time to all the specialists involved in ensuring epidemiological well-being during the Olympic Games in Sochi.
S-Cube: Enabling the Next Generation of Software Services
NASA Astrophysics Data System (ADS)
Metzger, Andreas; Pohl, Klaus
The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.
The HARPS-N archive through a Cassandra, NoSQL database suite?
NASA Astrophysics Data System (ADS)
Molinari, Emilio; Guerra, Jose; Harutyunyan, Avet; Lodi, Marcello; Martin, Adrian
2016-07-01
The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely on an Apache Cassandra cluster, which uses NoSQL technology. In order to test and validate this architecture, we created a local archive which we populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and we developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without accessing the FITS files directly. The analytics are done with Apache Solr and Spark and on a relational PostgreSQL database. As an example, we produce observables like metallicity indexes for the targets in the archive and compare the results with those coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) that we are developing for the WEAVE multi-object fiber spectrograph.
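A rough sketch of how such an archive might be queried with the Python Cassandra driver. The keyspace, table, and column names are assumptions made for illustration, not the actual HARPS-N or WAS schema.

    # Illustrative query against a Cassandra archive of spectrum metadata.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])             # hypothetical cluster contact point
    session = cluster.connect("harpsn_archive")  # hypothetical keyspace

    rows = session.execute(
        "SELECT obs_id, target, bjd, snr FROM spectra "
        "WHERE target = %s ALLOW FILTERING",
        ("HD 189733",),
    )
    for row in rows:
        print(row.obs_id, row.bjd, row.snr)

    cluster.shutdown()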
NASA Astrophysics Data System (ADS)
Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.
2018-04-01
Analysis of existing technologies for preparing foundation beds of oil and gas buildings and structures has revealed the lack of reasoned recommendations on the selection of rational technical and technological parameters of compaction. To study the dynamics of fast processes during compaction of foundation beds of oil and gas facilities, a specialized software and hardware system was developed. The method of calculating the basic technical parameters of the equipment for recording fast processes is presented, as well as the algorithm for processing the experimental data. Preliminary studies confirmed the validity of the design decisions made and of the calculations performed.
Mortality in Code Blue; can APACHE II and PRISM scores be used as markers for prognostication?
Bakan, Nurten; Karaören, Gülşah; Tomruk, Şenay Göksu; Keskin Kayalar, Sinem
2018-03-01
Code blue (CB) is an emergency call system developed to respond to cardiac and respiratory arrest in hospitals. However, no scoring system has been reported in the literature that can predict mortality in CB procedures. In this study, we aimed to investigate the effectiveness of estimated APACHE II and PRISM scores in predicting mortality in patients assessed by the CB team, through a retrospective analysis of CB calls. We retrospectively examined 1195 patients who were evaluated by the CB team at our hospital between 2009 and 2013. The demographic data of the patients, diagnosis and relevant departments, reasons for CB, cardiopulmonary resuscitation duration, mortality calculated from the APACHE II and PRISM scores, and the actual mortality rates were retrospectively recorded from CB notification forms and the hospital database. In all age groups, there was a significant difference between the actual mortality rate and the expected mortality rate estimated using APACHE II and PRISM scores in CB calls (p<0.05); the actual mortality rate was significantly lower than the expected mortality. APACHE II and PRISM scores with the available parameters will not help predict mortality in CB procedures. Therefore, novel scoring systems using different parameters are needed.
Andersen, Douglas C.
1994-01-01
Apache cicada (Homoptera: Cicadidae: Diceroprocta apache Davis) densities were estimated to be 10 individuals/m2 within a closed-canopy stand of Fremont cottonwood (Populus fremontii) and Goodding willow (Salix gooddingii) in a revegetated site adjacent to the Colorado River near Parker, Arizona. Coupled with data drawn from the literature, I estimate that up to 1.3 cm (13 L/m2) of water may be added to the upper soil layers annually through the feeding activities of cicada nymphs. This is equivalent to 12% of the annual precipitation received in the study area. Apache cicadas may have significant effects on ecosystem functioning via effects on water transport and thus act as a critical-link species in this southwest desert riverine ecosystem. Cicadas emerged later within the cottonwood-willow stand than in relatively open saltcedar-mesquite stands; this difference in temporal dynamics would affect their availability to several insectivorous bird species and may help explain the birds' recent declines. Resource managers in this region should be sensitive to the multiple and strong effects that Apache cicadas may have on ecosystem structure and functioning.
Software Engineering Education: Some Important Dimensions
ERIC Educational Resources Information Center
Mishra, Alok; Cagiltay, Nergiz Ercil; Kilic, Ozkan
2007-01-01
Software engineering education has been emerging as an independent and mature discipline. Accordingly, various studies are being done to provide guidelines for curriculum design. The main focus of these guidelines is around core and foundation courses. This paper summarizes the current problems of software engineering education programs. It also…
Jentzer, Jacob C; Bennett, Courtney; Wiley, Brandon M; Murphree, Dennis H; Keegan, Mark T; Gajic, Ognjen; Wright, R Scott; Barsness, Gregory W
2018-03-10
Optimal methods of mortality risk stratification in patients in the cardiac intensive care unit (CICU) remain uncertain. We evaluated the ability of the Sequential Organ Failure Assessment (SOFA) score to predict mortality in a large cohort of unselected patients in the CICU. Adult patients admitted to the CICU from January 1, 2007, to December 31, 2015, at a single tertiary care hospital were retrospectively reviewed. SOFA scores were calculated daily, and Acute Physiology and Chronic Health Evaluation (APACHE)-III and APACHE-IV scores were calculated on CICU day 1. Discrimination of hospital mortality was assessed using area under the receiver-operator characteristic curve (AUC) values. We included 9961 patients, with a mean age of 67.5±15.2 years; all-cause hospital mortality was 9.0%. Day 1 SOFA score predicted hospital mortality with an AUC of 0.83; AUC values were similar for the APACHE-III score and APACHE-IV predicted mortality (P>0.05). Mean and maximum SOFA scores over multiple CICU days had greater discrimination for hospital mortality (P<0.01). Patients with an increasing SOFA score from day 1 to day 2 had higher mortality. Patients with a day 1 SOFA score <2 were at low risk of mortality. Increasing tertiles of day 1 SOFA score predicted higher long-term mortality (P<0.001 by log-rank test). The day 1 SOFA score has good discrimination for short-term mortality in unselected patients in the CICU, comparable to APACHE-III and APACHE-IV. Advantages of the SOFA score over APACHE include simplicity, improved discrimination using serial scores, and prediction of long-term mortality. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
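For readers less familiar with the discrimination statistic used throughout this abstract, the area under the receiver-operator characteristic curve can be computed directly from scores and binary outcomes. The values below are synthetic and purely illustrative, not the study's data.

    # Illustrative AUC (c-statistic) calculation for a severity score vs. mortality.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    died  = rng.integers(0, 2, size=200)              # synthetic outcomes (0/1)
    score = died * 4 + rng.normal(6, 3, size=200)     # synthetic day-1 severity scores

    print(f"AUC = {roc_auc_score(died, score):.2f}")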
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.
Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen
2013-03-01
Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
PDS4: Harnessing the Power of Generate and Apache Velocity
NASA Astrophysics Data System (ADS)
Padams, J.; Cayanan, M.; Hardman, S.
2018-04-01
The PDS4 Generate Tool is a Java-based command-line tool developed by the Cartography and Imaging Sciences Node (PDSIMG) for generating PDS4 XML labels from Apache Velocity templates and input metadata.
An External Independent Validation of APACHE IV in a Malaysian Intensive Care Unit.
Wong, Rowena S Y; Ismail, Noor Azina; Tan, Cheng Cheng
2015-04-01
Intensive care unit (ICU) prognostic models are predominantly used in more developed nations such as the United States, Europe and Australia. They are not as popular in Southeast Asian countries due to cost and technology considerations. The purpose of this study is to evaluate the suitability of the Acute Physiology and Chronic Health Evaluation (APACHE) IV model in a single-centre Malaysian ICU. A prospective study was conducted at the single-centre ICU in Hospital Sultanah Aminah (HSA), Malaysia. External validation of APACHE IV involved a cohort of 916 patients who were admitted in 2009. Model performance was assessed through its calibration and discrimination abilities. A first-level customisation using a logistic regression approach was also applied to improve model calibration. APACHE IV exhibited good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.78. However, the model's overall fit was poor, as indicated by the Hosmer-Lemeshow goodness-of-fit test (Ĉ = 113, P < 0.001). The predicted in-ICU mortality rate (28.1%) was significantly higher than the actual in-ICU mortality rate (18.8%). Model calibration improved after applying first-level customisation (Ĉ = 6.39, P = 0.78), although discrimination was not affected. APACHE IV is not suitable for application in the HSA ICU without further customisation. The model's lack of fit in the Malaysian study is attributed to differences in baseline characteristics between the HSA ICU and APACHE IV datasets. Other possible factors include differences in clinical practice and in the quality and services of the health care systems of Malaysia and the United States.
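First-level customisation of this kind usually means refitting an intercept and slope on the logit of the model's predicted risks. The sketch below illustrates that standard recalibration technique on synthetic data; it is an assumption about the general method, not the authors' exact procedure or code.

    # Sketch of first-level customisation: logistic recalibration of predicted risks.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    p_apache = rng.uniform(0.01, 0.95, size=916)       # synthetic APACHE IV risks
    died     = rng.binomial(1, 0.6 * p_apache)         # synthetic (miscalibrated) outcomes

    logit = np.log(p_apache / (1 - p_apache)).reshape(-1, 1)
    recal = LogisticRegression().fit(logit, died)      # new intercept and slope
    p_recal = recal.predict_proba(logit)[:, 1]

    print("mean predicted before/after:",
          round(p_apache.mean(), 3), round(p_recal.mean(), 3))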
ERIC Educational Resources Information Center
Bitter, Gary G., Ed.
1989-01-01
Describes three software packages: (1) "MacMendeleev"--database/graphic display for chemistry, grades 10-12, Macintosh; (2) "Geometry One: Foundations"--geometry tutorial, grades 7-12, IBM; (3) "Mathematics Exploration Toolkit"--algebra and calculus tutorial, grades 8-12, IBM. (MVL)
NASA Astrophysics Data System (ADS)
Hasan, B.; Hasbullah, H.; Elvyanti, S.; Purnama, W.
2018-02-01
The creative industry is the utilization of individual creativity, skill and talent to create wealth and jobs by generating and exploiting individual creative power. In the field of design, utilization of information technology can spur the creative industry: the development of creative design industries will accommodate much creative energy and allow people to pour out their ideas and creativity without limitation. Open Source software is a trend in information technology that has developed since the 1990s. Examples of applications developed with the Open Source approach are the Apache web server, the Linux and Android operating systems, and the MySQL database. This entrepreneurship-based community service activity aims to: 1) give a picture of UPI students' entrepreneurial knowledge of creative-industry businesses in software, using web software development and educational games; 2) create a model for fostering creative-industry entrepreneurship in software by leveraging web development and educational games; and 3) conduct training and guidance for UPI students who want to develop a business in the software segment of the creative industries. The entrepreneurship-based PKM activity was attended by about 35 DPTE FPTK UPI students with high entrepreneurial interest and competence in information technology. The outcome generated from the entrepreneurship PKM is the emergence of student entrepreneurs interested in the creative software industry who are able to open up business opportunities for themselves and others. Another outcome of this entrepreneurship PKM activity is the publication of articles in national/international indexed journals.
Wang, Shengyun; Chen, Dechang
2015-02-01
To investigate the correlation between procalcitonin (PCT), C-reactive protein (CRP) and the acute physiology and chronic health evaluation II (APACHE II) score and sequential organ failure assessment (SOFA) score, and to investigate the value of PCT and CRP in assessing prognosis in patients with sepsis. Clinical data of patients admitted to the intensive care unit (ICU) of Changzheng Hospital Affiliated to the Second Military Medical University from January 2011 to June 2014 were retrospectively analyzed. 201 sepsis patients who received PCT and CRP tests, and evaluation of APACHE II score and SOFA score, were enrolled. The values of PCT, CRP, APACHE II score and SOFA score were compared between survivors (n = 136) and non-survivors (n = 65). The values of PCT and CRP among groups with different APACHE II scores and SOFA scores were compared. The relationships between PCT, CRP and APACHE II score and SOFA score were analyzed by Spearman correlation analysis. A receiver operating characteristic (ROC) curve was plotted to assess the value of PCT and CRP in predicting the prognosis of patients with sepsis. Compared with the survival group, the values of PCT [μg/L: 11.03 (19.17) vs. 1.39 (2.61), Z = -4.572, P < 0.001], APACHE II score (19.16±5.32 vs. 10.01±3.88, t = -13.807, P < 0.001) and SOFA score (9.66±4.28 vs. 4.27±3.19, t = -9.993, P < 0.001) in the non-survival group were significantly increased, but the value of CRP was not significantly different between the non-survival and survival groups [mg/L: 75.22 (110.94) vs. 56.93 (100.75), Z = -0.731, P = 0.665]. The values of PCT were significantly correlated with APACHE II score and SOFA score (r1 = 0.373, r2 = 0.392, both P < 0.001), but the values of CRP were not significantly correlated with APACHE II score and SOFA score (r1 = -0.073, P1 = 0.411; r2 = -0.106, P2 = 0.282). The values of PCT rose significantly as the APACHE II score and SOFA score became higher, but the value of CRP was not significantly increased. When the APACHE II score was 0-10, 11-20, and > 20, the value of PCT was 1.45 (2.62), 1.96 (9.04), and 7.41 (28.9) μg/L, respectively, and the value of CRP was 57.50 (83.40), 59.00 (119.70), and 77.60 (120.00) mg/L, respectively. When the SOFA score was 0-5, 6-10, and > 10, the value of PCT was 1.43 (3.09), 3.41 (9.75), and 5.43 (29.60) μg/L, respectively, and the value of CRP was 49.30 (86.20), 76.00 (108.70), and 75.60 (118.10) mg/L, respectively. There was a significant difference in PCT between any two groups with different APACHE II and SOFA scores (P < 0.05 or P < 0.01), but no significant differences in CRP were found. The area under the ROC curve (AUC) of PCT for prognosis was significantly greater than that of CRP [0.872 (95% confidence interval 0.811-0.943) vs. 0.512 (95% confidence interval 0.427-0.612), P < 0.001]. When the cut-off value of PCT was 3.36 μg/L, the sensitivity was 66.8% and the specificity was 45.4%. When the cut-off value of CRP was 44.50 mg/L, the sensitivity was 82.2% and the specificity was 80.3%. Compared with CRP, PCT was more strongly correlated with APACHE II score and SOFA score. PCT is a better indicator for evaluating the severity of illness and the prognosis of sepsis patients.
Goloborodko, Anton A; Levitsky, Lev I; Ivanov, Mark V; Gorshkov, Mikhail V
2013-02-01
Pyteomics is a cross-platform, open-source Python library providing a rich set of tools for MS-based proteomics. It provides modules for reading LC-MS/MS data, search engine output, protein sequence databases, theoretical prediction of retention times, electrochemical properties of polypeptides, mass and m/z calculations, and sequence parsing. Pyteomics is available under Apache license; release versions are available at the Python Package Index http://pypi.python.org/pyteomics, the source code repository at http://hg.theorchromo.ru/pyteomics, documentation at http://packages.python.org/pyteomics. Pyteomics.biolccc documentation is available at http://packages.python.org/pyteomics.biolccc/. Questions on installation and usage can be addressed to pyteomics mailing list: pyteomics@googlegroups.com.
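As a usability illustration (a minimal sketch, not an exhaustive tour of the library), two of the capabilities listed above, sequence parsing and mass calculation, look roughly like this; the protein sequence is an arbitrary example.

    # Minimal Pyteomics usage sketch: in-silico tryptic digestion and monoisotopic mass.
    from pyteomics import parser, mass

    protein = "MKWVTFISLLFLFSSAYS"    # example sequence
    peptides = parser.cleave(protein, parser.expasy_rules["trypsin"], missed_cleavages=0)

    for pep in sorted(peptides):
        print(pep, round(mass.calculate_mass(sequence=pep), 4))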
The Snow Data System at NASA JPL
NASA Astrophysics Data System (ADS)
Horn, J.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Joyce, M.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.
2017-12-01
The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.
The Snow Data System at NASA JPL
NASA Astrophysics Data System (ADS)
Joyce, M.; Laidlaw, R.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.
2016-12-01
The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.
Genotyping in the cloud with Crossbow.
Gurtowski, James; Schatz, Michael C; Langmead, Ben
2012-09-01
Crossbow is a scalable, portable, and automatic cloud computing tool for identifying SNPs from high-coverage, short-read resequencing data. It is built on Apache Hadoop, an implementation of the MapReduce software framework. Hadoop allows Crossbow to distribute read alignment and SNP calling subtasks over a cluster of commodity computers. Two robust tools, Bowtie and SOAPsnp, implement the fundamental alignment and variant calling operations respectively, and have demonstrated capabilities within Crossbow of analyzing approximately one billion short reads per hour on a commodity Hadoop cluster with 320 cores. Through protocol examples, this unit will demonstrate the use of Crossbow for identifying variations in three different operating modes: on a Hadoop cluster, on a single computer, and on the Amazon Elastic MapReduce cloud computing service.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... Ranger, Lakeside Ranger District, Apache-Sitgreaves National Forests, c/o TEC Inc., 514 Via de la Valle... to other papers serving areas affected by this proposal: Tucson Citizen, Sierra Vista Herald, Nogales...
4. APACHE INDIAN LABORER WITH TEAM AND SCRAPER WORKING ON ...
4. APACHE INDIAN LABORER WITH TEAM AND SCRAPER WORKING ON THE POWER CANAL LINE FOUR MILES ABOVE LIVINGSTONE, ARIZONA Photographer: Walter J. Lubken, June 14, 1906 - Roosevelt Power Canal & Diversion Dam, Parallels Salt River, Roosevelt, Gila County, AZ
Knaus, W. A.; Draper, E. A.; Wagner, D. P.
1991-01-01
The APACHE III database reflects the disease, physiologic status, and outcome data from 17,400 ICU patients at 40 hospitals, 26 of which were randomly selected to be representative of geographic region, bed size, and teaching status. This provides a nationally representative standard for measuring several important aspects of ICU performance. Results from the study have now been used to develop an automated information system to provide real-time information about expected ICU patient outcome, length of stay, production cost, and ICU performance. The information system provides several new capabilities to ICU clinicians and to clinic and hospital administrators. Among the system's capabilities are: the ability to compare local ICU performance against predetermined criteria; the ability to forecast nursing requirements; and the ability to make both individual and group patient outcome predictions. The system also provides improved administrative support by tracking ICU charges at the point of origin and reduces staff workload by eliminating the requirement for several manually maintained logs and patient lists. APACHE III has the capability to electronically interface with and utilize data already captured in existing hospital information systems, automated laboratory information systems, and patient monitoring systems. APACHE III will also be completely integrated with several CIS vendors' products. PMID:1807779
Better prognostic marker in ICU - APACHE II, SOFA or SAP II!
Naqvi, Iftikhar Haider; Mahmood, Khalid; Ziaullaha, Syed; Kashif, Syed Mohammad; Sharif, Asim
2016-01-01
This study was designed to determine the comparative efficacy of different scoring systems in assessing the prognosis of critically ill patients. This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU) of Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. The predicted mortality of APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model. A total of 96 patients with equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97±8.53) was higher than in survivors (15.82±8.79), with a statistically significant p-value (<0.001). The average SOFA score in non-survivors (9.68±4.88) was higher than in survivors (5.63±3.63), with a statistically significant p-value (<0.001). The average SAP II score in non-survivors (53.71±19.05) was higher than in survivors (30.18±16.24), with a statistically significant p-value (<0.001). All three tested scoring models (APACHE II, SAP II and SOFA) would be accurate enough for a general description of our ICU patients. APACHE II showed better calibration and discrimination power than SAP II and SOFA.
Getting started on metrics - Jet Propulsion Laboratory productivity and quality
NASA Technical Reports Server (NTRS)
Bush, M. W.
1990-01-01
A review is presented to describe the effort and difficulties of reconstructing fifteen years of JPL software history. In 1987 the collection and analysis of project data were started with the objective of creating laboratory-wide measures of quality and productivity for software development. As a result of this two-year Software Product Assurance metrics study, a rough measurement foundation for software productivity and software quality, and an order-of-magnitude quantitative baseline for software systems and subsystems are now available.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-26
... proprietary software (e.g., Amazon's Kindle) to multipurpose devices running free software applications (e.g... Media Rights, Mozilla Corporation (``Mozilla''), and the Free Software Foundation (``FSF''), as well as... radical popularity over the past two years.'' EFF asserted that courts have long found copying and...
Learning Vocabulary in a Foreign Language: A Computer Software Based Model Attempt
ERIC Educational Resources Information Center
Yelbay Yilmaz, Yasemin
2015-01-01
This study aimed at devising a vocabulary learning software that would help learners learn and retain vocabulary items effectively. Foundation linguistics and learning theories have been adapted to the foreign language vocabulary learning context using a computer software named Parole that was designed exclusively for this study. Experimental…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
...; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; and Zuni Tribe of the Zuni... Band of Paiutes); San Juan Southern Paiute Tribe of Arizona; Yavapai- Apache Nation of the Camp Verde...
Rathnakar, Surag Kajoor; Vishnu, Vikram Hubbanageri; Muniyappa, Shridhar; Prasath, Arun
2017-02-01
Acute pancreatitis (AP) is one of the common conditions encountered in the emergency room. The course of the disease ranges from a mild form to a severe acute form. Most episodes are mild and subside spontaneously within 3 to 5 days. In contrast, severe acute pancreatitis (SAP), occurring in around 15-20% of all cases, carries a mortality that can range from 10% to 85% across various centres and countries. In such a situation we need an indicator that can predict the outcome of an attack, as severe or mild, as early as possible, and such an indicator should be sensitive and specific enough to be trusted. PANC-3 is such a scoring system for predicting the outcome of an attack of AP. The aim was to assess the accuracy and predictive ability of the PANC-3 scoring system compared with APACHE II in predicting severity in an attack of AP. This prospective study was conducted on 82 patients admitted with a diagnosis of pancreatitis. Investigations to evaluate PANC-3 and APACHE II were done on all patients, and the PANC-3 and APACHE II scores were calculated. The PANC-3 score has a sensitivity of 82.6% and specificity of 77.9%; the test had a positive predictive value (PPV) of 0.59 and negative predictive value (NPV) of 0.92. The sensitivity of APACHE II in predicting SAP was 91.3% and specificity was 96.6%, with a PPV of 0.91 and NPV of 0.96. Our study shows that PANC-3 can be used to predict the severity of pancreatitis as efficiently as APACHE II. The interpretation of PANC-3 does not need expertise and can be applied at the time of admission, which is an advantage when compared to classical scoring systems.
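To make the reported test characteristics easier to interpret: positive and negative predictive values follow from sensitivity, specificity, and the prevalence of severe disease via Bayes' rule. The sketch below reproduces that arithmetic for the PANC-3 figures, assuming a severe-AP prevalence of about 28% (the value consistent with the reported PPV and NPV); it is an illustration, not part of the study.

    # PPV/NPV from sensitivity, specificity, and an assumed prevalence (Bayes' rule).
    def predictive_values(sens, spec, prev):
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    # PANC-3 figures from the abstract, with severe-AP prevalence assumed ~28%.
    ppv, npv = predictive_values(sens=0.826, spec=0.779, prev=0.28)
    print(f"PPV ~ {ppv:.2f}, NPV ~ {npv:.2f}")   # ~0.59 and ~0.92, matching the abstract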
Significance of blood pressure variability in patients with sepsis.
Pandey, Nishant Raj; Bian, Yu-Yao; Shou, Song-Tao
2014-01-01
This study was undertaken to observe the characteristics of blood pressure variability (BPV) in sepsis and to investigate changes in blood pressure and their value in assessing the severity of illness in patients with sepsis. Blood parameters, APACHE II score, and 24-hour ambulatory BP were analyzed in 89 patients with sepsis. In patients with APACHE II score >19, the values of systolic blood pressure variability (SBPV), diastolic blood pressure variability (DBPV), non-dipper percentage, cortisol (COR), lactate (LAC), platelet count (PLT) and glucose (GLU) were significantly higher than in those with APACHE II score ≤19 (P<0.05), whereas the values of procalcitonin (PCT), white blood cell (WBC), creatinine (Cr), PaO2, C-reactive protein (CRP), adrenocorticotropic hormone (ACTH) and tumor necrosis factor α (TNF-α) were not significantly different (P>0.05). Correlation analysis showed that APACHE II scores correlated significantly with SBPV and DBPV (P<0.01, r=0.732 and P<0.01, r=0.762). SBPV and DBPV were correlated with COR (P=0.018 and r=0.318; P=0.008 and r=0.353, respectively). However, SBPV and DBPV were not correlated with TNF-α, IL-10, or PCT (P>0.05). Logistic regression analysis of SBPV, DBPV, APACHE II score, and LAC was used to predict prognosis in terms of survival and non-survival. Receiver operating characteristic (ROC) curves showed that DBPV was a better predictor of survival, with an AUC value of 0.890; the AUCs of SBPV, APACHE II score, and LAC were 0.746, 0.831 and 0.915, respectively. The values of SBPV, DBPV and non-dipper percentage are higher in patients with sepsis. DBPV and SBPV can be used to predict the survival of patients with sepsis.
Mortality Probability Model III and Simplified Acute Physiology Score II
Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams
2009-01-01
Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
Vinnik, Y S; Dunaevskaya, S S; Antufrieva, D A
2015-01-01
The aim of the study was to evaluate the diagnostic value of the specific and nonspecific scoring systems Tolstoy-Krasnogorov, Ranson, BISAP, Glasgow, MODS 2, APACHE II and CTSI, which are used in urgent pancreatology to estimate the severity of acute pancreatitis and the status of the patient. 1550 case records of patients who underwent inpatient surgical treatment at the Road Clinical Hospital at the station Krasnoyarsk from 2009 to 2013 were analyzed. The diagnosis of severe acute pancreatitis and its complications was determined based on anamnestic data, physical examination, clinical indexes, ultrasonic examination and computed tomography angiography. Specific and nonspecific scores (Tolstoy-Krasnogorov, Ranson, Glasgow, BISAP, MODS 2, APACHE II, CTSI) were used to estimate the severity of acute pancreatitis and the patient's general condition. The effectiveness of these scoring systems was determined based on several parameters: accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV). The most valuable score for estimating the severity of acute pancreatitis was BISAP (Se 98.10%); for estimating organ failure, MODS 2 (Sp 100%, PPV 100%) and APACHE II (Sp 100%, PPV 100%); for detecting signs of pancreatonecrosis, CTSI (Sp 100%, NPV 100%); for estimating the need for intensive care, MODS 2 (Sp 100%, PPV 100%, NPV 96.29%) and APACHE II (Sp 100%, PPV 100%, NPV 97.21%); and for predicting lethality, MODS 2 (Se 100%, Sp 98.14%, NPV 100%) and APACHE II (Se 95.00%, NPV 99.86%). The most effective scores for estimating the severity of acute pancreatitis are the Tolstoy-Krasnogorov, Ranson, Glasgow and BISAP scoring systems. The high specificity and positive predictive value of the MODS 2 and APACHE II scoring systems allow their use in clinical practice.
ERIC Educational Resources Information Center
Arizona Univ., Tucson. Coll. of Medicine.
Designed to provide health services for American Indians living on rurally isolated reservations, the Arizona TeleMedicine Project proposes to link Phoenix and Tucson medical centers, via a statewide telecommunications system, with the Hopi, San Carlos Apache, Papago, Navajo, and White Mountain Apache reservations. Advisory boards are being…
Malone, James; Brown, Andy; Lister, Allyson L; Ison, Jon; Hull, Duncan; Parkinson, Helen; Stevens, Robert
2014-01-01
Biomedical ontologists to date have concentrated on ontological descriptions of biomedical entities such as gene products and their attributes, phenotypes and so on. Recently, effort has diversified to descriptions of the laboratory investigations by which these entities were produced. However, much biological insight is gained from the analysis of the data produced from these investigations, and there is a lack of adequate descriptions of the wide range of software that are central to bioinformatics. We need to describe how data are analyzed for discovery, audit trails, provenance and reproducibility. The Software Ontology (SWO) is a description of software used to store, manage and analyze data. Input to the SWO has come from beyond the life sciences, but its main focus is the life sciences. We used agile techniques to gather input for the SWO and keep engagement with our users. The result is an ontology that meets the needs of a broad range of users by describing software, its information processing tasks, data inputs and outputs, data formats versions and so on. Recently, the SWO has incorporated EDAM, a vocabulary for describing data and related concepts in bioinformatics. The SWO is currently being used to describe software used in multiple biomedical applications. The SWO is another element of the biomedical ontology landscape that is necessary for the description of biomedical entities and how they were discovered. An ontology of software used to analyze data produced by investigations in the life sciences can be made in such a way that it covers the important features requested and prioritized by its users. The SWO thus fits into the landscape of biomedical ontologies and is produced using techniques designed to keep it in line with user's needs. The Software Ontology is available under an Apache 2.0 license at http://theswo.sourceforge.net/; the Software Ontology blog can be read at http://softwareontology.wordpress.com.
2014-01-01
Motivation Biomedical ontologists to date have concentrated on ontological descriptions of biomedical entities such as gene products and their attributes, phenotypes and so on. Recently, effort has diversified to descriptions of the laboratory investigations by which these entities were produced. However, much biological insight is gained from the analysis of the data produced from these investigations, and there is a lack of adequate descriptions of the wide range of software that are central to bioinformatics. We need to describe how data are analyzed for discovery, audit trails, provenance and reproducibility. Results The Software Ontology (SWO) is a description of software used to store, manage and analyze data. Input to the SWO has come from beyond the life sciences, but its main focus is the life sciences. We used agile techniques to gather input for the SWO and keep engagement with our users. The result is an ontology that meets the needs of a broad range of users by describing software, its information processing tasks, data inputs and outputs, data formats versions and so on. Recently, the SWO has incorporated EDAM, a vocabulary for describing data and related concepts in bioinformatics. The SWO is currently being used to describe software used in multiple biomedical applications. Conclusion The SWO is another element of the biomedical ontology landscape that is necessary for the description of biomedical entities and how they were discovered. An ontology of software used to analyze data produced by investigations in the life sciences can be made in such a way that it covers the important features requested and prioritized by its users. The SWO thus fits into the landscape of biomedical ontologies and is produced using techniques designed to keep it in line with user’s needs. Availability The Software Ontology is available under an Apache 2.0 license at http://theswo.sourceforge.net/; the Software Ontology blog can be read at http://softwareontology.wordpress.com. PMID:25068035
A streamlined build system foundation for developing HPC software
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Chris; Harrison, Cyrus; Hornung, Richard
2017-02-09
BLT bundles custom CMake macros, unit testing frameworks for C++ and Fortran, and a set of smoke tests for common HPC dependencies. The combination of these three provides a foundation for quickly bootstrapping a CMake-based system for developing HPC software.
25 CFR 183.1 - What is the purpose of this part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Introduction... Tribe Water Settlement Act (the Act), Public Law 102-575, 106 Stat. 4748, that requires regulations to administer the Trust Fund, and the Lease Fund established by the Act. ...
DOT National Transportation Integrated Search
2016-03-03
This report summarizes the observations and findings of an interagency transportation assistance group (TAG) convened to discuss the long-term future of Arizona State Route 88, also known as the Apache Trail, a historic road on the Tonto Nation...
Army-NASA aircrew/aircraft integration program (A3I) software detailed design document, phase 3
NASA Technical Reports Server (NTRS)
Banda, Carolyn; Chiu, Alex; Helms, Gretchen; Hsieh, Tehming; Lui, Andrew; Murray, Jerry; Shankar, Renuka
1990-01-01
The capabilities and design approach of the MIDAS (Man-machine Integration Design and Analysis System) computer-aided engineering (CAE) workstation under development by the Army-NASA Aircrew/Aircraft Integration Program is detailed. This workstation uses graphic, symbolic, and numeric prototyping tools and human performance models as part of an integrated design/analysis environment for crewstation human engineering. Developed incrementally, the requirements and design for Phase 3 (Dec. 1987 to Jun. 1989) are described. Software tools/models developed or significantly modified during this phase included: an interactive 3-D graphic cockpit design editor; multiple-perspective graphic views to observe simulation scenarios; symbolic methods to model the mission decomposition, equipment functions, pilot tasking and loading, as well as control the simulation; a 3-D dynamic anthropometric model; an intermachine communications package; and a training assessment component. These components were successfully used during Phase 3 to demonstrate the complex interactions and human engineering findings involved with a proposed cockpit communications design change in a simulated AH-64A Apache helicopter/mission that maps to empirical data from a similar study and AH-1 Cobra flight test.
Stone, Paul; Cossette, P.M.
2000-01-01
The Apache Canyon 7.5-minute quadrangle is located in southwestern California about 55 km northeast of Santa Barbara and 65 km southwest of Bakersfield. This report presents the results of a geologic mapping investigation of the Apache Canyon quadrangle that was carried out in 1997-1999 as part of the U.S. Geological Survey's Southern California Areal Mapping Project. This quadrangle was chosen for study because it is in an area of complex, incompletely understood Cenozoic stratigraphy and structure of potential importance for regional tectonic interpretations, particularly those involving the San Andreas fault located just northwest of the quadrangle and the Big Pine fault about 10 km to the south. In addition, the quadrangle is notable for its well-exposed sequences of folded Neogene nonmarine strata including the Caliente Formation of Miocene age from which previous workers have collected and described several biostratigraphically significant land-mammal fossil assemblages. During the present study, these strata were mapped in detail throughout the quadrangle to provide an improved framework for possible future paleontologic investigations. The Apache Canyon quadrangle is in the eastern part of the Cuyama 30-minute by 60-minute quadrangle and is largely part of an erosionally dissected terrain known as the Cuyama badlands at the east end of Cuyama Valley. Most of the Apache Canyon quadrangle consists of public lands in the Los Padres National Forest.
Survival of Apache Trout eggs and alevins under static and fluctuating temperature regimes
Recsetar, Matthew S.; Bonar, Scott A.
2013-01-01
Increased stream temperatures due to global climate change, livestock grazing, removal of riparian cover, reduction of stream flow, and urbanization will have important implications for fishes worldwide. Information exists that describes the effects of elevated water temperatures on fish eggs, but less information is available on the effects of fluctuating water temperatures on egg survival, especially those of threatened and endangered species. We tested the posthatch survival of eyed eggs and alevins of Apache Trout Oncorhynchus gilae apache, a threatened salmonid, in static temperatures of 15, 18, 21, 24, and 27°C, and also in treatments with diel fluctuations of ±3°C around those temperatures. The LT50 for posthatch survival of Apache Trout eyed eggs and alevins was 17.1°C for static temperatures treatments and 17.9°C for the midpoints of ±3°C fluctuating temperature treatments. There was no significant difference in survival between static temperatures and fluctuating temperatures that shared the same mean temperature, yet there was a slight difference in LT50s. Upper thermal tolerance of Apache Trout eyed eggs and alevins is much lower than that of fry to adult life stages (22–23°C). Information on thermal tolerance of early life stages (eyed egg and alevin) will be valuable to those restoring streams or investigating thermal tolerances of imperiled fishes.
Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM
NASA Astrophysics Data System (ADS)
Cajka, R.; Vaskova, J.; Vasek, J.
2018-04-01
For decades, attention has been paid to the interaction of foundation structures with subsoil and to the development of interaction models. Because analytical solutions of subsoil-structure interaction can be deduced only for some simple load shapes, analytical solutions are increasingly being replaced by numerical solutions (e.g., FEM, the finite element method). Numerical analysis offers greater possibilities for taking into account the real factors involved in subsoil-structure interaction and was also used in this article. This makes it possible to design foundation structures more efficiently while keeping them reliable and safe. Several software packages can currently deal with the interaction of foundations and subsoil. It has been demonstrated that the non-commercial software MKPINTER (created by Cajka) yields results close to actual measured values. In MKPINTER, the stress-strain analysis of the elastic half-space is performed by means of Gauss numerical integration and the Jacobian of the coordinate transformation. Input data for the numerical analysis were obtained from an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed on the grounds of the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with values observed during the experimental loading test.
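To make the integration step concrete, the following is a minimal, illustrative sketch of Gauss-Legendre quadrature with a Jacobian transformation of the kind the abstract mentions, applied here to a Boussinesq-type vertical-stress kernel. The load, rectangle size, observation point and 4-point rule are assumptions for illustration, not the actual MKPINTER implementation.

```java
/**
 * Minimal sketch (not the MKPINTER code): vertical stress increase at depth z
 * beneath a uniformly loaded rectangle, evaluated by Gauss-Legendre quadrature
 * of the Boussinesq point-load solution. All numeric inputs are assumed
 * illustrative values.
 */
public class GaussStressSketch {

    // 4-point Gauss-Legendre abscissae and weights on [-1, 1]
    private static final double[] T = {-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116};
    private static final double[] W = { 0.3478548451,  0.6521451549, 0.6521451549, 0.3478548451};

    /** Vertical stress at (x0, y0, z) from a uniform pressure q over the rectangle [0,a] x [0,b]. */
    static double verticalStress(double q, double a, double b, double x0, double y0, double z) {
        double jx = a / 2.0, jy = b / 2.0;        // Jacobian of the mapping [-1,1] -> [0,a], [0,b]
        double sum = 0.0;
        for (int i = 0; i < T.length; i++) {
            double xi = jx * (T[i] + 1.0);         // integration point in x
            for (int j = 0; j < T.length; j++) {
                double eta = jy * (T[j] + 1.0);    // integration point in y
                double dx = x0 - xi, dy = y0 - eta;
                double r2 = dx * dx + dy * dy + z * z;
                // Boussinesq kernel: 3 z^3 / (2 pi R^5) per unit of distributed load
                double kernel = 3.0 * z * z * z / (2.0 * Math.PI * Math.pow(r2, 2.5));
                sum += W[i] * W[j] * kernel;
            }
        }
        return q * sum * jx * jy;                  // weight sum times load and Jacobian
    }

    public static void main(String[] args) {
        // Illustrative numbers only: 100 kPa over a 2 m x 2 m slab, stress 1 m under its centre
        System.out.printf("Delta sigma_z = %.2f kPa%n", verticalStress(100.0, 2.0, 2.0, 1.0, 1.0, 1.0));
    }
}
```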
NASA Astrophysics Data System (ADS)
Luo, Min
2018-02-01
On the basis of a review of available data, the bearing mechanism of gravel pile composite foundations is analyzed in this paper. Using ANSYS software, a composite foundation under a flexible footing with gravel piles arranged in a plum-blossom (staggered) pattern is modelled; the distribution of additional stress between pile and soil, the stress on the pile top, the pile load-sharing ratio, and the influence of the pile-soil modulus ratio on the pile-top stress and load-sharing ratio, together with the pile-soil stress ratio, are calculated and analyzed. The results provide a reasonable reference for the design of gravel pile composite foundations.
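For reference, here is a small worked sketch (not taken from the paper) of two quantities the abstract analyzes, the pile-soil stress ratio and the pile load-sharing ratio; the pile stress, soil stress and area-replacement ratio m are assumed example numbers.

```java
/**
 * Illustrative sketch (not from the paper): pile-soil stress ratio and pile
 * load-sharing ratio commonly reported for gravel pile (stone column)
 * composite foundations. All inputs are assumed example values.
 */
public class GravelPileSketch {
    public static void main(String[] args) {
        double sigmaPile = 300.0;  // kPa, additional stress on the pile top (assumed)
        double sigmaSoil = 100.0;  // kPa, additional stress on the soil between piles (assumed)
        double m = 0.20;           // area-replacement ratio: pile cross-section / tributary area (assumed)

        double n = sigmaPile / sigmaSoil;                    // pile-soil stress ratio
        double pileShare = (m * n) / (1.0 + m * (n - 1.0));  // fraction of the total load carried by the piles

        System.out.printf("stress ratio n = %.2f%n", n);
        System.out.printf("pile load-sharing ratio = %.1f %%%n", pileShare * 100.0);
    }
}
```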
Lutzomyia (Helcocyrtomyia) Apache Young and Perkins (Diptera: Psychodidae) feeds on reptiles
USDA-ARS?s Scientific Manuscript database
Phlebotomine sand flies are vectors of bacteria, parasites, and viruses. In the western USA a sand fly, Lutzomyia apache Young and Perkins, was initially associated with epizootics of vesicular stomatitis virus (VSV), because sand flies were trapped at sites of an outbreak. Additional studies indica...
ERIC Educational Resources Information Center
Pono, Filomena P.; And Others
The Jicarilla Apache people celebrate a young girl's coming of age by having a feast called "Keesda". Derived from the Spanish word "fiesta", "Keesda" is a Jicarilla Apache word meaning "feast". This feast is held for four days, usually during the summer months. However, it may be held at any time during the…
The Consumer Juggernaut: Web-Based and Mobile Applications as Innovation Pioneer
NASA Astrophysics Data System (ADS)
Messerschmitt, David G.
As happened previously in electronics, software targeted at consumers is increasingly the focus of investment and innovation. Some of the areas where it leads are animated interfaces, treating users as a community, audio and video information, software as a service, agile software development, and the integration of business models with software design. As a risk-taking and experimental market, and as a source of ideas, consumer software can benefit other areas of applications software. The influence of consumer software can be magnified by research into the internal organizations and processes of the innovative firms at its foundation.
Ban, Nobuhiko; Takahashi, Fumiaki; Ono, Koji; Hasegawa, Takayuki; Yoshitake, Takayasu; Katsunuma, Yasushi; Sato, Kaoru; Endo, Akira; Kai, Michiaki
2011-07-01
A web-based dose computation system, WAZA-ARI, is being developed for patients undergoing X-ray CT examinations. The system is implemented in Java on a Linux server running Apache Tomcat. Users choose scanning options and input parameters via a web browser over the Internet. Dose coefficients, which were calculated in a Japanese adult male phantom (JM phantom) are called upon user request and are summed over the scan range specified by the user to estimate a normalised dose. Tissue doses are finally computed based on the radiographic exposure (mA s) and the pitch factor. While dose coefficients are currently available only for limited CT scanner models, the system has achieved a high degree of flexibility and scalability without the use of commercial software.
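A minimal sketch of the dose arithmetic described above: per-slice dose coefficients are summed over the selected scan range and scaled by the radiographic exposure (mA s) and the pitch factor. The coefficient values, slice indexing and the exact scaling convention are assumptions for illustration, not the WAZA-ARI implementation.

```java
/**
 * Minimal sketch (assumptions, not the WAZA-ARI code): estimate an organ dose by
 * summing per-slice dose coefficients over the selected scan range and scaling
 * by the radiographic exposure (mA s) and the pitch factor.
 */
public class CtDoseSketch {

    /**
     * @param coeffPerSlice dose coefficient per phantom slice (mGy per mA s), assumed values
     * @param startSlice    first slice of the scan range (inclusive)
     * @param endSlice      last slice of the scan range (inclusive)
     * @param mAs           tube current-time product selected by the user
     * @param pitch         helical pitch factor
     */
    static double organDose(double[] coeffPerSlice, int startSlice, int endSlice,
                            double mAs, double pitch) {
        double normalised = 0.0;
        for (int s = startSlice; s <= endSlice; s++) {
            normalised += coeffPerSlice[s];      // sum coefficients over the scan range
        }
        return normalised * mAs / pitch;         // scale by exposure; dose taken to fall with pitch (assumed convention)
    }

    public static void main(String[] args) {
        double[] liverCoeff = {0.0, 0.01, 0.03, 0.05, 0.05, 0.03, 0.01, 0.0}; // illustrative numbers only
        double dose = organDose(liverCoeff, 2, 5, 200.0, 1.2);
        System.out.printf("Estimated organ dose: %.2f mGy%n", dose);
    }
}
```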
A Structured Approach for Reviewing Architecture Documentation
2009-12-01
as those found in ISO 12207 [ISO/IEC 12207:2008] (for software engineering), ISO 15288 [ISO/IEC 15288:2008] (for systems engineering), the Rational...Open Distributed Processing - Reference Model: Foundations (ISO/IEC 10746-2). 1996. [ISO/IEC 12207:2008] International Organization for...Standardization & International Electrotechnical Commission. Systems and software engineering – Software life cycle processes (ISO/IEC 12207). 2008. [ISO
Theoretical Foundations of Software Technology.
1983-02-14
major research interests are software testing, artificial intelligence, pattern recognition, and computer graphics. Dr. Chandrasekaran is currently...produce PASCAL language code for the problems. Because of its relationship to many issues in Artificial Intelligence, we also investigated problems of...analysis to concurrent-process software re- are not "intelligent" enough to discover these by themselves, ouirl more complex control flow models. The PAF
NASA Astrophysics Data System (ADS)
Oberlyn Simanjuntak, Johan; Suita, Diana
2017-12-01
A pile foundation is a type of deep foundation that transfers structural loads to hard soil strata with high bearing capacity located deep below the ground surface. To determine the bearing capacity of the piles and, at the same time, check the calendring (driving record) results, Pile Driving Analyzer (PDA) tests were carried out on 8 of the 84 driven pile points (10% of the pile points), and the results were analyzed with the CAPWAP software. The highest measured bearing capacity (Ru) was 177 tons and the lowest was 111 tons, both greater than the design load of 60.9 tons. The PDA results therefore confirm that the pile bearing capacity safely exceeds the planned load.
Conservation priorities in the Apache Highlands ecoregion
Dale Turner; Rob Marshall; Carolyn A. F. Enquist; Anne Gondor; David F. Gori; Eduardo Lopez; Gonzalo Luna; Rafaela Paredes Aguilar; Chris Watts; Sabra Schwartz
2005-01-01
The Apache Highlands ecoregion incorporates the entire Madrean Archipelago/Sky Island region. We analyzed the current distribution of 223 target species and 26 terrestrial ecological systems there, and compared them with constraints on ecosystem integrity (e.g., road density) to determine the most efficient set of areas needed to maintain current biodiversity. The...
Recapturing the Past with Digital Imaging
ERIC Educational Resources Information Center
Gronseth, Susie
2008-01-01
Theodore Roosevelt School (TRS) is surrounded by culture and history. Located on the grounds of the former Fort Apache Army Post, TRS serves sixth- through eighth-grade native students, primarily from the White Mountain Apache Tribe. Tradition and culture are so much a part of the TRS students' background of experiences that teachers at the school…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
...; Hualapai Indian Tribe of the Hualapai Indian Reservation, Arizona; Jicarilla Apache Nation, New Mexico; Kaibab Band of Paiute Indians of the Kaibab Indian Reservation, Arizona; Kewa Pueblo, New Mexico (formerly the Pueblo of Santo Domingo); Mescalero Apache Tribe of the Mescalero Reservation, New Mexico...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
.... Apache Junction Public Library, 1177 N. Idaho Road, Apache Junction, Arizona 85219. Buckeye Public Library, 310 North 6th Street, Buckeye, Arizona 85326. Casa Grande Public Library, 449 North Dry Lake, Casa Grande, Arizona 85222. Gila Bend Public Library, 202 North Euclid Avenue, Gila Bend, Arizona 85337...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Verde Indian Reservation, Arizona; Yavapai-Prescott Tribe of the Yavapai Reservation, Arizona; Ysleta...-Apache Nation of the Camp Verde Indian Reservation, Arizona; and Yavapai-Prescott Tribe of the Yavapai... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott...
Forest resources of the Forest resources of the Apache-Sitgreaves National Forest
Paul Rogers
2008-01-01
The Interior West Forest Inventory and Analysis (IWFIA) program of the USDA Forest Service, Rocky Mountain Research Station, as part of its national Forest Inventory and Analysis (FIA) duties, conducted forest resource inventories of the Southwestern Region (Region 3) National Forests. This report presents highlights of the Apache-Sitgreaves National Forest...
Context-Based Mobile Security Enclave
2012-09-01
c. Change IMSI; d. Change CellID; e. Change Geolocation. ...Assisted Global Positioning System; ADB, Android Debugger; API, Application Programming Interface; APK, Android Application Package; BSC, Base Station...Programming Interfaces (APIs), which use Java compatible libraries based on Apache Harmony (an open source Java implementation developed by the Apache
Saad, Sameh; Mohamed, Naglaa; Moghazy, Amr; Ellabban, Gouda; El-Kamash, Soliman
2016-01-01
The trauma and injury severity score (TRISS) and Acute Physiology and Chronic Health Evaluation IV (APACHE IV) are accurate but complex. This study aimed to compare venous glucose, serum lactate levels, and base deficit in polytraumatized patients as simple parameters for predicting mortality in these patients, compared with TRISS and APACHE IV. This was a comparative cross-sectional study of 282 patients with polytrauma presenting to the Emergency Department (ED). The best cutoff value of the TRISS probability of survival score for prediction of mortality among polytraumatized patients was ≤90. APACHE IV demonstrated 67% sensitivity and 95% specificity (95% CI) at a cutoff point of 99. The best cutoff value of random blood sugar was >140 mg/dl, with 89% sensitivity and 49% specificity; base deficit was less than -5.6, with 64% sensitivity and 93% specificity; lactate was >2.6 mmol/L, with 92% sensitivity and 42% specificity. Venous glucose, serum lactate and base deficit are easy and rapid biochemical predictors of mortality in patients with polytrauma. These predictors could be used, like TRISS and APACHE IV, in predicting mortality.
Schein, M; Gecelter, G
1989-07-01
This study examined the prognostic value of the APACHE II scoring system in patients undergoing emergency operations for bleeding peptic ulcer. There were 96 operations for gastric ulcers and 58 for duodenal ulcers. The mean scores in survivors and in patients who died were 10.8 and 17.5 respectively. None of the 66 patients with an APACHE II score less than 11 died, while the mortality rate in those scored greater than 10 was 22 per cent. In patients scored greater than 10 non-resective procedures carried less risk of mortality than gastrectomy. The APACHE II score is useful when measuring the severity of the acute disease and predicting the outcome in these patients. If used in daily practice it may assist the surgeon in stratifying patients into a low-risk group (score less than 11) in which major operations are well tolerated and outcome is favourable and a high-risk group (score greater than 10) in which the risk of mortality is high and the performance of procedures of lesser magnitude is probably more likely to improve survival.
Berghmans, T; Paesmans, M; Sculier, J P
2004-04-01
To evaluate the effectiveness of a specific oncologic scoring system, the ICU Cancer Mortality model (ICM), in predicting hospital mortality in comparison with two general severity scores, the Acute Physiology and Chronic Health Evaluation (APACHE II) and the Simplified Acute Physiology Score (SAPS II). All 247 patients admitted for a medical acute complication over an 18-month period to an oncological medical intensive care unit were prospectively registered. Their data, including type of complication, vital status at discharge and cancer characteristics, as well as the other variables necessary to calculate the three scoring systems, were retrospectively assessed. Observed in-hospital mortality was 34%. The predicted in-hospital mortality rate was 32% for APACHE II, 24% for SAPS II, and 28% for ICM. The goodness of fit was inadequate except for the ICM score. Comparison of the areas under the ROC curves revealed a better fit for ICM (area 0.79). The maximum correct classification rate was 72% for APACHE II, 74% for SAPS II and 77% for ICM. APACHE II and SAPS II were better at predicting outcome for survivors to hospital discharge, whereas ICM was better for non-survivors. Two variables independently predicted the risk of death during hospitalisation: ICM (OR=2.31) and SAPS II (OR=1.05). Severity scores were the only independent predictors of hospital mortality, and ICM was equivalent to APACHE II and SAPS II.
Que, Ri-sheng; Cao, Li-ping; Ding, Guo-ping; Hu, Jun-an; Mao, Ke-jie; Wang, Gui-feng
2010-05-01
To investigate the correlation of nitric oxide (NO) and other free radicals with the severity of acute pancreatitis (AP) and complicating systemic inflammatory response syndrome (SIRS). Fifty AP patients (24 patients with simple AP and 26 patients with AP complicated by SIRS) were involved in the study. Fifty healthy volunteers were included as controls. Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were evaluated, and plasma NO, plasma lipid peroxides, plasma vitamin E, plasma beta-carotene, whole-blood glutathione (GSH), and the activity of plasma GSH peroxidase were measured. Compared with the control group, APACHE II scores were higher in the AP group, and the SIRS group had the highest APACHE II scores (P < 0.005, P < 0.001, respectively). Plasma NO and plasma lipid peroxides increased with increasing APACHE II scores, demonstrating a significant linear positive correlation (r = 0.618, r = 0.577, respectively; P < 0.001). Plasma vitamin E, plasma beta-carotene, whole-blood GSH, and the activity of plasma GSH peroxidase decreased with increasing APACHE II scores, demonstrating a significant linear negative correlation (r = -0.600, r = -0.609, r = -0.559, r = -0.592, respectively; P < 0.001). Nitric oxide and other free radicals take part in the aggravation of oxidative stress and oxidative injury and may play important roles in the pathogenesis of AP and SIRS. It may be valuable to measure free radicals to predict the severity of AP.
WU, XINKUAN; XIE, WEI; CHENG, YUELEI; GUAN, QINGLONG
2016-01-01
The aim of the present study was to investigate the plasma levels of C-reactive protein (CRP) and copeptin, in addition to the acute physiology and chronic health evaluation II (APACHE II) scores, in patients with acute organophosphorus pesticide poisoning (AOPP). A total of 100 patients with AOPP were included and divided into mild, moderate and severe groups according to AOPP diagnosis and classification standards. Blood samples were collected from all patients on days 1, 3 and 7 following AOPP. The concentrations of CRP and copeptin in the plasma were determined using enzyme-linked immunosorbent assay. All AOPP patients underwent APACHE II scoring and the diagnostic value of these scores was analyzed using receiver operating characteristic curves (ROCs). On days 1, 3 and 7 after AOPP, the levels of CRP and copeptin were increased in correlation with the increase in AOPP severity, and were significantly higher compared with the control groups. Furthermore, elevated CRP and copeptin plasma levels were detected in patients with severe AOPP on day 7, whereas these levels were reduced in patients with mild or moderate AOPP. APACHE II scores, blood lactate level, acetylcholine esterase level, twitch disappearance time, reactivating agent dose and inability to raise the head were the high-risk factors that affected the prognosis of AOPP. Patients with plasma CRP and copeptin levels higher than median values had worse prognoses. The areas under curve for ROCs were 0.89, 0.75 and 0.72 for CRP levels, copeptin levels and APACHE II scores, respectively. In addition, the plasma contents of CRP and copeptin are increased according to the severity of AOPP. Therefore, the results of the present study suggest that CRP and copeptin levels and APACHE II scores may be used for the determination of AOPP severity and the prediction of AOPP prognosis. PMID:26997996
Sathe, Prachee M; Bapat, Sharda N
2014-01-01
To assess the performance and utility of two mortality prediction models viz. Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in a single Indian mixed tertiary intensive care unit (ICU). Secondary objectives were bench-marking and setting a base line for research. In this observational cohort, data needed for calculation of both scores were prospectively collected for all consecutive admissions to 28-bedded ICU in the year 2011. After excluding readmissions, discharges within 24 h and age <18 years, the records of 1543 patients were analyzed using appropriate statistical methods. Both models overpredicted mortality in this cohort [standardized mortality ratio (SMR) 0.88 ± 0.05 and 0.95 ± 0.06 using APACHE II and SAPS II respectively]. Patterns of predicted mortality had strong association with true mortality (R (2) = 0.98 for APACHE II and R (2) = 0.99 for SAPS II). Both models performed poorly in formal Hosmer-Lemeshow goodness-of-fit testing (Chi-square = 12.8 (P = 0.03) for APACHE II, Chi-square = 26.6 (P = 0.001) for SAPS II) but showed good discrimination (area under receiver operating characteristic curve 0.86 ± 0.013 SE (P < 0.001) and 0.83 ± 0.013 SE (P < 0.001) for APACHE II and SAPS II, respectively). There were wide variations in SMRs calculated for subgroups based on International Classification of Disease, 10(th) edition (standard deviation ± 0.27 for APACHE II and 0.30 for SAPS II). Lack of fit of data to the models and wide variation in SMRs in subgroups put a limitation on utility of these models as tools for assessing quality of care and comparing performances of different units without customization. Considering comparable performance and simplicity of use, efforts should be made to adapt SAPS II.
Lee, Young-Joo; Park, Chan-Hee; Yun, Jang-Woon; Lee, Young-Suk
2004-02-29
Procalcitonin (PCT) is a newly introduced marker of systemic inflammation and bacterial infection. A marked increase in circulating PCT level in critically ill patients has been related with the severity of illness and poor survival. The goal of this study was to compare the prognostic power of PCT and three other parameters, the arterial ketone body ratio (AKBR), the acute physiology, age, chronic health evaluation (APACHE) III score and the multiple organ dysfunction score (MODS), in the differentiation between survivors and nonsurvivors of systemic inflammatory response syndrome (SIRS). The study was performed in 95 patients over 16 years of age who met the criteria of SIRS. PCT and AKBR were assayed in arterial blood samples. The APACHE III score and MODS were recorded after the first 24 hours of surgical ICU (SICU) admission and then daily for two weeks or until either discharge or death. The patients were divided into two groups, survivors (n=71) and nonsurvivors (n=24), in accordance with the ICU outcome. They were also divided into three groups according to the trend of PCT level: declining, increasing or no change. Significant differences between survivors and nonsurvivors were found in APACHE III score and MODS throughout the study period, but in PCT value only up to the 7th day and in AKBR only up to the 3rd day. PCT values of the three groups were not significantly different on the first day between survivors and nonsurvivors. Receiver operating characteristic (ROC) curves for prediction of mortality by PCT, AKBR, APACHE III score and MODS were 0.690, 0.320, 0.915 and 0.913, respectively, on the admission day. In conclusion, PCT could have some use as a mortality predictor in SIRS patients but was less reliable than APACHE III score or MODS.
A future Outlook: Web based Simulation of Hydrodynamic models
NASA Astrophysics Data System (ADS)
Islam, A. S.; Piasecki, M.
2003-12-01
Despite recent advances to present simulation results as 3D graphs or animation contours, the modeling user community still faces some shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions about data (metadata), lack of robust search and retrieval tools for data, and difficulty in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain in the WWW and run a number of models through multi-user interfaces. Firstly, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The Extensible Markup Language (XML) is used to publish this metadata via the Resource Description Framework (RDF). A specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta Object Facility (MOF). The knowledge-based system acts as a meta-model for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data are stored in an object-relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open source software and technology such as Java Servlet and JSP, the Apache web server, the Tomcat servlet engine, PostgreSQL databases, the Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. Also, we use international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML. The final web-based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
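As an illustration of the RDF-query layer the abstract describes, the sketch below loads metadata with the Apache Jena API and runs a simple SELECT query. The file name, namespace and property are placeholders, and the query is written in SPARQL (Jena's ARQ engine) rather than the RDQL/RQL mentioned in the abstract; the org.apache.jena package names are those of current Jena releases.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

/**
 * Minimal sketch of querying hydrodynamic-model metadata published as RDF with
 * the Apache Jena API. File name, namespace and property names are placeholders.
 */
public class MetadataQuerySketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("model-metadata.rdf");   // ISO 19115-style metadata serialised as RDF/XML (placeholder file)

        String sparql =
            "PREFIX hydro: <http://example.org/hydro#>\n" +   // placeholder namespace
            "SELECT ?run ?title WHERE { ?run hydro:title ?title . }";

        try (QueryExecution qexec = QueryExecutionFactory.create(sparql, model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("run") + "  " + row.get("title"));
            }
        }
    }
}
```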
Data Rights Valuation in Software Acquisitions
2012-09-01
Practices (A. Krattiger et al.). MIHR (Oxford, UK), PIPRA (Davis, USA), Oswaldo Cruz Foundation (Fiocruz, Rio de Janeiro, Brazil), and bioDevelopments...are generally conveyed with software deliverables (e.g., government purpose rights may be conveyed instead of only restricted rights to software...needed to purchase C-130J spare parts through competitive procurements. When the prime contractor declined to provide a technical data rights package
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true How can the Tribe spend funds? 183.8 Section 183.8 Indians... CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income distributed...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
75 FR 20608 - Notice of Re-Designation of the Service Delivery Area for the Cowlitz Indian Tribe
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
..., Louisiana Grand Parish, LA,\\22\\ LaSalle Parish, LA, Rapides Parish, LA. Jicarilla Apache Nation, New Mexico... Mexico. NM. Miccosukee Tribe of Indians of Florida. Broward, FL, Collier, FL, Miami- Dade, FL, Hendry, FL.... Narragansett Indian Tribe of Rhode Washington, RI.\\32\\ Island. Navajo Nation, Arizona, New Mexico and Apache...
Nutrition Survey of White Mountain Apache Preschool Children.
ERIC Educational Resources Information Center
Owen, George M.; And Others
As part of a national study of the nutrition of preschool children, data were collected on 201 Apache children, 1 to 6 years of age, living on an Indian reservation in Arizona. This report reviews procedures and clinical findings, and gives an analysis of growth data including skeletal maturation, nutrient intakes and clinical biochemical data. In…
An assessment of the spatial extent and condition of grasslands in the Apache Highlands ecoregion
Carolyn A. F. Enquist; David F. Gori
2005-01-01
Grasslands in the Apache Highlands ecoregion have experienced dramatic changes. To assess and identify remaining native grasslands for conservation planning and management, we used a combination of expert consultation and field verification. Over two-thirds of native grasslands have experienced shrub encroachment. More than 30% of these may be restorable with...
Publications - GMC 397 | Alaska Division of Geological & Geophysical
Apache Corp., Alaska Division of Oil and Gas, and Weatherford Laboratories. Publication Date: Nov 2011. Citation: Apache Corp., Alaska Division of Oil and Gas, and Weatherford Laboratories, 2011, Porosity and ... Files: gmc397.pdf (2.8 M), gmc397.zip (24.2 M). Keywords: Cook Inlet Basin; Oil and Gas; Permeability
A Photographic Essay of the San Carlos Apache Indians, Volume 2-Part A.
ERIC Educational Resources Information Center
Soto, Ed; And Others
As part of a series of guides designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay on the San Carlos Apache Reservation founded in the late 1800's and located in Arizona's Gila County. An historical narrative and discussion questions accompany each of the 12 photographs. Photographic…
National Science Foundation 1989 Engineering Senior Design Projects To Aid the Disabled.
ERIC Educational Resources Information Center
Enderle, John D., Ed.
Through the Bioengineering and Research to Aid the Disabled program of the National Science Foundation, design projects were awarded competitively to 16 universities. Senior engineering students at each of the universities constructed custom devices and software for disabled individuals. This compendium contains a description of each project in…
Reducing Our Ignorance: Finding Answers to Certain Epistemic Questions for Software Systems
NASA Technical Reports Server (NTRS)
Holloway, C. Michael; Johnson, Christopher W.
2011-01-01
In previous papers, we asserted that software system safety is primarily concerned with epistemic questions, that is, questions concerning knowledge and the degree of confidence that can be placed in that knowledge. We also enumerated a set of 21 foundational epistemic questions, discussed some of the difficulties that exist in answering these questions adequately today, and speculated briefly on possible research that may provide improved confidence in the sufficiency of answers in the future. This paper focuses on three of the foundational questions. For each of these questions, current answers are discussed and potential research is proposed to help increase the justifiable level of confidence.
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology
Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen
2013-01-01
Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
Ruffier, Magali; Kähäri, Andreas; Komorowska, Monika; Keenan, Stephen; Laird, Matthew; Longden, Ian; Proctor, Glenn; Searle, Steve; Staines, Daniel; Taylor, Kieron; Vullo, Alessandro; Yates, Andrew; Zerbino, Daniel; Flicek, Paul
2017-01-01
The Ensembl software resources are a stable infrastructure to store, access and manipulate genome assemblies and their functional annotations. The Ensembl 'Core' database and Application Programming Interface (API) was our first major piece of software infrastructure and remains at the centre of all of our genome resources. Since its initial design more than fifteen years ago, the number of publicly available genomic, transcriptomic and proteomic datasets has grown enormously, accelerated by continuous advances in DNA-sequencing technology. Initially intended to provide annotation for the reference human genome, we have extended our framework to support the genomes of all species as well as richer assembly models. Cross-referenced links to other informatics resources facilitate searching our database with a variety of popular identifiers such as UniProt and RefSeq. Our comprehensive and robust framework storing a large diversity of genome annotations in one location serves as a platform for other groups to generate and maintain their own tailored annotation. We welcome reuse and contributions: our databases and APIs are publicly available, all of our source code is released with a permissive Apache v2.0 licence at http://github.com/Ensembl and we have an active developer mailing list ( http://www.ensembl.org/info/about/contact/index.html ). http://www.ensembl.org. © The Author(s) 2017. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Ramirez, P.; Mattmann, C. A.; Painter, T. H.; Seidel, F. C.; Trangsrud, A.; Hart, A. F.; Goodale, C. E.; Boardman, J. W.; Heneghan, C.; Verma, R.; Khudikyan, S.; Boustani, M.; Zimdars, P. A.; Horn, J.; Neely, S.
2013-12-01
The JPL Airborne Snow Observatory (ASO) must process 100s of GB of raw data into 100s of terabytes of derived data at 24 hour Near Real Time (NRT) latency in a geographically distributed, mobile, compute- and data-intensive processing setting. ASO provides meaningful information to water resource managers in the Western US, letting them know how much water to maintain or release, and what the prospects for the current snow season are in the Sierra Nevada. Providing decision support products processed from airborne data in a 24 hour timeframe is an emergent field and required the team to develop a novel solution, as this processing is typically done over months. We've constructed a system that combines Apache OODT, Apache Tika, and the Interactive Data Language (IDL)/ENVI programming environment to rapidly and unobtrusively generate, distribute and archive ASO data as soon as the plane lands near Mammoth Lakes, CA. Our system is flexible, underwent several redeployments and reconfigurations, and delivered this critical information to stakeholders during the recent "Snow On" campaign (March 2013 - June 2013). This talk will take you through a day in the life of the compute team, from data acquisition and delivery to processing and dissemination. Within this context, we will discuss the architecture of ASO, the open source software we used, the data we stored, and how it was delivered to its users. Moreover, we will discuss the logistics, system engineering, and staffing that went into the development, deployment, and operation of the mobile compute system.
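As a flavor of the metadata-extraction step such a pipeline relies on, the sketch below uses Apache Tika to detect a newly landed file's type and list whatever metadata its parser exposes; the file path is a placeholder and this is not the ASO code.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

/**
 * Minimal sketch (not the ASO pipeline): use Apache Tika to detect the type of a
 * newly landed data file and pull out the metadata its parser exposes, the kind
 * of step a crawler/ingest chain would run on arrival. The path is a placeholder.
 */
public class IngestSketch {
    public static void main(String[] args) throws Exception {
        File flightFile = new File("/data/aso/incoming/flight_20130601.h5"); // placeholder path

        AutoDetectParser parser = new AutoDetectParser();
        Metadata metadata = new Metadata();
        try (InputStream in = new FileInputStream(flightFile)) {
            // -1 disables the write limit on extracted text
            parser.parse(in, new BodyContentHandler(-1), metadata, new ParseContext());
        }

        System.out.println("Detected type: " + metadata.get(Metadata.CONTENT_TYPE));
        for (String name : metadata.names()) {
            System.out.println(name + " = " + metadata.get(name));
        }
    }
}
```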
PathVisio 3: an extendable pathway analysis toolbox.
Kutmon, Martina; van Iersel, Martijn P; Bohler, Anwesha; Kelder, Thomas; Nunes, Nuno; Pico, Alexander R; Evelo, Chris T
2015-02-01
PathVisio is a commonly used pathway editor, visualization and analysis software. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper was cited more than 170 times and PathVisio was used in many different biological studies. As an online editor PathVisio is also integrated in the community curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a new powerful extension systems that allows other developers to contribute additional functionality in form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 has been downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.
Teacher's Guide to SERAPHIM Software II. Chemical Principles.
ERIC Educational Resources Information Center
Bogner, Donna J.
Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the second in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemical Principles." Program suggestions are arranged in the…
Using Interactive Software to Teach Foundational Mathematical Skills
ERIC Educational Resources Information Center
Lysenko, Larysa; Rosenfield, Steven; Dedic, Helena; Savard, Annie; Idan, Einat; Abrami, Philip C.; Wade, C. Anne; Naffi, Nadia
2016-01-01
The pilot research presented here explores the classroom use of Emerging Literacy in Mathematics (ELM) software, a research-based bilingual interactive multimedia instructional tool, and its potential to develop emerging numeracy skills. At the time of the study, a central theme of early mathematics curricula, "Number Concept," was fully…
Investigating Team Cohesion in COCOMO II.2000
ERIC Educational Resources Information Center
Snowdeal-Carden, Betty A.
2013-01-01
Software engineering is team oriented and intensely complex, relying on human collaboration and creativity more than any other engineering discipline. Poor software estimation is a problem that within the United States costs over a billion dollars per year. Effective measurement of team cohesion is foundationally important to gain accurate…
Discovering and Mitigating Software Vulnerabilities through Large-Scale Collaboration
ERIC Educational Resources Information Center
Zhao, Mingyi
2016-01-01
In today's rapidly digitizing society, people place their trust in a wide range of digital services and systems that deliver latest news, process financial transactions, store sensitive information, etc. However, this trust does not have a solid foundation, because software code that supports this digital world has security vulnerabilities. These…
2008-06-01
agenda are summarized. x | CMU/SEI-2008-SR-011 SOFTWARE ENGINEERING INSTITUTE | 1 1 Introduction Service -oriented architecture (SOA... service -provision software systems. In this po- sition paper, we investigate an initial classification of challenge areas related to service orientation...decade we have witnessed a significant growth of software applications that are de- livered in the form of services utilizing a network infrastructure
The Mescalero Apaches. The Civilization of the American Indian Series.
ERIC Educational Resources Information Center
Sonnichsen, C. L.
The history of the Eastern Apache tribe called the Mescaleros is one of hardship and oppression alternating with wars of revenge. They were friendly to the Spaniards until victimized by them. They were also friendly to the white man until they were betrayed again. For three hundred years they fought the Spaniards and Mexicans. For forty more they…
25 CFR 183.9 - Can the Tribe request the principal of the Lease Fund?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Can the Tribe request the principal of the Lease Fund... AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Lease Fund Disposition Use of Principal and Income § 183.9 Can the Tribe request the...
A Photographic Essay of Apache Clothing, War Charms, and Weapons, Volume 2-Part D.
ERIC Educational Resources Information Center
Thompson, Doris; Jacobs, Ben
As part of a series of guides designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay on Apache clothing, war charms, and weaponry. A brief historical introduction is followed by 21 question suggestions for classroom use. Each of the 12 photographic topics is accompanied by a descriptive…
Jeffrey F. Kelly; Deborah M. Finch
1999-01-01
We compared diversity, abundance and energetic condition of migrant landbirds captured in four different vegetation types in the Bosque del Apache National Wildlife Refuge. We found lower species diversity among migrants caught in exotic saltcedar vegetation than in native willow or cottonwood. In general, migrants were most abundant in agricultural edge and least...
The Apache Campaigns. Values in Conflict
1985-06-01
cultural aspects as land use, property ownership, criminal justice, religious faith, and family and group loyalty differed sharply. Conceptual...and emphasized the primary importance of family and group loyalties. Initially, the Apache and Frontier Army co-habited the Southwest peacefully. Then...guidance during my research and writing this year. For intellectual stimulation and timely encouragement, I particularly thank my Committee Chairman
Fallugia paradoxa (D. Don) Endl. ex Torr.: Apache-plume
Susan E. Meyer
2008-01-01
The genus Fallugia contains a single species - Apache-plume, F. paradoxa (D. Don) Endl. ex Torr. - found throughout the southwestern United States and northern Mexico. It occurs mostly on coarse soils on benches and especially along washes and canyons in both warm and cool desert shrub communities and up into the pinyon-juniper vegetation type. It is a sprawling, much-...
Restoration of Soldier Spring: an isolated habitat for native Apache trout
Jonathan W. Long; B. Mae Burnette; Alvin L. Medina; Joshua L. Parker
2004-01-01
Degradation of streams is a threat to the recovery of the Apache trout, an endemic fish of the White Mountains of Arizona. Historic efforts to improve trout habitat in the Southwest relied heavily on placement of in-stream log structures. However, the effects of structural interventions on trout habitat and populations have not been adequately evaluated. We treated an...
Solar Feasibility Study May 2013 - San Carlos Apache Tribe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapp, Jim; Duncan, Ken; Albert, Steve
2013-05-01
The San Carlos Apache Tribe (Tribe) in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe’s 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.
Ecoupling server: A tool to compute and analyze electronic couplings.
Cabeza de Vaca, Israel; Acebes, Sandra; Guallar, Victor
2016-07-05
Electron transfer processes are often studied through the evaluation and analysis of the electronic coupling (EC). Since most standard QM codes do not readily provide such a measure, additional, user-friendly tools to compute and analyze electronic coupling from external wave functions are of high value. The first server to provide a friendly interface for the evaluation and analysis of electronic couplings under two different approximations (FDC and GMH) is presented in this communication. The Ecoupling server accepts inputs from common QM and QM/MM software and provides useful plots to understand and analyze the results easily. The web server has been implemented in CGI-python using Apache and is accessible at http://ecouplingserver.bsc.es. The Ecoupling server is free and open to all users without login. © 2016 Wiley Periodicals, Inc.
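For orientation, here is a small worked sketch of the two-state Generalized Mulliken-Hush (GMH) estimate that the abstract's GMH option refers to; the energies and dipole moments are made-up illustrative numbers and this is not the server's code.

```java
/**
 * Worked sketch (illustrative numbers, not the Ecoupling server code): the
 * two-state Generalized Mulliken-Hush (GMH) estimate of the electronic coupling,
 *   H_DA = |mu12| * dE12 / sqrt( dMu12^2 + 4 * mu12^2 ),
 * where dE12 is the adiabatic energy gap, mu12 the transition dipole along the
 * charge-transfer direction and dMu12 the difference of adiabatic state dipoles.
 */
public class GmhCouplingSketch {
    static double gmhCoupling(double dE12, double mu12, double dMu12) {
        return Math.abs(mu12) * dE12 / Math.sqrt(dMu12 * dMu12 + 4.0 * mu12 * mu12);
    }

    public static void main(String[] args) {
        double dE12  = 0.50;  // eV, adiabatic energy gap (assumed)
        double mu12  = 1.2;   // debye, transition dipole moment (assumed)
        double dMu12 = 8.0;   // debye, dipole moment difference (assumed)
        System.out.printf("H_DA = %.3f eV%n", gmhCoupling(dE12, mu12, dMu12));
    }
}
```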
Change Detection of Mobile LIDAR Data Using Cloud Computing
NASA Astrophysics Data System (ADS)
Liu, Kun; Boehm, Jan; Alis, Christian
2016-06-01
Change detection has long been a challenging problem, although a lot of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend a voxel grid and Apache Spark together to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which provides fault tolerance and in-memory caching. These features significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
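A minimal sketch of how voxel-grid binning and occupancy comparison might look in Spark's Java API; the "x y z" text input format, the voxel size, the file paths and the simple appeared/disappeared criterion are assumptions, not the authors' implementation.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

/**
 * Minimal sketch (assumptions, not the authors' code): bin two point-cloud epochs
 * into a common voxel grid with Spark and report voxels whose occupancy changed.
 * Input files are assumed to be text with "x y z" per line; the voxel size is arbitrary.
 */
public class VoxelChangeSketch {

    /** Map every "x y z" line to its voxel index and count points per voxel. */
    static JavaPairRDD<String, Integer> voxelCounts(JavaSparkContext sc, String path, double voxel) {
        return sc.textFile(path)
                 .mapToPair(line -> {
                     String[] p = line.trim().split("\\s+");
                     long ix = (long) Math.floor(Double.parseDouble(p[0]) / voxel);
                     long iy = (long) Math.floor(Double.parseDouble(p[1]) / voxel);
                     long iz = (long) Math.floor(Double.parseDouble(p[2]) / voxel);
                     return new Tuple2<>(ix + "_" + iy + "_" + iz, 1);
                 })
                 .reduceByKey(Integer::sum);
    }

    public static void main(String[] args) {
        // local[*] is for local testing only; drop it when submitting to a cluster (assumed setup)
        SparkConf conf = new SparkConf().setAppName("voxel-change-sketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            double voxel = 0.5; // metres, assumed
            JavaPairRDD<String, Integer> before = voxelCounts(sc, "epoch1.xyz", voxel); // placeholder paths
            JavaPairRDD<String, Integer> after  = voxelCounts(sc, "epoch2.xyz", voxel);

            // Voxels occupied in only one epoch are reported as changed.
            long appeared    = after.subtractByKey(before).count();
            long disappeared = before.subtractByKey(after).count();
            System.out.println("voxels appeared: " + appeared + ", disappeared: " + disappeared);
        }
    }
}
```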
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.
TimeBench: a data model and software library for visual analytics of time-oriented data.
Rind, Alexander; Lammarsch, Tim; Aigner, Wolfgang; Alsallakh, Bilal; Miksch, Silvia
2013-12-01
Time-oriented data play an essential role in many Visual Analytics scenarios such as extracting medical insights from collections of electronic health records or identifying emerging problems and vulnerabilities in network traffic. However, many software libraries for Visual Analytics treat time as a flat numerical data type and insufficiently tackle the complexity of the time domain such as calendar granularities and intervals. Therefore, developers of advanced Visual Analytics designs need to implement temporal foundations in their application code over and over again. We present TimeBench, a software library that provides foundational data structures and algorithms for time-oriented data in Visual Analytics. Its expressiveness and developer accessibility have been evaluated through application examples demonstrating a variety of challenges with time-oriented data and long-term developer studies conducted in the scope of research and student projects.
Operations analysis (study 2.1): Shuttle upper stage software requirements
NASA Technical Reports Server (NTRS)
Wolfe, R. R.
1974-01-01
An investigation of software costs related to space shuttle upper stage operations with emphasis on the additional costs attributable to space servicing was conducted. The questions and problem areas include the following: (1) the key parameters involved with software costs; (2) historical data for extrapolation of future costs; (3) elements of the basic software development effort that are applicable to servicing functions; (4) effect of multiple servicing on complexity of the operation; and (5) are recurring software costs significant. The results address these questions and provide a foundation for estimating software costs based on the costs of similar programs and a series of empirical factors.
FLEX: A Modular Software Architecture for Flight License Exam
NASA Astrophysics Data System (ADS)
Arsan, Taner; Saka, Hamit Emre; Sahin, Ceyhun
This paper is about the design and implementation of a web-based examination system called FLEX (Flight License Exam Software). We designed and implemented a flexible and modular software architecture. The implemented system provides basic capabilities such as adding questions to the system, building exams from these questions and allowing students to take the exams. There are three different types of users with different authorizations: system administrator, operators and students. The system administrator operates and maintains the system and also audits system integrity; the administrator cannot change exam results and cannot take an exam. The operator module includes instructors. Operators have privileges such as preparing exams, entering questions and changing existing questions. Students can log on to the system and access exams via a designated URL. Another characteristic of the system is that operators and the system administrator are not able to delete questions, for security reasons. Exam questions are stored in the database under their topics and lectures, so operators and the system administrator can easily choose questions. Taken together, FLEX allows many students to take exams at the same time under safe, reliable and user-friendly conditions, and it is a reliable examination system for authorized aviation administration companies. The system is developed on the LAMP web platform (Linux, Apache web server, MySQL and the object-oriented scripting language PHP), and page structures are built with a Content Management System (CMS).
IPeak: An open source tool to combine results from multiple MS/MS search engines.
Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun
2015-09-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline designed to combine the Percolator post-processing algorithm with a multi-search strategy to enhance the sensitivity of peptide identification without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, is implemented in Java, and runs on all three major operating system platforms: Windows, Linux/Unix, and OS X. IPeak is designed to use the mzIdentML standard from the Proteomics Standards Initiative (PSI) as both input and output, and it has been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline as well as modules for calling Percolator on individual search engine result files. This integration enables IPeak (and Percolator) to be used in conjunction with any software package implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
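A schematic illustration of the general multi-search idea: merge peptide-spectrum matches from several engines keyed by spectrum and record how many engines agree. This is a simplified Python sketch, not IPeak's actual Java/Percolator pipeline; the engine names and scores are invented.

```python
from collections import defaultdict

# Hypothetical per-engine results: spectrum id -> (peptide, score).
engine_results = {
    "EngineA": {"spec_001": ("PEPTIDEK", 0.92), "spec_002": ("LLSVAYK", 0.40)},
    "EngineB": {"spec_001": ("PEPTIDEK", 0.88), "spec_002": ("GGNFSGR", 0.55)},
    "EngineC": {"spec_001": ("PEPTIDEK", 0.95)},
}

def combine(results):
    """Group identifications per spectrum; report each peptide, how many
    engines agree on it, and the best score seen. (Schematic only -- IPeak's
    rescoring with Percolator is far more sophisticated.)"""
    merged = defaultdict(lambda: defaultdict(lambda: {"engines": 0, "best": 0.0}))
    for engine, psms in results.items():
        for spectrum, (peptide, score) in psms.items():
            entry = merged[spectrum][peptide]
            entry["engines"] += 1
            entry["best"] = max(entry["best"], score)
    return merged

for spectrum, peptides in combine(engine_results).items():
    for peptide, info in peptides.items():
        print(spectrum, peptide, info["engines"], "engines, best score", info["best"])
```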
25 CFR 183.10 - How can the Tribe use income from the Lease Fund?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false How can the Tribe use income from the Lease Fund? 183.10... DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Lease Fund Disposition Use of Principal and Income § 183.10 How can the Tribe use income from the Lease Fund...
ERIC Educational Resources Information Center
Velarde, Hubert
The statement by the President of the Jicarilla Apache Tribe emphasizes reservation problems that need to be examined. Presented at a 1972 Civil Rights Commission hearing on Indian Concerns, Velarde's statement listed employment, education, the administration of justice, water rights, and medical services as areas for investigation. (KM)
Teacher's Guide to SERAPHIM Software III. Modern Chemistry.
ERIC Educational Resources Information Center
Bogner, Donna J.
Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the third in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Modern Chemistry." Program suggestions are arranged in the same…
Teacher's Guide to SERAPHIM Software IV Chemistry: A Modern Course.
ERIC Educational Resources Information Center
Bogner, Donna J.
Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the fourth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: A Modern Course." Program suggestions are arranged…
Teacher's Guide to SERAPHIM Software VI. Chemistry: The Study of Matter.
ERIC Educational Resources Information Center
Bogner, Donna J.
Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the sixth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: The Study of Matter." Program suggestions are…
Teacher's Guide to SERAPHIM Software V. Chemistry: The Central Science.
ERIC Educational Resources Information Center
Bogner, Donna J.
Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the fifth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: The Central Science." Program suggestions are…
Using Business Analysis Software in a Business Intelligence Course
ERIC Educational Resources Information Center
Elizondo, Juan; Parzinger, Monica J.; Welch, Orion J.
2011-01-01
This paper presents an example of a project used in an undergraduate business intelligence class which integrates concepts from statistics, marketing, and information systems disciplines. SAS Enterprise Miner software is used as the foundation for predictive analysis and data mining. The course culminates with a competition and the project is used…
Negotiating Software Agreements: Avoid Contractual Mishaps and Get the Biggest Bang for Your Buck
ERIC Educational Resources Information Center
Riley, Sheila
2006-01-01
Purchasing software license and service agreements can be daunting for any district. Greg Lindner, director of information and technology services for the Elk Grove Unified School District in California, and Steve Midgley, program manager at the Stupski Foundation, provided several tips on contract negotiation. This article presents the tips…
Developing ICALL Tools Using GATE
ERIC Educational Resources Information Center
Wood, Peter
2008-01-01
This article discusses the use of the General Architecture for Text Engineering (GATE) as a tool for the development of ICALL and NLP applications. It outlines a paradigm shift in software development, which is mainly influenced by projects such as the Free Software Foundation. It looks at standards that have been proposed to facilitate the…
Ellingson, A.R.; Andersen, D.C.
2002-01-01
1. The hypothesis that the habitat-scale spatial distribution of the Apache cicada Diceroprocta apache Davis is unaffected by the presence of the invasive exotic saltcedar Tamarix ramosissima was tested using data from 205 1-m2 quadrats placed within the flood-plain of the Bill Williams River, Arizona, U.S.A. Spatial dependencies within and between cicada density and habitat variables were estimated using Moran's I and its bivariate analogue to discern patterns and associations at spatial scales from 1 to 30 m. 2. Apache cicadas were spatially aggregated in high-density clusters averaging 3 m in diameter. A positive association between cicada density, estimated by exuvial density, and the per cent canopy cover of a native tree, Goodding's willow Salix gooddingii, was detected in a non-spatial correlation analysis. No non-spatial association between cicada density and saltcedar canopy cover was detected. 3. Tests for spatial cross-correlation using the bivariate IYZ indicated the presence of a broad-scale negative association between cicada density and saltcedar canopy cover. This result suggests that large continuous stands of saltcedar are associated with reduced cicada density. In contrast, positive associations detected at spatial scales larger than individual quadrats suggested a spill-over of high cicada density from areas featuring Goodding's willow canopy into surrounding saltcedar monoculture. 4. Taken together and considered in light of the Apache cicada's polyphagous habits, the observed spatial patterns suggest that broad-scale factors such as canopy heterogeneity affect cicada habitat use more than host plant selection. This has implications for management of lower Colorado River riparian woodlands to promote cicada presence and density through maintenance or creation of stands of native trees as well as manipulation of the characteristically dense and homogeneous saltcedar canopies.
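The global Moran's I statistic used in the study is straightforward to compute. A minimal numpy sketch follows, using a simple binary adjacency matrix and invented densities; the study itself used distance classes from 1 to 30 m and a bivariate analogue.

```python
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I: I = (n / S0) * (z' W z) / (z' z), z = x - mean(x)."""
    z = values - values.mean()
    s0 = weights.sum()
    n = len(values)
    return (n / s0) * (z @ weights @ z) / (z @ z)

# Toy example: cicada exuvial densities in 6 quadrats along a transect,
# with rook-style adjacency (neighbouring quadrats weighted 1).
density = np.array([8.0, 9.0, 7.0, 1.0, 0.0, 2.0])
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0

print(round(morans_i(density, W), 3))  # positive value -> clustered densities
```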
Khwannimit, Bodin
2008-01-01
The Logistic Organ Dysfunction (LOD) score is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. The data were collected prospectively on consecutive ICU admissions over a 24-month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC). Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic. The overall fit of the models was evaluated by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality in the ICU was 20.9% and in the hospital was 27.9%. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination. The AUROC for the LOD and APACHE II were 0.860 [95% confidence interval (CI) = 0.838-0.882] and 0.898 (95% CI = 0.879-0.917), respectively. The LOD score had perfect calibration with the Hosmer-Lemeshow goodness-of-fit H chi-square = 10 (p = 0.44). However, the APACHE II had poor calibration with the Hosmer-Lemeshow goodness-of-fit H chi-square = 75.69 (p < 0.001). The Brier scores, indicating the overall fit of the models, were 0.123 (95% CI = 0.107-0.141) and 0.114 (95% CI = 0.098-0.132) for the LOD and APACHE II, respectively. Thus, the LOD score was found to be accurate for predicting hospital mortality in general critically ill patients in Thailand.
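The two evaluation steps reported here, discrimination by AUROC and calibration by the Hosmer-Lemeshow H statistic, can be sketched as follows; the data are synthetic, and scikit-learn and SciPy are assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import chi2

rng = np.random.default_rng(0)
p_pred = rng.uniform(0.01, 0.9, 1000)   # synthetic predicted mortality probabilities
died   = rng.binomial(1, p_pred)        # synthetic observed outcomes

print("AUROC:", round(roc_auc_score(died, p_pred), 3))

def hosmer_lemeshow(y, p, groups=10):
    """H statistic: sum over risk deciles of (O - E)^2 / (E * (1 - E/n))."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, groups - 2)

H, p_value = hosmer_lemeshow(died, p_pred)
print("Hosmer-Lemeshow chi-square =", round(H, 2), "p =", round(p_value, 3))
```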
VijayGanapathy, Sundaramoorthy; Karthikeyan, VIlvapathy Senguttuvan; Sreenivas, Jayaram; Mallya, Ashwin; Keshavamurthy, Ramaiah
2017-11-01
Urosepsis implies clinically evident severe infection of the urinary tract with features of the systemic inflammatory response syndrome (SIRS). We validated the role of a single Acute Physiology and Chronic Health Evaluation II (APACHE II) score, assessed at 24 hours after admission, in predicting mortality in urosepsis. A prospective observational study was done in 178 patients admitted with urosepsis to the Department of Urology of a tertiary care institute from January 2015 to August 2016. Patients >18 years diagnosed with urosepsis using SIRS criteria, with a positive urine or blood culture for bacteria, were included. At 24 hours after admission to the intensive care unit, the APACHE II score was calculated using 12 physiological variables, age and chronic health. The mean±standard deviation (SD) APACHE II score was 26.03±7.03; it was 24.31±6.48 in survivors and 32.39±5.09 in those who expired (p<0.001). Among patients undergoing surgery, the mean±SD score was higher in those who expired (30.74±4.85) than in survivors (24.30±6.54) (p<0.001). Receiver operating characteristic (ROC) analysis revealed an area under the curve (AUC) of 0.825, with a cutoff of 25.5 being 94.7% sensitive and 56.4% specific for predicting mortality. The mean±SD score in those undergoing surgery was 25.22±6.70, lower than in those who did not undergo surgery (28.44±7.49) (p=0.007). ROC analysis revealed an AUC of 0.760, with a cutoff of 25.5 being 94.7% sensitive and 45.6% specific for predicting mortality even after surgery. A single APACHE II score assessed at 24 hours after admission was able to predict morbidity, mortality, the need for surgical intervention, length of hospitalization, treatment success and outcome in urosepsis patients.
Kaymak, Cetin; Sencan, Irfan; Izdes, Seval; Sari, Aydin; Yagmurdur, Hatice; Karadas, Derya; Oztuna, Derya
2018-04-01
The aim of this study was to evaluate intensive care unit (ICU) performance using risk-adjusted ICU mortality rates nationally, assessing patients who died or had been discharged from the ICU. For this purpose, this study analyzed the Acute Physiology and Chronic Health Evaluation (APACHE) II and Sequential Organ Failure Assessment (SOFA) databases, containing detailed clinical and physiological information and mortality of mixed critically ill patients in a medical ICU at secondary and tertiary referral ICUs in Turkey. A total of 690 adult intensive care units in Turkey were included in the study. Among 690 ICUs evaluated, 39.7% were secondary and 60.3% were tertiary ICUs. A total of 4188 patients were enrolled in this study. Intensive care units of ministry, university, and private hospitals were evaluated all over Turkey. During the study period, clinical data that were collected concurrently for each patient contained demographic details and the diagnostic category leading to ICU admission. APACHE II and SOFA scores following ICU admission were calculated and recorded. Patients were followed up for outcome data until death or ICU discharge. The mean age of patients was 68.8±19 years and 54% of them were male. The mean APACHE II score was 20±8.7. The ICUs' mortality rate was 46.3%, and mean predicted mortality was 37.2% for APACHE II. The standardized mortality ratio was 1.28 (95% confidence interval: 1.21-1.31). There was a wide difference in outcome for patients admitted to different ICUs and severity of illness using risk adjustment methods. The high mortality rate in patients could be related to comorbid diseases, high mechanical ventilation rates and older ages.
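The standardized mortality ratio behind the reported 1.28 is observed deaths divided by the APACHE II-predicted expected deaths. A small numpy sketch with synthetic inputs and an approximate Poisson-style confidence interval:

```python
import numpy as np

rng = np.random.default_rng(1)
predicted_risk = rng.uniform(0.05, 0.8, 4188)   # synthetic APACHE II predicted mortality
observed_death = rng.binomial(1, np.clip(predicted_risk * 1.25, 0, 1))

observed = observed_death.sum()
expected = predicted_risk.sum()
smr = observed / expected

# Approximate 95% CI treating observed deaths as Poisson-distributed.
lo = (observed - 1.96 * np.sqrt(observed)) / expected
hi = (observed + 1.96 * np.sqrt(observed)) / expected
print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```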
Prognostic scores in cirrhotic patients admitted to a gastroenterology intensive care unit.
Freire, Paulo; Romãozinho, José M; Amaro, Pedro; Ferreira, Manuela; Sofia, Carlos
2011-04-01
Prognostic scores have been validated in cirrhotic patients admitted to general intensive care units, but no assessment of these scores has been performed in cirrhotics admitted to specialized Gastroenterology Intensive Care Units (GICUs). The aim was to assess the prognostic accuracy of the Acute Physiology and Chronic Health Evaluation (APACHE) II, Simplified Acute Physiology Score (SAPS) II, Sequential Organ Failure Assessment (SOFA), Model for End-stage Liver Disease (MELD) and Child-Pugh-Turcotte (CPT) scores in predicting GICU mortality in cirrhotic patients. The study involved 124 consecutive cirrhotic admissions to a GICU. Clinical data, prognostic scores and mortality were recorded. Discrimination was evaluated with the area under the receiver operating characteristic curve (AUC). Calibration was assessed with the Hosmer-Lemeshow goodness-of-fit test. GICU mortality was 9.7%. Mean APACHE II, SAPS II, SOFA, MELD and CPT scores for survivors (13.6, 25.4, 3.5, 18.0 and 8.6, respectively) were significantly lower than those of non-survivors (22.0, 47.5, 10.1, 30.7 and 12.5, respectively) (p < 0.001). All the prognostic systems showed good discrimination, with AUC = 0.860, 0.911, 0.868, 0.897 and 0.914 for APACHE II, SAPS II, SOFA, MELD and CPT, respectively. Similarly, the APACHE II, SAPS II, SOFA, MELD and CPT scores achieved good calibration, with p = 0.146, 0.120, 0.686, 0.267 and 0.120, respectively. The overall correctness of prediction was 81.9%, 86.1%, 93.3%, 90.7% and 87.7% for the APACHE II, SAPS II, SOFA, MELD and CPT scores, respectively. In cirrhotics admitted to a GICU, all the tested scores have good prognostic accuracy, with SOFA and MELD showing the greatest overall correctness of prediction.
PT-SAFE: a software tool for development and annunciation of medical audible alarms.
Bennett, Christopher L; McNeer, Richard R
2012-03-01
Recent reports by The Joint Commission as well as the Anesthesia Patient Safety Foundation have indicated that medical audible alarm effectiveness needs to be improved. Several recent studies have explored various approaches to improving the audible alarms, motivating the authors to develop real-time software capable of comparing such alarms. We sought to devise software that would allow for the development of a variety of audible alarm designs that could also integrate into existing operating room equipment configurations. The software is meant to be used as a tool for alarm researchers to quickly evaluate novel alarm designs. A software tool was developed for the purpose of creating and annunciating audible alarms. The alarms consisted of annunciators that were mapped to vital sign data received from a patient monitor. An object-oriented approach to software design was used to create a tool that is flexible and modular at run-time, can annunciate wave-files from disk, and can be programmed with MATLAB by the user to create custom alarm algorithms. The software was tested in a simulated operating room to measure technical performance and to validate the time-to-annunciation against existing equipment alarms. The software tool showed efficacy in a simulated operating room environment by providing alarm annunciation in response to physiologic and ventilator signals generated by a human patient simulator, on average 6.2 seconds faster than existing equipment alarms. Performance analysis showed that the software was capable of supporting up to 15 audible alarms on a mid-grade laptop computer before audio dropouts occurred. These results suggest that this software tool provides a foundation for rapidly staging multiple audible alarm sets from the laboratory to a simulation environment for the purpose of evaluating novel alarm designs, thus producing valuable findings for medical audible alarm standardization.
[Book review] Return of the Whooping Crane
Ellis, D.H.; Smith, D.G.
1990-01-01
Fewer than 40 years ago, Life magazine ran an article decrying the plight of Whooping Cranes (Grus americana) on their wintering grounds at Aransas National Wildlife Refuge (Aransas) along the Gulf Coast. The small flock of approximately 20 birds that summered at Wood Buffalo National Park (Wood Buffalo) in Canada and wintered on the Texas coast at Aransas comprised the entire wild population of the species, a population that at the time seemed to be drifting inexorably toward extinction. Today, the Aransas/Wood Buffalo flock numbers more than 140 birds, there are more than 30 birds in captivity at the Patuxent Wildlife Research Center (Patuxent), and another 20-plus birds at the International Crane Foundation in Baraboo, Wisconsin. There are also a dozen wild birds in an experimental flock (termed the Rocky Mountain flock by Doughty) that winters at Bosque Del Apache National Wildlife Refuge (NWR) in New Mexico and summers in the mountain valleys centered on Grays Lake NWR in Idaho.
Cold Gas in Quenched Dwarf Galaxies using HI-MaNGA
NASA Astrophysics Data System (ADS)
Bonilla, Alaina
2017-01-01
MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) is a six-year Sloan Digital Sky Survey fourth generation (SDSS-IV) project that will obtain integral field spectroscopy for a catalogue of 10,000 nearby galaxies. In this study, we explore the properties of the passive dwarf galaxy sample presented in Penny et al. 2016, making use of MaNGA IFU (Integral Field Unit) data to plot gas emission, stellar velocity, and flux maps. In addition, HI-MaNGA, a legacy radio survey of MaNGA, collects single-dish HI data from the GBT (Green Bank Telescope), which we use to study the 21 cm emission lines present in HI detections. Studying the HI content of passive dwarfs will help reveal the processes that are preventing star formation, such as possible AGN feedback. This work was supported by the SDSS Research Experience for Undergraduates program, which is funded by a grant from the Sloan Foundation to the Astrophysical Research Consortium.
Pre-fire treatment effects and post-fire forest dynamics on the Rodeo-Chediski burn area, Arizona
Barbara A. Strom
2005-01-01
The 2002 Rodeo-Chediski fire was the largest wildfire in Arizona history at 189,000 ha (468,000 acres), and exhibited some of the most extreme fire behavior ever seen in the Southwest. Pre-fire fuel reduction treatments of thinning, timber harvesting, and prescribed burning on the White Mountain Apache Tribal lands (WMAT) and thinning on the Apache-Sitgreaves National...
Cloud Computing Trace Characterization and Synthetic Workload Generation
2013-03-01
...measurements [44]. Olio is primarily for learning Web 2.0 technologies, evaluating the three implementations (PHP, Java EE, and RubyOnRails (ROR))... Olio is well documented, but assumes prerequisite knowledge of the setup and operation of Apache web servers and MySQL databases... Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18]. Perhaps...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchill, R. Michael
Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
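A hedged PySpark sketch of the analysis pattern described (k-means over particle distribution function features). The file path and column names are hypothetical, and the original work read XGC1 binary output directly rather than Parquet.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("xgc1-kmeans").getOrCreate()

# Hypothetical table of distribution-function samples: velocity-space
# coordinates plus the value of f at each grid point.
df = spark.read.parquet("xgc1_dist_func.parquet")

features = VectorAssembler(
    inputCols=["v_parallel", "v_perp", "f_value"],  # assumed column names
    outputCol="features",
).transform(df)

model = KMeans(k=8, seed=42, featuresCol="features").fit(features)
clustered = model.transform(features)               # adds a 'prediction' column
clustered.groupBy("prediction").count().show()
```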
25 CFR 183.5 - What documents must the Tribe submit to request money from the Trust Fund?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false What documents must the Tribe submit to request money from the Trust Fund? 183.5 Section 183.5 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER USE AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Dispositio...
76 FR 72969 - Proclaiming Certain Lands as Reservation for the Fort Sill Apache Indian Tribe
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-28
... the Fort Sill Apache Tribe of Indians. FOR FURTHER INFORMATION CONTACT: Ben Burshia, Bureau of Indian... from a tangent which bears N. 89°56'18'' W., having a radius of 789.30 feet, a delta angle of 32... radius of 1096.00 feet, a delta angle of 39°58'50'', a chord which bears S. 77°15'43'' W., 749.36...
Pietraszek-Grzywaczewska, Iwona; Bernas, Szymon; Łojko, Piotr; Piechota, Anna; Piechota, Mariusz
2016-01-01
Scoring systems in critical care patients are essential for predicting of the patient outcome and evaluating the therapy. In this study, we determined the value of the Acute Physiology and Chronic Health Evaluation II (APACHE II), Simplified Acute Physiology Score II (SAPS II), Sequential Organ Failure Assessment (SOFA) and Glasgow Coma Scale (GCS) scoring systems in the prediction of mortality in adult patients admitted to the intensive care unit (ICU) with severe purulent bacterial meningitis. We retrospectively analysed data from 98 adult patients with severe purulent bacterial meningitis who were admitted to the single ICU between March 2006 and September 2015. Univariate logistic regression identified the following risk factors of death in patients with severe purulent bacterial meningitis: APACHE II, SAPS II, SOFA, and GCS scores, and the lengths of ICU stay and hospital stay. The independent risk factors of patient death in multivariate analysis were the SAPS II score, the length of ICU stay and the length of hospital stay. In the prediction of mortality according to the area under the curve, the SAPS II score had the highest accuracy followed by the APACHE II, GCS and SOFA scores. For the prediction of mortality in a patient with severe purulent bacterial meningitis, SAPS II had the highest accuracy.
Afessa, B
2000-04-01
This study's aim was to determine the prognostic factors and to develop a triage system for intensive care unit (ICU) admission of patients with gastrointestinal bleeding (GIB). This prospective, observational study included 411 adults consecutively hospitalized for GIB. Each patient's selected clinical findings and laboratory values at presentation were obtained. The Acute Physiology and Chronic Health Evaluation (APACHE) II scores were calculated from the initial findings in the emergency department. Poor outcome was defined as recurrent GIB, emergency surgery, or death. The role of hepatic cirrhosis, APACHE II score, active GIB, end-organ dysfunction, and hypotension in predicting outcome was evaluated. Chi-square, Student's t, Mann-Whitney U, and logistic regression analysis tests were used for statistical comparisons. Poor outcome developed in 81 (20%) patients; 39 died, 23 underwent emergency surgery, and 47 rebled. End-organ dysfunction, active bleeding, hepatic cirrhosis, and high APACHE II scores were independent predictors of poor outcome with odds ratios of 3.1, 3.1, 2.3, and 1.1, respectively. The ICU admission rate was 37%. High APACHE II score, active bleeding, end-organ dysfunction, and hepatic cirrhosis are independent predictors of poor outcome in patients with GIB and can be used in the triage of these patients for ICU admission.
2009-11-01
interest of scientific and technical information exchange. This work is sponsored by the U.S. Department of Defense. The Software Engineering Institute is a... ...an interesting continuum between how many different requirements a program must satisfy: the more complex and diverse the requirements, the more... Gender differences in approaches to end-user software development have also been reported in debugging feature usage [1] and in end-user web programming
Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile
NASA Astrophysics Data System (ADS)
Hoľko, Michal; Stacho, Jakub
2014-12-01
The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over the pile's length. The numerical analyses were performed using two types of software, Ansys and Plaxis, both based on FEM calculations. The two programs differ in the way they create numerical models, model the interface between the pile and the soil, and in the constitutive material models they use. The analyses were prepared as a parametric study in which the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both programs permit the modelling of pile foundations. Plaxis offers advanced material models as well as modelling of the influence of groundwater and overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test with more than 95% accuracy. In comparison, the load-settlement curve calculated using Ansys provides only an approximate estimate, but the software allows large structural systems to be modelled together with their foundation system.
Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin
2008-11-01
Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high-throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally, MRMer incorporates features that permit quantitative analysis of experiments that include heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
EXP-PAC: providing comparative analysis and storage of next generation gene expression data.
Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe
2012-07-01
Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store these data is becoming a key issue. In response, we have developed EXP-PAC, a web-based software package for the storage, management and analysis of gene expression and sequence data. Unique to this package are SQL-based querying of gene expression data sets, distributed normalization of raw gene expression data, and analysis of gene expression data across experiments and species. The package has been populated with lactation data in the international milk genomic consortium web portal (http://milkgenomics.org/). Source code is also available and can be hosted on a Windows, Linux or Mac Apache server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begoli, Edmon; Dunning, Ted; Charlie, Frasure
We present a service platform for schema-less exploration of data and discovery of patient-related statistics from healthcare data sets. The architecture of this platform is motivated by the need for fast, schema-less, and flexible approaches to SQL-based exploration and discovery of information embedded in common, heterogeneously structured healthcare data sets and supporting components (electronic health records, practice management systems, etc.). The motivating use cases described in the paper are clinical trial candidate discovery and treatment effectiveness analysis. Following the use cases, we discuss the key features and software architecture of the platform, the underlying core components (Apache Parquet, Apache Drill, and the web services server), and the runtime profiles and performance characteristics of the platform. We conclude by showing dramatic speedup with some approaches, and the performance tradeoffs and limitations of others.
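Apache Drill accepts SQL over its REST API (POST /query.json), which is one way a thin client can explore heterogeneous files without a fixed schema. The sketch below assumes a Drill instance on localhost:8047 and a hypothetical Parquet path; it is not the paper's actual service code.

```python
import requests

# Apache Drill accepts SQL via its REST API: POST /query.json.
# The Drill host and the Parquet path below are assumptions for the example.
DRILL_URL = "http://localhost:8047/query.json"

sql = """
    SELECT diagnosis_code, COUNT(*) AS patients
    FROM dfs.`/data/ehr/encounters.parquet`
    GROUP BY diagnosis_code
    ORDER BY patients DESC
    LIMIT 10
"""

resp = requests.post(DRILL_URL, json={"queryType": "SQL", "query": sql}, timeout=60)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row["diagnosis_code"], row["patients"])
```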
Narrowing the scope of failure prediction using targeted fault load injection
NASA Astrophysics Data System (ADS)
Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.
2018-05-01
As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.
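A schematic scikit-learn sketch of the statistical-learning step: train a classifier on labelled monitoring windows collected under injected fault loads and under normal operation. The features and data here are synthetic, not the study's actual fault-injection measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)

# Synthetic monitoring windows: [cpu %, memory %, handle count, queue depth].
normal = rng.normal([30, 45, 900, 5],  [10, 10, 150, 3], size=(500, 4))
faulty = rng.normal([85, 80, 2500, 40], [8, 10, 400, 10], size=(100, 4))
X = np.vstack([normal, faulty])
y = np.r_[np.zeros(len(normal)), np.ones(len(faulty))]   # 1 = pre-failure window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["healthy", "pre-failure"]))
```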
Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki
2009-10-01
Recently, lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge about dietary habits is an essential factor contributing to lifestyle-related diseases. This paper focuses on everyday meals and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed with a Web-based frontend and provides multi-user services and menu information sharing capabilities similar to social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (a scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming.
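A compact sketch of the genetic-algorithm formulation described, with each menu encoded as a 0-1 vector over candidate dishes and fitness penalizing deviation from daily nutrition targets; the dishes, targets and GA parameters are illustrative, not the system's actual data.

```python
import random

random.seed(3)

# (name, kcal, protein g, salt g) -- illustrative candidate dishes.
DISHES = [
    ("rice", 250, 4, 0.0), ("miso soup", 60, 4, 1.8), ("grilled fish", 200, 22, 1.0),
    ("salad", 80, 2, 0.3), ("tofu", 120, 10, 0.1), ("tempura", 450, 12, 1.2),
    ("fruit", 90, 1, 0.0), ("curry", 600, 15, 2.5),
]
TARGET = {"kcal": 700, "protein": 30, "salt": 2.5}

def fitness(menu):
    """Lower is better: weighted deviation from the nutrition targets."""
    kcal = sum(d[1] for d, pick in zip(DISHES, menu) if pick)
    prot = sum(d[2] for d, pick in zip(DISHES, menu) if pick)
    salt = sum(d[3] for d, pick in zip(DISHES, menu) if pick)
    return (abs(kcal - TARGET["kcal"]) / 100 + abs(prot - TARGET["protein"]) / 5
            + max(0.0, salt - TARGET["salt"]) * 10)

def evolve(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in DISHES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DISHES))          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:                  # bit-flip mutation
                i = random.randrange(len(DISHES))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print([d[0] for d, pick in zip(DISHES, best) if pick], "fitness", round(fitness(best), 2))
```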
ReSEARCH: A Requirements Search Engine: Progress Report 2
2008-09-01
and provides a convenient user interface for the search process. Ideally, the web application would be based on Tomcat, a free Java Servlet and JSP... Implementation issues: Lucene Java is an Open Source project, available under the Apache License, which provides an accessible API for the development of... ...from the Apache Lucene website (Lucene-java Wiki, 2008). A search application developed with Lucene consists of the same two major components
ERIC Educational Resources Information Center
Fay, George E., Comp.
The Museum of Anthropology of the University of Northern Colorado (formerly known as Colorado State College) has assembled a large number of Indian tribal charters, constitutions, and by-laws to be reproduced as a series of publications. Included in this volume are the amended charter and constitution of the Jicarilla Apache Tribe, Dulce, New…
25 CFR 183.4 - How can the Tribe use the principal and income from the Trust Fund?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false How can the Tribe use the principal and income from the Trust Fund? 183.4 Section 183.4 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER USE AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Use o...
25 CFR 183.3 - Does the American Indian Trust Fund Management Reform Act of 1994 apply to this part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Does the American Indian Trust Fund Management Reform Act of 1994 apply to this part? 183.3 Section 183.3 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER USE AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND...
Modeling methods for high-fidelity rotorcraft flight mechanics simulation
NASA Technical Reports Server (NTRS)
Mansur, M. Hossein; Tischler, Mark B.; Chaimovich, Menahem; Rosen, Aviv; Rand, Omri
1992-01-01
The cooperative effort being carried out under the agreements of the United States-Israel Memorandum of Understanding is discussed. Two different models of the AH-64 Apache helicopter, which differ in their approach to modeling the main rotor, are presented. The first model, the Blade Element Model for the Apache (BEMAP), was developed at Ames Research Center and is the only model of the Apache to employ a direct blade element approach to calculating the coupled flap-lag motion of the blades and the rotor force and moment. The second model was developed at the Technion-Israel Institute of Technology and uses a harmonic approach to analyze the rotor. This approach allows two different levels of approximation, ranging from the 'first harmonic' (similar to a tip-path-plane model) to 'complete high harmonics' (comparable to a blade element approach). The development of the two models is outlined, and the two are compared using available flight test data.
Localized waves of the coupled cubic-quintic nonlinear Schrödinger equations in nonlinear optics
NASA Astrophysics Data System (ADS)
Xu, Tao; Chen, Yong; Lin, Ji
2017-12-01
Not Available. Project supported by the Global Change Research Program of China (Grant No. 2015CB953904), the National Natural Science Foundation of China (Grant Nos. 11675054 and 11435005), the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (Grant No. ZF1213), and the Natural Science Foundation of Hebei Province, China (Grant No. A2014210140).
Analysis of Foundation of Tall R/C Chimney Incorporating Flexibility of Soil
NASA Astrophysics Data System (ADS)
Jayalekshmi, B. R.; Jisha, S. V.; Shivashankar, R.
2017-09-01
Three dimensional Finite Element (FE) analysis was carried out for 100 and 400 m high R/C chimneys having piled annular raft and annular raft foundations considering the flexibility of soil subjected to across-wind load. Stiffness of supporting soil and foundation were varied to evaluate the significance of Soil-Structure Interaction (SSI). The integrated chimney-foundation-soil system was analysed by finite element software ANSYS based on direct method of SSI assuming linear elastic material behaviour. FE analyses were carried out for two cases of SSI namely, (1) chimney with annular raft foundation and (2) chimney with piled annular raft foundation. The responses in raft such as bending moments and settlements were evaluated for both the cases and compared to those obtained from the conventional method of analysis of annular raft foundation. It is found that the responses in raft vary considerably depending on the stiffness of the underlying soil and the stiffness of foundation. Piled raft foundations are better suited for tall chimneys to be constructed in loose or medium sand.
Wang, Hao; Li, Zhong; Yin, Mei; Chen, Xiao-Mei; Ding, Shi-Fang; Li, Chen; Zhai, Qian; Li, Yuan; Liu, Han; Wu, Da-Wei
2015-04-01
Given the high mortality rates in elderly patients with septic shock, the early recognition of patients at greatest risk of death is crucial for the implementation of early intervention strategies. Serum lactate and N-terminal prohormone of brain natriuretic peptide (NT-proBNP) levels are often elevated in elderly patients with septic shock and are therefore important biomarkers of metabolic and cardiac dysfunction. We hypothesized that a risk stratification system that incorporates the Acute Physiology and Chronic Health Evaluation (APACHE) II score and lactate and NT-proBNP biomarkers would better predict mortality in geriatric patients with septic shock than the APACHE II score alone. A single-center prospective study was conducted from January 2012 to December 2013 in a 30-bed intensive care unit of a triservice hospital. The lactate area score was defined as the sum of the area under the curve of serial lactate levels measured during the 24 hours following admission divided by 24. The NT-proBNP score was assigned based on NT-proBNP levels measured at admission. The combined score was calculated by adding the lactate area and NT-proBNP scores to the APACHE II score. Multivariate logistic regression analyses and receiver operating characteristic curves were used to evaluate which variables and scoring systems served as the best predictors of mortality in elderly septic patients. A total of 115 patients with septic shock were included in the study. The overall 28-day mortality rate was 67.0%. When compared to survivors, nonsurvivors had significantly higher lactate area scores, NT-proBNP scores, APACHE II scores, and combined scores. In the multivariate regression model, the combined score, lactate area score, and mechanical ventilation were independent risk factors associated with death. Receiver operating characteristic curves indicated that the combined score had significantly greater predictive power when compared to the APACHE II score or the NT-proBNP score (P < .05). A combined score that incorporates the APACHE II score with early lactate area and NT-proBNP levels is a useful method for risk stratification in geriatric patients with septic shock. Copyright © 2014 Elsevier Inc. All rights reserved.
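The lactate area score defined above (area under the serial lactate curve over the first 24 hours, divided by 24) reduces to a trapezoidal integration; a small numpy sketch with invented measurements, combined with illustrative APACHE II and NT-proBNP scores in the way the study describes:

```python
import numpy as np

# Illustrative serial lactate measurements during the first 24 h after admission.
hours   = np.array([0, 4, 8, 12, 18, 24], dtype=float)
lactate = np.array([6.0, 5.2, 4.1, 3.5, 2.8, 2.2])    # mmol/L

lactate_area_score = np.trapz(lactate, hours) / 24.0   # AUC divided by 24 h

apache_ii_score = 24        # illustrative values, not study data
nt_probnp_score = 3
combined_score = apache_ii_score + lactate_area_score + nt_probnp_score
print(round(lactate_area_score, 2), round(combined_score, 2))
```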
2005-01-01
Introduction Risk prediction scores usually overestimate mortality in obstetric populations because mortality rates in this group are considerably lower than in others. Studies examining this effect were generally small and did not distinguish between obstetric and nonobstetric pathologies. We evaluated the performance of the Acute Physiology and Chronic Health Evaluation (APACHE) II model in obstetric admissions to critical care units contributing to the ICNARC Case Mix Programme. Methods All obstetric admissions were extracted from the ICNARC Case Mix Programme Database of 219,468 admissions to UK critical care units from 1995 to 2003 inclusive. Cases were divided into direct obstetric pathologies and indirect or coincidental pathologies, and compared with a control cohort of all women aged 16–50 years not included in the obstetric categories. The predictive ability of APACHE II was evaluated in the three groups. A prognostic model was developed for direct obstetric admissions to predict the risk for hospital mortality. A log-linear model was developed to predict the length of stay in the critical care unit. Results A total of 1452 direct obstetric admissions were identified, the most common pathologies being haemorrhage and hypertensive disorders of pregnancy. There were 278 admissions identified as indirect or coincidental and 22,938 in the nonpregnant control cohort. Hospital mortality rates were 2.2%, 6.0% and 19.6% for the direct obstetric group, the indirect or coincidental group, and the control cohort, respectively. Cox regression calibration analysis showed a reasonable fit of the APACHE II model for the nonpregnant control cohort (slope = 1.1, intercept = -0.1). However, the APACHE II model vastly overestimated mortality for obstetric admissions (mortality ratio = 0.25). Risk prediction modelling demonstrated that the Glasgow Coma Scale score was the best discriminator between survival and death in obstetric admissions. Conclusion This study confirms that APACHE II overestimates mortality in obstetric admissions to critical care units. This may be because of the physiological changes in pregnancy or the unique scoring profile of obstetric pathologies such as HELLP syndrome. It may be possible to recalibrate the APACHE II score for obstetric admissions or to devise an alternative score specifically for obstetric admissions.
NASA Technical Reports Server (NTRS)
Moseley, Warren
1989-01-01
The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool is based on the Apple Macintosh system, employing available software including Focal Point II, Hypercard, XRefText, and Macproject. These programs are functional in themselves, but through advanced linking are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI). In the first prototype of the PMCT, however, the sections of AI are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.
Software Cost Estimation Using a Decision Graph Process: A Knowledge Engineering Approach
NASA Technical Reports Server (NTRS)
Stukes, Sherry; Spagnuolo, John, Jr.
2011-01-01
This paper is not a description per se of the efforts by two software cost analysts. Rather, it is an outline of the methodology used for FSW cost analysis presented in a form that would serve as a foundation upon which others may gain insight into how to perform FSW cost analyses for their own problems at hand.
Intent Specifications: An Approach to Building Human-Centered Specifications
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.
1999-01-01
This paper examines and proposes an approach to writing software specifications, based on research in systems theory, cognitive psychology, and human-machine interaction. The goal is to provide specifications that support human problem solving and the tasks that humans must perform in software development and evolution. A type of specification, called intent specifications, is constructed upon this underlying foundation.
An Investigation of Techniques for Detecting Data Anomalies in Earned Value Management Data
2011-12-01
...Management Studio; Harte Hanks Trillium Software: Trillium Software System; IBM: InfoSphere Foundation Tools; Informatica: Data Explorer, Informatica Analyst, Informatica Developer, Informatica Administrator; Pitney Bowes Business Insight: Spectrum; SAP: BusinessObjects Data Quality Management; DataFlux... ...menting quality monitoring efforts and tracking data quality improvements. Informatica: http://www.informatica.com/products_services/Pages/index.aspx
HEP Software Foundation Community White Paper Working Group - Detector Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apostolakis, J.
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
Foundations of the Bandera Abstraction Tools
NASA Technical Reports Server (NTRS)
Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby
2003-01-01
Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
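The classical sign abstraction is a convenient toy example of the data abstraction idea described here: concrete integers collapse to {NEG, ZERO, POS} and operations are lifted to sets of possible abstract results. The Python sketch below only illustrates the concept; it is not Bandera's Java tooling.

```python
NEG, ZERO, POS = "NEG", "ZERO", "POS"

def alpha(n: int) -> str:
    """Abstraction function: map a concrete int to its sign token."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a: str, b: str) -> set:
    """Abstract addition: the set of signs the concrete sum may have."""
    if ZERO in (a, b):
        other = b if a == ZERO else a
        return {other}
    if a == b:
        return {a}                      # pos+pos=pos, neg+neg=neg
    return {NEG, ZERO, POS}             # pos+neg: sign unknown

# Abstracting x = 5, y = -3 and checking what x + y may be:
print(alpha(5), alpha(-3), abs_add(alpha(5), alpha(-3)))   # POS NEG {'NEG','ZERO','POS'}
```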
2014-12-01
An Investigation of Multiple Unmanned Aircraft Systems Control from the Cockpit of an AH-64 Apache Helicopter, by Jamison S Hicks and David B... ...infantrymen, aircraft pilots, or dedicated UAS ground control station (GCS) operators. The purpose of the UAS is to allow for longer and more discrete...
Sam, Kishore Gnana; Kondabolu, Krishnakanth; Pati, Dipanwita; Kamath, Asha; Pradeep Kumar, G; Rao, Padma G M
2009-07-01
Self-poisoning with organophosphorus (OP) compounds is a major cause of morbidity and mortality across South Asian countries. To develop uniform and effective management guidelines, the severity of acute OP poisoning should be assessed through scientific methods and a clinical database should be maintained. A prospective descriptive survey was carried out to assess the utility of severity scales in predicting the outcome of 71 organophosphate (OP) and carbamate poisoning patients admitted during a one year period at the Kasturba Hospital, Manipal, India. The Glasgow coma scale (GCS) scores, acute physiology and chronic health evaluation II (APACHE II) scores, predicted mortality rate (PMR) and Poisoning severity score (PSS) were estimated within 24h of admission. Significant correlation (P<0.05) between PSS and GCS and APACHE II and PMR scores were observed with the PSS scores predicting mortality significantly (P< or =0.001). A total of 84.5% patients improved after treatment while 8.5% of the patients were discharged with severe morbidity. The mortality rate was 7.0%. Suicidal poisoning was observed to be the major cause (80.2%), while other reasons attributed were occupational (9.1%), accidental (6.6%), homicidal (1.6%) and unknown (2.5%) reasons. This study highlights the application of clinical indices like GCS, APACHE, PMR and severity scores in predicting mortality and may be considered for planning standard treatment guidelines.
Amaral Gonçalves Fusatto, Helena; Castilho de Figueiredo, Luciana; Ragonete Dos Anjos Agostini, Ana Paula; Sibinelli, Melissa; Dragosavac, Desanka
2018-01-01
The aim of this study was to identify pulmonary dysfunction and factors associated with prolonged mechanical ventilation, hospital stay, weaning failure and mortality in patients undergoing coronary artery bypass grafting with use of intra-aortic balloon pump (IABP). This observational study analyzed respiratory, surgical, clinical and demographic variables and related them to outcomes. We analyzed 39 patients with a mean age of 61.2 years. Pulmonary dysfunction, characterized by mildly impaired gas exchange, was present from the immediate postoperative period to the third postoperative day. Mechanical ventilation time was influenced by the use of IABP and PaO2/FiO2, female gender and smoking. Intensive care unit (ICU) stay was influenced by APACHE II score and use of IABP. Mortality was strongly influenced by APACHE II score, followed by weaning failure. Pulmonary dysfunction was present from the first to the third postoperative day. Mechanical ventilation time was influenced by female gender, smoking, duration of IABP use and PaO2/FiO2 on the first postoperative day. ICU stay was influenced by APACHE II score and duration of IABP. Mortality was influenced by APACHE II score, followed by weaning failure. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
Dependency visualization for complex system understanding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, J. Allison Cory
1994-09-01
With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and for design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs, and the list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem: the ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large-system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed-graph-based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large-system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.
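A small sketch of the directed-graph view argued for above, using networkx: modules as nodes, dependencies as edges, with cycles and high fan-in flagged as comprehension hot spots. The module names are made up.

```python
import networkx as nx

# Hypothetical module-level dependency edges (A depends on B => edge A -> B).
edges = [
    ("ui", "core"), ("ui", "util"), ("core", "db"), ("core", "util"),
    ("db", "util"), ("report", "core"), ("db", "core"),   # db <-> core forms a cycle
]

g = nx.DiGraph(edges)

print("cycles:", list(nx.simple_cycles(g)))
print("highest fan-in:", max(g.nodes, key=lambda n: g.in_degree(n)))
print("fan-in per module:", dict(g.in_degree()))
```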
Yuan, Shaoxin; Gao, Yusong; Ji, Wenqing; Song, Junshuai; Mei, Xue
2018-05-01
The aim of this study was to assess the ability of the acute physiology and chronic health evaluation II (APACHE II) score, the poisoning severity score (PSS) and the sequential organ failure assessment (SOFA) score, combined with lactate (Lac), to predict mortality in Emergency Department (ED) patients poisoned with organophosphate. A retrospective review of 59 standard-compliant patients was carried out. Receiver operating characteristic (ROC) curves were constructed based on the APACHE II score, PSS, and SOFA score with or without Lac, and the areas under the ROC curve (AUCs) were determined to assess predictive value. According to the SOFA-Lac (a combination of SOFA and Lac) classification standard, acute organophosphate pesticide poisoning (AOPP) patients were divided into low-risk and high-risk groups, and mortality rates were compared between risk levels. Between survivors and non-survivors, there were significant differences in the APACHE II score, PSS, SOFA score, and Lac (all P < .05). The AUCs of the APACHE II score, PSS, and SOFA score were 0.876, 0.811, and 0.837, respectively. However, after combining with Lac, the AUCs were 0.922, 0.878, and 0.956, respectively. According to SOFA-Lac, the mortality of the high-risk group was significantly higher than that of the low-risk group (P < .05), and the patients in the non-survival group were all at high risk. These data suggest that the APACHE II score, PSS, and SOFA score can all predict the prognosis of AOPP patients. For its simplicity and objectivity, the SOFA score is a superior predictor. Lac significantly improved the predictive abilities of the 3 scoring systems, especially the SOFA score. The SOFA-Lac system effectively distinguished the high-risk group from the low-risk group. Therefore, the SOFA-Lac system is significantly better at predicting mortality in AOPP patients.
Usefulness of Glycemic Gap to Predict ICU Mortality in Critically Ill Patients With Diabetes.
Liao, Wen-I; Wang, Jen-Chun; Chang, Wei-Chou; Hsu, Chin-Wang; Chu, Chi-Ming; Tsai, Shih-Hung
2015-09-01
Stress-induced hyperglycemia (SIH) has been independently associated with an increased risk of mortality in critically ill patients without diabetes. However, it is also necessary to consider preexisting hyperglycemia when investigating the relationship between SIH and mortality in patients with diabetes. We therefore assessed whether the gap between admission glucose and A1C-derived average glucose (ADAG) levels could be a predictor of mortality in critically ill patients with diabetes. We retrospectively reviewed the Acute Physiology and Chronic Health Evaluation II (APACHE-II) scores and clinical outcomes of patients with diabetes admitted to our medical intensive care unit (ICU) between 2011 and 2014. The glycosylated hemoglobin (HbA1c) levels were converted to the ADAG by the equation ADAG = [(28.7 × HbA1c) - 46.7]. We also used receiver operating characteristic (ROC) curves to determine the optimal cut-off value for the glycemic gap when predicting ICU mortality and used the net reclassification improvement (NRI) to measure the improvement in prediction performance gained by adding the glycemic gap to the APACHE-II score. We enrolled 518 patients, of whom 87 (17.0%) died during their ICU stay. Nonsurvivors had significantly higher APACHE-II scores and glycemic gaps than survivors (P < 0.001). Critically ill patients with diabetes and a glycemic gap ≥80 mg/dL had significantly higher ICU mortality and more adverse outcomes than those with a glycemic gap <80 mg/dL (P < 0.001). Incorporation of the glycemic gap into the APACHE-II score increased the discriminative performance for predicting ICU mortality by increasing the area under the ROC curve from 0.755 to 0.794 (NRI = 13.6%, P = 0.0013). The glycemic gap can be used to assess the severity and prognosis of critically ill patients with diabetes. The addition of the glycemic gap to the APACHE-II score significantly improved its ability to predict ICU mortality.
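The glycemic gap arithmetic is simple enough to show directly, using the ADAG equation quoted in the abstract; the patient values below are invented.

```python
def glycemic_gap(admission_glucose_mgdl: float, hba1c_percent: float) -> float:
    """Gap = admission glucose - ADAG, with ADAG = 28.7 * HbA1c - 46.7 (mg/dL)."""
    adag = 28.7 * hba1c_percent - 46.7
    return admission_glucose_mgdl - adag

# Illustrative patient: admission glucose 310 mg/dL, HbA1c 7.2%.
gap = glycemic_gap(310, 7.2)
print(round(gap, 1), "mg/dL", "high-risk" if gap >= 80 else "lower-risk")
```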
[Prevalence of severe sepsis in intensive care units. A national multicentric study].
Dougnac, Alberto L; Mercado, Marcelo F; Cornejo, Rodrigo R; Cariaga, Mario V; Hernández, Glenn P; Andresen, Max H; Bugedo, Guillermo T; Castillo, Luis F
2007-05-01
Severe sepsis (SS) is the leading cause of death in Intensive Care Units (ICU). To study the prevalence of SS in Chilean ICUs, an observational, cross-sectional study using a predesigned written survey was done in all ICUs of Chile on April 21st, 2004. General hospital and ICU data and the number of hospitalized patients in the hospital and in the ICU on the survey day were recorded. Patients were followed for 28 days. Ninety-four percent of ICUs participated in the survey. The ICU occupation index was 66%. Mean age of patients was 57.7±18 years and 59% were male; the APACHE II score was 15±7.5 and the SOFA score was 6±4. SS was the admission diagnosis of 94 of the 283 patients (33%), and 38 patients presented SS after admission. On the survey day, 112 patients (40%) fulfilled SS criteria. APACHE II and SOFA scores were significantly higher in SS patients than in non-SS patients. The global case-fatality ratio at 28 days was 15.9% (45/283). The case-fatality ratio in patients with or without SS at the moment of the survey was 26.7% (30/112) and 8.7% (17/171), respectively (p < 0.05). Thirteen percent of patients who developed SS after admission died. Case-fatality ratios for patients with SS from Santiago and from other cities were similar, but the APACHE II score was significantly higher in patients from Santiago. In SS patients, the independent predictors of mortality were SS as cause of hospital admission, APACHE II, and SOFA scores. Ninety-nine percent of SS patients had a known sepsis focus (48% respiratory and 30% abdominal). Eighty-five percent of patients who presented SS after admission had a respiratory focus. SS is highly prevalent in Chilean ICUs and represents the leading diagnosis at admission. SS as cause of hospitalization, APACHE II, and SOFA scores were independent predictors of mortality.
DNS load balancing in the CERN cloud
NASA Astrophysics Data System (ADS)
Reguero Naredo, Ignacio; Lobato Pardavila, Lorena
2017-10-01
Load balancing is one of the technologies enabling deployment of large-scale applications on cloud resources. A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications that accept DNS timing dynamics and do not require persistence. It currently serves over 450 load-balanced aliases with two small VMs acting as master and slave. The aliases are mapped to DNS subdomains, which are managed with DDNS according to a load metric collected from the alias member nodes with SNMP. During the last years, several improvements were brought to the software, for instance support for IPv6, parallelization of the status requests, reimplementation of the client in Python to allow multiple aliases with differentiated states on the same machine, and support for application state. The configuration of the Load Balancer is currently managed by a Puppet type, which discovers the alias member nodes and gets the alias definitions from the Ermis REST service. The Aiermis self-service GUI for the management of the LB aliases has been produced and is based on the Ermis service, which implements a form of Load Balancing as a Service (LBaaS). The Ermis REST API has authorisation based on Foreman hostgroups. The CERN DNS LBD is open source software under the Apache 2 license.
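A minimal sketch of one load-balancing round as described above: collect a load metric per alias member, pick the least-loaded healthy nodes, and republish the alias A records via DDNS. The hostnames, metric values and the nsupdate invocation are illustrative assumptions, not the actual CERN implementation.

```python
# Minimal sketch of one DNS load-balancing round; not the CERN LBD code.
members = {
    # hostname: (IP address, load metric); a negative metric marks an unhealthy node.
    "node1.example.org": ("188.184.1.10", 12.0),
    "node2.example.org": ("188.184.1.11", 55.0),
    "node3.example.org": ("188.184.1.12", -1.0),
}

def pick_best(members, n=2):
    """Return the n healthy members with the lowest load metric."""
    healthy = {h: v for h, v in members.items() if v[1] >= 0}
    return sorted(healthy.items(), key=lambda kv: kv[1][1])[:n]

def nsupdate_script(alias, chosen, ttl=60):
    """Build an nsupdate batch that republishes the alias A records (DDNS)."""
    lines = ["server ns.example.org", f"update delete {alias} A"]
    lines += [f"update add {alias} {ttl} A {ip}" for _, (ip, _) in chosen]
    lines.append("send")
    return "\n".join(lines) + "\n"

script = nsupdate_script("myservice.example.org", pick_best(members))
print(script)
# In a real daemon this batch would be piped to nsupdate with a TSIG key, e.g.
# subprocess.run(["nsupdate", "-k", "/etc/ddns.key"], input=script, text=True, check=True)
```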
Spectrophotometric Properties of E+A Galaxies in SDSS-IV MaNGA
NASA Astrophysics Data System (ADS)
Marinelli, Mariarosa; Dudley, Raymond; Edwards, Kay; Gonzalez, Andrea; Johnson, Amalya; Kerrison, Nicole; Melchert, Nancy; Ojanen, Winonah; Weaver, Olivia; Liu, Charles; SDSS-IV MaNGA
2018-01-01
Quenched post-starburst galaxies, or E+A galaxies, represent a unique and informative phase in the evolution of galaxies. We used a qualitative rubric-based methodology, informed by the literature, to manually select galaxies from the SDSS-IV IFU survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) using the single-fiber spectra from the Sloan Digital Sky Survey Data Release 8. Of the 2,812 galaxies observed so far in MaNGA, we found 39 galaxies meeting our criteria for E+A classification. Spectral energy distributions of these 39 galaxies from the far-UV to the mid-infrared demonstrate a heterogeneity in our sample emerging in the infrared, indicating many distinct paths to visually similar optical spectra. We used SDSS-IV MaNGA Pipe3D data products to analyze stellar population ages, and found that 34 galaxies exhibited stellar populations that were older at 1 effective radius than at the center of the galaxy. Given that our sample was manually chosen based on E+A markers in the single-fiber spectra aimed at the center of each galaxy, our E+A galaxies may have only experienced their significant starbursts in the central region, with a disk of quenched or quenching material further outward. This work was supported by grants AST-1460860 from the National Science Foundation and SDSS FAST/SSP-483 from the Alfred P. Sloan Foundation to the CUNY College of Staten Island.
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. For practical implementation, we perform sentiment analysis on live Twitter feeds, scoring each individual tweet. To analyze sentiment, we train our data model on SentiWordNet, a polarity-annotated WordNet sample from Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
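A hedged sketch of the lexicon-based scoring idea, written in PySpark rather than the Java used by the authors; the four-word lexicon below merely stands in for SentiWordNet polarity scores.

```python
# Illustrative PySpark sketch of lexicon-based tweet scoring; the paper used Java,
# and the tiny lexicon here is only a stand-in for SentiWordNet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tweet-sentiment-sketch").getOrCreate()
sc = spark.sparkContext

# Toy polarity lexicon (word -> score); a real run would load SentiWordNet instead.
lexicon = sc.broadcast({"good": 0.8, "great": 0.9, "bad": -0.7, "awful": -0.9})

tweets = sc.parallelize([
    "great service and good food",
    "awful delay, bad experience",
])

def score(text):
    """Sum the polarity of known words; unknown words score zero."""
    return sum(lexicon.value.get(w, 0.0) for w in text.lower().split())

for text, s in tweets.map(lambda t: (t, score(t))).collect():
    print(f"{s:+.2f}  {text}")

spark.stop()
```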
Global ISR: Toward a Comprehensive Defense Against Unauthorized Code Execution
2010-10-01
implementation using two of the most popular open-source servers: the Apache web server and the MySQL database server. For Apache, we measure the effect that...utility ab. [Figure residue; recoverable: y-axis "Total Time (sec)", series Native, Null, ISR, ISR-MP; caption "Fig. 3. The MySQL test-insert benchmark measures...various SQL operations."] The figure draws total execution time as reported by the benchmark utility. Finally, we benchmarked a MySQL database server using
Towards understanding software: 15 years in the SEL
NASA Technical Reports Server (NTRS)
Mcgarry, Frank; Pajerski, Rose
1990-01-01
For 15 years, the Software Engineering Laboratory (SEL) at GSFC has been carrying out studies and experiments for the purpose of understanding, assessing, and improving software, and software processes within a production software environment. The SEL comprises three major organizations: (1) the GSFC Flight Dynamics Division; (2) the University of Maryland Computer Science Department; and (3) the Computer Sciences Corporation Flight Dynamics Technology Group. These organizations have jointly carried out several hundred software studies, producing hundreds of reports, papers, and documents: all describing some aspect of the software engineering technology that has undergone analysis in the flight dynamics environment. The studies range from small controlled experiments (such as analyzing the effectiveness of code reading versus functional testing) to large, multiple-project studies (such as assessing the impacts of Ada on a production environment). The key findings that NASA feels have laid the foundation for ongoing and future software development and research activities are summarized.
Design Recovery for Software Library Population
1992-12-01
increase understandability, efficiency, and maintainability of the software and the design. A good representation choice will also aid in...required for a reengineering project. It details the analysis and planning phase and gives good criteria for determining the need for a reengineering...because it deals with all of these issues. With his complete description of the analysis and planning phase, Byrne has a good foundation for
Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software
NASA Astrophysics Data System (ADS)
Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph
1995-06-01
The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of Nine Book/Software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The Simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.
Introduction: Cybersecurity and Software Assurance Minitrack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, Luanne; George, Richard; Linger, Richard C
Modern society is dependent on software systems of remarkable scope and complexity. Yet methods for assuring their security and functionality have not kept pace. The result is persistent compromises and failures despite best efforts. Cybersecurity methods must work together for situational awareness, attack prevention and detection, threat attribution, minimization of consequences, and attack recovery. Because defective software cannot be secure, assurance technologies must play a central role in cybersecurity approaches. There is increasing recognition of the need for rigorous methods for cybersecurity and software assurance. The goal of this minitrack is to develop science foundations, technologies, and practices that can improve the security and dependability of complex systems.
The NCC project: A quality management perspective
NASA Technical Reports Server (NTRS)
Lee, Raymond H.
1993-01-01
The Network Control Center (NCC) Project introduced the concept of total quality management (TQM) in mid-1990. The CSC project team established a program which focused on continuous process improvement in software development methodology and consistent deliveries of high quality software products for the NCC. The vision of the TQM program was to produce error free software. Specific goals were established to allow continuing assessment of the progress toward meeting the overall quality objectives. The total quality environment, now a part of the NCC Project culture, has become the foundation for continuous process improvement and has resulted in the consistent delivery of quality software products over the last three years.
The customization of APACHE II for patients receiving orthotopic liver transplants
Moreno, Rui
2002-01-01
General outcome prediction models developed for use with large, multicenter databases of critically ill patients may not correctly estimate mortality if applied to a particular group of patients that was under-represented in the original database. The development of new diagnostic weights has been proposed as a method of adapting the general model – the Acute Physiology and Chronic Health Evaluation (APACHE) II in this case – to a new group of patients. Such customization must be empirically tested, because the original model cannot contain an appropriate set of predictive variables for the particular group. In this issue of Critical Care, Arabi and co-workers present the results of the validation of a modified model of the APACHE II system for patients receiving orthotopic liver transplants. The use of a highly heterogeneous database for which not all important variables were taken into account and of a sample too small to use the Hosmer–Lemeshow goodness-of-fit test appropriately makes their conclusions uncertain. PMID:12133174
Chen, Wan-Ling; Chen, Chin-Ming; Kung, Shu-Chen; Wang, Ching-Min; Lai, Chih-Cheng; Chao, Chien-Ming
2018-01-23
This retrospective cohort study investigated the outcomes and prognostic factors in nonagenarians (patients 90 years old or older) with acute respiratory failure. Between 2006 and 2016, all nonagenarians with acute respiratory failure requiring invasive mechanical ventilation (MV) were enrolled. Outcomes including in-hospital mortality and ventilator dependency were measured. A total of 173 nonagenarians with acute respiratory failure were admitted to the intensive care unit (ICU). A total of 56 patients died during the hospital stay and the rate of in-hospital mortality was 32.4%. Patients with higher APACHE (Acute Physiology and Chronic Health Evaluation) II scores (adjusted odds ratio [OR], 5.91; 95% CI, 1.55-22.45; p = 0.009, APACHE II scores ≥ 25 vs APACHE II scores < 15), use of vasoactive agents (adjusted OR, 2.67; 95% CI, 1.12-6.37; p = 0.03) and more organ dysfunction (adjusted OR, 11.13; 95% CI, 3.38-36.36, p < 0.001; ≥ 3 organ dysfunctions vs ≤ 1 organ dysfunction) were more likely to die. Among the 117 survivors, 25 (21.4%) patients became dependent on MV. Female gender (adjusted OR, 3.53; 95% CI, 1.16-10.76, p = 0.027) and poor consciousness level (adjusted OR, 4.98; 95% CI, 1.41-17.58, p = 0.013) were associated with MV dependency. In conclusion, the mortality rate of nonagenarians with acute respiratory failure was high, especially for those with higher APACHE II scores or more organ dysfunction.
Mica, Ladislav; Rufibach, Kaspar; Keel, Marius; Trentz, Otmar
2013-01-01
The early hemodynamic normalization of polytrauma patients may lead to better survival outcomes. The aim of this study was to assess the diagnostic quality of trauma and physiological scores from widely used scoring systems in polytrauma patients. In total, 770 patients with ISS > 16 who were admitted to a trauma center within the first 24 hours after injury were included in this retrospective study. The patients were subdivided into three groups: those who died on the day of admission, those who died within the first three days, and those who survived for longer than three days. ISS, NISS, APACHE II score, and prothrombin time were recorded at admission. The descriptive statistics for the groups that died on the day of admission, died 1-3 days after admission, and survived more than 3 days after admission were: ISS of 41.0, 34.0, and 29.0, respectively; NISS of 50.0, 50.0, and 41.0, respectively; APACHE II score of 30.0, 25.0, and 15.0, respectively; and prothrombin time of 37.0%, 56.0%, and 84%, respectively. These data indicate that prothrombin time (AUC: 0.89) and APACHE II (AUC: 0.88) have the greatest prognostic utility for early death. The estimated densities of the scores may suggest a direction for resuscitative procedures in polytrauma patients. ("Retrospective Analyses in Surgical Intensive Care Medicine", StV01-2008.)
Afessa, B; Kubilis, P S
2000-02-01
We conducted this study to describe the complications and validate the accuracy of previously reported prognostic indices in predicting the mortality of cirrhotic patients hospitalized for upper GI bleeding. This prospective, observational study included 111 consecutive hospitalizations of 85 cirrhotic patients admitted for GI bleeding. Data obtained included intensive care unit (ICU) admission status, Child-Pugh score, the development of systemic inflammatory response syndrome (SIRS), organ failure, and in-hospital mortality. The performances of Garden's, Gatta's, and Acute Physiology and Chronic Health Evaluation (APACHE) II prognostic systems in predicting mortality were assessed. Patients' mean age was 48.7 yr, and the median APACHE II and Child-Pugh scores were 17 and 9, respectively. Their ICU admission rate was 71%. Organ failure developed in 57%, and SIRS in 46% of the patients. Nine patients had acute respiratory distress syndrome, and three patients had hepatorenal syndrome. The in-hospital mortality was 21%. The APACHE II, Garden's, and Gatta's predicted mortality rates were 39%, 24%, and 20%, respectively, and their areas under the receiver operating characteristic curve (AUC) were 0.78, 0.70, and 0.71, respectively. The AUC for the Child-Pugh score was 0.76. SIRS and organ failure develop in many patients with hepatic cirrhosis hospitalized for upper GI bleeding, and are associated with increased mortality. Although the APACHE II prognostic system overestimated the mortality of these patients, the receiver operating characteristic curves did not show significant differences between the various prognostic systems.
A Separate Compilation Extension to Standard ML (Revised and Expanded)
2006-09-17
repetition of interfaces. The language is given a formal semantics, and we argue that this semantics is implementable in a variety of compilers. This...material is based on work supported in part by the National Science Foundation under grant 0121633 Language Technology for Trustless Software...Dissemination and by the Defense Advanced Research Projects Agency under contracts F196268-95-C-0050 The Fox Project: Advanced Languages for Systems Software
Computational Methods for Identification, Optimization and Control of PDE Systems
2010-04-30
focused on the development of numerical methods and software specifically for the purpose of solving control, design, and optimization problems where...that provide the foundations of simulation software must play an important role in any research of this type, the demands placed on numerical methods...y sus Aplicaciones, Ciudad de Cordoba - Argentina, October 2007. 3. Inverse Problems in Deployable Space Structures, Fourth Conference on Inverse
Evaluation of Fieldbus and OPC for Advanced Life Support
NASA Technical Reports Server (NTRS)
Boulanger, Richard P.; Cardinale, Paul; Bradley, Matthew; Luna, Bernadette (Technical Monitor)
2000-01-01
FOUNDATION(TM) Fieldbus and OPC(TM) (OLE(TM) for Process Control) technologies were integrated into an existing control system for a crop growth chamber at NASA Ames Research Center. FOUNDATION(TM) Fieldbus is a digital, bi-directional, multi-drop, serial communications network which functions essentially as a LAN for sensors. FOUNDATION(TM) Fieldbus is heterarchical, with publishers and subscribers of data performing complex control functions at low levels without centralized control and its associated overhead. OPC(TM) is a set of interfaces which replace proprietary drivers with a transparent means of exchanging data between the fieldbus and applications. The objectives were: (1) to integrate FOUNDATION(TM) Fieldbus into existing ALS hardware and determine its overall effectiveness and reliability and, (2) to quantify any savings produced by using fieldbus and OPC technologies. We encountered several problems with the FOUNDATION(TM) Fieldbus hardware chosen. Our hardware exposed 100 data values for each channel of the fieldbus. The fieldbus configurator software used to program the fieldbus was simply not adequate. The fieldbus was also not inherently reliable: it lost its settings twice during our tests for unknown reasons. OPC also had issues. It did not function at all as supplied, requiring substitution of some of its components with those from other vendors. It would stop working after a fixed period of time, and certain database calls would eventually lock the machine. Overall, we would not recommend FOUNDATION(TM) Fieldbus: it was too difficult to implement with little overall added value. It also seems unlikely that FOUNDATION(TM) Fieldbus will gain sufficient penetration into the laboratory instrument market to ever be cost effective for the ALS community. OPC had good reliability and performance once a stable installation was achieved. It allowed a rapid change to an alternative software strategy when our first strategy failed. It is a cost effective solution to distributed control systems development.
Building a Billion Spatio-Temporal Object Search and Visualization Platform
NASA Astrophysics Data System (ADS)
Kakkar, D.; Lewis, B.
2017-10-01
With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
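As a hedged sketch of the enrichment-and-indexing step described above (consume a geo-tweet, attach a sentiment score, index it for search), the example below uses kafka-python and pysolr; the topic name, Solr core URL, field names and the trivial scorer are assumptions, not the actual BOP code or schema.

```python
# Hedged sketch of a geo-tweet enrichment step; topic, URL and fields are made up.
import json
from kafka import KafkaConsumer          # kafka-python
import pysolr

consumer = KafkaConsumer("geotweets", bootstrap_servers="localhost:9092",
                         value_deserializer=lambda b: json.loads(b.decode("utf-8")))
solr = pysolr.Solr("http://localhost:8983/solr/bop", always_commit=True)

def sentiment(text: str) -> float:
    """Placeholder scorer; the BOP trains a scikit-learn model offline instead."""
    t = text.lower()
    return 1.0 if "good" in t else -1.0 if "bad" in t else 0.0

for msg in consumer:
    tweet = msg.value
    doc = {
        "id": tweet["id"],
        "text": tweet["text"],
        "location_rpt": f'{tweet["lat"]},{tweet["lon"]}',   # hypothetical spatial field
        "sentiment_f": sentiment(tweet["text"]),
    }
    solr.add([doc])    # index the enriched tweet for spatio-temporal search
```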
2010-03-31
General William T. Sherman, upon Crook's death, said he was "the greatest Indian-fighter and manager the army of the United States ever had."
2010-10-01
Requirements - Application Server: BEA WebLogic Express 9.2 or higher, Java v5, Apache Struts v2, Hibernate v2, C3PO, SQL*Net client / JDBC; Database Server...designed for the desktop; an HTML and JavaScript browser-based front end designed for mobile smartphones; a Java-based framework utilizing Apache...Technology Requirements: the recommended technologies are as follows (Technology / Use / Requirements): Java Application - provides the backend application
Auxiliary Salvage Tow and Rescue: T-STAR
2011-08-01
These agencies also operate four ships of the T-ATF class (Fleet Ocean Tug): Catawba (T-ATF 168), Navajo (T-ATF 169), Sioux (T-ATF 171), and Apache (T-ATF 172). These ships were commissioned during the 1980s and...[Salvage equipment table residue; recoverable entries (item, count, value): Bottles, 1, 0.6; Portable HP Air Plant 10'x18'x10', 1, 40.2; 200 Amp Welder, 2, 0.4; Power Pack Unit, 1, 8.4; Salvage Equipment 400 Amp...]
mod_bio: Apache modules for Next-Generation sequencing data.
Lindenbaum, Pierre; Redon, Richard
2015-01-01
We describe mod_bio, a set of modules for the Apache HTTP server that allows the users to access and query fastq, tabix, fasta and bam files through a Web browser. Those data are made available in plain text, HTML, XML, JSON and JSON-P. A javascript-based genome browser using the JSON-P communication technique is provided as an example of cross-domain Web service. https://github.com/lindenb/mod_bio.
[Prediction of mortality in patients with acute hepatic failure].
Eremeeva, L F; Berdnikov, A P; Musaeva, T S; Zabolotskikh, I B
2013-01-01
The article deals with a study of 243 patients (aged 18 to 65 years) with acute hepatic failure. The purpose of the study was to evaluate the predictive capability of the APACHE III, SOFA, MODS, and Child-Pugh severity scales and to identify mortality predictors in patients with acute hepatic failure. Results: the best predictive ability in patients with acute hepatic failure and multiple organ failure was shown by the APACHE III and SOFA scales. The strongest mortality predictors were: serum creatinine > 132 mmol/L, fibrinogen < 1.4 g/L, Na < 129 mmol/L.
Expert system verification and validation study: ES V/V Workshop
NASA Technical Reports Server (NTRS)
French, Scott; Hamilton, David
1992-01-01
The primary purpose of this document is to build a foundation for applying principles of verification and validation (V&V) to expert systems. To achieve this, some background in V&V as applied to conventionally implemented software is required. Part one will discuss the background of V&V from the perspective of (1) what V&V of software is and (2) V&V's role in developing software. Part one will also overview some common analysis techniques that are applied when performing V&V of software. All of these materials will be presented based on the assumption that the reader has little or no background in V&V or in developing procedural software. The primary purpose of part two is to explain the major techniques that have been developed for V&V of expert systems.
NASA Astrophysics Data System (ADS)
Brandt, Douglas; Hiller, John R.; Moloney, Michael J.
1995-10-01
The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of Nine Book/Software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The Simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.
Large Smoke Plumes, Alberta Canada
Atmospheric Science Data Center
2016-12-30
... has adverse impacts on human health. These data were acquired during Terra orbit 87148. The stereoscopic analysis was ... software tool, which is publicly available through the Open Channel Foundation at: https://www.openchannelsoftware.com/projects/MINX ...
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
Predictive ability of the ISS, NISS, and APACHE II score for SIRS and sepsis in polytrauma patients.
Mica, L; Furrer, E; Keel, M; Trentz, O
2012-12-01
Systemic inflammatory response syndrome (SIRS) and sepsis as causes of multiple organ dysfunction syndrome (MODS) remain challenging to treat in polytrauma patients. In this study, the focus was set on widely used scoring systems to assess their diagnostic quality. A total of 512 patients (mean age: 39.2 ± 16.2, range: 16-88 years) who had an Injury Severity Score (ISS) ≥17 were included in this retrospective study. The patients were subdivided into four groups: no SIRS, slight SIRS, severe SIRS, and sepsis. The ISS, New Injury Severity Score (NISS), Acute Physiology and Chronic Health Evaluation II (APACHE II) scores, and prothrombin time were collected at admission. The Kruskal-Wallis test and χ(2)-test, multinomial regression analysis, and kernel density estimates were performed. Receiver operating characteristic (ROC) analysis is reported as the area under the curve (AUC). Data were considered as significant if p < 0.05. All variables were significantly different in all groups (p < 0.001). The odds ratio increased with increasing SIRS severity for NISS (slight vs. no SIRS, 1.06, p = 0.07; severe vs. no SIRS, 1.07, p = 0.04; and sepsis vs. no SIRS, 1.11, p = 0.0028) and APACHE II score (slight vs. no SIRS, 0.97, p = 0.44; severe vs. no SIRS, 1.08, p = 0.02; and sepsis vs. no SIRS, 1.12, p = 0.0028). ROC analysis revealed that the NISS (slight vs. no SIRS, AUC 0.61; severe vs. no SIRS, AUC 0.67; and sepsis vs. no SIRS, AUC 0.77) and APACHE II score (slight vs. no SIRS, AUC 0.60; severe vs. no SIRS, AUC 0.74; and sepsis vs. no SIRS, AUC 0.82) had the best predictive ability for SIRS and sepsis. Quick assessment with the NISS or APACHE II score could preselect possible candidates for sepsis following polytrauma and provide guidance in trauma surgeons' decision-making.
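The multinomial regression and one-vs-rest ROC analysis described above can be sketched as follows; the data are synthetic and the predictors are reduced to NISS and APACHE II for brevity, so this is only an outline of the analysis, not a reproduction of it.

```python
# Illustrative sketch of multinomial regression and one-vs-rest ROC analysis;
# the data are synthetic, not the study cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(30, 10, n),   # NISS
    rng.normal(18, 7, n),    # APACHE II
])
# Outcome groups: 0 = no SIRS, 1 = slight SIRS, 2 = severe SIRS, 3 = sepsis.
y = rng.integers(0, 4, n)

model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
proba = model.predict_proba(X)

# AUC of each severity level against "no SIRS", as reported in the abstract.
for level in (1, 2, 3):
    mask = np.isin(y, [0, level])
    auc = roc_auc_score((y[mask] == level).astype(int), proba[mask, level])
    print(f"group {level} vs no SIRS: AUC = {auc:.2f}")
```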
NASA Astrophysics Data System (ADS)
McGibbney, L. J.; Whitehall, K. D.; Mattmann, C. A.; Goodale, C. E.; Joyce, M.; Ramirez, P.; Zimdars, P.
2014-12-01
We detail how Apache Open Climate Workbench (OCW) (recently open sourced by NASA JPL) was adapted to facilitate an ongoing study of Mesoscale Convective Complexes (MCCs) in West Africa and their contributions within the weather-climate continuum as it relates to climate variability. More than 400 MCCs occur annually over various locations on the globe. In West Africa, approximately one-fifth of that total occur during the summer months (June-November) alone and are estimated to contribute more than 50% of the seasonal rainfall amounts. Furthermore, in general the non-discriminatory socio-economic geospatial distribution of these features correlates with currently and projected densely populated locations. As such, the convective nature of MCCs raises questions regarding their seasonal variability and frequency in current and future climates, amongst others. However, in spite of the formal observation criteria established for these features in 1980, these questions have remained comprehensively unanswered because the methods for identifying and characterizing MCCs have been untimely and subjective, owing to data-handling limitations. The main outcome of this work is therefore to document how a graph-based search algorithm was implemented on top of the OCW stack with the ultimate goal of improving fully automated end-to-end identification and characterization of MCCs in high resolution observational datasets. Apache OCW has been demonstrated as an open source project from inception, and we show how it was again utilized to advance understanding and knowledge within the above domain. The project was born out of refactored code donated by NASA JPL from the Earth science community's Regional Climate Model Evaluation System (RCMES), a joint project between the Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), and a scientific collaboration between the University of California at Los Angeles (UCLA) and NASA JPL. The Apache OCW project was then integrated back into the donor code with the aim of more efficiently powering that project. Notwithstanding, the object-oriented approach to creating a core set of libraries has scaled the usability of Apache OCW beyond climate model evaluation, as displayed in the MCC use case detailed herewith.
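The abstract does not describe the graph-based search algorithm itself; the toy sketch below only illustrates the general idea of graph-based feature tracking (nodes are detected cloud elements per time step, edges link spatially overlapping elements in consecutive steps), not the project's actual method or data model.

```python
# Toy sketch of graph-based feature tracking (not the actual OCW/MCC algorithm):
# nodes are cloud elements per time step, edges connect overlapping elements in
# consecutive steps, and connected paths are candidate MCC tracks.
import networkx as nx

# (time_step, element_id) -> set of grid cells covered by the cloud element.
elements = {
    (0, "a"): {(10, 10), (10, 11), (11, 10)},
    (1, "b"): {(10, 11), (11, 11), (11, 12)},
    (2, "c"): {(11, 12), (12, 12)},
    (2, "d"): {(30, 30)},                      # unrelated feature
}

G = nx.DiGraph()
G.add_nodes_from(elements)
for (t1, id1), cells1 in elements.items():
    for (t2, id2), cells2 in elements.items():
        if t2 == t1 + 1 and cells1 & cells2:    # spatial overlap between steps
            G.add_edge((t1, id1), (t2, id2))

# Each weakly connected component spanning more than one time step is a track candidate.
for comp in nx.weakly_connected_components(G):
    if len({t for t, _ in comp}) > 1:
        print("track:", sorted(comp))
```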
Hosseini, Seyed Hossein; Ayyasi, Mitra; Akbari, Hooshang; Heidari Gorji, Mohammad Ali
2016-01-01
Background: Traumatic brain injury (TBI) is a common cause of mortality and disability worldwide. Choosing an appropriate diagnostic tool is critical at an early stage for appropriate decisions about primary diagnosis, medical care, and prognosis. Objectives: This study aimed to compare the Glasgow coma scale (GCS), the full outline of unresponsiveness (FOUR) score, and the acute physiology and chronic health evaluation (APACHE II) score with respect to prediction of the mortality rate of patients with TBI admitted to the intensive care unit. Patients and Methods: This diagnostic study was conducted on 80 patients with TBI in educational hospitals. The APACHE II, GCS, and FOUR scores were recorded during the first 24 hours of admission. In this study, early mortality means death within 14 days of admission and delayed mortality means death 15 days or later after admission to hospital. The collected data were analyzed using descriptive and inferential statistics. Results: The mean age of the patients was 33.80 ± 12.60 years. Of the 80 patients with TBI, 16 (20%) were female and 64 (80%) male. The mortality rate was 18.7% (15 patients). The results showed no significant difference among the three tools. In the prediction of early mortality, the areas under the curve (AUCs) were 0.92 (95% CI: 0.81-0.97), 0.90 (95% CI: 0.74-0.94), and 0.96 (95% CI: 0.87-0.9) for FOUR, APACHE II, and GCS, respectively. For delayed mortality, the AUCs were 0.89 (95% CI: 0.81-0.94), 0.94 (95% CI: 0.74-0.97), and 0.90 (95% CI: 0.87-0.95) for FOUR, APACHE II, and GCS, respectively. Conclusions: Considering that the GCS is easy to use and the FOUR score can detect a locked-in syndrome at similar subscale values, these two scales are superior to APACHE II in the prediction of early mortality. Conversely, APACHE II is more accurate in the prediction of delayed mortality. PMID:29696116
Implementation of Altimetry Data in the GIPSY POD Software Package
NASA Technical Reports Server (NTRS)
Stauch, Jason R.; Gold, Kenn; Born, George H.
2001-01-01
Altimetry data has been used extensively to acquire data about characteristics of the Earth, the Moon, and Mars. More recently, the idea of using altimetry for orbit determination has also been explored. This report discusses modifications to JPL's GIPSY/OASIS II software to include altimetry data as an observation type for precise orbit determination. The mathematical foundation of using altimetry for the purpose of orbit determination is presented, along with results.
Foundations for a syntactic pattern recognition system for genomic DNA sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Searles, D.B.
1993-03-01
The goal of the proposed work is the creation of a software system that will perform sophisticated pattern recognition and related functions at a level of abstraction and with expressive power beyond current general-purpose pattern-matching systems for biological sequences; and with a more uniform language, environment, and graphical user interface, and with greater flexibility, extensibility, embeddability, and ability to incorporate other algorithms, than current special-purpose analytic software.
xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina
Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.
Banderas-Bravo, María Esther; Arias-Verdú, Maria Dolores; Macías-Guarasa, Ines; Castillo-Lorente, Encarnación; Pérez-Costillas, Lucia; Gutierrez-Rodriguez, Raquel; Quesada-García, Guillermo; Rivera-Fernández, Ricardo
2017-01-01
Objectives. To evaluate the severity of illness and mortality of patients admitted to the intensive care unit for poisoning, and to evaluate the applicability and predictive capacity of the prognostic scales most frequently used in the ICU. Methods. Multicentre study between 2008 and 2013 of all patients admitted for poisoning. Results. The results are from 119 patients. The causes of poisoning were medication, 92 patients (77.3%), caustics, 11 (9.2%), and alcohol, 20 (16.8%). 78.3% were attempted suicides. Mean age was 44.42 ± 13.85 years. 72.5% had a Glasgow Coma Scale (GCS) ≤8 points. The ICU mortality was 5.9% and the hospital mortality was 6.7%. The mortality from caustic poisoning was 54.5%, and it was 1.9% for noncaustic poisoning (p < 0.001). After adjusting for SAPS-3 (OR: 1.19 (1.02–1.39)), the mortality of patients who had ingested caustics was far higher than that of the rest (OR: 560.34 (11.64–26973.83)). There was considerable discrepancy between mortality predicted by SAPS-3 (26.8%) and observed mortality (6.7%) (Hosmer-Lemeshow test: H = 35.10; p < 0.001). The APACHE-II (7.57%) and APACHE-III (8.15%) predictions showed no such discrepancy. Conclusions. Admission to the ICU for poisoning is rare in our country. Medication is the most frequent cause, but mortality from caustic poisoning is higher. APACHE-II and APACHE-III provide adequate predictions of mortality, while SAPS-3 tends to overestimate it. PMID:28459061
Cholongitas, E; Senzolo, M; Patch, D; Kwong, K; Nikolopoulou, V; Leandro, G; Shaw, S; Burroughs, A K
2006-04-01
Prognostic scores in an intensive care unit (ICU) evaluate outcomes, but derive from cohorts containing few cirrhotic patients. To evaluate 6-week mortality in cirrhotic patients admitted to an ICU, and to compare general and liver-specific prognostic scores. A total of 312 consecutive cirrhotic patients (65% alcoholic; mean age 49.6 years). Multivariable logistic regression to evaluate admission factors associated with survival. Child-Pugh, Model for End-stage Liver Disease (MELD), Acute Physiology and Chronic Health Evaluation (APACHE) II and Sequential Organ Failure Assessment (SOFA) scores were compared by receiver operating characteristic curves. Major indication for admission was respiratory failure (35.6%). Median (range) Child-Pugh, APACHE II, MELD and SOFA scores were 11 (5-15), 18 (0-44), 24 (6-40) and 11 (0-21), respectively; 65% (n = 203) died. Survival improved over time (P = 0.005). Multivariate model factors: more organs failing (FOS) (<3 = 49.5%, ≥3 = 90%), higher FiO(2), lactate, urea and bilirubin; resulting in good discrimination [area under receiver operating characteristic curve (AUC) = 0.83], similar to SOFA and MELD (AUC = 0.83 and 0.81, respectively) and superior to APACHE II and Child-Pugh (AUC = 0.78 and 0.72, respectively). Cirrhotics admitted to the ICU with ≥3 failing organ systems have 90% mortality. The Royal Free model discriminated well and contained key variables of organ function. SOFA and MELD were better predictors than APACHE II or Child-Pugh scores.
Secure web book to store structural genomics research data.
Manjasetty, Babu A; Höppner, Klaus; Mueller, Uwe; Heinemann, Udo
2003-01-01
Recently established collaborative structural genomics programs aim at significantly accelerating the crystal structure analysis of proteins. These large-scale projects require efficient data management systems to ensure seamless collaboration between different groups of scientists working towards the same goal. Within the Berlin-based Protein Structure Factory, the synchrotron X-ray data collection and the subsequent crystal structure analysis tasks are located at BESSY, a third-generation synchrotron source. To organize file-based communication and data transfer at the BESSY site of the Protein Structure Factory, we have developed the web-based BCLIMS, the BESSY Crystallography Laboratory Information Management System. BCLIMS is a relational data management system which is powered by MySQL as the database engine and the Apache HTTP server as the web server. The database interface routines are written in the Python programming language. The software is freely available to academic users. Here we describe the storage, retrieval and manipulation of laboratory information, mainly pertaining to the synchrotron X-ray diffraction experiments and the subsequent protein structure analysis, using BCLIMS.
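A rough illustration of the kind of Python database-interface routine such a LIMS builds on; the table, columns and credentials below are invented, and this is not BCLIMS code.

```python
# Hypothetical sketch of a LIMS-style interface routine; the schema is invented.
import pymysql

def record_dataset(conn, crystal_id, beamline, resolution_a):
    """Store one diffraction data collection and return its new row id."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO datasets (crystal_id, beamline, resolution_a) "
            "VALUES (%s, %s, %s)",
            (crystal_id, beamline, resolution_a),
        )
    conn.commit()
    return conn.insert_id()

def datasets_for_crystal(conn, crystal_id):
    """Retrieve all data collections recorded for a crystal."""
    with conn.cursor() as cur:
        cur.execute("SELECT beamline, resolution_a FROM datasets WHERE crystal_id = %s",
                    (crystal_id,))
        return cur.fetchall()

conn = pymysql.connect(host="localhost", user="lims", password="secret", database="bclims")
record_dataset(conn, "XTAL-042", "BL14.1", 1.9)
print(datasets_for_crystal(conn, "XTAL-042"))
```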
Development of a web-based CT dose calculator: WAZA-ARI.
Ban, N; Takahashi, F; Sato, K; Endo, A; Ono, K; Hasegawa, T; Yoshitake, T; Katsunuma, Y; Kai, M
2011-09-01
A web-based computed tomography (CT) dose calculation system (WAZA-ARI) is being developed based on the modern techniques for the radiation transport simulation and for software implementation. Dose coefficients were calculated in a voxel-type Japanese adult male phantom (JM phantom), using the Particle and Heavy Ion Transport code System. In the Monte Carlo simulation, the phantom was irradiated with a 5-mm-thick, fan-shaped photon beam rotating in a plane normal to the body axis. The dose coefficients were integrated into the system, which runs as Java servlets within Apache Tomcat. Output of WAZA-ARI for GE LightSpeed 16 was compared with the dose values calculated similarly using MIRD and ICRP Adult Male phantoms. There are some differences due to the phantom configuration, demonstrating the significance of the dose calculation with appropriate phantoms. While the dose coefficients are currently available only for limited CT scanner models and scanning options, WAZA-ARI will be a useful tool in clinical practice when development is finalised.
The Snow Data System at NASA JPL
NASA Astrophysics Data System (ADS)
Laidlaw, R.; Painter, T. H.; Mattmann, C. A.; Ramirez, P.; Bormann, K.; Brodzik, M. J.; Burgess, A. B.; Rittger, K.; Goodale, C. E.; Joyce, M.; McGibbney, L. J.; Zimdars, P.
2014-12-01
NASA JPL's Snow Data System has a data-processing pipeline powered by Apache OODT, an open source software tool. The pipeline has been running for several years and has successfully generated a significant amount of cryosphere data, including MODIS-based products such as MODSCAG, MODDRFS and MODICE, with historical and near-real-time windows and covering regions such as the Arctic, Western US, Alaska, Central Europe, Asia, South America, Australia and New Zealand. The team continues to improve the pipeline, using monitoring tools such as Ganglia to give an overview of operations, and improving fault-tolerance with automated recovery scripts. Several alternative adaptations of the Snow Covered Area and Grain size (SCAG) algorithm are being investigated. These include using VIIRS and Landsat TM/ETM+ satellite data as inputs. Parallel computing techniques are being considered for core SCAG processing, such as using the PyCUDA Python API to utilize multi-core GPU architectures. An experimental version of MODSCAG is also being developed for the Google Earth Engine platform, a cloud-based service.
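For readers unfamiliar with the PyCUDA pattern mentioned above, the toy example below offloads a per-pixel operation to the GPU; the kernel is a placeholder scaling operation standing in for the actual SCAG computation, which is not shown here.

```python
# Toy PyCUDA example of a per-pixel GPU kernel; the real SCAG processing is far
# more involved, this only illustrates the offloading pattern.
import numpy as np
import pycuda.autoinit          # noqa: F401  (creates a CUDA context)
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *out, const float *in, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = k * in[i];      // placeholder per-pixel operation
}
""")
scale = mod.get_function("scale")

pixels = np.random.rand(1 << 20).astype(np.float32)   # a fake reflectance band
out = np.empty_like(pixels)

scale(drv.Out(out), drv.In(pixels), np.float32(0.5), np.int32(pixels.size),
      block=(256, 1, 1), grid=((pixels.size + 255) // 256, 1))
print(out[:4], pixels[:4] * 0.5)
```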
Federated querying architecture with clinical & translational health IT application.
Livne, Oren E; Schultz, N Dustin; Narus, Scott P
2011-10-01
We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) federated query engine, which manages federated queries and result set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on-the-fly and handle large result set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end, and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population database. Our system can be easily adopted, extended and integrated with existing SOA Healthcare and HL7 frameworks such as i2b2 and caGrid.
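A minimal sketch of the facade idea described above: each data source translates its physical schema into the common model on the fly, and the federated engine merges results by patient identifier. All class, field and identifier names are invented, and a real deployment would also route through a patient identification service to link differing local IDs.

```python
# Hedged sketch of source facades plus a federated query; names are illustrative.
from dataclasses import dataclass

@dataclass
class CommonObservation:
    patient_id: str
    code: str
    value: float

class WarehouseFacade:
    """Facade over a warehouse whose rows look like (mrn, loinc, result)."""
    def query(self, code):
        rows = [("MRN001", "2345-7", 98.0), ("MRN002", "2345-7", 141.0)]
        return [CommonObservation(pid, loinc, val)
                for pid, loinc, val in rows if loinc == code]

class RegistryFacade:
    """Facade over a registry stored as dictionaries with different field names."""
    def query(self, code):
        rows = [{"person": "MRN001", "test": "2345-7", "num": 102.0}]
        return [CommonObservation(r["person"], r["test"], r["num"])
                for r in rows if r["test"] == code]

def federated_query(code, facades):
    """Fan the query out to every facade and group result values by patient."""
    merged = {}
    for facade in facades:
        for obs in facade.query(code):
            merged.setdefault(obs.patient_id, []).append(obs.value)
    return merged

print(federated_query("2345-7", [WarehouseFacade(), RegistryFacade()]))
```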
An open source, web based, simple solution for seismic data dissemination and collaborative research
NASA Astrophysics Data System (ADS)
Diviacco, Paolo
2005-06-01
Collaborative research and data dissemination in the field of geophysical exploration need network tools that can access large amounts of data from anywhere using any PC or workstation. Simple solutions based on a combination of Open Source software can be developed to address such needs, exploiting the possibilities offered by web technologies while avoiding the costs and inflexibility of commercial systems. A viable solution consists of MySQL for data storage and retrieval, CWP/SU and GMT for data visualisation, and a scripting layer driven by PHP that allows users to access the system via an Apache web server. In the light of the experience of building the on-line archive of seismic data of the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), we describe the solutions and the methods adopted, with a view to stimulating network-based collaborative research at institutions similar to ours and the development of further applications.
StreptomycesInforSys: A web-enabled information repository
Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P
2012-01-01
Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information is available, based on the polyphasic approach to the classification of Streptomyces. This information, based on phenotypic, genotypic and bioactive component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information using combinations of search options to aid in efficient screening of new isolates. This will help in the preliminary categorization of isolates into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP Web server has been used to develop and manage the database and to handle user queries effectively. Employment of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware (PHP, MySQL and Apache) is foreseen to reduce running and maintenance costs. Availability: www.sis.biowaves.org PMID:23275736
Ishibashi, Yuichi; Juzoji, Hiroshi; Kitano, Toshihiko; Nakajima, Isao
2011-06-01
Tokai University School of Medicine provided a short-term e-Health training program for persons from Pacific Island Nations from 2006 until 2008 supported by funds from the Sasakawa Peace Foundation. There were lectures on software, hardware and topics relating to e-Health. We could assess the current medical situation in the Pacific Islands through this training course, and also obtain relevant material to analyze appropriate measures deemed necessary to improve the situation.
Improvements in hover display dynamics for a combat helicopter
NASA Technical Reports Server (NTRS)
Eshow, Michelle M.; Schroeder, Jeffery A.
1993-01-01
This paper describes a piloted simulation conducted on the NASA Ames Vertical Motion Simulator. The objective of the experiment was to investigate the handling qualities benefits attainable using new display law design methods for hover displays. The new display laws provide improved methods to specify the behavior of the display symbol that predicts the vehicle's ground velocity in the horizontal plane; it is the primary symbol that the pilot uses to control aircraft horizontal position. The display law design was applied to the Apache helmet-mounted display format, using the Apache vehicle dynamics to tailor the dynamics of the velocity predictor symbol. The representations of the Apache vehicle used in the display design process and in the simulation were derived from flight data. During the simulation, the new symbol dynamics were seen to improve the pilots' ability to maneuver about hover in poor visual cuing environments. The improvements were manifested in pilot handling qualities ratings and in measured task performance. The paper details the display design techniques, the experiment design and conduct, and the results.
Developing open-source codes for electromagnetic geophysics using industry support
NASA Astrophysics Data System (ADS)
Key, K.
2017-12-01
Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.
Time-domain representation of frequency-dependent foundation impedance functions
Safak, E.
2006-01-01
Foundation impedance functions provide a simple means to account for soil-structure interaction (SSI) when studying seismic response of structures. Impedance functions represent the dynamic stiffness of the soil media surrounding the foundation. The fact that impedance functions are frequency dependent makes it difficult to incorporate SSI in standard time-history analysis software. This paper introduces a simple method to convert frequency-dependent impedance functions into time-domain filters. The method is based on the least-squares approximation of impedance functions by ratios of two complex polynomials. Such ratios are equivalent, in the time-domain, to discrete-time recursive filters, which are simple finite-difference equations giving the relationship between foundation forces and displacements. These filters can easily be incorporated into standard time-history analysis programs. Three examples are presented to show the applications of the method.
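A minimal numerical sketch of the method, assuming a least-squares fit of the impedance to a ratio of polynomials in z^-1 followed by recursive filtering; the impedance model and filter orders below are made-up examples, not data or code from the paper.

```python
# Hedged numerical sketch: fit a discrete-time rational filter to a frequency-
# dependent impedance and apply it recursively; the impedance below is a simple
# made-up stiffness/damping/mass model.
import numpy as np
from scipy.signal import lfilter

dt = 0.01                                       # analysis time step (s)
w = np.linspace(0.5, 0.9 * np.pi / dt, 200)     # fitting frequencies (rad/s)
K = 5.0e4 + 1.2e3j * w - 8.0 * w**2             # example impedance K(w)

nb, na = 3, 3                                   # numerator / denominator orders
E = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)) * dt)   # e^{-i w m dt}

# Least squares for K(w) ~ B(z)/A(z) with a0 = 1:
#   b0 + b1 z^-1 + ... - K(w) * (a1 z^-1 + ...) = K(w)
M = np.hstack([E[:, :nb + 1], -K[:, None] * E[:, 1:na + 1]])
A_ls = np.vstack([M.real, M.imag])              # stack to force real coefficients
rhs = np.concatenate([K.real, K.imag])
coef, *_ = np.linalg.lstsq(A_ls, rhs, rcond=None)
b = coef[:nb + 1]
a = np.concatenate(([1.0], coef[nb + 1:]))      # stability of A(z) should be checked

# The fitted filter is a finite-difference relation between foundation
# displacement u and foundation force F, usable directly in time-history analysis.
t = np.arange(0.0, 2.0, dt)
u = 0.002 * np.sin(2 * np.pi * 3.0 * t)         # imposed foundation displacement (m)
F = lfilter(b, a, u)
print(F[:5])
```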
Evaluation of static resistance of deep foundations.
DOT National Transportation Integrated Search
2017-05-01
The focus of this research was to evaluate and improve Florida Department of Transportation (FDOT) FB-Deep software prediction of nominal resistance of H-piles, prestressed concrete piles in limestone, large diameter (> 36) open steel and concrete...
MISR Observes Southern California Wildfires
Atmospheric Science Data Center
2016-12-30
... of the smoke is confined to the local area. These data were acquired during Terra orbit 87818. The stereoscopic analysis was ... software tool, which is publicly available through the Open Channel Foundation at: https://www.openchannelsoftware.com/projects/MINX ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... are International Digital Publishing Forum (IDPF), Seattle, WA; Datalogics, Inc., Chicago, IL; Evident... activities: (a) Advance the creation, evolution, promotion, and support of software tools supporting the EPUB...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This paper reports that Myanmar's state oil company has awarded production sharing contracts (PSCs) on two blocks to units of Apache Corp. and Santa Fe Energy Resources Inc., both of Houston. That comes on the heels of a report by County NatWest Woodmac noting that Myanmar's oil production, currently meeting less than half the country's demand, is set to fall further this year. 150 line km of new seismic data could be acquired and one well drilled. During the initial 2-year exploration period on Block EP-3, Apache will conduct geological studies and acquire at least 200 line km of seismic data.
Requirements: Towards an understanding on why software projects fail
NASA Astrophysics Data System (ADS)
Hussain, Azham; Mkpojiogu, Emmanuel O. C.
2016-08-01
Requirements engineering is at the foundation of every successful software project. There are many reasons for software project failures; however, a poorly engineered requirements process contributes immensely to why software projects fail. Software project failure is usually costly and risky and could also be life threatening. Projects that undermine requirements engineering suffer, or are likely to suffer, from failures, challenges and other attendant risks. The estimated cost of project failures and overruns is very large. Furthermore, software project failures or overruns pose a challenge in today's competitive market environment: they affect the company's image, goodwill, and revenue drive and decrease the perceived satisfaction of customers and clients. In this paper, requirements engineering is discussed and its role in software project success is elaborated. The place of the software requirements process in relation to software project failure is explored and examined. Project success and failure factors are also discussed, with emphasis placed on requirements factors, as they play a major role in software projects' challenges, successes and failures. The paper relies on secondary data and empirical statistics to explore and examine factors responsible for the successes, challenges and failures of software projects in large, medium and small-scale software companies.
Requirements model for an e-Health awareness portal
NASA Astrophysics Data System (ADS)
Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Nawi, Mohd Nasrun M.
2016-08-01
Requirements engineering is at the heart and foundation of the software engineering process. Poor-quality requirements inevitably lead to poor-quality software solutions, and poor requirements modeling is likewise tantamount to designing a poor-quality product. Quality-assured requirements development therefore goes hand in hand with usable products in giving the software the quality it demands. In light of the foregoing, the requirements for an e-Ebola Awareness Portal were modeled with careful attention to these software engineering concerns. The requirements for the e-Health Awareness Portal are modeled as a contribution to the fight against Ebola and help fulfill the United Nations' Millennium Development Goal No. 6. In this study, requirements were modeled using the UML 2.0 modeling technique.
Research on infrared small-target tracking technology under complex background
NASA Astrophysics Data System (ADS)
Liu, Lei; Wang, Xin; Chen, Jilu; Pan, Tao
2012-10-01
In this paper, the basic principles and implementation flow charts of a series of target tracking algorithms are described. Building on this work, moving-target tracking software based on OpenCV was developed with the MFC development platform. Three kinds of tracking algorithms are integrated in this software, including the Kalman filter tracking method and the CamShift tracking method. To explain the software clearly, its framework and functions are described in this paper. Finally, the implementation process and results are analyzed, and the tracking algorithms are evaluated from both subjective and objective standpoints. This work is significant for the application of infrared target tracking technology.
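The abstract does not give implementation details, so the following is only a minimal Python/OpenCV sketch of the two named methods (the original software is MFC/C++ based): a Kalman filter that predicts and corrects a 2-D target position under a constant-velocity model, and CamShift applied to a back-projected hue histogram. The noise covariances and the initial search window are placeholder assumptions.

```python
import cv2
import numpy as np

# --- Kalman filter for a 2-D position with a constant-velocity model ---
kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)    # placeholder tuning
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

def kalman_track(measured_xy):
    """Predict the next target position, then correct with the measured centroid."""
    prediction = kf.predict()
    kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
    return prediction[:2].ravel()

# --- CamShift on a back-projected hue histogram ---
def camshift_track(frame_bgr, roi_hist, search_window):
    """Return the rotated box and updated search window found by CamShift for one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rotated_box, search_window = cv2.CamShift(back_proj, search_window, criteria)
    return rotated_box, search_window
```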
Software and Systems Test Track Architecture and Concept Definition
2007-05-01
[Fragment of a software baseline table listing packages, vendors, and installed versions at ASC and ERDC sites, including Flex (Free Software Foundation 2.5.31), Fluent (Fluent Inc. 6.2.x), Fortran 77/90 compilers (Compaq/Cray/SGI 7.4-7.4.4m), FTA (Platform 1.1), and GAMESS.]
Exploring Convergent Evolution to Provide a Foundation for Protein Engineering
2009-02-26
... the DivergentSet and MotifCluster algorithms. Using support from this grant, we developed two software packages that provide key infrastructure for ... The software package we developed, MotifCluster, provides a novel way of detecting distantly related homologs, one of the key aims of the proposal. ...
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available as a Microsoft Visual Studio® 2013 solution, and Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.
MyMolDB: a micromolecular database solution with open source and free components.
Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan
2011-10-01
Managing chemical structures in small laboratories is an important daily task. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is implemented mainly in the scripting language Python with a web-based interface for compound management and searching. Almost all the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Thus, impressive search speed has been achieved on large data sets, because no external CPU-consuming languages are involved in the key search procedure. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
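MyMolDB's actual schema and SQL are not reproduced in the abstract; the sketch below only illustrates the general idea of pushing a fingerprint-based substructure prescreen into pure SQL so that the database engine does the heavy lifting. It uses SQLite and toy 64-bit fingerprints; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, smiles TEXT, fp INTEGER)")
conn.executemany(
    "INSERT INTO compounds (smiles, fp) VALUES (?, ?)",
    [("c1ccccc1", 0b1011), ("CCO", 0b0100), ("c1ccccc1O", 0b1111)],  # toy fingerprints
)

def substructure_candidates(query_fp):
    """Substructure prescreen in pure SQL: every bit set in the query fingerprint
    must also be set in the compound fingerprint, i.e. (fp & query) == query."""
    return conn.execute(
        "SELECT id, smiles FROM compounds WHERE (fp & ?) = ?", (query_fp, query_fp)
    ).fetchall()

print(substructure_candidates(0b1011))  # benzene and phenol pass the prescreen
```

A real system would follow the SQL prescreen with an exact subgraph match on the surviving candidates, which is why keeping the prescreen inside the database pays off on large data sets.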
Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk
2013-01-01
The Fetal medicine foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) to evaluate Down syndrome screening based on free hCGβ, PAPP-A and nuchal translucency. The peculiarity of this algorithm is to use the degree of extremeness (DoE) instead of the multiple of the median (MoM). The biologists measuring maternal seric markers on Kryptor™ machines (Thermo Fisher Scientific) use Fast Screen pre I plus software for the prenatal risk calculation. This software integrates the PRC algorithm. Our study evaluates the data of 2.092 patient files of which 19 show a fœtal abnormality. These files have been first evaluated with the ViewPoint software based on MoM. The link between DoE and MoM has been analyzed and the different calculated risks compared. The study shows that Fast Screen pre I plus software gives the same risk results as ViewPoint software, but yields significantly fewer false positive results.
The HEP Software and Computing Knowledge Base
NASA Astrophysics Data System (ADS)
Wenaus, T.
2017-10-01
HEP software today is a rich and diverse domain in itself and exists within the mushrooming world of open source software. As HEP software developers and users we can be more productive and effective if our work and our choices are informed by a good knowledge of what others in our community have created or found useful. The HEP Software and Computing Knowledge Base, hepsoftware.org, was created to facilitate this by serving as a collection point and information exchange on software projects and products, services, training, computing facilities, and relating them to the projects, experiments, organizations and science domains that offer them or use them. It was created as a contribution to the HEP Software Foundation, for which a HEP S&C knowledge base was a much requested early deliverable. This contribution will motivate and describe the system, what it offers, its content and contributions both existing and needed, and its implementation (node.js based web service and javascript client app) which has emphasized ease of use for both users and contributors.
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
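The paper's exact row-key layout is not given in the abstract, so the snippet below is only a hedged illustration of the underlying idea: compose the HBase row key from the imaging hierarchy (project, subject, session, scan, slice) with fixed-width, zero-padded components so that lexicographic row ordering keeps hierarchically related records adjacent and therefore collocated. The field widths and separator are assumptions.

```python
def make_row_key(project, subject, session, scan, slice_index):
    """Compose an HBase row key so that lexicographic ordering groups a project's
    subjects, a subject's sessions, and so on, keeping related rows collocated.
    Widths are illustrative; a real deployment would size them to the data."""
    return "|".join([
        project[:16].ljust(16),      # project identifier, fixed width
        subject[:12].ljust(12),      # subject within the project
        f"{session:04d}",            # session number, zero-padded
        f"{scan:04d}",               # scan number within the session
        f"{slice_index:05d}",        # slice within the scan
    ])

# Slices of the same scan sort adjacently, so a single region tends to hold them together:
print(make_row_key("ProjA", "Subj001", 1, 3, 42))
```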
Helmy, Tamer Abdallah; El-Reweny, Ehab Mahmoud; Ghazy, Farahat Gomaa
2017-09-01
The partial pressure of venous to arterial carbon dioxide gradient (PCO2 gap) is considered an alternative marker of tissue hypoperfusion and has been used to guide treatment for shock. The aim of this study was to investigate the prognostic value of the venous-to-arterial carbon dioxide difference during early resuscitation of patients with septic shock and to compare it with that of lactate clearance and the Acute Physiology and Chronic Health Evaluation II (APACHE-II) score. Forty patients admitted to one Intensive Care Unit were enrolled. The APACHE-II score was calculated on admission. Arterial blood gas, central venous, and lactate samples were obtained on admission and after 6 h, and lactate clearance was calculated. Patients were classified retrospectively into Group I (survivors) and Group II (nonsurvivors). The Pv-aCO2 difference in the two groups was evaluated. Data were fed to the computer and analyzed using the IBM SPSS software package version 20.0. At T0, Group II showed a higher PCO2 gap (8.37 ± 1.36 mmHg) than Group I (7.55 ± 0.95 mmHg) with a statistically significant difference (P = 0.030). At T6, Group II showed a higher PCO2 gap (9.48 ± 1.47 mmHg) with a statistically significant difference (P < 0.001) and higher mean lactate values (62.71 ± 23.66 mg/dl) with a statistically significant difference (P < 0.001) than Group I, where the PCO2 gap and mean lactate values became much lower, 5.91 ± 1.12 mmHg and 33.61 ± 5.80 mg/dl, respectively. Group I showed higher lactate clearance (25.42 ± 6.79%) with a statistically significant difference (P < 0.001) than Group II (-69.40-15.46%). A high PCO2 gap >7.8 mmHg after 6 h from resuscitation of septic shock patients is associated with high mortality.
Enabling cost-effective multimodal trip planners through open transit data.
DOT National Transportation Integrated Search
2011-05-01
This study examined whether multimodal trip planners can be developed using open-source software and open data sources. OpenStreetMap (OSM), maintained by the nonprofit OpenStreetMap Foundation, is an open, freely available international rep...
Kute, V. B.; Shah, P. R.; Munjappa, B. C.; Gumber, M. R.; Patel, H. V.; Jain, S. H.; Engineer, D. P.; Naresh, V. V. Sai; Vanikar, A. V.; Trivedi, H. L.
2012-01-01
Acute kidney injury (AKI) is one of the most dreaded complications of severe malaria. We carried out a prospective study in 2010 to describe clinical characteristics, laboratory parameters, prognostic factors, and outcome in 59 (44 males, 15 females) smear-positive malaria patients with AKI. The severity of illness was assessed using Acute Physiology and Chronic Health Evaluation (APACHE) II, Sequential Organ Failure Assessment (SOFA), Multiple Organ Dysfunction Score (MODS), and Glasgow Coma Scale (GCS) scores. All patients received artesunate and hemodialysis (HD). The mean age of patients was 33.63 ± 14 years. Plasmodium falciparum malaria was seen in 76.3% (n = 45), Plasmodium vivax in 16.9% (n = 10), and mixed infection in 6.8% (n = 4) of patients. Presenting clinical features were fever (100%), nausea-vomiting (85%), oliguria (61%), abdominal pain/tenderness (50.8%), and jaundice (74.5%). Mean APACHE II, SOFA, MODS, and GCS scores were 18.1 ± 3, 10.16 ± 3.09, 9.71 ± 2.69, and 14.15 ± 1.67, respectively; all were higher among patients who died than among those who survived. APACHE II ≥20 and SOFA and MODS scores ≥12 were associated with higher mortality (P < 0.05). 34% of patients received blood component transfusion and exchange transfusion was done in 15%. The mean number of HD sessions required was 4.59 ± 3.03. Renal biopsies were performed in five patients (three with patchy cortical necrosis and two with acute tubular necrosis). 81.3% of patients had complete renal recovery and 11.8% succumbed to malaria. Prompt diagnosis, timely HD, and supportive therapy were associated with improved survival and recovery of kidney function in malaria with AKI. Mortality was associated with higher APACHE II, SOFA, MODS, and GCS scores and requirement of inotrope and ventilator support. PMID:22279340
Prognostic Factors in Cholinesterase Inhibitor Poisoning.
Sun, In O; Yoon, Hyun Ju; Lee, Kwang Young
2015-09-28
Organophosphates and carbamates are insecticides that are associated with high human mortality. The purpose of this study is to investigate the prognostic factors affecting survival in patients with cholinesterase inhibitor (CI) poisoning. This study included 92 patients with CI poisoning in the period from January 2005 to August 2013. We divided these patients into 2 groups (survivors vs. non-survivors), compared their clinical characteristics, and analyzed the predictors of survival. The mean age of the included patients was 56 years (range, 16-88). The patients included 57 (62%) men and 35 (38%) women. When we compared clinical characteristics between the survivor group (n=81, 88%) and non-survivor group (n=11, 12%), there were no differences in renal function, pancreatic enzymes, or serum cholinesterase level, except for serum bicarbonate level and APACHE II score. The serum bicarbonate level was lower in non-survivors than in survivors (12.45±2.84 vs. 18.36±4.73, P<0.01). The serum APACHE II score was higher in non-survivors than in survivors (24.36±5.22 vs. 12.07±6.67, P<0.01). The development of pneumonia during hospitalization was higher in non-survivors than in survivors (n=9, 82% vs. n=31, 38%, P<0.01). In multiple logistic regression analysis, serum bicarbonate concentration, APACHE II score, and pneumonia during hospitalization were the important prognostic factors in patients with CI poisoning. Serum bicarbonate and APACHE II score are useful prognostic factors in patients with CI poisoning. Furthermore, pneumonia during hospitalization was also important in predicting prognosis in patients with CI poisoning. Therefore, prevention and active treatment of pneumonia is important in the management of patients with CI poisoning.
Chen, Yun-Xia; Li, Chun-Sheng
2014-08-01
To evaluate the prognostic and risk-stratified ability of heart-type fatty acid-binding protein (H-FABP) in septic patients in the emergency department (ED). From August to November 2012, 295 consecutive septic patients were enrolled. Circulating H-FABP was measured. The predictive value of H-FABP for 28-day mortality, organ dysfunction on ED arrival, and requirement for mechanical ventilation or a vasopressor within 6 hours after ED arrival was assessed by the receiver operating characteristic curve and logistic regression and was compared with Acute Physiology and Chronic Health Evaluation (APACHE) II score, Mortality in Emergency Department Sepsis (MEDS) score, and Sequential Organ Failure Assessment score. The 28-day mortality, APACHE II, MEDS, and Sequential Organ Failure Assessment scores were much higher in H-FABP-positive patients. The incidence of organ dysfunction at ED arrival and requirement for mechanical ventilation or a vasopressor within 6 hours after ED arrival was higher in H-FABP-positive patients. Heart-type fatty acid-binding protein was an independent predictor of 28-day mortality and organ dysfunction. The area under the receiver operating characteristic curve for H-FABP predicting 28-day mortality and organ dysfunction was 0.784 and 0.755, respectively. Combination of H-FABP and MEDS improved the performance of MEDS in predicting organ dysfunction, and the difference of AUC was statistically significant (P<.05). The combinations of H-FABP and MEDS or H-FABP and APACHE II also improved the prognostic value of MEDS and APACHE II, but the areas under the curve were not statistically different. Heart-type fatty acid-binding protein was helpful for prognosis and risk stratification of septic patients in the ED. Copyright © 2014 Elsevier Inc. All rights reserved.
Bouharras-El Idrissi, Hicham; Molina-López, Jorge; Herrera-Quintana, Lourdes; Domínguez-García, Álvaro; Lobo-Támer, Gabriela; Pérez-Moreno, Irene; Pérez-de la Cruz, Antonio; Planells-Del Pozo, Elena
2016-11-29
Critically ill patients typically develop a catabolic stress state as a result of a systemic inflammatory response (SIRS) that alters clinical-nutritional biomarkers, increasing energy demands and nutritional requirements. The aim was to evaluate the status of albumin, prealbumin and transferrin in critically ill patients and the association between these clinical-nutritional parameters and severity during a seven-day stay in the intensive care unit (ICU). Multicenter, prospective, observational and analytical follow-up study. A total of 115 subjects in critical condition were included in this study. Clinical and nutritional parameters and severity were monitored at admission and on the seventh day of the ICU stay. A significant decrease in APACHE II and SOFA scores (p < 0.05) was observed throughout the evolution of critically ill patients in the ICU. In general, patients showed an alteration of most of the parameters analyzed. The status of albumin, prealbumin and transferrin was below reference levels both at admission and on the 7th day in the ICU. A high percentage of patients presented an altered status of albumin (71.3%), prealbumin (84.3%) and transferrin (69.0%). At admission, 27% to 47% of patients with altered protein parameters had APACHE II above 18. The number of patients with altered protein parameters and APACHE II below 18 was significantly higher than that of severe patients throughout the ICU stay (p < 0.01). In the multivariate analysis, low prealbumin status was the best predictor of critical severity (p < 0.05) both at admission and on the 7th day of the ICU stay. The results of the present study support the idea of including low prealbumin status as a severity predictor in the APACHE II scale, given the association found between severity and poor prealbumin status.
Matsumoto, Hisatake; Yamakawa, Kazuma; Ogura, Hiroshi; Koh, Taichin; Matsumoto, Naoya; Shimazu, Takeshi
2017-04-01
Activated immune cells such as monocytes are key factors in systemic inflammatory response syndrome (SIRS) following trauma and sepsis. Activated monocytes induce almost all tissue factor (TF) expression contributing to inflammation and coagulation. TF and CD13 double-positive microparticles (TF/CD13MPs) are predominantly released from these activated monocytes. This study aimed to evaluate TF/CD13MPs and assess their usefulness as a biomarker of pathogenesis in early SIRS following trauma and sepsis. This prospective study comprising 24 trauma patients, 25 severe sepsis patients, and 23 healthy controls was conducted from November 2012 to February 2015. Blood samples were collected from patients within 24 h after injury and diagnosis of severe sepsis and from healthy controls. Numbers of TF/CD13MPs were measured by flow cytometry immediately thereafter. Injury Severity Score (ISS) and Acute Physiology and Chronic Health Evaluation (APACHE) II and Sequential Organ Failure Assessment (SOFA) scores were calculated at patient enrollment. APACHE II and SOFA scores and International Society of Thrombosis and Haemostasis (ISTH) overt disseminated intravascular coagulation (DIC) diagnostic criteria algorithm were calculated at the time of enrollment of severe sepsis patients. Numbers of TF/CD13MPs were significantly increased in both trauma and severe sepsis patients versus controls and correlated significantly with ISS and APACHE II score in trauma patients and with APACHE II and ISTH DIC scores in severe sepsis patients. Increased numbers of TF/CD13MPs correlated significantly with severities in the acute phase in trauma and severe sepsis patients, suggesting that TF/CD13MPs are important in the pathogenesis of early SIRS following trauma and sepsis.
The Open Data Repository's Data Publisher
NASA Technical Reports Server (NTRS)
Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.
2015-01-01
Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to metadata standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.
Lowering the Barrier for Standards-Compliant and Discoverable Hydrological Data Publication
NASA Astrophysics Data System (ADS)
Kadlec, J.
2013-12-01
The growing need for sharing and integration of hydrological and climate data across multiple organizations has resulted in the development of distributed, services-based, standards-compliant hydrological data management and data hosting systems. The problem with these systems is complicated set-up and deployment. Many existing systems assume that the data publisher has remote-desktop access to a locally managed server and experience with computer network setup. For corporate websites, shared web hosting services with limited root access provide an inexpensive, dynamic web presence solution using the Linux, Apache, MySQL and PHP (LAMP) software stack. In this paper, we hypothesize that a webhosting service provides an optimal, low-cost solution for hydrological data hosting. We propose a software architecture of a standards-compliant, lightweight and easy-to-deploy hydrological data management system that can be deployed on the majority of existing shared internet webhosting services. The architecture and design is validated by developing Hydroserver Lite: a PHP and MySQL-based hydrological data hosting package that is fully standards-compliant and compatible with the Consortium of Universities for Advancement of Hydrologic Sciences (CUAHSI) hydrologic information system. It is already being used for management of field data collection by students of the McCall Outdoor Science School in Idaho. For testing, the Hydroserver Lite software has been installed on multiple different free and low-cost webhosting sites including Godaddy, Bluehost and 000webhost. The number of steps required to set-up the server is compared with the number of steps required to set-up other standards-compliant hydrologic data hosting systems including THREDDS, IstSOS and MapServer SOS.
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
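The report does not spell out its provisioning code, so the following is a minimal, hedged sketch of how a guest might be defined and started on a compute node through the libvirt Python bindings; the domain XML here is a stripped-down placeholder, not the configuration used on the Cray testbed, and the Ethernet-over-Aries networking is not reflected.

```python
import libvirt

# Placeholder domain definition; disk path, sizing, and networking are illustrative only.
DOMAIN_XML = """
<domain type='kvm'>
  <name>vcluster-node-0</name>
  <memory unit='GiB'>4</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vcluster-node-0.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")    # connect to the local QEMU/KVM hypervisor
domain = conn.defineXML(DOMAIN_XML)      # register the guest definition
domain.create()                          # boot the virtual cluster node
print(domain.name(), "running:", domain.isActive() == 1)
conn.close()
```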
ERIC Educational Resources Information Center
Blatecky, Alan; West, Ann; Spada, Mary
2002-01-01
Defines middleware, often called the "glue" that makes the elements of the cyberinfrastructure work together. Discusses how the National Science Foundation (NSF) Middleware Initiative (NMI) is consolidating expertise, software, and technology to address the critical and ubiquitous middleware issues facing research and education today.…
Teaching Engineering Design in a Laboratory Setting
ERIC Educational Resources Information Center
Hummon, Norman P.; Bullen, A. G. R.
1974-01-01
Discusses the establishment of an environmental systems laboratory at the University of Pittsburgh with the support of the Sloan Foundation. Indicates that the "real world" can be brought into the laboratory by simulating on computers, software systems, and data bases. (CC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Searles, D.B.
1993-03-01
The goal of the proposed work is the creation of a software system that will perform sophisticated pattern recognition and related functions at a level of abstraction and with expressive power beyond current general-purpose pattern-matching systems for biological sequences; and with a more uniform language, environment, and graphical user interface, and with greater flexibility, extensibility, embeddability, and ability to incorporate other algorithms, than current special-purpose analytic software.
1986-12-01
graphics: The package allows a character set which can be defined by users, giving the picture for a character by designating its pixels. Such characters ... type fonts and user-oriented "help" messages tailored to the operations being performed and user expertise. In general, critical design issues ... other volumes include command language, software design, description and analysis tools, database management system, operating systems; planning and
DOIDB: Reusing DataCite's search software as metadata portal for GFZ Data Services
NASA Astrophysics Data System (ADS)
Elger, K.; Ulbricht, D.; Bertelmann, R.
2016-12-01
GFZ Data Services is the central service point for the publication of research data at the Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences (GFZ). It provides data publishing services to scientists of GFZ, associated projects, and associated institutions. The publishing services aim to make research data and physical samples visible and citable, by assigning persistent identifiers (DOI, IGSN) and by complementing existing IT infrastructure. To integrate several research domains, a modular software stack made of free software components has been created to manage data and metadata as well as to register persistent identifiers [1]. The pivotal component for the registration of DOIs is the DOIDB. It has been derived from three software components provided by DataCite [2] that moderate the registration of DOIs and the deposition of metadata, allow the dissemination of metadata, and provide a user interface to navigate and discover datasets. The DOIDB acts as a proxy to the DataCite infrastructure and, in addition to the DataCite metadata schema, it allows depositing and disseminating metadata following the ISO 19139 and NASA GCMD DIF schemas. The search component has been modified to meet the requirements of a geosciences metadata portal. In particular, the search component has been altered to make use of Apache Solr's capability to index and query spatial coordinates. Furthermore, the user interface has been adjusted to provide a first impression of the data by showing a map, summary information and subjects. DOIDB and its components are available on GitHub [3]. We present a software solution for the registration of DOIs that integrates with existing data systems, keeps track of registered DOIs, and provides a metadata portal to discover datasets [4]. [1] Ulbricht, D.; Elger, K.; Bertelmann, R.; Klump, J. panMetaDocs, eSciDoc, and DOIDB—An Infrastructure for the Curation and Publication of File-Based Datasets for GFZ Data Services. ISPRS Int. J. Geo-Inf. 2016, 5, 25. http://doi.org/10.3390/ijgi5030025 [2] https://github.com/datacite [3] https://github.com/ulbricht/search/tree/doidb , https://github.com/ulbricht/mds/tree/doidb , https://github.com/ulbricht/oaip/tree/doidb [4] http://doidb.wdc-terra.org
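The abstract notes that the search component was extended to use Apache Solr's spatial indexing; the exact schema is not given, so the query below is only a hedged sketch using the pysolr client and Solr's bounding-box filter, with the core name, field names, and spatial field name (coverage) as assumptions.

```python
import pysolr

# Hypothetical Solr core holding the DOIDB metadata index.
solr = pysolr.Solr("http://localhost:8983/solr/doidb", timeout=10)

# Find datasets whose spatial coverage lies within ~100 km of Potsdam,
# combining a full-text query with Solr's bbox spatial filter.
results = solr.search(
    "fulltext:reanalysis",
    **{"fq": "{!bbox sfield=coverage pt=52.38,13.06 d=100}", "rows": 10},
)
for doc in results:
    print(doc.get("doi"), doc.get("title"))
```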
Software Framework for Development of Web-GIS Systems for Analysis of Georeferenced Geophysical Data
NASA Astrophysics Data System (ADS)
Okladnikov, I.; Gordov, E. P.; Titov, A. G.
2011-12-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which may reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated software framework has been created for the rapid development of information-computational systems, based on Web-GIS technologies, that provide such support. The software framework consists of three basic parts: a computational kernel developed using the ITTVIS Interactive Data Language (IDL), a set of PHP controllers run within a specialized web portal, and a JavaScript class library for developing typical components of a web mapping application's graphical user interface (GUI) based on AJAX technology. The computational kernel comprises a number of modules for dataset access, mathematical and statistical data analysis, and visualization of results. The specialized web portal consists of the Apache web server, the OGC-compliant GeoServer software, which is used as the basis for presenting cartographic information over the Web, and a set of PHP controllers implementing the web mapping application logic and governing the computational kernel. The JavaScript library for graphical user interface development is based on the GeoExt library, which combines the ExtJS framework and OpenLayers. Based on the software framework, an information-computational system for complex analysis of large georeferenced data archives was developed. Structured environmental datasets available for processing now include two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, meteorological observational data for the territory of the former USSR for the 20th century, and others. The current version of the system is already in use in scientific research; in particular, it was recently used to analyze climate change in Siberia and its regional impacts. The software framework presented allows rapid development of Web-GIS systems for geophysical data analysis, thus providing specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. This work is partially supported by RFBR grants #10-07-00547, #11-05-01190, and SB RAS projects 4.31.1.5, 4.31.2.7, 4, 8, 9, 50 and 66.
Search Analytics: Automated Learning, Analysis, and Search with Open Source
NASA Astrophysics Data System (ADS)
Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.
2016-12-01
The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts, often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extraction and visualization of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a user's accuracy of 82 to 84%." Without reading a single paper, we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection user's accuracy of 82-84%, which could have tangible, direct downstream implications for crop protection. Automatically assimilating this information expedites and supplements human analysis, and, ultimately, Search Analytics and its foundation of open source tools will result in more efficient scientific investment and research.
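As a hedged illustration of the kind of pipeline described (not the project's actual code), the sketch below extracts text and metadata from a publication PDF with the tika Python bindings and indexes it into a Solr core with pysolr; the core name and field names are assumptions, and relationship extraction and D3 visualization would sit downstream of this step.

```python
import pysolr
from tika import parser

def index_document(pdf_path, solr_url="http://localhost:8983/solr/search_analytics"):
    """Extract text and metadata from a publication PDF with Apache Tika, then index it in Solr."""
    parsed = parser.from_file(pdf_path)          # runs a local Tika server behind the scenes
    doc = {
        "id": pdf_path,
        "content": parsed.get("content") or "",
        "title": (parsed.get("metadata") or {}).get("title", ""),
    }
    pysolr.Solr(solr_url, timeout=10).add([doc])
    return doc

# Downstream, extracted relationships (e.g. "1 m spatial resolution" near "82-84% user's
# accuracy for cutleaf teasel") could be queried from this index and visualized with D3.
```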
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
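The report describes the certification-trail technique only in general terms; the sketch below illustrates the general idea on a toy sorting example (not the scheme in the report): the first execution emits both an answer and a trail (here, the sorting permutation), and an independent checker uses the trail to validate the answer cheaply, so an error introduced by a transient fault in either execution is detected.

```python
def sort_with_trail(data):
    """First execution: produce the answer plus a certification trail (the permutation)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    answer = [data[i] for i in trail]
    return answer, trail

def certify(data, answer, trail):
    """Second, cheaper execution: use the trail to check the answer in linear time."""
    if sorted(trail) != list(range(len(data))):              # trail must be a permutation
        return False
    if [data[i] for i in trail] != answer:                   # answer must match permuted input
        return False
    return all(a <= b for a, b in zip(answer, answer[1:]))   # and must be ordered

data = [5, 3, 9, 1]
answer, trail = sort_with_trail(data)
assert certify(data, answer, trail)   # a fault corrupting answer or trail fails this check
```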
The roles of the AAS Journals' Data Editors
NASA Astrophysics Data System (ADS)
Muench, August; NASA/SAO ADS, CERN/Zenodo.org, Harvard/CfA Wolbach Library
2018-01-01
I will summarize the community services provided by the AAS Journals' Data Editors to support authors when citing and preserving the software and data used in the published literature. In addition, I will describe the life of a piece of code as it passes through the current workflows for software citation in astronomy. Using this “lifecycle” I will detail the ongoing work funded by a grant from the Alfred P. Sloan Foundation to the American Astronomical Society to improve the citation of software in the literature. The funded development team and advisory boards, made up of non-profit publishers, literature indexers, and preservation archives, are implementing the Force11 software citation principles for astronomy journals. The outcome of this work will be new workflows for authors and developers that fit into their current practices while enabling versioned citation of software and granular credit for its creators.
A rocket-borne data-manipulation experiment using a microprocessor
NASA Technical Reports Server (NTRS)
Davis, L. L.; Smith, L. G.; Voss, H. D.
1979-01-01
The development of a data-manipulation experiment using a Z-80 microprocessor is described. The instrumentation is included in the payloads of two Nike Apache sounding rockets used in an investigation of energetic particle fluxes. The data from an array of solid-state detectors and an electrostatic analyzer is processed to give the energy spectrum as a function of pitch angle. The experiment performed well in its first flight test: Nike Apache 14.543 was launched from Wallops Island at 2315 EST on 19 June 1978. The system was designed to be easily adaptable to other data-manipulation requirements and some suggestions for further development are included.
Development and validation of a blade-element mathematical model for the AH-64A Apache helicopter
NASA Technical Reports Server (NTRS)
Mansur, M. Hossein
1995-01-01
A high-fidelity blade-element mathematical model for the AH-64A Apache Advanced Attack Helicopter has been developed by the Aeroflightdynamics Directorate of the U.S. Army's Aviation and Troop Command (ATCOM) at Ames Research Center. The model is based on the McDonnell Douglas Helicopter Systems' (MDHS) Fly Real Time (FLYRT) model of the AH-64A (acquired under contract) which was modified in-house and augmented with a blade-element-type main-rotor module. This report describes, in detail, the development of the rotor module, and presents some results of an extensive validation effort.
Enhancing the AliEn Web Service Authentication
NASA Astrophysics Data System (ADS)
Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping
2011-12-01
Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems, and they are becoming the de facto standard for enabling interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, which run automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, used to provide restricted proxying and delegation. An authentication framework was developed for the AliEn2 web services to add to the Apache web server the ability to accept X.509 certificates and proxy certificates from the client side.
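The abstract does not detail the framework's implementation; as a hedged sketch of one building block only, the code below uses the cryptography library to perform the basic structural check that distinguishes a Grid proxy certificate, namely that the proxy's issuer equals the subject of the end-entity (or parent proxy) certificate that signed it. Full RFC 3820 path validation, signature verification, and CA chain checks are deliberately omitted.

```python
from cryptography import x509

def looks_like_proxy_of(proxy_pem: bytes, parent_pem: bytes) -> bool:
    """Basic structural check for a Grid proxy certificate: its issuer must be the
    subject of the certificate that signed it. Signature verification and RFC 3820
    policy checks are intentionally omitted in this sketch."""
    proxy = x509.load_pem_x509_certificate(proxy_pem)
    parent = x509.load_pem_x509_certificate(parent_pem)
    return proxy.issuer == parent.subject

# In a complete authentication module this check would be combined with signature
# verification against the parent key and validation of the parent chain up to a CA.
```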
Chen, Yun-Xia; Li, Chun-Sheng
2014-04-16
The predisposition, infection, response and organ dysfunction (PIRO) staging system was designed as a stratification tool to deal with the inherent heterogeneity of septic patients. The present study was conducted to assess the performance of PIRO in predicting multiple organ dysfunction (MOD), intensive care unit (ICU) admission, and 28-day mortality in septic patients in the emergency department (ED), and to compare this scoring system with the Mortality in Emergency Department Sepsis (MEDS) and Acute Physiology and Chronic Health Evaluation (APACHE II) scores. Consecutive septic patients (n = 680) admitted to the ED of Beijing Chao-Yang Hospital were enrolled. PIRO, MEDS, and APACHE II scores were calculated for each patient on ED arrival. Organ function was reassessed within 3 days of enrollment. All patients were followed up for 28 days. Outcome criteria were the development of MOD within 3 days, ICU admission or death within 28 days after enrollment. The predictive ability of the four components of PIRO was analyzed separately. Receiver operating characteristic (ROC) curve and logistic regression analysis were used to assess the prognostic and risk stratification value of the scoring systems. Organ dysfunction independently predicted ICU admission, MOD, and 28-day mortality, with areas under the ROC curve (AUC) of 0.888, 0.851, and 0.816, respectively. The predictive value of predisposition, infection, and response was weaker than that of organ dysfunction. A negative correlation was found between the response component and MOD, as well as mortality. PIRO, MEDS, and APACHE II scores significantly differed between patients who did and did not meet the outcome criteria (P < 0.001). PIRO and APACHE II independently predicted ICU admission and MOD, but MEDS did not. All three systems were independent predictors of 28-day mortality with similar AUC values. The AUC of PIRO was 0.889 for ICU admission, 0.817 for MOD, and 0.744 for 28-day mortality. The AUCs of PIRO were significantly greater than those of APACHE II and MEDS (P < 0.05) in predicting ICU admission and MOD. The study indicates that PIRO is helpful for risk stratification and prognostic determinations in septic patients in the ED.
[The association between early blood glucose fluctuation and prognosis in critically ill patients].
Tang, Jian; Gu, Qin
2012-01-01
To investigate the association between early blood glucose level fluctuation and the prognosis of critically ill patients. A retrospective study involving 95 critically ill patients in the intensive care unit (ICU) was conducted. According to the 28-day outcome after admission to the ICU, the patients were divided into nonsurvivors (43 cases) and survivors (52 cases), and their blood glucose level was monitored in the first 72 hours. Blood glucose concentration at admission (BGadm), mean blood glucose level (MBG), hyperglycemia index (HGI), glycemic lability index (GLI), incidence of hypoglycemia and total dosage of intravenous insulin for each patient were compared. Whether each index was an independent risk factor of mortality was determined by multivariate logistic regression analysis, and the predictive value by comparing the area under the receiver operating characteristic curve (ROC curve, AUC) of each index. The BGadm (mmol/L), MBG (mmol/L), HGI and the incidence of hypoglycemia showed no significant differences between nonsurvivors and survivors [BGadm: 9.87 ± 4.48 vs. 9.26 ± 3.07, MBG: 8.59 ± 1.23 vs. 8.47 ± 1.01, HGI(6.0): 2.45 ± 0.94 vs. 1.68 ± 1.05, HGI(8.3): 0.84 ± 0.70 vs. 0.68 ± 0.51, incidence of hypoglycemia: 9.30% vs. 5.77%, all P > 0.05], but the acute physiology and chronic health evaluation II (APACHE II) score, GLI and the total dosage of intravenous insulin (U) were significantly higher in nonsurvivors than in survivors [APACHE II score: 23 ± 6 vs. 19 ± 6, GLI: 56.96 (65.43) vs. 23.87 (41.62), total dosage of intravenous insulin: 65.5 (130.5) vs. 12.5 (90.0), all P < 0.05]. Multivariate logistic regression analysis showed that APACHE II score and GLI were both independent risk factors [APACHE II score: odds ratio (OR) = 1.09, 95% confidence interval (95%CI) 1.01-1.17; GLI: OR = 1.03, 95%CI 1.01-1.06, both P < 0.05]. When the ROC curve was plotted, the AUC of the APACHE II score and GLI was 0.69 and 0.71, respectively, with no significant difference (P > 0.05). Early fluctuation of blood glucose is a significant independent risk factor of mortality in critically ill patients. Controlling the early fluctuation of blood glucose concentration might improve patient outcomes.
Earobics[R]. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
"Earobics"[R] is interactive software that provides students in pre-K through third grade with individual, systematic instruction in early literacy skills as students interact with animated characters. "Earobics[R] Foundations" is a version for pre-Kindergarten, Kindergarten, and first graders. "Earobics[R]…
[Smart eye data : Development of a foundation for medical research using Smart Data applications].
Kortüm, K; Müller, M; Hirneiß, C; Babenko, A; Nasseh, D; Kern, C; Kampik, A; Priglinger, S; Kreutzer, T C
2016-06-01
Smart Data means intelligent data accumulation and the evaluation of large data sets. This is particularly important in ophthalmology as more and more data are being created. Increasing knowledge and personalized therapies are expected from combining clinical data from electronic health records (EHR) with measurement data. In this study, we investigated the possibility of consolidating data from measurement devices and clinical data in a data warehouse (DW). An EHR was adjusted to the needs of ophthalmology and the contents of referral letters were extracted. The data were imported into a DW overnight. Measuring devices were connected to the EHR by an HL7 standard interface and the use of a picture archiving and communications system (PACS). Data were exported from the review software using self-developed software. For data analysis, the software was adapted to the specific requirements of ophthalmology. In the EHR, 12 graphical user interfaces were created and the data from 32,234 referral letters were extracted. A total of 23 diagnostic devices could be linked to the PACS, and 85,114 optical coherence tomography (OCT) scans, 19,098 measurements from the IOLMaster, and 5,425 Pentacam examinations were imported into the DW, which covers over 300,000 patients. Data discovery software was modified to provide filtering methods. By building a DW, a foundation for clinical and epidemiological studies was established. In the future, decision support systems and strategies for personalized therapies can be based on such a database.
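The paper does not describe its interface code, so the following is only a hedged sketch of how a device observation arriving over an HL7 v2 interface might be parsed before loading into the DW, using the python-hl7 package; the message content, segment layout, and field values are illustrative, not taken from the study.

```python
import hl7

# Illustrative ORU-style message from a biometry device; all identifiers and values are made up.
message = "\r".join([
    "MSH|^~\\&|IOLMASTER|EYECLINIC|EHR|HOSPITAL|20160601120000||ORU^R01|MSG0001|P|2.3",
    "PID|1||123456||Doe^Jane",
    "OBX|1|NM|AXIAL_LENGTH^Axial length^L||23.45|mm|||||F",
])

parsed = hl7.parse(message)
obx = parsed.segment("OBX")                       # first observation segment
print("observation:", str(obx[3]), "value:", str(obx[5]), str(obx[6]))
# A loader would map these fields onto the DW schema alongside the EHR referral-letter data.
```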
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James
2012-01-01
Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but are also expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed, this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
NSF Policies on Software and Data Sharing and their Implementation
NASA Astrophysics Data System (ADS)
Katz, Daniel
2014-01-01
Since January 2011, the National Science Foundation has required a Data Management plan to be submitted with all proposals. This plan should include a description of how the proposers will share the products of the research (http://www.nsf.gov/bfa/dias/policy/dmp.jsp). What constitutes such data will be determined by the community of interest through the process of peer review and program management. This may include, but is not limited to: data, publications, samples, physical collections, software and models. In particular, “investigators and grantees are encouraged to share software and inventions created under an award or otherwise make them or their products widely available and usable.”
Demonstration of a Safety Analysis on a Complex System
NASA Technical Reports Server (NTRS)
Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey;
1997-01-01
For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards is done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: There exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.
ERIC Educational Resources Information Center
MacKenzie, Douglas
1996-01-01
Discusses the use of computer systems for archival applications based on experiences at the Demarco European Arts Foundation (Scotland) and the TAMH Project, an attempt to build a virtual museum of Tay Valley maritime history. Highlights include hardware; development software; data representation, including storage space versus quality;…
An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency
NASA Astrophysics Data System (ADS)
Phillips, Dewanne Marie
Software-intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system, so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software-intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By giving greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews with subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective, with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrate that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software systems engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and architectural design are foundational to resiliency, a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time".
To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.
HEP Software Foundation Community White Paper Working Group - Data Analysis and Interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauerdick, Lothar
At the heart of experimental high energy physics (HEP) is the development of facilities and instrumentation that provide sensitivity to new phenomena. Our understanding of nature at its most fundamental level is advanced through the analysis and interpretation of data from sophisticated detectors in HEP experiments. The goal of data analysis systems is to realize the maximum possible scientific potential of the data within the constraints of computing and human resources in the least time. To achieve this goal, future analysis systems should empower physicists to access the data with a high level of interactivity, reproducibility and throughput capability. As part of the HEP Software Foundation Community White Paper process, a working group on Data Analysis and Interpretation was formed to assess the challenges and opportunities in HEP data analysis and develop a roadmap for activities in this area over the next decade. In this report, the key findings and recommendations of the Data Analysis and Interpretation Working Group are presented.
Development of Data Processing Software for NBI Spectroscopic Analysis System
NASA Astrophysics Data System (ADS)
Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong
2015-04-01
A set of data processing software is presented in this paper for processing NBI spectroscopic data. For better and more scientific management and querying of these data, they are managed uniformly by the NBI data server. The data processing software offers the functions of uploading beam spectral original and analytic data to the data server manually and automatically, querying and downloading all the NBI data, as well as dealing with local LZO data. The software is composed of a server program and a client program. The server software is programmed in C/C++ under a CentOS development environment. The client software is developed under a VC 6.0 platform, which offers convenient operational human interfaces. The network communications between the server and the client are based on TCP. With the help of this software, the NBI spectroscopic analysis system realizes unattended automatic operation, and the clear interface also makes it much more convenient to offer beam intensity distribution data and beam power data to operators for operation decision-making. supported by National Natural Science Foundation of China (No. 11075183), the Chinese Academy of Sciences Knowledge Innovation
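The client/server split described above can be illustrated with a minimal TCP exchange. The sketch below is illustrative only: the actual NBI software is written in C/C++ and VC 6.0, and the host, port, and the length-prefixed JSON message format used here are assumptions made for the example.

```python
# Minimal sketch of a TCP upload: a client sends one analytic-data record to a
# stand-in data server.  All names, the port, and the message format are assumed.
import json
import socket
import struct
import threading
import time

HOST, PORT = "127.0.0.1", 9090   # hypothetical data-server address

def recv_exactly(conn, n):
    """Read exactly n bytes from a TCP connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

def serve_once():
    """Stand-in for the data server: accept one upload and acknowledge it."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            size = struct.unpack("!I", recv_exactly(conn, 4))[0]   # 4-byte length prefix
            record = json.loads(recv_exactly(conn, size).decode())
            print("server stored record for shot", record["shot"])

def upload(record):
    """Client side: send one record to the server over TCP."""
    payload = json.dumps(record).encode()
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(struct.pack("!I", len(payload)) + payload)

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.2)                                   # let the server start listening
upload({"shot": 42, "beam_power_kW": 512.3, "spectrum": [0.1, 0.4, 0.2]})
time.sleep(0.2)                                   # let the server print before exit
```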
Evaluation of Gear Condition Indicator Performance on Rotorcraft Fleet
NASA Technical Reports Server (NTRS)
Antolick, Lance J.; Branning, Jeremy S.; Wade, Daniel R.; Dempsey, Paula J.
2010-01-01
The U.S. Army is currently expanding its fleet of Health Usage Monitoring Systems (HUMS) equipped aircraft at significant rates, to now include over 1,000 rotorcraft. Two different on-board HUMS, the Honeywell Modern Signal Processing Unit (MSPU) and the Goodrich Integrated Vehicle Health Management System (IVHMS), are collecting vibration health data on aircraft that include the Apache, Blackhawk, Chinook, and Kiowa Warrior. The objective of this paper is to recommend the most effective gear condition indicators for fleet use based on both a theoretical foundation and field data. Gear diagnostics with better performance will be recommended based on both a theoretical foundation and results of in-fleet use. In order to evaluate gear condition indicator performance on rotorcraft fleets, results of more than five years of health monitoring for gear faults in the entire HUMS-equipped Army helicopter fleet will be presented. More than ten examples of gear faults indicated by the gear CI have been compiled and each reviewed for accuracy. False alarm indications will also be discussed. Performance data from test rigs and seeded fault tests will also be presented. The results of the fleet analysis will be discussed, and a performance metric assigned to each of the competing algorithms. Gear fault diagnostic algorithms that are compliant with ADS-79A will be recommended for future use and development. The performance of gear algorithms used in the commercial units and the effectiveness of the gear CI as a fault identifier will be assessed using the criteria outlined in ADS-79A-HDBK, an Army handbook that outlines the conversion from Reliability Centered Maintenance to the On-Condition status of Condition Based Maintenance.
NASA Technical Reports Server (NTRS)
Fountain T.; Tilak, S.; Shin, P.; Hubbard, P.; Freudinger, L.
2009-01-01
The Open Source DataTurbine Initiative is an international community of scientists and engineers sharing a common interest in real-time streaming data middleware and applications. The technology base of the OSDT Initiative is the DataTurbine open source middleware. Key applications of DataTurbine include coral reef monitoring, lake monitoring and limnology, biodiversity and animal tracking, structural health monitoring and earthquake engineering, airborne environmental monitoring, and environmental sustainability. DataTurbine software emerged as a commercial product in the 1990s from collaborations between NASA and private industry. In October 2007, a grant from the USA National Science Foundation (NSF) Office of Cyberinfrastructure allowed us to transition DataTurbine from a proprietary software product into an open source software initiative. This paper describes the DataTurbine software and highlights key applications in environmental monitoring.
Offline software for the DAMPE experiment
NASA Astrophysics Data System (ADS)
Wang, Chi; Liu, Dong; Wei, Yifeng; Zhang, Zhiyong; Zhang, Yunlong; Wang, Xiaolian; Xu, Zizong; Huang, Guangshun; Tykhonov, Andrii; Wu, Xin; Zang, Jingjing; Liu, Yang; Jiang, Wei; Wen, Sicheng; Wu, Jian; Chang, Jin
2017-10-01
A software system has been developed for the DArk Matter Particle Explorer (DAMPE) mission, a satellite-based experiment. The DAMPE software is mainly written in C++ and steered using a Python script. This article presents an overview of the DAMPE offline software, including the major architecture design and specific implementation for simulation, calibration and reconstruction. The whole system has been successfully applied to DAMPE data analysis. Some results obtained using the system, from simulation and beam test experiments, are presented. Supported by Chinese 973 Program (2010CB833002), the Strategic Priority Research Program on Space Science of the Chinese Academy of Science (CAS) (XDA04040202-4), the Joint Research Fund in Astronomy under cooperative agreement between the National Natural Science Foundation of China (NSFC) and CAS (U1531126) and 100 Talents Program of the Chinese Academy of Science
Upgraded cameras for the HESS imaging atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gérard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-François; Gräber, Tobias; Hinton, James; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, François
2016-08-01
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes, sensitive to cosmic gamma rays of energies between 30 GeV and several tens of TeV. Four of them started operations in 2003 and their photomultiplier tube (PMT) cameras are currently undergoing a major upgrade, with the goals of improving the overall performance of the array and reducing the failure rate of the ageing systems. With the exception of the 960 PMTs, all components inside the camera have been replaced: these include the readout and trigger electronics, the power, ventilation and pneumatic systems and the control and data acquisition software. New designs and technical solutions have been introduced: the readout makes use of the NECTAr analog memory chip, which samples and stores the PMT signals and was developed for the Cherenkov Telescope Array (CTA). The control of all hardware subsystems is carried out by an FPGA coupled to an embedded ARM computer, a modular design which has proven to be very fast and reliable. The new camera software is based on modern C++ libraries such as Apache Thrift, ØMQ and Protocol buffers, offering very good performance, robustness, flexibility and ease of development. The first camera was upgraded in 2015, the other three cameras are foreseen to follow in fall 2016. We describe the design, the performance, the results of the tests and the lessons learned from the first upgraded H.E.S.S. camera.
NASA Astrophysics Data System (ADS)
Gibson, Justus; Stencel, Robert E.; ARCES Team; Ketzeback, W.; Barentine, J.; Bradley, A.; Coughlin, J.; Dembicky, J.; Hawley, S.; Huehnerhoff, J.; Leadbeater, R.; McMillan, R.; Saurage, G.; Schmidt, S.; Ule, N.; Wallerstein, G.; York, D.
2018-06-01
Worldwide interest in the recent eclipse of epsilon Aurigae resulted in the generation of several extensive data sets, including high resolution spectroscopic monitoring. This led to the discovery, among other things, of the existence of a mass transfer stream, seen notably during third contact. We explored spectroscopic facets of the mass transfer stream during third contact, using high resolution spectra obtained with the ARCES and TripleSpec instruments at Apache Point Observatory. One hundred and sixteen epochs of data were obtained between 2009 and 2012, and equivalent widths and line velocities measured for high versus low eccentricity accretion disk lines. These datasets also enable the mid-eclipse enhancement of the He I 10830A line to be measured in greater detail, and led to the discovery of the P Cygni shape of the Pa-beta line at third contact. We found evidence of higher speed material, associated with the mass transfer stream, persisting between third and fourth eclipse contacts. We visualized the disk and stream interaction using SHAPE software, and used CLOUDY software to estimate that the source of the enhanced He I 10830A absorption arises from a region with log nH = 11 cm-3 and temperature of 20,000 K, consistent with a mid-B type central star. We thank the following for their contributions to this paper: William Ketzeback, John Barentine, Jeffrey Coughlin, Robin Leadbeater, Gabrelle Saurage, and others. This paper has been submitted to Monthly Notices.
Acute physiology and chronic health evaluation (APACHE II) and Medicare reimbursement
Wagner, Douglas P.; Draper, Elizabeth A.
1984-01-01
This article describes the potential for the acute physiology score (APS) of acute physiology and chronic health evaluation (APACHE) II, to be used as a severity adjustment to diagnosis-related groups (DRG's) or other diagnostic classifications. The APS is defined by a relative value scale applied to 12 objective physiologic variables routinely measured on most hospitalized patients shortly after hospital admission. For intensive care patients, APS at admission is strongly related to subsequent resource costs of intensive care for 5,790 consecutive admissions to 13 large hospitals, across and within diagnoses. The APS could also be used to evaluate quality of care, medical technology, and the response to changing financial incentives. PMID:10311080
A Software Laboratory Environment for Computer-Based Problem Solving.
ERIC Educational Resources Information Center
Kurtz, Barry L.; O'Neal, Micheal B.
This paper describes a National Science Foundation-sponsored project at Louisiana Technological University to develop computer-based laboratories for "hands-on" introductions to major topics of computer science. The underlying strategy is to develop structured laboratory environments that present abstract concepts through the use of…
Development of a Traditional/Computer-aided Graphics Course for Engineering Technology.
ERIC Educational Resources Information Center
Anand, Vera B.
1985-01-01
Describes a two-semester-hour freshman course in engineering graphics which uses both traditional and computerized instruction. Includes course description, computer graphics topics, and recommendations. Indicates that combining interactive graphics software with development of simple programs gave students a better foundation for upper-division…
Teaching Model Building to High School Students: Theory and Reality.
ERIC Educational Resources Information Center
Roberts, Nancy; Barclay, Tim
1988-01-01
Builds on a National Science Foundation (NSF) microcomputer based laboratory project to introduce system dynamics into the precollege setting. Focuses on providing students with powerful and investigatory theory building tools. Discusses developed hardware, software, and curriculum materials used to introduce model building and simulations into…
Finding Text-Supported Gene-to-Disease Co-appearances with MOPED-Digger.
Kolker, Eugene; Janko, Imre; Montague, Elizabeth; Higdon, Roger; Stewart, Elizabeth; Choiniere, John; Lai, Aaron; Eckert, Mary; Broomall, William; Kolker, Natali
2015-12-01
Gene/disease associations are a critical part of exploring disease causes and ultimately cures, yet the publications that might provide such information are too numerous to be manually reviewed. We present a software utility, MOPED-Digger, that enables focused human assessment of literature by applying natural language processing (NLP) to search for customized lists of genes and diseases in titles and abstracts from biomedical publications. The results are ranked lists of gene/disease co-appearances and the publications that support them. Analysis of 18,159,237 PubMed title/abstracts yielded 1,796,799 gene/disease co-appearances that can be used to focus attention on the most promising publications for a possible gene/disease association. An integrated score is provided to enable assessment of broadly presented published evidence to capture more tenuous connections. MOPED-Digger is written in Java and uses Apache Lucene 5.0 library. The utility runs as a command-line program with a variety of user-options and is freely available for download from the MOPED 3.0 website (moped.proteinspire.org).
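As a rough illustration of the co-appearance ranking idea (not the MOPED-Digger implementation itself, which is written in Java on the Apache Lucene library), the sketch below counts gene/disease term pairs that occur in the same title/abstract and ranks them; the term lists and records are invented.

```python
# Toy co-appearance counter: every gene/disease pair found in the same
# title/abstract counts as one co-appearance, and pairs are ranked by count.
from collections import Counter
from itertools import product

genes = ["BRCA1", "TP53"]                          # hypothetical custom gene list
diseases = ["breast cancer", "lymphoma"]           # hypothetical disease list

records = [                                        # invented title/abstract strings
    "BRCA1 mutations and hereditary breast cancer risk",
    "TP53 pathway alterations in lymphoma and breast cancer",
]

co_counts = Counter()
for text in records:
    low = text.lower()
    hit_genes = [g for g in genes if g.lower() in low]
    hit_diseases = [d for d in diseases if d in low]
    co_counts.update(product(hit_genes, hit_diseases))

for (gene, disease), n in co_counts.most_common():
    print(f"{gene} / {disease}: {n} co-appearance(s)")
```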
TOPDOM: database of conservatively located domains and motifs in proteins.
Varga, Julia; Dobson, László; Tusnády, Gábor E
2016-09-01
The TOPDOM database-originally created as a collection of domains and motifs located consistently on the same side of the membranes in α-helical transmembrane proteins-has been updated and extended by taking into consideration consistently localized domains and motifs in globular proteins, too. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and predict topology in case of transmembrane proteins, and by applying a thorough search for domains and motifs as well as utilizing the most up-to-date version of all source databases, we managed to reach a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. TOPDOM database is available at http://topdom.enzim.hu The webpage utilizes the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high performance computer. tusnady.gabor@ttk.mta.hu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Model of load balancing using reliable algorithm with multi-agent system
NASA Astrophysics Data System (ADS)
Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.
2017-04-01
Massive technology development is linear with the growth of internet users, which increases network traffic activity and the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load on a large-scale system. A mobile agent collects resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. The methodology consisted of defining the system identification, specification requirements, network topology, and system infrastructure design. The simulation sent 1800 requests over 10 s from users to the servers and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with the existing method. Results of the simulation show that the LFB method with mobile agents can balance load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
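A minimal sketch of the selection rule implied by "least time first byte", not the authors' implementation: the dispatcher prefers the backend with the lowest recent time-to-first-byte among servers whose agent-reported load is acceptable. The server names, threshold, and numbers are invented.

```python
# Least-TTFB-first selection using agent-reported state per backend.
from statistics import mean

backends = {
    "srv-a": {"ttfb": [0.021, 0.025, 0.019], "load": 0.42},
    "srv-b": {"ttfb": [0.012, 0.014, 0.013], "load": 0.91},   # fast but overloaded
    "srv-c": {"ttfb": [0.017, 0.016, 0.018], "load": 0.35},
}

def pick_backend(state, max_load=0.85):
    """Pick the backend with the lowest mean TTFB among acceptably loaded servers."""
    healthy = {name: s for name, s in state.items() if s["load"] <= max_load}
    candidates = healthy or state          # fall back to all servers if none qualify
    return min(candidates, key=lambda name: mean(candidates[name]["ttfb"]))

print(pick_backend(backends))              # -> 'srv-c'
```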
Integrated photovoltaic (PV) monitoring system
NASA Astrophysics Data System (ADS)
Mahinder Singh, Balbir Singh; Husain, NurSyahidah; Mohamed, Norani Muti
2012-09-01
The main aim of this research work is to design an accurate and reliable monitoring system to be integrated with a solar electricity generating system. The performance monitoring system is required to ensure that the PVEGS is operating at an optimum level. The PV monitoring system is able to measure all the important parameters that determine optimum performance. The measured values are recorded continuously, as the data acquisition system is connected to a computer, and data are stored at fixed intervals. The data can be used locally and can also be transmitted via the internet. The data that appear directly on the local monitoring system are displayed via a graphical user interface created using Visual Basic, and Apache software was used for data transmission. The accuracy and reliability of the developed monitoring system were tested against data captured simultaneously by a standard power quality analyzer device. The high correlation value of 97% indicates the level of accuracy of the monitoring system. The aim of a system for continuous monitoring is thus achieved: the data can be used locally and viewed simultaneously at a remote system.
Evolution of the Generic Lock System at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian Bevins; Yves Roblin
2003-10-13
The Generic Lock system is a software framework that allows highly flexible feedback control of large distributed systems. It allows system operators to implement new feedback loops between arbitrary process variables quickly and with no disturbance to the underlying control system. Several different types of feedback loops are provided and more are being added. This paper describes the further evolution of the system since it was first presented at ICALEPCS 2001 and reports on two years of successful use in accelerator operations. The framework has been enhanced in several key ways. Multiple-input, multiple-output (MIMO) lock types have been added for accelerator orbit and energy stabilization. The general purpose Proportional-Integral-Derivative (PID) locks can now be tuned automatically. The generic lock server now makes use of the Proxy IOC (PIOC) developed at Jefferson Lab to allow the locks to be monitored from any EPICS Channel Access aware client. (Previously clients had to be Cdev aware.) The dependency on the Qt XML parser has been replaced with the freely available Xerces DOM parser from the Apache project.
Experiences with the Twitter Health Surveillance (THS) System
Rodríguez-Martínez, Manuel
2018-01-01
Social media has become an important platform to gauge public opinion on topics related to our daily lives. In practice, processing these posts requires big data analytics tools since the volume of data and the speed of production overwhelm single-server solutions. Building an application to capture and analyze posts from social media can be a challenge simply because it requires combining a set of complex software tools that often times are tricky to configure, tune, and maintain. In many instances, the application ends up being an assorted collection of Java/Scala programs or Python scripts that developers cobble together to generate the data products they need. In this paper, we present the Twitter Health Surveillance (THS) application framework. THS is designed as a platform to allow end-users to monitor a stream of tweets, and process the stream with a combination of built-in functionality and their own user-defined functions. We discuss the architecture of THS, and describe its implementation atop the Apache Hadoop Ecosystem. We also present several lessons learned while developing our current prototype. PMID:29607412
NASA Astrophysics Data System (ADS)
The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas
2016-11-01
Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
The, Matthew; MacCoss, Michael J; Noble, William S; Käll, Lukas
2016-11-01
Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method-grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein-in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
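A toy sketch of the protein inference rule summarized above: proteins with identical sets of theoretical peptides are merged into one group, and each group is scored by its single best-scoring peptide. This is an illustration, not Percolator's C++ implementation; the peptide maps and scores are invented.

```python
# Group proteins by their theoretical peptide sets, then score each group by
# its best-scoring peptide.
from collections import defaultdict

protein_peptides = {                       # protein -> set of theoretical peptides
    "PROT_A": {"PEPTIDEK", "SAMPLER"},
    "PROT_B": {"PEPTIDEK", "SAMPLER"},     # indistinguishable from PROT_A
    "PROT_C": {"OTHERPEP"},
}
peptide_score = {"PEPTIDEK": 2.3, "SAMPLER": 1.1, "OTHERPEP": 0.4}   # invented scores

# 1) group proteins that share an identical theoretical peptide set
groups = defaultdict(list)
for protein, peptides in protein_peptides.items():
    groups[frozenset(peptides)].append(protein)

# 2) score each group by its best-scoring peptide
scored = [
    (max(peptide_score[p] for p in peptides), sorted(members))
    for peptides, members in groups.items()
]
for score, members in sorted(scored, reverse=True):
    print(f"{'/'.join(members)}: best-peptide score {score}")
```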
Experiences with the Twitter Health Surveillance (THS) System.
Rodríguez-Martínez, Manuel
2017-06-01
Social media has become an important platform to gauge public opinion on topics related to our daily lives. In practice, processing these posts requires big data analytics tools since the volume of data and the speed of production overwhelm single-server solutions. Building an application to capture and analyze posts from social media can be a challenge simply because it requires combining a set of complex software tools that often times are tricky to configure, tune, and maintain. In many instances, the application ends up being an assorted collection of Java/Scala programs or Python scripts that developers cobble together to generate the data products they need. In this paper, we present the Twitter Health Surveillance (THS) application framework. THS is designed as a platform to allow end-users to monitor a stream of tweets, and process the stream with a combination of built-in functionality and their own user-defined functions. We discuss the architecture of THS, and describe its implementation atop the Apache Hadoop Ecosystem. We also present several lessons learned while developing our current prototype.
Big Data in HEP: A comprehensive use case study
NASA Astrophysics Data System (ADS)
Gutsche, Oliver; Cremonesi, Matteo; Elmer, Peter; Jayatilaka, Bo; Kowalkowski, Jim; Pivarski, Jim; Sehrish, Saba; Mantilla Surez, Cristina; Svyatkovskiy, Alexey; Tran, Nhan
2017-10-01
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems collectively called Big Data technologies have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and promise a fresh look at analysis of very large datasets and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication physics plots. We will discuss advantages and disadvantages of each approach and give an outlook on further studies needed.
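The filter-and-aggregate pattern such a Spark-based analysis rests on can be sketched as below. This is a hedged illustration only: the column names, Parquet path, and cuts are invented and do not come from the CMS analysis itself.

```python
# Toy PySpark skim: select events passing simple cuts and histogram one column.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("toy-dark-matter-skim").getOrCreate()

events = spark.read.parquet("hdfs:///toy/events.parquet")   # hypothetical input

# event selection: high missing transverse energy, no identified leptons
selected = events.filter((F.col("met") > 200.0) & (F.col("n_leptons") == 0))

# coarse MET histogram computed on the cluster, then collected for plotting
hist = (selected
        .withColumn("met_bin", (F.col("met") / 50).cast("int") * 50)
        .groupBy("met_bin").count()
        .orderBy("met_bin"))
hist.show()
spark.stop()
```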
The new protein topology graph library web server.
Schäfer, Tim; Scheck, Andreas; Bruneß, Daniel; May, Patrick; Koch, Ina
2016-02-01
We present a new, extended version of the Protein Topology Graph Library web server. The Protein Topology Graph Library describes the protein topology on the super-secondary structure level. It allows to compute and visualize protein ligand graphs and search for protein structural motifs. The new server features additional information on ligand binding to secondary structure elements, increased usability and an application programming interface (API) to retrieve data, allowing for an automated analysis of protein topology. The Protein Topology Graph Library server is freely available on the web at http://ptgl.uni-frankfurt.de. The website is implemented in PHP, JavaScript, PostgreSQL and Apache. It is supported by all major browsers. The VPLG software that was used to compute the protein ligand graphs and all other data in the database is available under the GNU public license 2.0 from http://vplg.sourceforge.net. tim.schaefer@bioinformatik.uni-frankfurt.de; ina.koch@bioinformatik.uni-frankfurt.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Schmidt, Peter; Lund, Björn; Hieronymus, Christoph
2012-03-01
When general-purpose finite element analysis software is used to model glacial isostatic adjustment (GIA), the first-order effect of prestress advection has to be accounted for by the user. We show here that the common use of elastic foundations at boundaries between materials of different densities will produce incorrect displacements, unless the boundary is perpendicular to the direction of gravity. This is due to the foundations always acting perpendicular to the surface to which they are attached, while the body force they represent always acts in the direction of gravity. If prestress advection is instead accounted for by the use of elastic spring elements in the direction of gravity, the representation will be correct. The use of springs adds a computation of the spring constants to the analysis. The spring constant for a particular node is defined by the product of the density contrast at the boundary, the gravitational acceleration, and the area supported by the node. To be consistent with the finite element formulation, the area is evaluated by integration of the nodal shape functions. We outline an algorithm for the calculation and include a Python script that integrates the shape functions over a bilinear quadrilateral element. For linear rectangular and triangular elements, the area supported by each node is equal to the element area divided by the number of defining nodes, thereby simplifying the computation. This is, however, not true in the general nonrectangular case, and we demonstrate this with a simple 1-element model. The spring constant calculation is simple and performed in the preprocessing stage of the analysis. The time spent on the calculation is more than compensated for by a shorter analysis time, compared to that for a model with foundations. We illustrate the effects of using springs versus foundations with a simple two-dimensional GIA model of glacial loading, where the Earth model has an inclined boundary between the overlying elastic layer and the lower viscoelastic layer. Our example shows that the error introduced by the use of foundations is large enough to affect an analysis based on high-accuracy geodetic data.
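A sketch of the nodal-area and spring-constant computation described above, assuming a bilinear quadrilateral element with 2x2 Gauss quadrature; it illustrates the idea (k_i = density contrast x gravity x nodal area) and is not the authors' published script. The density contrast and element coordinates are example values.

```python
# Nodal areas from integrating the bilinear shape functions over one quad element,
# then spring constants k_i = drho * g * A_i for each of the four nodes.
import numpy as np

def nodal_areas(xy):
    """xy: (4, 2) corner coordinates, counter-clockwise. Returns A_i per node."""
    gp = 1.0 / np.sqrt(3.0)                       # 2x2 Gauss points, weights = 1
    areas = np.zeros(4)
    for xi in (-gp, gp):
        for eta in (-gp, gp):
            N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                                 (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
            dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                                  [ (1 - eta), -(1 + xi)],
                                  [ (1 + eta),  (1 + xi)],
                                  [-(1 + eta),  (1 - xi)]])   # [dN/dxi, dN/deta]
            J = dN.T @ xy                          # 2x2 Jacobian of the mapping
            areas += N * np.linalg.det(J)          # accumulate N_i * |J| at each point
    return areas

drho, g = 600.0, 9.81                              # example density contrast, gravity
element = np.array([[0.0, 0.0], [2000.0, 0.0], [2000.0, 2000.0], [0.0, 2000.0]])
k = drho * g * nodal_areas(element)                # spring constant per node (N/m)
print(k)   # for this rectangle each node supports 1/4 of the element area
```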
Rodríguez, I; Fluiters, E; Pérez-Méndez, L F; Luna, R; Páramo, C; García-Mayor, R V
2004-02-01
This study was carried out to investigate the clinical and biochemical factors which might be of importance in predicting the outcome of patients with myxoedema coma. Eleven patients (ten female) aged 68.1+/-19.5 years attended our institution over a period of 18 years. Glasgow and APACHE II scores and serum free thyroxine and TSH were measured in all the patients on entry. Patients were selected at random to be treated with two different regimens of l-thyroxine. Four patients died with the mortality rate being 36.4%. The patients in coma at entry had significantly higher mortality rates than those with minor degrees of consciousness (75% vs 14.3% respectively, P=0.04). The surviving patients had significantly higher Glasgow scores than those who died (11.85+/-2.3 vs 5.25+/-2.2 respectively, P<0.001). Comparison of the mean values of APACHE II scores between the surviving group and those who died was significantly different (18.0+/-2.08 vs 31.5+/-2.08 respectively, P<0.0001). The degree of consciousness, the Glasgow score and the severity of the illness measured by APACHE II score on entry were the main factors that determined the post-treatment outcome of patients with myxoedema coma.
Douglas, Helen E; Ratcliffe, Andrew; Sandhu, Rajdeep; Anwar, Umair
2015-02-01
Many different burns mortality prediction models exist; however most agree that important factors that can be weighted include the age of the patient, the total percentage of body surface area burned and the presence or absence of smoke inhalation. A retrospective review of all burns primarily admitted to Pinderfields Burns ICU under joint care of burns surgeons and intensivists for the past 3 years was completed. Predicted mortality was calculated using the revised Baux score (2010), the Belgian Outcome in Burn Injury score (2009) and the Boston group score by Ryan et al. (1998). Additionally 28 of the 48 patients had APACHE II scores recorded on admission and the predicted and actual mortality of this group were compared. The Belgian score had the highest sensitivity and negative predictive value (72%/85%); followed by the Boston score (66%/78%) and then the revised Baux score (53%/70%). APACHE II scores had higher sensitivity (81%) and NPV (92%) than any of the burns scores. In our group of burns ICU patients the Belgian model was the most sensitive and specific predictor of mortality. In our subgroup of patients with APACHE II data, this score more accurately predicted survival and mortality. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
Application of Open Source Technologies for Oceanographic Data Analysis
NASA Astrophysics Data System (ADS)
Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.
2015-12-01
NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project that uses NEXUS as the core for an oceanographic anomaly detection service and web portal. We call it OceanXtremes.
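A small sketch of the chunking idea described above, not the NEXUS code: a time x lat x lon array is split into fixed-size tiles, each keyed by its dataset name and index ranges so it could be stored as one value in a horizontally scaled NoSQL store. The array shape, chunk size, and key format are assumptions.

```python
# Tile a 3-D (time, lat, lon) array into chunks keyed for a NoSQL store.
import numpy as np

def tile_array(name, data, chunk=(1, 30, 30)):
    """Yield (key, tile) pairs for a 3-D array."""
    nt, ny, nx = data.shape
    ct, cy, cx = chunk
    for t in range(0, nt, ct):
        for y in range(0, ny, cy):
            for x in range(0, nx, cx):
                key = f"{name}:t{t}-{min(t+ct, nt)}:y{y}-{min(y+cy, ny)}:x{x}-{min(x+cx, nx)}"
                yield key, data[t:t+ct, y:y+cy, x:x+cx]

sst = np.random.rand(2, 90, 180).astype(np.float32)   # toy sea-surface field
tiles = dict(tile_array("toy_sst", sst))
print(len(tiles), "tiles;", next(iter(tiles)))         # 36 tiles for this shape
# A time series or subset then only reads the tiles whose key ranges cover the
# requested region, which is what lets the work spread across a cluster.
```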
Mortality indicators and risk factors for intra-abdominal hypertension in severe acute pancreatitis.
Zhao, J G; Liao, Q; Zhao, Y P; Hu, Y
2014-01-01
This study assessed the risk factors associated with mortality and the development of intra-abdominal hypertension (IAH) in patients with severe acute pancreatitis (SAP). To identify significant risk factors, we assessed the following variables in 102 patients with SAP: age, gender, etiology, serum amylase level, white blood cell (WBC) count, serum calcium level, Acute Physiology and Chronic Health Evaluation II (APACHE-II) score, computed tomography severity index (CTSI) score, pancreatic necrosis, surgical interventions, and multiple organ dysfunction syndrome (MODS). Statistically significant differences were identified using the Student t test and the χ2 test. Independent risk factors for survival were analyzed by Cox proportional hazards regression. The following variables were significantly related to both mortality and IAH: WBC count, serum calcium level, serum amylase level, APACHE-II score, CTSI score, pancreatic necrosis, pancreatic necrosis >50%, and MODS. However, it was found that surgical intervention had no significant association with mortality. MODS and pancreatic necrosis >50% were found to be independent risk factors for survival in patients with SAP. Mortality and IAH from SAP were significantly related to WBC count, serum calcium level, serum amylase level, APACHE-II score, CTSI score, pancreatic necrosis, and MODS. However, surgical intervention did not result in higher mortality. Moreover, MODS and pancreatic necrosis >50% predicted a worse prognosis in SAP patients.
HbA1c is outcome predictor in diabetic patients with sepsis.
Gornik, Ivan; Gornik, Olga; Gasparović, Vladimir
2007-07-01
We have investigated predictive value of HbA1c for hospital mortality and length of stay (LOS) in patients with type 2 diabetes admitted because of sepsis. A prospective observational study was implemented in a university hospital, 286 patients with type 2 diabetes admitted with sepsis were included. Leukocyte count, CRP, admission plasma glucose, APACHE II and SOFA score were noted at admission, HbA1c was measured on the first day following admission. Hospital mortality and hospital length of stay (LOS) were the outcome measures. Admission HbA1c was significantly lower in surviving patients than in non-survivors (median 8.2% versus 9.75%, respectively; P<0.001). There was a significant correlation between admission HbA1c and hospital LOS of surviving patients (r=0.29; P<0.001). Logistic regression showed that HbA1c is an independent predictor of hospital mortality (odds ratio 1.36), together with female sex (OR 2.24), APACHE II score (OR 1.08) and SOFA score (OR 1.28). Multiple regression showed that HbA1c and APACHE II score are independently related to hospital LOS. According to our results, HbA1c is an independent predictive factor for hospital mortality and hospital LOS of diabetic patients with sepsis.
Haque, Nadia Z.; Arshad, Samia; Peyrani, Paula; Ford, Kimbal D.; Perri, Mary B.; Jacobsen, Gordon; Reyes, Katherine; Scerpella, Ernesto G.; Ramirez, Julio A.
2012-01-01
Methicillin-resistant Staphylococcus aureus (MRSA) is a major cause of nosocomial pneumonia. To characterize pathogen-derived and host-related factors in intensive care unit (ICU) patients with MRSA pneumonia, we evaluated the Improving Medicine through Pathway Assessment of Critical Therapy in Hospital-Acquired Pneumonia (IMPACT-HAP) database. We performed multivariate regression analyses of 28-day mortality and clinical response using univariate analysis variables at a P level of <0.25. In isolates from 251 patients, the most common molecular characteristics were USA100 (55.0%) and USA300 (23.9%), SCCmec types II (64.1%) and IV (33.1%), and agr I (36.7%) and II (61.8%). Panton-Valentine leukocidin (PVL) was present in 21.9%, and vancomycin heteroresistance was present in 15.9%. Mortality occurred in 37.1% of patients; factors in the univariate analysis were age, APACHE II score, AIDS, cardiac disease, vascular disease, diabetes, SCCmec type II, PVL negativity, and higher vancomycin MIC (all P values were <0.05). In multivariate analysis, independent predictors were APACHE II score (odds ratio [OR], 1.090; 95% confidence interval [CI], 1.041 to 1.141; P < 0.001) and age (OR, 1.024; 95% CI, 1.003 to 1.046; P = 0.02). Clinical failure occurred in 36.8% of 201 evaluable patients; the only independent predictor was APACHE II score (OR, 1.082; 95% CI, 1.031 to 1.136; P = 0.002). In summary, APACHE II score (mortality, clinical failure) and age (mortality) were the only independent predictors, which is consistent with severity of illness in ICU patients with MRSA pneumonia. Interestingly, our univariate findings suggest that both pathogen and host factors influence outcomes. As the epidemiology of MRSA pneumonia continues to evolve, both pathogen- and host-related factors should be considered when describing epidemiological trends and outcomes of therapeutic interventions. PMID:22337980
Presneill, J J; Waring, P M; Layton, J E; Maher, D W; Cebon, J; Harley, N S; Wilson, J W; Cade, J F
2000-07-01
To define the circulating levels of granulocyte colony-stimulating factor (G-CSF) and granulocyte-macrophage colony-stimulating factor (GM-CSF) during critical illness and to determine their relationship to the severity of illness as measured by the Acute Physiology and Chronic Health Evaluation (APACHE) II score, the development of multiple organ dysfunction, or mortality. Prospective cohort study. University hospital intensive care unit. A total of 82 critically ill adult patients in four clinically defined groups, namely septic shock (n = 29), sepsis without shock (n = 17), shock without sepsis (n = 22), and nonseptic, nonshock controls (n = 14). None. During day 1 of septic shock, peak plasma levels of G-CSF, interleukin (IL)-6, and leukemia inhibitory factor (LIF), but not GM-CSF, were greater than in sepsis or shock alone (p < .001), and were correlated among themselves (rs = 0.44-0.77; p < .02) and with the APACHE II score (rs = 0.25-0.40; p = .03 to .18). G-CSF, IL-6, and LIF, and sepsis, shock, septic shock, and APACHE II scores were strongly associated with organ dysfunction or 5-day mortality by univariate analysis. However, multiple logistic regression analysis showed that only septic shock remained significantly associated with organ dysfunction and only APACHE II scores and shock with 5-day mortality. Similarly, peak G-CSF, IL-6, and LIF were poorly predictive of 30-day mortality. Plasma levels of G-CSF, IL-6, and LIF are greatly elevated in critical illness, including septic shock, and are correlated with one another and with the severity of illness. However, they are not independently predictive of mortality, or the development of multiple organ dysfunction. GM-CSF was rarely elevated, suggesting different roles for G-CSF and GM-CSF in human septic shock.
FOUR Score Predicts Early Outcome in Patients After Traumatic Brain Injury.
Nyam, Tee-Tau Eric; Ao, Kam-Hou; Hung, Shu-Yu; Shen, Mei-Li; Yu, Tzu-Chieh; Kuo, Jinn-Rung
2017-04-01
The aim of the study was to determine whether the Full Outline of UnResponsiveness (FOUR) score, which includes eyes opening (E), motor function (M), brainstem reflex (B), and respiratory pattern (R), can be used as an alternate method to the Glasgow Coma Scale (GCS) in predicting intensive care unit (ICU) mortality in traumatic brain injury (TBI) patients. From January 2015 to June 2015, patients with isolated TBI admitted to the ICU were enrolled. Three advanced practice nurses administered the FOUR score, GCS, Acute Physiology and Chronic Health Evaluation II (APACHE II), and Therapeutic Intervention Scoring System (TISS) concurrently from ICU admissions. The endpoint of observation was mortality when the patients left the ICU. Data are presented as frequency with percentages, mean with standard deviation, or median with interquartile range. Each measurement tool used area under the receiver operating characteristic curve to compare the predictive power between these four tools. In addition, the difference between survival and death was estimated using the Wilcoxon rank sum test. From 55 TBI patients, males (72.73 %) were represented more than females, the mean age was 63.1 ± 17.9, and 19 of 55 observations (35 %) had a maximum FOUR score of 16. The overall mortality rate was 14.6 %. The area under the receiver operating characteristic curve was 74.47 % for the FOUR score, 74.73 % for the GCS, 81.78 % for the APACHE II, and 53.32 % for the TISS. The FOUR score has similar predictive power of mortality compared to the GCS and APACHE II. Each of the parameters-E, M, B, and R-of the FOUR score showed a significant difference between mortality and survival group, while the verbal and eye-opening components of the GCS did not. Having similar predictive power of mortality compared to the GCS and APACHE II, the FOUR score can be used as an alternative in the prediction of early mortality in TBI patients in the ICU.
Flint, Richard; Windsor, John A
2004-04-01
The physiological response to treatment is a better predictor of outcome in acute pancreatitis than are traditional static measures. Retrospective diagnostic test study. The criterion standard was Organ Failure Score (OFS) and Acute Physiology and Chronic Health Evaluation II (APACHE II) score at the time of hospital admission. Intensive care unit of a tertiary referral center, Auckland City Hospital, Auckland, New Zealand. Consecutive sample of 92 patients (60 male, 32 female; median age, 61 years; range, 24-79 years) with severe acute pancreatitis. Twenty patients were not included because of incomplete data. The cause of pancreatitis was gallstones (42%), alcohol use (27%), or other (31%). At hospital admission, the mean +/- SD OFS was 8.1 +/- 6.1, and the mean +/- SD APACHE II score was 19.9 +/- 8.2. All cases were managed according to a standardized protocol. There was no randomization or testing of any individual interventions. Survival and death. There were 32 deaths (pretest probability of dying was 35%). The physiological response to treatment was more accurate in predicting the outcome than was OFS or APACHE II score at hospital admission. For example, 17 patients had an initial OFS of 7-8 (posttest probability of dying was 58%); after 48 hours, 7 had responded to treatment (posttest probability of dying was 28%), and 10 did not respond (posttest probability of dying was 82%). The effect of the change in OFS and APACHE II score was graphically depicted by using a series of logistic regression equations. The resultant sigmoid curve suggests that there is a midrange of scores (the steep portion of the graph) within which the probability of death is most affected by the response to intensive care treatment. Measuring the initial severity of pancreatitis combined with the physiological response to intensive care treatment is a practical and clinically relevant approach to predicting death in patients with severe acute pancreatitis.
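The abstract does not reproduce the fitted equations behind that sigmoid curve; as a reference only, a model of this kind generally takes the logistic form below, where the beta coefficients are placeholders rather than the study's values.

```latex
% Generic logistic form relating probability of death to the admission score
% and its change after 48 hours of intensive care treatment; \beta_0, \beta_1,
% \beta_2 are placeholders, not the coefficients fitted in this study.
P(\text{death}) \;=\; \frac{1}{1 + \exp\!\bigl[-\bigl(\beta_0 + \beta_1\,\mathrm{OFS}_{\text{admission}} + \beta_2\,\Delta\mathrm{OFS}_{48\,\text{h}}\bigr)\bigr]}
```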
Quispe E, Álvaro; Li, Xiang-Min; Yi, Hong
2016-05-01
To compare the ability of thyroid hormones, IL-6, IL-10, and albumin to predict mortality, and to assess their relationship, in a case mix of acute critically ill patients. APACHE II scores and serum thyroid hormones (FT3, FT4, and TSH), IL-6, IL-10, and albumin were obtained at EICU admission for 79 mixed acute critically ill patients without a previous history of thyroid disease. Patients were followed for 28 days with patient death as the primary outcome. All mean values were compared, correlations assessed with Pearson's test, and mortality prediction assessed by multivariate logistic regression and ROC. Non-survivors were older, with higher APACHE II scores (p=0.000), IL-6 (p<0.05), and IL-10 (p=0.000) levels, and lower albumin (p=0.000) levels compared to survivors at 28 days. IL-6 and IL-10 had significant negative correlations with albumin (p=0.001) and FT3 (p ⩽ 0.05) respectively, while low albumin had a direct correlation with FT3 (p<0.05). In the mortality prediction assessment, IL-10, albumin and APACHE II were independent mortality predictors and showed a good (0.70-0.79) AUC-ROC (p<0.05). Although the entire cohort showed low FT3 serum levels (p=0.000), there was no statistical difference between survivors and non-survivors, nor did FT3 show any significance as a mortality predictor. IL-6 and IL-10 are correlated with low FT3 and hypoalbuminemia. Thyroid hormones assessed at EICU admission did not have any predictive value in our study. Finally, high levels of IL-6 and IL-10 in conjunction with albumin could improve our ability to evaluate disease severity and predict mortality in critically ill patients. When used in combination with APACHE II scores, our model showed improved mortality prediction. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Salica, Andrea; Weltert, Luca; Scaffa, Raffaele; Guerrieri Wolf, Lorenzo; Nardella, Saverio; Bellisario, Alessandro; De Paulis, Ruggero
2014-11-01
Optimal management of poststernotomy mediastinitis is controversial. Negative pressure wound treatment improves wound environment and sternal stability with low surgical invasiveness. Our protocol was based on negative pressure followed by delayed surgical closure. The aim of this study was to provide the results at early follow-up and to identify the risk factors for adverse outcome. In 5400 cardiac procedures, 44 consecutive patients with mediastinitis were enrolled in the study. Mediastinitis treatment was based on urgent debridement and negative pressure as the first-line approach. After wound sterilization, chest closure was achieved by elective pectoralis muscle advancement flap. Each patient's hospital data were collected prospectively. Variables included patient demographics and clinical and biological data. Acute Physiology and Chronic Health Evaluation (APACHE) II score was calculated at the time of diagnosis and 48 hours after debridement. Focus outcome measures were mediastinitis-related death and need for reintervention after pectoralis muscle closure. El Oakley type I and type IIIA mediastinitis were the most frequent types (63.6%). Methicillin-resistant Staphylococcus aureus was present in 25 patients (56.8%). Mean APACHE II score was 19.4±4 at the time of diagnosis, and 30 patients (68.2%) required intensive care unit transfer before surgical debridement. APACHE II score improved 48 hours after wound debridement and negative pressure application (mean value, 19.4±4 vs 7.2±2; P=.005) independently of any other variables included in the study. One patient in septic shock at the time of diagnosis died (2.2%). Negative pressure promotes a significant improvement in clinical status according to APACHE II score and allows a successful elective surgical closure. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Relationship between fish size and upper thermal tolerance
Recsetar, Matthew S.; Zeigler, Matthew P.; Ward, David L.; Bonar, Scott A.; Caldwell, Colleen A.
2012-01-01
Using critical thermal maximum (CTMax) tests, we examined the relationship between upper temperature tolerances and fish size (fry-adult or subadult lengths) of rainbow trout Oncorhynchus mykiss (41-200-mm TL), Apache trout O. gilae apache (40-220-mm TL), largemouth bass Micropterus salmoides (72-266-mm TL), Nile tilapia Oreochromis niloticus (35-206-mm TL), channel catfish Ictalurus punctatus (62-264-mm TL), and Rio Grande cutthroat trout O. clarkii virginalis (36-181-mm TL). Rainbow trout and Apache trout were acclimated at 18°C, Rio Grande cutthroat trout were acclimated at 14°C, and Nile tilapia, largemouth bass, and channel catfish were acclimated at 25°C, all for 14 d. Critical thermal maximum temperatures were estimated and data were analyzed using simple linear regression. There was no significant relationship (P > 0.05) between thermal tolerance and length for Nile tilapia (P = 0.33), channel catfish (P = 0.55), rainbow trout (P = 0.76), or largemouth bass (P = 0.93) for the length ranges we tested. There was a significant negative relationship between thermal tolerance and length for Rio Grande cutthroat trout (R2 = 0.412, P < 0.05) and Apache trout (R2 = 0.1374, P = 0.028); however, the difference was less than 1°C across all lengths of Apache trout tested and about 1.3°C across all lengths of Rio Grande cutthroat trout tested. Because there was either no or at most a slight relationship between upper thermal tolerance and size, management and research decisions based on upper thermal tolerance should be similar for the range of sizes within each species we tested. However, the different sizes we tested only encompassed life stages ranging from fry to adult/subadult, so thermal tolerance of eggs, alevins, and larger adults should also be considered before making management decisions affecting an entire species.
Interactive system for geomagnetic data analysis
NASA Astrophysics Data System (ADS)
Solovev, Igor
2017-10-01
The paper presents methods for analyzing geomagnetic field variations, which are implemented in the "Aurora" software system for complex analysis of geophysical parameters. The software system allows one to perform a detailed magnetic data analysis. The methods allow one to estimate the intensity of geomagnetic perturbations and to identify periods of increased geomagnetic activity. The software system is publicly available (
The 2017 Bioinformatics Open Source Conference (BOSC)
Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Munoz-Torres, Monica; Tzovaras, Bastian Greshake; Wiencko, Heather
2017-01-01
The Bioinformatics Open Source Conference (BOSC) is a meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. The 18th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2017) took place in Prague, Czech Republic in July 2017. The conference brought together nearly 250 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, open and reproducible science, and this year’s theme, open data. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community, called the OBF Codefest. PMID:29118973
A CS1 Pedagogical Approach to Parallel Thinking
ERIC Educational Resources Information Center
Rague, Brian William
2010-01-01
Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within…
New Decision Tool To Evaluate Award Selection Process.
ERIC Educational Resources Information Center
Thornley, Richard; Spence, Matthew W.; Taylor, Mark; Magnan, Jacques
2002-01-01
Describes an Alberta Heritage Foundation for Medical Research initiative to enhance the review process for its training awards using a new tool based on the ProGrid decision-assist software. Implementation resulted in several modifications to the review process in the areas of definition, rationality, fairness, timeliness, and responsiveness; the…
ERIC Educational Resources Information Center
Markin, Karen M.
2012-01-01
It is not news that software exists to check undergraduate papers for plagiarism. What is less well known is that some federal grant agencies are using technology to detect plagiarism in grant proposals. That variety of research misconduct is a growing problem, according to federal experts. The National Science Foundation, in its most recent…
A New Key to Scholarly Collaboration?
ERIC Educational Resources Information Center
Fitzmier, Jack
2012-01-01
The American Academy of Religion, in concert with the Sakai Foundation, has envisioned a scholarly use of the new Sakai Open Academic Environment open-source software. Currently working under the title "Biosphere," the program would put a rich collection of collaborative tools in the hands of AAR members, their colleagues in related scholarly…
Earobics[R]. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2009
2009-01-01
Earobics[R] is interactive software that provides students in pre-K through third grade with individual, systematic instruction in early literacy skills as students interact with animated characters. Earobics[R] Foundations is the version for pre-kindergarten, kindergarten, and first grade. Earobics[R] Connections is for second and third graders…
Investigation of the foundations of a Byzantine church by three-dimensional seismic tomography
NASA Astrophysics Data System (ADS)
Polymenakos, L.; Papamarinopoulos, S.; Miltiadou, A.; Charkiolakis, N.
2005-02-01
Byzantine public buildings are of high historical and cultural value. Churches, in particular, are of high architectural and artistic value because they are built using various materials and construction techniques and may contain significant frescoes and mosaics. The knowledge of the state of foundations and ground material conditions is important for their proper restoration and preservation. Seismic tomography is employed to investigate the foundation structure and ground material of a Byzantine church. Energy sources are placed across the floor of the church and surrounding courts, while recorders are placed in a subterranean crypt. Travel time data are analyzed and processed with a three-dimensional (3D) tomographic inversion software in order to construct seismic velocity images at the foundation and below foundation level. Velocity variations are known to correlate well with the lithological character of the earth materials, thus providing important structural and lithological information. A case study from a Byzantine church of 11th c. A.D. in the suburbs of Athens, Greece, is presented. The objective of this research is the nondestructive investigation of unknown underground structures or void spaces, mainly under the floor of the building. The results are interpreted in terms of the foundation elements as well as of significant variations in the earth material character.
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.
Knowledge-based assistance in costing the space station DMS
NASA Technical Reports Server (NTRS)
Henson, Troy; Rone, Kyle
1988-01-01
The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.
The Live Access Server - A Web-Services Framework for Earth Science Data
NASA Astrophysics Data System (ADS)
Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.
2005-12-01
The Live Access Server (LAS) is a general purpose Web-server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data (in-situ observations and curvilinear grids) it can serve and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components that are responsible for the user interaction (the User Interface Server) from the components that are responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity backed by a database store for holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is important to understand what we hope to gain. Specifically, we would like to make it even easier to add new output products into our core system based on the Ferret analysis and visualization package. By carefully factoring the tasks needed to create a product we will be able to create new products simply by adding a description of the product into the configuration and by writing the Ferret script needed to create the product. No code will need to be added to the Product Server to bring the new product on-line. The new architecture should be faster at extracting and processing configuration information needed to address each request. Finally, the new Product Server architecture should make it even easier to pass specialized configuration information to the Product Server to deal with unanticipated special data structures or processing requirements.
Generic, Extensible, Configurable Push-Pull Framework for Large-Scale Science Missions
NASA Technical Reports Server (NTRS)
Foster, Brian M.; Chang, Albert Y.; Freeborn, Dana J.; Crichton, Daniel J.; Woollard, David M.; Mattmann, Chris A.
2011-01-01
The push-pull framework was developed in hopes that an infrastructure would be created that could literally connect to any given remote site, and (given a set of restrictions) download files from that remote site based on those restrictions. The Cataloging and Archiving Service (CAS) has recently been re-architected and re-factored in its canonical services, including file management, workflow management, and resource management. Additionally, a generic CAS Crawling Framework was built based on motivation from Apache's open-source search engine project called Nutch. Nutch is an Apache effort to provide search engine services (akin to Google), including crawling, parsing, content analysis, and indexing. It has produced several stable software releases, and is currently used in production services at companies such as Yahoo, and at NASA's Planetary Data System. The CAS Crawling Framework supports many of the Nutch Crawler's generic services, including metadata extraction, crawling, and ingestion. However, one service that was not ported over from Nutch is a generic protocol layer service that allows the Nutch crawler to obtain content using protocol plug-ins that download content using implementations of remote protocols, such as HTTP, FTP, WinNT file system, HTTPS, etc. Such a generic protocol layer would greatly aid the CAS Crawling Framework, as the layer would allow the framework to generically obtain content (i.e., data products) from remote sites using protocols such as FTP and others. Augmented with this capability, the Orbiting Carbon Observatory (OCO) and NPP (NPOESS Preparatory Project) Sounder PEATE (Product Evaluation and Analysis Tools Elements) would be provided with an infrastructure to support generic FTP-based pull access to remote data products, obviating the need for any specialized software outside of the context of their existing process control systems. This extensible configurable framework was created in Java, and allows the use of different underlying communication middleware (at present, both XMLRPC and RMI). In addition, the framework is entirely suitable in a multi-mission environment and is supporting both NPP Sounder PEATE and the OCO Mission. Both systems involve tasks such as high-throughput job processing, terabyte-scale data management, and science computing facilities. NPP Sounder PEATE is already using the push-pull framework to accept hundreds of gigabytes of IASI (infrared atmospheric sounding interferometer) data, and is in preparation to accept CRIMS (Cross-track Infrared Microwave Sounding Suite) data. OCO will leverage the framework to download MODIS, CloudSat, and other ancillary data products for use in the high-performance Level 2 Science Algorithm. The National Cancer Institute is also evaluating the framework for use in sharing and disseminating cancer research data through its Early Detection Research Network (EDRN).
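The abstract describes a generic protocol layer in which the crawler obtains remote products through protocol plug-ins (FTP, HTTP, and so on). The Python sketch below illustrates that pattern only; the class and method names are hypothetical and are not the OODT/CAS API.

# Minimal sketch of the "generic protocol layer" idea described above: a common
# interface that a crawler can call, with per-protocol implementations chosen by
# URL scheme. Class and method names here are hypothetical, not the OODT/CAS API.
import ftplib
from abc import ABC, abstractmethod
from urllib.parse import urlparse

class Protocol(ABC):
    @abstractmethod
    def list(self, url: str) -> list[str]: ...
    @abstractmethod
    def fetch(self, url: str, dest_path: str) -> None: ...

class FtpProtocol(Protocol):
    def list(self, url):
        u = urlparse(url)
        with ftplib.FTP(u.hostname) as ftp:
            ftp.login()                      # anonymous login
            return ftp.nlst(u.path or "/")   # names of remote files
    def fetch(self, url, dest_path):
        u = urlparse(url)
        with ftplib.FTP(u.hostname) as ftp:
            ftp.login()
            with open(dest_path, "wb") as out:
                ftp.retrbinary("RETR " + u.path, out.write)

REGISTRY = {"ftp": FtpProtocol()}            # HTTP, SFTP, etc. would register here

def pull(url: str, dest_path: str) -> None:
    """Dispatch to the protocol plug-in that matches the URL scheme."""
    REGISTRY[urlparse(url).scheme].fetch(url, dest_path)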
Proceedings of the Twenty-Fourth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
2000-01-01
On December 1 and 2, the Software Engineering Laboratory (SEL), a consortium composed of NASA/Goddard, the University of Maryland, and CSC, held the 24th Software Engineering Workshop (SEW), the last of the millennium. Approximately 240 people attended the 2-day workshop. Day 1 was composed of four sessions: International Influence of the Software Engineering Laboratory; Object Oriented Testing and Reading; Software Process Improvement; and Space Software. For the first session, three internationally known software process experts discussed the influence of the SEL with respect to software engineering research. In the Space Software session, prominent representatives from three different NASA sites- GSFC's Marti Szczur, the Jet Propulsion Laboratory's Rick Doyle, and the Ames Research Center IV&V Facility's Lou Blazy- discussed the future of space software in their respective centers. At the end of the first day, the SEW sponsored a reception at the GSFC Visitors' Center. Day 2 also provided four sessions: Using the Experience Factory; A panel discussion entitled "Software Past, Present, and Future: Views from Government, Industry, and Academia"; Inspections; and COTS. The day started with an excellent talk by CSC's Frank McGarry on "Attaining Level 5 in CMM Process Maturity." Session 2, the panel discussion on software, featured NASA Chief Information Officer Lee Holcomb (Government), our own Jerry Page (Industry), and Mike Evangelist of the National Science Foundation (Academia). Each presented his perspective on the most important developments in software in the past 10 years, in the present, and in the future.
NASA Astrophysics Data System (ADS)
Kazanskiy, Nikolay; Protsenko, Vladimir; Serafimovich, Pavel
2016-03-01
This research article contains an experiment with the implementation of an image filtering task in the Apache Storm and IBM InfoSphere Streams stream data processing systems. The aim of the presented research is to show that new technologies can be effectively used for sliding-window filtering of image sequences. The analysis of execution was focused on two parameters: throughput and memory consumption. Profiling was performed on CentOS operating systems running on two virtual machines for each system. The experiment results showed that IBM InfoSphere Streams has about 1.5 to 13.5 times lower memory footprint than Apache Storm, but could be about 2.0 to 2.5 times slower on real hardware.
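For readers unfamiliar with the workload, the sketch below shows the kind of per-frame sliding-window filter being benchmarked, written in plain NumPy/SciPy rather than as a Storm or InfoSphere Streams topology.

# Conceptual sketch of the workload benchmarked above: a sliding-window
# (here, 3x3 mean) filter applied to each frame of an image sequence. This is
# plain NumPy/SciPy, not Apache Storm or InfoSphere Streams topology code.
import numpy as np
from scipy.ndimage import uniform_filter

def filter_stream(frames, window=3):
    """Yield each frame smoothed with a window x window mean filter."""
    for frame in frames:
        yield uniform_filter(frame.astype(np.float32), size=window)

# Example: a synthetic stream of ten 64x64 frames.
stream = (np.random.rand(64, 64) for _ in range(10))
for filtered in filter_stream(stream):
    pass  # in a streaming system, each result would be emitted downstream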
Fowler, K. R.; Jenkins, E.W.; Parno, M.; Chrispell, J.C.; Colón, A. I.; Hanson, Randall T.
2016-01-01
The development of appropriate water management strategies requires, in part, a methodology for quantifying and evaluating the impact of water policy decisions on regional stakeholders. In this work, we describe the framework we are developing to enhance the body of resources available to policy makers, farmers, and other community members in their efforts to understand, quantify, and assess the often competing objectives water consumers have with respect to usage. The foundation for the framework is the construction of a simulation-based optimization software tool using two existing software packages. In particular, we couple a robust optimization software suite (DAKOTA) with the USGS MF-OWHM water management simulation tool to provide a flexible software environment that will enable the evaluation of one or multiple (possibly competing) user-defined (or stakeholder) objectives. We introduce the individual software components and outline the communication strategy we defined for the coupled development. We present numerical results for case studies related to crop portfolio management with several defined objectives. The objectives are not optimally satisfied for any single user class, demonstrating the capability of the software tool to aid in the evaluation of a variety of competing interests.
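The coupling pattern described above, in which an optimizer repeatedly evaluates a black-box water-management simulation and scores stakeholder objectives, can be illustrated with the sketch below. The simulation is a stand-in function and the numbers are hypothetical; this is not the DAKOTA or MF-OWHM interface.

# Sketch of the simulation-based optimization pattern described above: an
# optimizer repeatedly evaluates a black-box water-management simulation and
# scores stakeholder objectives. The simulation here is a stand-in function;
# DAKOTA and MF-OWHM expose their own interfaces, which are not reproduced.
import numpy as np
from scipy.optimize import minimize

def run_simulation(crop_acres):
    """Stand-in for a water-management model run: returns (profit, water_use)."""
    prices = np.array([420.0, 310.0])        # hypothetical $/acre by crop
    demand = np.array([2.1, 1.4])            # hypothetical acre-ft/acre by crop
    return float(prices @ crop_acres), float(demand @ crop_acres)

def objective(crop_acres, water_cap=2000.0):
    profit, water = run_simulation(crop_acres)
    penalty = max(0.0, water - water_cap) ** 2   # penalize exceeding supply
    return -profit + 10.0 * penalty              # minimize negative profit

result = minimize(objective, x0=[400.0, 400.0],
                  bounds=[(0.0, 1500.0)] * 2, method="L-BFGS-B")
print("acres by crop:", result.x, "objective:", result.fun)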
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Stephen B.
2010-01-01
Software plays an increasingly large role in all aspects of NASA's science missions. This has been extended to the identification, management and control of faults which affect safety-critical functions and, by default, the overall success of the mission. Traditionally, the analysis of fault identification, management and control is hardware based. Due to the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Verification & Validation (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements along with corresponding software faults so that potential hazards may be mitigated. This "Specific to Generic ... A Case for Reuse" paper describes the phases of a dependability and safety study which identifies a new process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to other systems outside of the NASA environment. This paper addresses how a mission specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission with an emphasis on software fault conditions.
Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.
2004-01-01
Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
2013-04-30
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists' capacity to use these immunoassays to evaluate human clinical trials. The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose-response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license.
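The curve-fit and interpolation workflow described above (items v and vi) can be illustrated with the Python sketch below, which fits a four-parameter logistic to a hypothetical titrated standard and back-calculates an unknown concentration. It is not LabKey's R transform script, and the standard-curve values are placeholders.

# Sketch (not LabKey's R transform scripts) of the curve-fitting step described
# above: fit a four-parameter logistic (4PL) to a titrated standard, then
# interpolate unknown concentrations from their measured intensities.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, d, c, b):
    """4PL: a = response at zero dose, d = response at infinite dose,
    c = inflection concentration, b = Hill slope."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical standard curve: concentrations and measured fluorescence (MFI).
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
std_mfi = np.array([55, 120, 400, 1500, 5200, 11000, 14800], dtype=float)

params, _ = curve_fit(four_pl, std_conc, std_mfi,
                      p0=[50.0, 15000.0, 5.0, 1.0], maxfev=10000)

def interpolate(mfi, a, d, c, b):
    """Invert the 4PL to recover concentration from a measured intensity."""
    return c * ((a - d) / (mfi - d) - 1.0) ** (1.0 / b)

print(interpolate(2500.0, *params))   # estimated concentration of an unknown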
Whiteriver Sewage Lagoons, Whiteriver, AZ: AZ0024058
Authorization to Discharge Under National Pollutant Discharge Elimination System (NPDES) Permit No. AZ0024058 for Tribal Utility Authority, White Mountain Apache Tribe Whiteriver Sewage Lagoons, Whiteriver, AZ.
Bright-Dark Mixed N-Soliton Solution of the Two-Dimensional Maccari System
NASA Astrophysics Data System (ADS)
Han, Zhong; Chen, Yong
2017-07-01
Abstract not available. Supported by the Global Change Research Program of China under Grant No 2015CB953904, the National Natural Science Foundation of China under Grant Nos 11675054 and 11435005, and the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No ZF1213.
Software Acquisition Program Dynamics
2011-10-24
• ...greatest capability, which requires latest technologies
• Contractors prefer using latest technologies to boost staff competency for future bids
• Build foundation to test future mitigation/solution approaches to assess value
• Qualitatively validate new approaches before applying them...
Analysis of Elementary School Web Sites
ERIC Educational Resources Information Center
Hartshorne, Richard; Friedman, Adam; Algozzine, Bob; Kaur, Daljit
2008-01-01
While researchers have studied the use and value of educational software for many years, study of school Web sites and/or their effectiveness is limited. In this investigation, we identified goals and functions of school Web sites and used the foundations of effective Web site design to develop an evaluation checklist. We then applied these…
Helping Students Express Their Passion
ERIC Educational Resources Information Center
Mann, Michelle
2011-01-01
Adobe Youth Voices (AYV) is a global educational program sponsored by the Adobe Foundation, the philanthropic arm of software maker Adobe. The education-based initiative teaches underserved kids aged 13-18 how to use digital media to comment on their world, share ideas, and take action on the social issues that are important to them. The AYV…
Computer Aided Instruction: A Study of Student Evaluations and Academic Performance
ERIC Educational Resources Information Center
Collins, David; Deck, Alan; McCrickard, Myra
2008-01-01
Computer aided instruction (CAI) encompasses a broad range of computer technologies that supplement the classroom learning environment and can dramatically increase a student's access to information. Criticism of CAI generally focuses on two issues: it lacks an adequate foundation in educational theory and the software is difficult to implement…
ERIC Educational Resources Information Center
Robelen, Erik W.
2006-01-01
This article discusses how the philanthropy of Microsoft Corp. software magnate Bill Gates and his wife Melinda, co-chairs of their foundation, is reshaping the American high school. Gates and his wife have put the issue on the national agenda like never before, with a commitment of more than 1.3 billion US dollars this decade toward the foundation's agenda…
Student Learning in Science Simulations: Design Features that Promote Learning Gains
ERIC Educational Resources Information Center
Scalise, Kathleen; Timms, Michael; Moorjani, Anita; Clark, LaKisha; Holtermann, Karen; Irvin, P. Shawn
2011-01-01
This research examines science-simulation software available for grades 6-12 science courses. The study presented, funded by the National Science Foundation, had two objectives: a literature synthesis and a product review. The literature synthesis examines research findings on grade 6-12 student learning gains and losses using virtual laboratories…
Effects of a Preschool Mathematics Curriculum: Summative Research on the "Building Blocks" Project
ERIC Educational Resources Information Center
Clements, Douglas H.; Sarama, Julie
2007-01-01
This study evaluated the efficacy of a preschool mathematics program based on a comprehensive model of developing research-based software and print curricula. Building Blocks, funded by the National Science Foundation, is a curriculum development project focused on creating research-based, technology-enhanced mathematics materials for pre-K…
Addressing Plagiarism in Online Programmes at a Health Sciences University: A Case Study
ERIC Educational Resources Information Center
Ewing, Helen; Anast, Ade; Roehling, Tamara
2016-01-01
Plagiarism continues to be a concern for all educational institutions. To build a solid foundation for high academic standards and best practices at a graduate university, aspects of plagiarism were reviewed to develop better management processes for reducing plagiarism. Specifically, the prevalence of plagiarism and software programmes for…
Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points
ERIC Educational Resources Information Center
Ekol, George
2015-01-01
This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service
Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha
2017-01-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than that of network-attached storage. PMID:28884169
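The row-key idea can be illustrated with a small Python sketch: the imaging hierarchy is packed into a fixed-width, lexically ordered key so that related rows sort and collocate together. The field widths below are illustrative assumptions, not the paper's actual layout.

# Sketch of the row-key idea described above: encode the imaging hierarchy
# (project / subject / session / scan / slice) as a fixed-width, lexically
# ordered key so that hierarchically related rows sort together and tend to
# land in the same HBase region. Field widths are illustrative only.
def make_row_key(project: str, subject: int, session: int, scan: int, slice_no: int) -> bytes:
    key = f"{project[:8]:<8}{subject:06d}{session:03d}{scan:03d}{slice_no:05d}"
    return key.encode("ascii")

# Keys for slices of the same scan differ only in the trailing digits, so a
# prefix scan (project+subject+session+scan) retrieves a whole scan in order.
print(make_row_key("ADNI", 12, 1, 3, 42))   # b'ADNI    00001200100300042'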
Analysis of gear reducer housing using the finite element method
NASA Astrophysics Data System (ADS)
Miklos, I. Zs; Miklos, C. C.; Alic, C. I.; Raţiu, S.
2018-01-01
The housing is an important component in the construction of gear reducers, having the role of fixing the relative position of the shafts and toothed wheels. At the same time, the housing takes over, via the bearings, the shaft loads resulting when the toothed wheel is engaging another toothed mechanism (i.e. power transmission through belts or chains), and conveys them to the foundation on which it is anchored. In this regard, in order to ensure the most accurate gearing, a high stiffness of the housing is required. In this paper, we present the computer-aided 3D modelling of the housing (in cast version) of a single stage cylindrical gear reducer, using the Autodesk Inventor Professional software, on the principle of constructive sizing. For the housing resistance calculation, we carried out an analysis using the Autodesk Simulation Mechanical software to apply the finite element method, based on the actual loads, as well as a comparative study of the stress and strain distribution, for several tightening values of the retaining bolts that secure the cover and the foundation housing.
Building an experience factory for maintenance
NASA Technical Reports Server (NTRS)
Valett, Jon D.; Condon, Steven E.; Briand, Lionel; Kim, Yong-Mi; Basili, Victor R.
1994-01-01
This paper reports the preliminary results of a study of the software maintenance process in the Flight Dynamics Division (FDD) of the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). This study is being conducted by the Software Engineering Laboratory (SEL), a research organization sponsored by the Software Engineering Branch of the FDD, which investigates the effectiveness of software engineering technologies when applied to the development of applications software. This software maintenance study began in October 1993 and is being conducted using the Quality Improvement Paradigm (QIP), a process improvement strategy based on three iterative steps: understanding, assessing, and packaging. The preliminary results represent the outcome of the understanding phase, during which SEL researchers characterized the maintenance environment, product, and process. Findings indicate that a combination of quantitative and qualitative analysis is effective for studying the software maintenance process, that additional measures should be collected for maintenance (as opposed to new development), and that characteristics such as effort, error rate, and productivity are best considered on a 'release' basis rather than on a project basis. The research thus far has documented some basic differences between new development and software maintenance. It lays the foundation for further application of the QIP to investigate means of improving the maintenance process and product in the FDD.
The Implementation of Satellite Attitude Control System Software Using Object Oriented Design
NASA Technical Reports Server (NTRS)
Reid, W. Mark; Hansell, William; Phillips, Tom; Anderson, Mark O.; Drury, Derek
1998-01-01
NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions. The SMEX program has produced five satellites, three of which have been successfully launched. The remaining two spacecraft are scheduled for launch within the coming year. NASA has recently developed a prototype for the next generation Small Explorer spacecraft (SMEX-Lite). This paper describes the object-oriented design (OOD) of the SMEX-Lite Attitude Control System (ACS) software. The SMEX-Lite ACS is three-axis controlled and is capable of performing sub-arc-minute pointing. This paper first describes high level requirements governing the SMEX-Lite ACS software architecture. Next, the context in which the software resides is explained. The paper describes the principles of encapsulation, inheritance, and polymorphism with respect to the implementation of an ACS software system. This paper will also discuss the design of several ACS software components. Specifically, object-oriented designs are presented for sensor data processing, attitude determination, attitude control, and failure detection. Finally, this paper will address the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modifications to produce ACS software for future projects.
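The principles named above (encapsulation, inheritance, and polymorphism) are illustrated in the Python sketch below; the flight software is not written in Python, and the class names here are hypothetical.

# Illustrative sketch of the OO principles discussed above, in Python rather
# than the flight software's actual language. Class names are hypothetical:
# each sensor encapsulates its own state and exposes a common interface, so
# attitude determination can iterate over sensors polymorphically.
from abc import ABC, abstractmethod

class Sensor(ABC):
    def __init__(self, name: str):
        self._name = name                 # encapsulated state

    @abstractmethod
    def read_body_vector(self) -> tuple[float, float, float]:
        """Return a unit vector measurement in body coordinates."""

class SunSensor(Sensor):
    def read_body_vector(self):
        return (1.0, 0.0, 0.0)            # placeholder measurement

class Magnetometer(Sensor):
    def read_body_vector(self):
        return (0.0, 0.7, 0.714)          # placeholder measurement

def determine_attitude(sensors: list[Sensor]):
    """Polymorphic use: the caller neither knows nor cares which sensor type it has."""
    return [s.read_body_vector() for s in sensors]

print(determine_attitude([SunSensor("css"), Magnetometer("tam")]))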
AAS Publishing News: Astronomical Software Citation Workshop
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2015-07-01
Do you write code for your research? Use astronomical software? Do you wish there were a better way of citing, sharing, archiving, or discovering software for astronomy research? You're not alone! In April 2015, AAS's publishing team joined other leaders in the astronomical software community in a meeting funded by the Sloan Foundation, with the purpose of discussing these issues and potential solutions. In attendance were representatives from academic astronomy, publishing, libraries, for-profit software sharing platforms, telescope facilities, and grantmaking institutions. The goal of the group was to establish “protocols, policies, and platforms for astronomical software citation, sharing, and archiving,” in the hopes of encouraging a set of normalized standards across the field. The AAS is now collaborating with leaders at GitHub to write grant proposals for a project to develop strategies for software discoverability and citation, in astronomy and beyond. If this topic interests you, you can find more details in this document released by the group after the meeting: http://astronomy-software-index.github.io/2015-workshop/ The group hopes to move this project forward with input and support from the broader community. Please share the above document, discuss it on social media using the hashtag #astroware (so that your conversations can be found!), or send private comments to julie.steffen@aas.org.
A Clustering-Based Approach to Enriching Code Foraging Environment.
Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu
2016-09-01
Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base to developers. This paper contributes a unified code navigation theory in light of the optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developer's behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.
The 2016 Bioinformatics Open Source Conference (BOSC).
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
2016-01-01
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
Feng, Zhihong; Wang, Tao; Liu, Ping; Chen, Sipeng; Xiao, Han; Xia, Ning; Luo, Zhiming; Wei, Bing; Nie, Xiuhong
2017-01-01
We aimed to investigate the efficacy of four severity-of-disease scoring systems in predicting the 28-day survival rate among patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) requiring emergency care. Clinical data of patients with AECOPD who required emergency care were recorded over 2 years. APACHE II, SAPS II, SOFA, and MEDS scores were calculated from severity-of-disease indicators recorded at admission and compared between patients who died within 28 days of admission (death group; 46 patients) and those who did not (survival group; 336 patients). Compared to the survival group, the death group had a significantly higher GCS score, frequency of comorbidities including hypertension and heart failure, and age (P < 0.05 for all). With all four systems, scores of age, gender, renal inadequacy, hypertension, coronary heart disease, heart failure, arrhythmia, anemia, fracture leading to bedridden status, tumor, and the GCS were significantly higher in the death group than the survival group. The prediction efficacy of the APACHE II and SAPS II scores was 88.4%. The survival rates did not differ significantly between APACHE II and SAPS II (P = 1.519). Our results may guide triage for early identification of critically ill patients with AECOPD in the emergency department.
Feasibility analysis for biomass cogeneration at the Fort Apache Timber Company
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whittier, J.; Hasse, S.; Tomberlin, G.
1996-12-31
The Fort Apache Timber Company (FATCO) is a wholly-owned tribal enterprise of the White Mountain Apache Tribe (WMAT). WMAT officials are concerned about fuel buildup on the forest floor and the potential for catastrophic forest fires. Cogeneration is viewed as one means to effectively utilize biomass from the forest to reduce the chance of forest fires. FATCO presently spends approximately $1.6 million per year for electricity service from Navopache Electric Cooperative, Inc. for three sites. Peak demand is approximately 3.9 MW and the annual load factor is slightly under 50 percent. The blended cost of electricity is approximately $0.089/kWh at the main mill. Biomass resources for fuel purposes may be obtained both from mill operations and from the forest operations. For many years FATCO has burned its wood residues to supply steam for dry kilns. It is estimated that a total of 125,778 bone dry tons (bdt) per year are available for fuel. A twenty year economic analysis model was used to evaluate the cogeneration potential. The model performs annual cash flow calculations to arrive at three measures of economic vitality: (1) Net Present Value (NPV), (2) levelized cost per kWh, and (3) Year 2 Return on Investment (ROI). Results of the analysis are positive for several scenarios.
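The three economic measures named in the abstract can be computed from a simple annual cash-flow model, as in the Python sketch below. All input numbers are hypothetical placeholders, not figures from the FATCO analysis.

# Sketch of the three economic measures named above (NPV, levelized cost per
# kWh, and an early-year ROI), using a simple annual cash-flow model. All
# numbers are hypothetical placeholders, not figures from the FATCO analysis.
def npv(rate, cashflows):
    """cashflows[0] is the year-0 outlay (negative); later entries are annual net cash."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

capital = -6_000_000.0                     # hypothetical plant cost
annual_net = 750_000.0                     # hypothetical yearly savings minus O&M
annual_kwh = 15_000_000.0                  # hypothetical generation
annual_cost = 900_000.0                    # hypothetical fuel + O&M cost per year
rate, years = 0.07, 20

flows = [capital] + [annual_net] * years
print("NPV:", round(npv(rate, flows)))

# Levelized cost: discounted lifetime costs divided by discounted lifetime energy.
disc_cost = -capital + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))
disc_kwh = sum(annual_kwh / (1 + rate) ** t for t in range(1, years + 1))
print("Levelized cost ($/kWh):", round(disc_cost / disc_kwh, 4))

print("Year-2 ROI:", round(annual_net / -capital, 3))   # crude early-return measure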
Harrison, David A; Brady, Anthony R; Parry, Gareth J; Carpenter, James R; Rowan, Kathy
2006-05-01
To assess the performance of published risk prediction models in common use in adult critical care in the United Kingdom and to recalibrate these models in a large representative database of critical care admissions. Prospective cohort study. A total of 163 adult general critical care units in England, Wales, and Northern Ireland, during the period of December 1995 to August 2003. A total of 231,930 admissions, of which 141,106 met inclusion criteria and had sufficient data recorded for all risk prediction models. None. The published versions of the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE II UK, APACHE III, Simplified Acute Physiology Score (SAPS) II, and Mortality Probability Models (MPM) II were evaluated for discrimination and calibration by means of a combination of appropriate statistical measures recommended by an expert steering committee. All models showed good discrimination (the c index varied from 0.803 to 0.832) but imperfect calibration. Recalibration of the models, which was performed by both the Cox method and re-estimating coefficients, led to improved discrimination and calibration, although all models still showed significant departures from perfect calibration. Risk prediction models developed in another country require validation and recalibration before being used to provide risk-adjusted outcomes within a new country setting. Periodic reassessment is beneficial to ensure calibration is maintained.
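The Cox recalibration mentioned above amounts to regressing observed outcomes on the logit of the model's predicted risk and then mapping predictions through the fitted intercept and slope. The Python sketch below shows that idea on synthetic data; it is not the study's code.

# Minimal sketch of logistic recalibration in the spirit of the Cox method
# mentioned above: regress the observed outcome on the logit of the model's
# predicted mortality risk, then map predictions through the fitted intercept
# and slope. Data here are synthetic; this is not the study's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
p_pred = rng.uniform(0.01, 0.95, size=5000)               # original model's predicted risks
true_logit = -0.5 + 1.3 * np.log(p_pred / (1 - p_pred))   # pretend the model is miscalibrated
died = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))     # observed outcomes

logit_p = np.log(p_pred / (1 - p_pred)).reshape(-1, 1)
recal = LogisticRegression(C=1e6).fit(logit_p, died)      # effectively unpenalized fit
print("intercept:", recal.intercept_[0], "slope:", recal.coef_[0][0])

p_recal = recal.predict_proba(logit_p)[:, 1]              # recalibrated risks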
NASA Technical Reports Server (NTRS)
Bertrand-Sarfati, J.; Awramik, S. M.
1992-01-01
The 25- to 30-m-thick Algal Member of the Mescal Limestone (middle Proterozoic Apache Group) contains two distinct stromatolitic units: at the base, a 2- to 3-m-thick unit composed of columnar stromatolites and above, a thicker unit of stratiform and pseudocolumnar stromatolites. Columnar forms from the first unit belong to the Group Tungussia, and two new Forms are described: T. mescalita and T. chrysotila. Among the pseudocolumnar stromatolites of the thicker unit, one distinctive new taxon, Apachina henryi, is described. Because of the low stromatolite diversity, the biostratigraphic value of this assemblage is limited. The presence of Tungussia is consistent with the generally accepted isotopic age for the Apache Group of 1200 to 1100 Ma. The Mescal stromatolites do not closely resemble any other known Proterozoic stromatolites in the southwestern United States or northwestern Mexico. Analyses of sedimentary features and stromatolite growth forms suggest deposition on a stable, flat, shallow, subtidal protected platform during phases of Tungussia growth. Current action probably influenced the development of columns, pseudocolumns, and elongate stromatolitic ridges; these conditions alternated with phases of relatively quiet water characterized by nonoriented stromatolitic domes and stratiform stromatolites. Stable conditions favorable for development of the Mescal stromatolites were short-lived and did not permit the development of thick, stromatolite-bearing units such as those characteristic of many Proterozoic sequences elsewhere.
Sun, Zhao-Xi; Huang, Hai-Rong; Zhou, Hong
2006-01-01
AIM: To study the effect of combined indwelling catheter, hemofiltration, respiration support and traditional Chinese medicine (e.g. Dahuang) in treating abdominal compartment syndrome of fulminant acute pancreatitis. METHODS: Patients with fulminant acute pancreatitis were divided randomly into two groups: a group treated with combined indwelling catheter celiac drainage, intra-abdominal pressure monitoring, and routine conservative measures (group 1), and a control group (group 2). Routine non-operative conservative treatments including hemofiltration, respiration support, and gastrointestinal TCM ablution were also applied in control group patients. Effectiveness in the two groups was observed, and APACHE II scores were applied for analysis. RESULTS: On the second and fifth days after treatment, APACHE II scores of group 1 and group 2 patients were significantly different. Comparison of effectiveness (abdominalgia and burbulence relief time, hospitalization time) between groups 1 and 2 showed a significant difference, as did the incidence rates of cyst formation. Mortality rates of groups 1 and 2 were 10.0% and 20.7%, respectively. For patients in group 1, celiac drainage quantity, intra-abdominal pressure, and hospitalization time were positively correlated (r = 0.552, 0.748, 0.923, P < 0.01) with APACHE II scores. CONCLUSION: Combined indwelling catheter celiac drainage and intra-abdominal pressure monitoring, short veno-venous hemofiltration (SVVH), gastrointestinal TCM ablution, and respiration support have preventive and treatment effects on abdominal compartment syndrome of fulminant acute pancreatitis. PMID:16937509
Field of genes: using Apache Kafka as a bioinformatic data repository.
Lawlor, Brendan; Lynch, Richard; Mac Aogáin, Micheál; Walsh, Paul
2018-04-01
Bioinformatic research is increasingly dependent on large-scale datasets, accessed either from private or public repositories. An example of a public repository is National Center for Biotechnology Information's (NCBI's) Reference Sequence (RefSeq). These repositories must decide in what form to make their data available. Unstructured data can be put to almost any use but are limited in how access to them can be scaled. Highly structured data offer improved performance for specific algorithms but limit the wider usefulness of the data. We present an alternative: lightly structured data stored in Apache Kafka in a way that is amenable to parallel access and streamed processing, including subsequent transformations into more highly structured representations. We contend that this approach could provide a flexible and powerful nexus of bioinformatic data, bridging the gap between low structure on one hand, and high performance and scale on the other. To demonstrate this, we present a proof-of-concept version of NCBI's RefSeq database using this technology. We measure the performance and scalability characteristics of this alternative with respect to flat files. The proof of concept scales almost linearly as more compute nodes are added, outperforming the standard approach using files. Apache Kafka merits consideration as a fast and more scalable but general-purpose way to store and retrieve bioinformatic data, for public, centralized reference datasets such as RefSeq and for private clinical and experimental data.
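The "lightly structured, streamable" storage pattern described above can be sketched with the kafka-python client as shown below. The topic name, record fields, and broker address are assumptions for illustration, not the paper's actual RefSeq layout.

# Sketch of the "lightly structured data in Kafka" idea using the kafka-python
# client. The topic name and record fields are hypothetical; the paper's actual
# RefSeq layout is not reproduced. Assumes a broker at localhost:9092.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
record = {"accession": "NC_000913.3", "organism": "Escherichia coli", "seq": "AGCTTTTCATTCT"}
producer.send("refseq-records", value=record)   # append one lightly structured record
producer.flush()

consumer = KafkaConsumer(
    "refseq-records",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,
)
for msg in consumer:                             # streamed, parallelizable access
    print(msg.value["accession"], len(msg.value["seq"]))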
Yu, Zhixin; Ji, Musen; Hu, Xiulan; Yan, Jun; Jin, Zhaochen
2017-01-01
To investigate the value of procalcitonin (PCT) in predicting the severity and prognosis in patients with early acute respiratory distress syndrome (ARDS). A prospective observational study was conducted. A total of 113 patients with ARDS undergoing mechanical ventilation admitted to the intensive care unit (ICU) of Affiliated People's Hospital of Jiangsu University from October 2012 to April 2016 were enrolled. Based on oxygenation index (PaO2/FiO2), the patients were classified into mild, moderate, and severe groups according to the Berlin Definition. Twenty-five healthy volunteers served as controls. Demographics, acute physiology and chronic health evaluation II (APACHE II) score, and Murray lung injury score were recorded. Within 24 hours after diagnosis of ARDS, the serum levels of PCT and C-reactive protein (CRP) were determined by enzyme-linked fluorescence analysis (ELFA) and immune turbidimetric method, respectively. The patients were also divided into survival and non-survival groups according to clinical outcome within 28-day follow-up, and the clinical data were compared between the two groups. Spearman rank correlation was applied to determine the correlation between variables. The predictive value of the parameters on 28-day mortality was evaluated with receiver operating characteristic curve (ROC). Kaplan-Meier survival curve analysis was used to compare different PCT levels of patients with 28-day cumulative survival rate. After excluding patients who did not meet the inclusion criteria and those lost to follow-up, the final 89 patients were enrolled in the analysis. Among 89 ARDS patients analyzed, 27 of them were mild, 34 moderate, and 28 severe ARDS. No significant differences were found in age and gender between ARDS and healthy control groups. Infection and trauma were the most common etiology of ARDS (55.1% and 34.8%, respectively). Compared with the healthy control group, both CRP and PCT in serum of the ARDS group were higher [CRP (mg/L): 146.32 (111.31, 168.49) vs. 6.08 (4.47, 7.89), PCT (μg/L): 3.46 (1.98, 5.56) vs. 0.02 (0.01, 0.04), both P < 0.01], and the two showed sustained upward trends with the ARDS course of disease. Compared with the mild group, the severe group had significantly higher APACHE II and Murray scores. Spearman rank correlation analysis showed that both serum PCT and CRP in patients with ARDS were well correlated with APACHE II score (r values were 0.669 and 0.601, respectively, both P < 0.001), while PCT was weakly but positively correlated with Murray score (r = 0.294, P = 0.005); this was not the case for CRP (r = 0.203, P = 0.052). APACHE II score and serum PCT in the non-survival group (n = 38) were significantly higher than those of the survival group [n = 51; APACHE II score: 26.00 (23.00, 28.50) vs. 21.00 (17.00, 25.00), PCT (μg/L): 6.38 (2.82, 9.49) vs. 3.09 (1.08, 3.57), both P < 0.01], but Murray score and CRP level were similar between survivors and non-survivors. The areas under the ROC curve (AUC) of APACHE II score and PCT for predicting 28-day mortality were 0.781 and 0.793, respectively, which were better than the AUCs of Murray score and CRP (0.606 and 0.561, respectively, all P < 0.05). The AUC of APACHE II score combined with PCT was significantly higher than that of PCT (0.859 vs. 0.793, P = 0.048) or APACHE II score alone (0.859 vs. 0.781, P = 0.038). Using a PCT cut-off value of > 4.35 μg/L for predicting 28-day mortality, the sensitivity and specificity were 92.2% and 63.2%, respectively, and the positive and negative likelihood ratios were 2.50 and 0.12, respectively. Kaplan-Meier survival curve analysis indicated that patients whose PCT was more than 4.35 μg/L had a lower 28-day cumulative survival rate compared with those with PCT ≤ 4.35 μg/L (log-rank test: χ² = 5.013, P = 0.025). The elevated serum PCT level in patients with ARDS seems to correlate well with the severity of lung injury, and appears to be a useful prognostic marker of outcome in the early phases of ARDS.
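The ROC-based cut-off analysis described above (AUC, cut-off selection, and the resulting sensitivity and specificity) can be reproduced in outline with the Python sketch below, using synthetic data rather than the study's patients.

# Sketch (with synthetic data, not the study's patients) of the ROC analysis
# described above: compute AUC for a marker, pick a cut-off by the Youden
# index, and report sensitivity/specificity at that threshold.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
died = rng.binomial(1, 0.4, size=300)
pct = np.where(died == 1, rng.lognormal(1.6, 0.6, 300), rng.lognormal(1.0, 0.6, 300))

auc = roc_auc_score(died, pct)
fpr, tpr, thresholds = roc_curve(died, pct)
best = np.argmax(tpr - fpr)                       # Youden index J = sens + spec - 1
print(f"AUC={auc:.3f}  cut-off={thresholds[best]:.2f}  "
      f"sensitivity={tpr[best]:.3f}  specificity={1 - fpr[best]:.3f}")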
The Open Data Repository's Data Publisher
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.
2015-12-01
Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A, Mars Science Laboratory Investigations and University of Arizona Geosciences.