Agentless Cloud-Wide Monitoring of Virtual Disk State
2015-10-01
packages include Apache, MySQL, PHP, Ruby on Rails, Java application servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many
Reactive Aggregate Model Protecting Against Real-Time Threats
2014-09-01
on the underlying functionality of three core components: • MS SQL Server 2008 backend database • Microsoft IIS running on Windows Server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access
Murray, Peter J; Oyri, Karl
2005-01-01
Many health informatics organisations do not seem to make practical use, for the benefit of their activities and interaction with their members, of the very technologies that they often promote for use within healthcare environments. In particular, many organisations seem to be slow to take up the benefits of interactive web technologies. This paper presents an introduction to some of the many free/libre and open source (FLOSS) applications currently available and using the LAMP (Linux, Apache, MySQL, PHP) architecture as a way of cheaply deploying reliable, scalable, and secure web applications. The experience of moving to applications using the LAMP architecture, in particular that of the Open Source Nursing Informatics (OSNI) Working Group of the Special Interest Group in Nursing Informatics of the International Medical Informatics Association (IMIA-NI), in using PostNuke, a FLOSS Content Management System (CMS), illustrates many of the benefits of such applications. The experiences of the authors in installing and maintaining a large number of websites using FLOSS CMS to develop dynamic, interactive websites that facilitate real engagement with the members of IMIA-NI OSNI, the IMIA Open Source Working Group, and the Centre for Health Informatics Research and Development (CHIRAD), as well as other organisations, are used as the basis for discussing the potential benefits that could be realised by others within the health informatics community.
Global ISR: Toward a Comprehensive Defense Against Unauthorized Code Execution
2010-10-01
implementation using two of the most popular open-source servers: the Apache web server, and the MySQL database server. For Apache, we measure the effect that...utility ab. [Fig. 3: total execution time in seconds, 0-3000, for the Native, Null, ISR and ISR-MP configurations.] The MySQL test-insert benchmark measures...various SQL operations. The figure draws total execution time as reported by the benchmark utility. Finally, we benchmarked a MySQL database server using
Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System
2004-09-01
docs.us.dell.com/support/edocs/systems/pe1650/en/it/index.htm> (20 August 2004) “HOWTO: Installing Web Services with Linux/Tomcat/Apache/Struts...configured as host machines with VMware and VNC running on a Linux RedHat 9 kernel. An Apache-Tomcat web server was configured as the external interface to...
A web-server of cell type discrimination system.
Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan
2014-01-01
Discriminating cell types is a daily task for stem cell biologists. However, no user-friendly system has been available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication on cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented in Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models to analyze the data, and present results to users. This framework is flexible and easy to expand for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types such as cancer cells.
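The abstract above describes a Django front end handing user input to R-based models. A minimal sketch of that request flow follows; it is not the authors' code, and the view name, script name and response field are assumptions.

```python
# Hypothetical sketch of the WCTDS request flow: a Django view takes the
# user's methylation profile, shells out to an R model script, and returns
# the predicted cell type. All names are illustrative.
import subprocess

from django.http import JsonResponse
from django.views.decorators.http import require_POST


@require_POST
def discriminate(request):
    profile = request.POST.get("profile", "")
    # Assumption: the web layer talks to R via the command line;
    # discriminate.R applies the trained DNA-methylation classifier.
    result = subprocess.run(
        ["Rscript", "discriminate.R"],
        input=profile, capture_output=True, text=True, check=True,
    )
    return JsonResponse({"cell_type": result.stdout.strip()})
```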
HippDB: a database of readily targeted helical protein-protein interactions.
Bergey, Christina M; Watkins, Andrew M; Arora, Paramjit S
2013-11-01
HippDB catalogs every protein-protein interaction whose structure is available in the Protein Data Bank and which exhibits one or more helices at the interface. The Web site accepts queries on variables such as helix length and sequence, and it provides computational alanine scanning and change in solvent-accessible surface area values for every interfacial residue. HippDB is intended to serve as a starting point for structure-based small molecule and peptidomimetic drug development. HippDB is freely available on the web at http://www.nyu.edu/projects/arora/hippdb. The Web site is implemented in PHP, MySQL and Apache. Source code freely available for download at http://code.google.com/p/helidb, implemented in Perl and supported on Linux. arora@nyu.edu.
Information Flow Integrity for Systems of Independently-Developed Components
2015-06-22
We also examined three programs (Apache, MySQL, and PHP) in detail to evaluate the efficacy of using the provided package test suites to generate...method are just as effective as hooks that were manually placed over the course of years while greatly reducing the burden on programmers. “Leveraging...to validate optimizations of real-world, mature applications: the Apache software suite, the Mozilla Suite, and the MySQL database. “Validating Library
A new information architecture, website and services for the CMS experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas
2012-01-01
The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and PHP/Perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal web services. We describe the information architecture, the system design, implementation and monitoring, the document and content database, security aspects, and our deployment strategy, which ensured continual smooth operation of all systems at all times.
Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin
2011-08-01
We selected HTML, PHP and JavaScript as the programming languages to build WebBio, a web-based system for managing patient data on biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and runs on an Apache server on a Linux machine. WebBio provides data management, search and analysis functions for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). Two features are particular to WebBio: (1) pharmacists can rapidly identify which patients received a contaminated product, for medication safety; and (2) the statistics charts for a specific product can be generated automatically, reducing pharmacists' workload. WebBio has successfully turned traditional paper work into web-based data management.
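The recall lookup described above (finding every patient who received a contaminated lot) reduces to a single join. A hedged sketch follows; the schema and connection details are hypothetical, since the paper's PHP code and table names are not given.

```python
# Illustrative only: assumes tables `patients` and `administrations`
# keyed by patient_id, with administrations recording the product lot.
import pymysql


def patients_given_lot(lot_number: str):
    conn = pymysql.connect(host="localhost", user="webbio",
                           password="secret", database="webbio")
    try:
        with conn.cursor() as cur:
            cur.execute(
                """SELECT DISTINCT p.patient_id, p.name
                   FROM patients p
                   JOIN administrations a ON a.patient_id = p.patient_id
                   WHERE a.lot_number = %s""",
                (lot_number,),
            )
            return cur.fetchall()
    finally:
        conn.close()
```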
The database design of LAMOST based on MYSQL/LINUX
NASA Astrophysics Data System (ADS)
Li, Hui-Xian; Sang, Jian; Wang, Sha; Luo, A.-Li
2006-03-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) will be set up in the coming years. A fully automated software system for reducing and analyzing the spectra has to be developed along with the telescope, and a database system is an important part of it. This paper describes the requirements the LAMOST places on its database, the design of the LAMOST database system based on MySQL/Linux, and performance tests of this system.
Project Management Software for Distributed Industrial Companies
NASA Astrophysics Data System (ADS)
Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.
This paper gives an overview of the development of a new software solution for project management, intended mainly for use in industrial environments. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to developing appropriate tools for tracking, storing and analysing project information and delivering it in time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, and the short training and only basic computer skills needed for operators.
NASA Astrophysics Data System (ADS)
Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.
2003-10-01
We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. ** Also University of Colorado.
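As a rough illustration of the submission step such a portal needs (recording a run request so a worker can launch the PIC code and collaborators can later retrieve results), here is a hedged Python sketch; the table layout and connection details are assumptions, not the paper's implementation.

```python
# Hypothetical job-queue insert for a browser-submitted PIC run.
# Assumes a MySQL table: jobs(id AUTO_INCREMENT, input_deck TEXT, status VARCHAR).
import json

import pymysql


def submit_run(params: dict) -> int:
    conn = pymysql.connect(host="localhost", user="web",
                           password="secret", database="pic_portal")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO jobs (input_deck, status) VALUES (%s, %s)",
                (json.dumps(params), "queued"),
            )
            job_id = cur.lastrowid  # id a worker process will poll for
        conn.commit()
        return job_id
    finally:
        conn.close()
```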
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.
2008-05-04
This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.
Tropical Cyclone Information System
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Knosp, Brian W.; Vu, Quoc A.; Yi, Chao; Hristova-Veleva, Svetla M.
2009-01-01
The JPL Tropical Cyclone Information System (TCIS) is a Web portal (http://tropicalcyclone.jpl.nasa.gov) that provides researchers with an extensive set of observed hurricane parameters together with large-scale and convection-resolving model outputs. It provides a comprehensive set of high-resolution satellite, airborne, and in-situ observations in both image and data formats. Large-scale datasets depict surrounding environmental parameters such as SST (Sea Surface Temperature) and aerosol loading. Model outputs and analysis tools are provided to evaluate model performance and compare observations from different platforms. The system pertains to the thermodynamic and microphysical structure of the storm, the air-sea interaction processes, and the larger-scale environment as depicted by ocean heat content and the aerosol loading of the environment. Currently, the TCIS is populated with satellite observations of all tropical cyclones observed globally during 2005. There is a plan to extend the database both forward in time to the present and backward to 1998. The portal is powered by a MySQL database and an Apache/Tomcat Web server on a Linux system. The interactive graphical user interface is provided by Google Map.
NEMiD: a web-based curated microbial diversity database with geo-based plotting.
Bhattacharjee, Kaushik; Joshi, Santa Ram
2014-01-01
The majority of the Earth's microbes remain unknown, and their potential utility cannot be exploited until they are discovered and characterized. They provide wide scope for the development of new strains as well as biotechnological uses. The documentation and bioprospection of microorganisms carry enormous significance considering their relevance to human welfare. This calls for an urgent need to develop a database with emphasis on the microbial diversity of the largest untapped reservoirs in the biosphere. The data annotated in the North-East India Microbial database (NEMiD) were obtained by the isolation and characterization of microbes from different parts of the Eastern Himalayan region. The database was constructed as a relational database management system (RDBMS) for data storage in MySQL in the back-end on a Linux server and implemented in an Apache/PHP environment. This database provides a base for understanding the soil microbial diversity pattern in this megabiodiversity hotspot and indicates the distribution patterns of various organisms along with identification. The NEMiD database is freely available at www.mblabnehu.info/nemid/.
Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields
NASA Astrophysics Data System (ADS)
Santillan, Alfredo; Hernandez--Cervantes, Liliana; Gonzalez--Ponce, Alejandro; Kim, Jongsoo
Numerical simulations of the interaction of High Velocity Clouds (HVC) with the magnetized Galactic Interstellar Medium (ISM) are a powerful tool for describing the evolution of these objects in our Galaxy. In this work we present a new project, referred to as Theoretical Virtual Observatories, oriented toward performing numerical simulations in real time through a Web page. This is a powerful astrophysical computational tool that consists of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. On this website the user can make use of the existing numerical simulations in the database or run a new simulation, introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and the HVC. The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool.
Cloud Computing Trace Characterization and Synthetic Workload Generation
2013-03-01
measurements [44]. Olio is primarily for learning Web 2.0 technologies, evaluating the three implementations (PHP, Java EE, and RubyOnRails (ROR))...Add Event 17 Olio is well documented, but assumes prerequisite knowledge of setup and operation of Apache web servers and MySQL databases. Olio...Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18]. Perhaps
MVC for Content Management on the Cloud
2011-09-01
Windows, Linux, MacOS, PalmOS and other customized ones (Qiu). Figure 20 illustrates implementation of MVC architecture. Qiu examines a “universal...Listing of Unzipped Text Document (From O'Reilly & Associates, Inc, 2005) Figure 37 shows the results of unzipping this file in Linux. The contents of the...ODF Adoption TC, and the ODF Alliance include members from Adobe, BBC, Bristol City Council, Bull, Corel, EDS, EMC, GNOME, IBM, Intel, KDE, MySQL
WebArray: an online platform for microarray data analysis
Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng
2005-01-01
Background: Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to use to explore data from single/dual color microarray experiments. Results: The currently implemented functions are based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation and chromosomal mapping for genome comparison. Conclusion: WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available; it runs on a Linux server with Apache and MySQL. PMID:16371165
Graphics interfaces and numerical simulations: Mexican Virtual Solar Observatory
NASA Astrophysics Data System (ADS)
Hernández, L.; González, A.; Salas, G.; Santillán, A.
2007-08-01
Preliminary results associated with the computational development and creation of the Mexican Virtual Solar Observatory (MVSO) are presented. The MVSO prototype consists of two parts: the first is related to observations made during the past ten years at the Solar Observation Station (EOS) and at the Carl Sagan Observatory (OCS) of the Universidad de Sonora in Mexico. The second part is associated with the creation and manipulation of a database produced by numerical simulations related to solar phenomena, using the MHD ZEUS-3D code. The development of this prototype was made using MySQL, Apache, Java and VSO 1.2, based on GNU and the 'open source' philosophy. A graphical user interface (GUI) was created in order to run web-based, remote numerical simulations. For this purpose, Mono was used, because it provides the necessary software to develop and run .NET client and server applications on Linux. Although this project is still under development, we hope to gain access, by means of this portal, to other virtual solar observatories and to be able to count on a database created through numerical simulations or, given the case, perform simulations associated with solar phenomena.
CrisprGE: a central hub of CRISPR/Cas-based genome editing.
Kaur, Karambir; Tandon, Himani; Gupta, Amit Kumar; Kumar, Manoj
2015-01-01
The CRISPR system is a powerful defense mechanism in bacteria and archaea that provides immunity against viruses. Recently, this process found a new application in the intended targeting of genomes. CRISPR-mediated genome editing is performed by two main components, namely a single guide RNA and the Cas9 protein. Despite the enormous data generated in this area, there is a dearth of high-throughput resources. Therefore, we have developed CrisprGE, a central hub of CRISPR/Cas-based genome editing. Presently, this database holds a total of 4680 entries of 223 unique genes from 32 model and other organisms. It encompasses information about the organism, gene, target gene sequence, genetic modification, modification length, genome editing efficiency, cell line, assay, etc. This depository is developed using the open source LAMP (Linux, Apache, MySQL, PHP) server. User-friendly browsing and searching facilities are integrated for easy data retrieval. It also includes useful tools like BLAST CrisprGE, BLAST NTdb and CRISPR Mapper. Considering the potential utilities of CRISPR in the vast area of biology and therapeutics, we foresee this platform as an aid to accelerate research in the burgeoning field of genome engineering. © The Author(s) 2015. Published by Oxford University Press.
A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network
NASA Astrophysics Data System (ADS)
Lussana, C.; Ranci, M.; Uboldi, F.
2012-04-01
In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at once an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and cross-validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based PHP applications.
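To make the ADQC idea concrete, here is a toy version of a plausible-range check followed by a Spatial Consistency Test against the Optimal Interpolation analysis value. The thresholds are invented for illustration; the operational code is R, as the abstract notes.

```python
# Simplified stand-in for two ADQC steps on an hourly temperature value.
def flag_temperature(obs_c: float, analysis_c: float,
                     plausible=(-40.0, 50.0), max_dev=5.0) -> str:
    lo, hi = plausible
    if not lo <= obs_c <= hi:
        return "fail:range"       # gross-error / plausibility test
    if abs(obs_c - analysis_c) > max_dev:
        return "fail:spatial"     # disagreement with the OI analysis
    return "pass"


# Example: an observation of 12.3 C against a cross-validation
# analysis of 11.8 C passes both tests.
print(flag_temperature(12.3, 11.8))  # -> "pass"
```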
siRNAmod: A database of experimentally validated chemically modified siRNAs.
Dar, Showkat Ahmad; Thakur, Anamika; Qureshi, Abid; Kumar, Manoj
2016-01-28
Small interfering RNA (siRNA) technology has vast potential for functional genomics and the development of therapeutics. However, it faces many obstacles, predominantly the instability of siRNAs due to nuclease digestion and their consequently short biological half-life. Chemical modifications of siRNAs provide a means to overcome these shortcomings and improve their stability and potency. Despite their enormous utility, a bioinformatics resource for these chemically modified siRNAs (cm-siRNAs) has been lacking. Therefore, we have developed siRNAmod, a specialized databank for chemically modified siRNAs. Currently, our repository contains a total of 4894 chemically modified siRNA sequences, comprising 128 unique chemical modifications at different positions with various permutations and combinations. It incorporates important information on the siRNA sequence, chemical modification, their number and respective positions, structure, canonical simplified molecular-input line-entry system (SMILES), efficacy of the modified siRNA, target gene, cell line, experimental methods, reference, etc. It is developed and hosted using the Linux, Apache, MySQL, PHP (LAMP) software bundle. Standard user-friendly browse and search facilities and analysis tools are also integrated. It should assist in understanding the effect of chemical modifications and the further development of stable and efficacious siRNAs for research as well as therapeutics. siRNAmod is freely available at: http://crdd.osdd.net/servers/sirnamod.
NASA Astrophysics Data System (ADS)
Altini, V.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Divià, R.; Fuchs, U.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soòs, C.; Vande Vyvre, P.; Von Haller, B.; ALICE Collaboration
2010-04-01
All major experiments need tools that provide a way to keep a record of events and activities, both during commissioning and operations. In ALICE (A Large Ion Collider Experiment) at CERN, this task is performed by the ALICE Electronic Logbook (eLogbook), a custom-made application developed and maintained by the Data-Acquisition (DAQ) group. Started as a statistics repository, the eLogbook has evolved to become not only a fully functional electronic logbook, but also a massive information repository used to store the conditions and statistics of the several online systems. It is currently used by more than 600 users in 30 different countries and plays an important role in daily ALICE collaboration activities. This paper describes the LAMP (Linux, Apache, MySQL and PHP) based architecture of the eLogbook, the database schema and the relevance of the information stored in the eLogbook to the different ALICE actors, not only for near-real-time procedures but also for long-term data mining and analysis. It also presents the web interface, including the different technologies used, the implemented security measures and the current main features. Finally it presents the roadmap for the future, including a migration to the web 2.0 paradigm, the handling of the database's ever-increasing data volume and the deployment of data-mining tools.
Recent improvements to Binding MOAD: a resource for protein–ligand binding affinities and structures
Ahmed, Aqeel; Smith, Richard D.; Clark, Jordan J.; Dunbar, James B.; Carlson, Heather A.
2015-01-01
For over 10 years, Binding MOAD (Mother of All Databases; http://www.BindingMOAD.org) has been one of the largest resources for high-quality protein–ligand complexes and associated binding affinity data. Binding MOAD has grown at the rate of 1994 complexes per year, on average. Currently, it contains 23 269 complexes and 8156 binding affinities. Our annual updates curate the data using a semi-automated literature search of the references cited within the PDB file, and we have recently upgraded our website and added new features and functionalities to better serve Binding MOAD users. In order to eliminate the legacy application server of the old platform and to accommodate new changes, the website has been completely rewritten in the LAMP (Linux, Apache, MySQL and PHP) environment. The improved user interface incorporates current third-party plugins for better visualization of protein and ligand molecules, and it provides features like sorting, filtering and filtered downloads. In addition to the field-based searching, Binding MOAD now can be searched by structural queries based on the ligand. In order to remove redundancy, Binding MOAD records are clustered in different families based on 90% sequence identity. The new Binding MOAD, with the upgraded platform, features and functionalities, is now equipped to better serve its users. PMID:25378330
Quality Controlling CMIP datasets at GFDL
NASA Astrophysics Data System (ADS)
Horowitz, L. W.; Radhakrishnan, A.; Balaji, V.; Adcroft, A.; Krasting, J. P.; Nikonov, S.; Mason, E. E.; Schweitzer, R.; Nadeau, D.
2017-12-01
As GFDL makes the switch from model development to production in light of the Climate Model Intercomparison Project (CMIP), GFDL's efforts have shifted to testing and, more importantly, establishing guidelines and protocols for quality controlling and semi-automated data publishing. Every CMIP cycle introduces key challenges, and the upcoming CMIP6 is no exception. The new CMIP experimental design comprises multiple MIPs facilitating research in different focus areas. This paradigm has implications not only for the groups that develop the models and conduct the runs, but also for the groups that monitor, analyze and quality control the datasets before data publishing and before their contents make their way into reports like the IPCC (Intergovernmental Panel on Climate Change) Assessment Reports. In this talk, we discuss some of the paths taken at GFDL to quality control the CMIP-ready datasets, including: Jupyter notebooks; PrePARE; and a LAMP (Linux, Apache, MySQL, PHP/Python/Perl) technology-driven tracker system to monitor the status of experiments qualitatively and quantitatively, and to provide additional metadata and analysis services along with some built-in controlled-vocabulary validations in the workflow. In addition, we discuss the integration of community-based model evaluation software (ESMValTool, PCMDI Metrics Package, and ILAMB) as part of our CMIP6 workflow.
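As an illustration of the controlled-vocabulary validations mentioned above (the real checks are performed by tools such as PrePARE), a toy attribute validator might look as follows. The attribute names are genuine CMIP6 global attributes, but the vocabulary subset is truncated for the example.

```python
# Toy CMIP6-style controlled-vocabulary check (illustrative, not PrePARE).
REQUIRED = {"experiment_id", "source_id", "variant_label"}
CV = {"experiment_id": {"historical", "piControl", "ssp585"}}  # tiny subset


def validate(attrs: dict) -> list:
    errors = [f"missing attribute: {a}" for a in sorted(REQUIRED - attrs.keys())]
    for key, allowed in CV.items():
        if key in attrs and attrs[key] not in allowed:
            errors.append(f"{key}={attrs[key]!r} not in controlled vocabulary")
    return errors


print(validate({"experiment_id": "historcal", "source_id": "GFDL-CM4"}))
# -> ['missing attribute: variant_label',
#     "experiment_id='historcal' not in controlled vocabulary"]
```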
Teaching Undergraduate Software Engineering Using Open Source Development Tools
2012-01-01
ware. Some example appliances are: a LAMP stack, Redmine, MySQL database, Moodle, Tomcat on Apache, and Bugzilla. Some of the important features...Ada, C, C++, PHP, Python, etc., and also supports a wide range of SDKs such as Google's Android SDK and the Google Web Toolkit SDK. Additionally
Mining Bug Databases for Unidentified Software Vulnerabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Jason Wright
2012-06-01
Identifying software vulnerabilities is becoming more important as critical and sensitive systems increasingly rely on complex software systems. It has been suggested in previous work that some bugs are only identified as vulnerabilities long after the bug has been made public. These vulnerabilities are known as hidden impact vulnerabilities. This paper discusses the feasibility and necessity of mining common publicly available bug databases for vulnerabilities that are yet to be identified. We present bug database analysis of two well known and frequently used software packages, namely the Linux kernel and MySQL. It is shown that for both Linux and MySQL, a significant portion of the vulnerabilities discovered for the time period from January 2006 to April 2011 were hidden impact vulnerabilities. It is also shown that the percentage of hidden impact vulnerabilities has increased in the last two years, for both software packages. We then propose an improved hidden impact vulnerability identification methodology based on text mining bug databases, and conclude by discussing a few potential problems faced by such a classifier.
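In its simplest form, the proposed text-mining classifier could look like the sketch below: TF-IDF features over bug-report text feeding a linear model. This is a hedged illustration on toy data; the paper's actual features and model may differ.

```python
# Minimal hidden-impact-vulnerability classifier sketch (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = ["buffer overflow in packet parser",   # later became a CVE
           "typo in man page",                    # benign bug
           "use-after-free when closing socket",  # later became a CVE
           "wrong default font in dialog"]        # benign bug
labels = [1, 0, 1, 0]  # 1 = hidden impact vulnerability

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reports, labels)
print(clf.predict(["overflow while parsing network input"]))  # likely [1]
```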
Enabling interspecies epigenomic comparison with CEpBrowser.
Cao, Xiaoyi; Zhong, Sheng
2013-05-01
We developed the Comparative Epigenome Browser (CEpBrowser) to allow the public to perform multi-species epigenomic analysis. The web-based CEpBrowser integrates, manages and visualizes sequencing-based epigenomic datasets. Five key features were developed to maximize the efficiency of interspecies epigenomic comparisons. CEpBrowser is a web application implemented with PHP, MySQL, C and Apache. URL: http://www.cepbrowser.org/.
Web-Based Search and Plot System for Nuclear Reaction Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otuka, N.; Nakagawa, T.; Fukahori, T.
2005-05-24
A web-based search and plot system for nuclear reaction data has been developed, covering experimental data in EXFOR format and evaluated data in ENDF format. The system is implemented for Linux OS, with Perl and MySQL used for CGI scripts and the database manager, respectively. Two prototypes for experimental and evaluated data are presented.
Gupta, Surya; De Puysseleyr, Veronic; Van der Heyden, José; Maddelein, Davy; Lemmens, Irma; Lievens, Sam; Degroeve, Sven; Tavernier, Jan; Martens, Lennart
2017-05-01
Protein-protein interaction (PPI) studies have dramatically expanded our knowledge about cellular behaviour and development in different conditions. A multitude of high-throughput PPI techniques have been developed to achieve proteome-scale coverage for PPI studies, including the microarray based Mammalian Protein-Protein Interaction Trap (MAPPIT) system. Because such high-throughput techniques typically report thousands of interactions, managing and analysing the large amounts of acquired data is a challenge. We have therefore built the MAPPIT cell microArray Protein Protein Interaction-Data management & Analysis Tool (MAPPI-DAT) as an automated data management and analysis tool for MAPPIT cell microarray experiments. MAPPI-DAT stores the experimental data and metadata in a systematic and structured way, automates data analysis and interpretation, and enables the meta-analysis of MAPPIT cell microarray data across all stored experiments. MAPPI-DAT is developed in Python, using R for data analysis and MySQL as the data management system. MAPPI-DAT is cross-platform and can be run on Microsoft Windows, Linux and OS X/macOS. The source code and a Microsoft Windows executable are freely available under the permissive Apache2 open source license at https://github.com/compomics/MAPPI-DAT. jan.tavernier@vib-ugent.be or lennart.martens@vib-ugent.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer
Carter, Hannah; Diekhans, Mark; Ryan, Michael C.; Karchin, Rachel
2011-01-01
Summary: Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. Availability and Implementation: MySQL database, source code and binaries freely available for academic/government use at http://wiki.chasmsoftware.org. Source in Python and C++. Requires a 32- or 64-bit Linux system (tested on Fedora Core 8, 10, 11 and Ubuntu 10), 2.5 ≤ Python < 3.0, MySQL server > 5.0, 60 GB available hard disk space (50 MB for software and data files, 40 GB for the MySQL database dump when uncompressed), and 2 GB of RAM. Contact: karchin@jhu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21685053
A database for coconut crop improvement.
Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam
2005-12-08
Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface is developed in PHP, HTML and Java. http://www.bioinfcpcri.org.
Information Security Considerations for Applications Using Apache Accumulo
2014-09-01
Distributed File System; INSCOM: United States Army Intelligence and Security Command; JPA: Java Persistence API; JSON: JavaScript Object Notation; MAC: Mandatory... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly...manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower-level procedural language (e.g., Java
NASA Astrophysics Data System (ADS)
Coronel, Andrei D.; Saldana, Rafael P.
Cancer is a leading cause of morbidity and mortality in the Philippines. Developed within the context of a Philippine Cancer Grid, the present study used web development technologies such as PHP, MySQL, and Apache server to build a prototype data retrieval system for breast cancer research that incorporates medical ontologies from the Unified Medical Language System (UMLS).
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables the integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates the integration of different applications, accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as the web server, PHP as the server-side scripting language and OpenPBS as the queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
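The core mechanic here (wrapping a command-line program for an OpenPBS queue) can be sketched as below. ClusterControl itself is PHP, so this Python version is only an analogy, and the job-script layout is an assumption.

```python
# Hedged sketch: hand an arbitrary command-line tool to OpenPBS via qsub.
import subprocess
import tempfile
import textwrap


def submit_to_pbs(command: str, job_name: str = "bioinf_job") -> str:
    script = textwrap.dedent(f"""\
        #!/bin/sh
        #PBS -N {job_name}
        {command}
    """)
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    # qsub prints the new job identifier on stdout
    out = subprocess.run(["qsub", path], capture_output=True,
                         text=True, check=True)
    return out.stdout.strip()
```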
Lightweight application for generating clinical research information systems: MAGIC.
Leskošek, Brane; Pajntar, Marjan
2015-12-01
Our purpose was to build and test a lightweight solution for generating clinical research information systems (CRIS) that would allow non-IT professionals with basic knowledge of computer usage to quickly define and build a ready-to-use, safe and secure web-based clinical research system for data management. We use the acronym MAGIC (Medical Application Generator InteraCtive) for the system. The generated CRIS should be very easy to build and use, so a common LAMP (Linux, Apache, MySQL, Perl) platform was used, which also enables short development cycles. The application was built and tested using eXtreme Programming (XP) principles by a small development team consisting of one informatics specialist, one physician and one graphical designer/programmer. The parameter and graphical user interface (GUI) definitions for the CRIS can be made by non-IT professionals using an intuitive, English-like formalism called the application definition language (ADL). From these definitions, MAGIC builds an end-user CRIS that can be used on a wide variety of platforms, from standard workstations to hand-held devices. A working example of a national health-care-quality assessment program is presented to illustrate this process. The lightweight application for generating CRIS (MAGIC) has proven to be useful for both clinical and analytical users in a real working environment. To achieve better performance and interoperability, we are planning to recompile the application using XML schemas (XSD) in HL7 CDA or openEHR archetype formats for parameter definition and for data interchange between different information systems.
Experience with ATLAS MySQL PanDA database service
NASA Astrophysics Data System (ADS)
Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.
2010-04-01
The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.
Optimizing CMS build infrastructure via Apache Mesos
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad
2015-12-01
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
2016-03-01
Representational state transfer; Java messaging service; Java application programming interface (API); Internet relay chat (IRC)/extensible messaging and...JBoss application server or an Apache Tomcat servlet container instance. The relational database management system can be either PostgreSQL or MySQL... Java library called Direct Web Remoting. This library has been part of the core CACE architecture for quite some time; however, there have not been
Improved Information Retrieval Performance on SQL Database Using Data Adapter
NASA Astrophysics Data System (ADS)
Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.
2018-02-01
The NoSQL databases, short for Not Only SQL, are increasingly being used as the number of big data applications increases. Most systems still use relational databases (RDBs), but as the volume of data grows each year, systems increasingly handle big data with NoSQL databases to analyze and access data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. The query syntax of a NoSQL database differs from that of a SQL database, and therefore normally requires code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged: it provides methods that synchronize the SQL database with the NoSQL database, and it exposes an interface through which applications can run SQL queries. Hence, this research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct access query approach, in which the system allows the application to accept queries while the synchronization process is in progress. Tests of the data adapter show that it can synchronize between the SQL database, MySQL, and the NoSQL database, Apache HBase. The system's memory usage ranged from 40% to 60%, and its processor usage from 10% to 90%. In these tests, the NoSQL database also outperformed the SQL database in information retrieval.
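A minimal sketch of the adapter idea follows, assuming the happybase client and a hypothetical `users` table: the application keeps issuing its usual lookup while the adapter decides whether it is served from MySQL or HBase. This is an illustration of the pattern, not the paper's implementation.

```python
# Hedged illustration of a SQL-to-HBase read path; table and column
# family names are invented, and error handling is omitted for brevity.
import happybase
import pymysql


def fetch_user(user_id: str, use_hbase: bool) -> dict:
    if use_hbase:
        conn = happybase.Connection("localhost")  # Thrift gateway to HBase
        try:
            row = conn.table("users").row(user_id.encode())
            # HBase returns {b'cf:qualifier': b'value'}; decode for callers
            return {k.decode(): v.decode() for k, v in row.items()}
        finally:
            conn.close()
    db = pymysql.connect(host="localhost", user="app",
                         password="secret", database="appdb")
    try:
        with db.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
            return cur.fetchone() or {}
    finally:
        db.close()
```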
Kilintzis, Vassilis; Beredimas, Nikolaos; Chouvarda, Ioanna
2014-01-01
An integral part of a system that manages medical data is the persistent storage engine. For almost twenty-five years, Relational Database Management Systems (RDBMS) were considered the obvious choice, yet new technologies have emerged that deserve attention as possible alternatives. Triplestores store information in terms of RDF triples without necessarily binding to a specific predefined structural model. In this paper we present an attempt to compare the performance of the Apache Jena Fuseki and Virtuoso Universal Server 6 triplestores with that of the MySQL 5.6 RDBMS for storing and retrieving medical information that is communicated as RDF/XML ontology instances over a RESTful web service. The results show that performance, calculated as the average time to store and retrieve instances, is significantly better with Virtuoso Server, while MySQL performed better than Fuseki.
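A tiny harness in the spirit of that comparison might time a store operation against a SPARQL 1.1 Graph Store endpoint (Fuseki exposes one per dataset); the endpoint URL and graph IRI below are assumptions for the sketch.

```python
# Time one RDF/XML store round trip against a (Fuseki-style) graph store.
import time

import requests


def time_store(rdf_xml: str,
               endpoint: str = "http://localhost:3030/ds/data",
               graph: str = "http://example.org/g1") -> float:
    t0 = time.perf_counter()
    r = requests.post(endpoint, params={"graph": graph},
                      data=rdf_xml.encode(),
                      headers={"Content-Type": "application/rdf+xml"})
    r.raise_for_status()  # fail loudly if the store rejected the graph
    return time.perf_counter() - t0
```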
An integrated open framework for thermodynamics of reactions that combines accuracy and coverage.
Noor, Elad; Bar-Even, Arren; Flamholz, Avi; Lubling, Yaniv; Davidi, Dan; Milo, Ron
2012-08-01
The laws of thermodynamics describe a direct, quantitative relationship between metabolite concentrations and reaction directionality. Despite great efforts, thermodynamic data suffer from limited coverage, scattered accessibility and non-standard annotations. We present a framework for unifying thermodynamic data from multiple sources and demonstrate two new techniques for extrapolating the Gibbs energies of unmeasured reactions and conditions. Both methods account for changes in cellular conditions (pH, ionic strength, etc.) by using linear regression over the ΔG° of pseudoisomers and reactions. The Pseudoisomeric Reactant Contribution method systematically infers compound formation energies using measured K′ and pKa data. The Pseudoisomeric Group Contribution method extends the group contribution method and achieves a high coverage of unmeasured reactions. We define a continuous index that predicts the reversibility of a reaction under a given physiological concentration range. In the characteristic physiological range 3 μM-3 mM, we find that roughly half of the reactions in Escherichia coli's metabolism are reversible. These new tools can increase the accuracy of thermodynamics-based models, especially at non-standard pH and ionic strengths. The reversibility index can help modelers decide which reactions are reversible in physiological conditions. Freely available on the web at http://equilibrator.weizmann.ac.il. Website implemented in Python, MySQL, Apache and Django, with all major browsers supported. The framework is open source (code.google.com/p/milo-lab), implemented in pure Python and tested mainly on Linux. ron.milo@weizmann.ac.il. Supplementary data are available at Bioinformatics online.
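The quantity underlying the reversibility discussion is the standard relation between the transformed reaction Gibbs energy and metabolite concentrations; the paper's exact index definition is not reproduced here, so the following is only the textbook relation it builds on.

```latex
% Transformed Gibbs energy of reaction at concentrations c_i with
% stoichiometric coefficients \nu_i (R: gas constant, T: temperature):
\Delta_r G' = \Delta_r G'^{\circ} + RT \ln \prod_i c_i^{\nu_i}
% A reaction behaves reversibly in vivo if \Delta_r G' can change sign
% as each c_i ranges over the physiological window 3\,\mu\mathrm{M} to 3\,\mathrm{mM}.
```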
An integrated open framework for thermodynamics of reactions that combines accuracy and coverage
Noor, Elad; Bar-Even, Arren; Flamholz, Avi; Lubling, Yaniv; Davidi, Dan; Milo, Ron
2012-01-01
Motivation: The laws of thermodynamics describe a direct, quantitative relationship between metabolite concentrations and reaction directionality. Despite great efforts, thermodynamic data suffer from limited coverage, scattered accessibility and non-standard annotations. We present a framework for unifying thermodynamic data from multiple sources and demonstrate two new techniques for extrapolating the Gibbs energies of unmeasured reactions and conditions. Results: Both methods account for changes in cellular conditions (pH, ionic strength, etc.) by using linear regression over the ΔG○ of pseudoisomers and reactions. The Pseudoisomeric Reactant Contribution method systematically infers compound formation energies using measured K′ and pKa data. The Pseudoisomeric Group Contribution method extends the group contribution method and achieves a high coverage of unmeasured reactions. We define a continuous index that predicts the reversibility of a reaction under a given physiological concentration range. In the characteristic physiological range 3μM–3mM, we find that roughly half of the reactions in Escherichia coli's metabolism are reversible. These new tools can increase the accuracy of thermodynamic-based models, especially in non-standard pH and ionic strengths. The reversibility index can help modelers decide which reactions are reversible in physiological conditions. Availability: Freely available on the web at: http://equilibrator.weizmann.ac.il. Website implemented in Python, MySQL, Apache and Django, with all major browsers supported. The framework is open-source (code.google.com/p/milo-lab), implemented in pure Python and tested mainly on Linux. Contact: ron.milo@weizmann.ac.il Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22645166
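To make the reversibility check concrete: given a standard transformed reaction energy ΔG′°, the transformed energy at given concentrations is ΔG′ = ΔG′° + RT ln Q. The Python sketch below is a simplified sign-change test over the 3 μM–3 mM concentration box, not the paper's exact continuous index; the example ΔG′° value is invented.

```python
import math

R = 8.314e-3  # kJ/(mol*K)
T = 298.15    # K

def dG_prime(dG0_prime, substrate_conc, product_conc):
    """Transformed reaction energy: dG' = dG'0 + RT ln(Q), concentrations in M."""
    lnQ = (sum(math.log(c) for c in product_conc)
           - sum(math.log(c) for c in substrate_conc))
    return dG0_prime + R * T * lnQ

def is_reversible(dG0_prime, n_sub, n_prod, lo=3e-6, hi=3e-3):
    """Reversible if dG' can take both signs within the concentration box."""
    # Most negative dG': substrates at the high bound, products at the low bound.
    best = dG_prime(dG0_prime, [hi] * n_sub, [lo] * n_prod)
    # Most positive dG': the opposite corner of the box.
    worst = dG_prime(dG0_prime, [lo] * n_sub, [hi] * n_prod)
    return best < 0 < worst

# Hypothetical 1:1 reaction with dG'0 = +5 kJ/mol: concentrations can flip the sign.
print(is_reversible(5.0, n_sub=1, n_prod=1))  # True
```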
Development of new on-line statistical program for the Korean Society for Radiation Oncology
Song, Si Yeol; Ahn, Seung Do; Chung, Weon Kuu; Choi, Eun Kyung; Cho, Kwan Ho
2015-01-01
Purpose To develop a new on-line statistical program for the Korean Society for Radiation Oncology (KOSRO) to collect and extract medical data in radiation oncology more efficiently. Materials and Methods The statistical program is a web-based program. The directory was placed in a sub-folder of the KOSRO homepage and its web address is http://www.kosro.or.kr/asda. The server's operating system is Linux and the web server is the Apache HTTP server. MySQL is adopted as the database (DB) server and PHP is the dedicated scripting language. Each ID and password is controlled independently, and all screen pages for data input or analysis are designed to be user-friendly. Drop-down menus are used extensively for the convenience of users and the consistency of data analysis. Results The year of data is one of the top categories, and the main topics include human resources, equipment, clinical statistics, specialized treatment and research achievements. Each topic or category has several subcategorized topics. A real-time on-line report of the analysis is produced immediately after each data entry, and the administrator is able to monitor the status of data input at each hospital. Backups of the data as spreadsheets can be accessed by the administrator and used for academic work by any member of the KOSRO. Conclusion The new on-line statistical program was developed to collect data from radiation oncology departments nationwide. The intuitive screens and consistent input structure are expected to promote data entry by member hospitals, and the annual statistics should be a cornerstone of advances in radiation oncology. PMID:26157684
Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E
2015-11-01
The value of a teaching case repository in radiology training programs is immense. Allocating resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant, and cost-effective solutions are desirable. We chose the Medical Imaging Resource Center (MIRC), offered by RSNA for free, to build our teaching file. For the hardware, we chose the Raspberry Pi, developed by the Raspberry Pi Foundation: a small single-board computer designed as a low-cost computer for schools and also used in projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operating system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySQL (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of the software was smooth. The Raspberry Pi handled the task of hosting the teaching file repository for our division very well. Uptime was logged at 100%, and loading times were similar to other MIRC sites available online. We set up two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost single-board computer (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.
Development of new on-line statistical program for the Korean Society for Radiation Oncology.
Song, Si Yeol; Ahn, Seung Do; Chung, Weon Kuu; Shin, Kyung Hwan; Choi, Eun Kyung; Cho, Kwan Ho
2015-06-01
To develop a new on-line statistical program for the Korean Society for Radiation Oncology (KOSRO) to collect and extract medical data in radiation oncology more efficiently. The statistical program is a web-based program. The directory was placed in a sub-folder of the KOSRO homepage and its web address is http://www.kosro.or.kr/asda. The server's operating system is Linux and the web server is the Apache HTTP server. MySQL is adopted as the database (DB) server and PHP is the dedicated scripting language. Each ID and password is controlled independently, and all screen pages for data input or analysis are designed to be user-friendly. Drop-down menus are used extensively for the convenience of users and the consistency of data analysis. The year of data is one of the top categories, and the main topics include human resources, equipment, clinical statistics, specialized treatment and research achievements. Each topic or category has several subcategorized topics. A real-time on-line report of the analysis is produced immediately after each data entry, and the administrator is able to monitor the status of data input at each hospital. Backups of the data as spreadsheets can be accessed by the administrator and used for academic work by any member of the KOSRO. The new on-line statistical program was developed to collect data from radiation oncology departments nationwide. The intuitive screens and consistent input structure are expected to promote data entry by member hospitals, and the annual statistics should be a cornerstone of advances in radiation oncology.
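The input-then-report pattern described in these two records is straightforward to sketch. The illustration below uses Python with Flask and SQLite rather than the actual PHP/MySQL stack, and every table, route and field name is hypothetical.

```python
# Minimal sketch of "enter data, get a real-time aggregate report back".
# Stack and schema are stand-ins, not the KOSRO system's implementation.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "kosro_stats.db"

def db():
    conn = sqlite3.connect(DB)
    conn.execute("""CREATE TABLE IF NOT EXISTS clinical_stats (
        hospital TEXT, year INTEGER, patients INTEGER)""")
    return conn

@app.post("/stats")
def submit():
    row = request.get_json()   # e.g. {"hospital": "A", "year": 2014, "patients": 512}
    with db() as conn:
        conn.execute("INSERT INTO clinical_stats VALUES (?, ?, ?)",
                     (row["hospital"], row["year"], row["patients"]))
    return report(row["year"])  # real-time report immediately after data entry

@app.get("/stats/<int:year>")
def report(year):
    with db() as conn:
        n, total = conn.execute(
            "SELECT COUNT(*), SUM(patients) FROM clinical_stats WHERE year=?",
            (year,)).fetchone()
    return jsonify(year=year, hospitals=n, patients=total or 0)

if __name__ == "__main__":
    app.run()
```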
FLEX: A Modular Software Architecture for Flight License Exam
NASA Astrophysics Data System (ADS)
Arsan, Taner; Saka, Hamit Emre; Sahin, Ceyhun
This paper is about the design and implementation of an examination system based on the World Wide Web, called FLEX (Flight License Exam Software). We designed and implemented a flexible and modular software architecture. The implemented system has basic features such as adding questions to the system, building exams from these questions, and letting students take these exams. There are three types of users with different authorizations: the system administrator, operators, and students. The system administrator operates and maintains the system, and also audits system integrity; the administrator can neither change the results of exams nor take an exam. The operator module includes instructors. Operators have privileges such as preparing exams, entering questions, and changing existing questions. Students can log on to the system and access exams via a certain URL. Another characteristic of our system is that operators and the system administrator are not able to delete questions, for security reasons. Exam questions are inserted into the database under their topics and lectures, so operators and the system administrator can easily choose questions. Taken together, the FLEX software gives many students the opportunity to take exams at the same time under safe, reliable and user-friendly conditions. It is also a reliable examination system for authorized aviation administration companies. The system was developed on the LAMP web development platform (Linux, the Apache web server, MySQL, and the object-oriented scripting language PHP), and the page structures were developed with a Content Management System (CMS).
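The no-delete policy described above can be enforced in the database itself rather than in application code. Here is a minimal sketch using SQLite; the FLEX schema is not published, so the table, columns and trigger below are hypothetical.

```python
# Questions are organized by lecture/topic; updates are allowed, deletion is
# blocked at the database level by a trigger (illustrative schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE questions (
    id INTEGER PRIMARY KEY,
    lecture TEXT NOT NULL,
    topic TEXT NOT NULL,
    body TEXT NOT NULL
);
-- Operators may update questions, but nobody may delete them.
CREATE TRIGGER no_delete BEFORE DELETE ON questions
BEGIN
    SELECT RAISE(ABORT, 'questions may not be deleted');
END;
""")
conn.execute("INSERT INTO questions (lecture, topic, body) VALUES (?, ?, ?)",
             ("Navigation", "VOR", "What does VOR stand for?"))
conn.execute("UPDATE questions SET body = ? WHERE id = 1",
             ("Define VHF omnidirectional range (VOR).",))   # allowed
try:
    conn.execute("DELETE FROM questions WHERE id = 1")       # blocked
except sqlite3.IntegrityError as e:
    print("blocked:", e)
```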
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
StreptomycesInforSys: A web-enabled information repository
Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P
2012-01-01
Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, covering phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information with combinations of search options to aid in the efficient screening of new isolates, helping with their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to serve user queries effectively. The use of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware (PHP, MySQL and Apache) is foreseen to reduce running and maintenance costs. Availability www.sis.biowaves.org PMID:23275736
StreptomycesInforSys: A web-enabled information repository.
Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P
2012-01-01
Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, covering phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information with combinations of search options to aid in the efficient screening of new isolates, helping with their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to serve user queries effectively. The use of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware (PHP, MySQL and Apache) is foreseen to reduce running and maintenance costs. www.sis.biowaves.org.
NASA Astrophysics Data System (ADS)
Carniel, Roberto; Di Cecca, Mauro; Jaquet, Olivier
2006-05-01
In the framework of the EU-funded project "Multi-disciplinary monitoring, modelling and forecasting of volcanic hazard" (MULTIMO), multiparametric data have been recorded at the MULTIMO station in Montserrat. Moreover, several other long time series, recorded at Montserrat and at other volcanoes, have been acquired in order to test stochastic and deterministic methodologies under development. Creating a general framework to handle data efficiently is a considerable task even for homogeneous data; in the case of heterogeneous data, it becomes a major issue. A need therefore arose for a consistent way of browsing such a heterogeneous dataset in a user-friendly way. Additionally, a framework for applying the calculation of the developed dynamical parameters to the data series was needed in order to easily keep these parameters under control, e.g. for monitoring, research or forecasting purposes. The solution we present is completely based on Open Source software, including the Linux operating system, the MySQL database management system, the Apache web server, the Zope application server, the Scilab math engine, the Plone content management framework, and the Unified Modelling Language. From the user's point of view, the main advantage is the possibility of browsing through datasets recorded on different volcanoes, with different instruments, at different sampling frequencies, and stored in different formats, all via a consistent, user-friendly interface that transparently runs queries against the database, gets the data from the main storage units, generates the graphs and produces dynamically generated web pages to interact with the user. The involvement of third parties for continuing the development in the Open Source philosophy and/or extending the application fields is now sought.
The Open Data Repository's Data Publisher
NASA Technical Reports Server (NTRS)
Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.
2015-01-01
Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.
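Since the abstract only states that a REST interface is planned, the following Python sketch shows what programmatic access could look like; the base URL, token scheme, database identifiers and field names are all hypothetical.

```python
# Hypothetical client for a planned ODR-style REST API (not a published API).
import requests

BASE = "https://example.org/odr/api"   # hypothetical deployment URL

def fetch_records(database_id, token=None, page=1):
    """Fetch one page of records; permission failures surface as 401/403."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = requests.get(f"{BASE}/databases/{database_id}/records",
                        params={"page": page}, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

public = fetch_records("mineral-spectra")                 # public data, no token
private = fetch_records("embargoed-set", token="TOKEN")   # registered-user access
```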
NASA Astrophysics Data System (ADS)
Hasan, B.; Hasbullah, H.; Elvyanti, S.; Purnama, W.
2018-02-01
The creative industry is the utilization of the creativity, skill and talent of individuals to create wealth and jobs by generating and exploiting individual creative power. In the field of design, the use of information technology can spur the creative industry; the development of the creative design industry will accommodate much creative energy, allowing people to pour out their ideas and creativity without limitation. Open Source software has been a trend in information technology since the 1990s. Examples of applications developed with the Open Source approach are the Apache web server, the Linux and Android operating systems, and the MySQL database. This entrepreneurship-based community service activity aims to: 1) profile UPI students' entrepreneurial knowledge of creative-industry businesses in software, using web software development and educational games; 2) create a model for fostering entrepreneurship based on the creative software industry, leveraging web development and educational games; and 3) conduct training and provide guidance for UPI students who want to develop a business in the software segment of the creative industry. The entrepreneurship-based PKM activity was attended by about 35 DPTE FPTK UPI students with high entrepreneurial interest and competence in information technology. The outcome of the entrepreneurship PKM is the emergence of entrepreneurs from among the students interested in the software segment of the creative industry, able to open up business opportunities for themselves and others. Another outcome of this entrepreneurship PKM activity is the publication of articles in national/international indexed journals.
Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms
Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S.
2016-01-01
Voluminous information is available from karyological studies of fishes; however, limited efforts have been made to compile and curate the available karyological data in digital form. The ‘Fish Karyome’ database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. But the database had limitations, since it covered data only on Indian finfishes, with limited search options. In response to user feedback and the database's utility in fish cytogenetic studies, Fish Karyome was upgraded using Linux, Apache, MySQL and PHP (hypertext preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information from across the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information by habitat, family, conservation status and chromosome number. The system also displays the chromosome numbers of model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility is provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome PMID:26980518
Nascimento, Leandro Costa; Salazar, Marcela Mendes; Lepikson-Neto, Jorge; Camargo, Eduardo Leal Oliveira; Parreiras, Lucas Salera; Carazzolle, Marcelo Falsarella
2017-01-01
Tree species of the genus Eucalyptus are the most valuable and widely planted hardwoods in the world. Given the economic importance of Eucalyptus trees, much effort has been made towards the generation of specimens with superior forestry properties that can deliver high-quality feedstocks, customized to the industry's needs for both cellulosic (paper) and lignocellulosic biomass production. In line with these efforts, large sets of molecular data have been generated by several scientific groups, providing invaluable information that can be applied in the development of improved specimens. In order to fully explore the potential of the available datasets, a public database that provides integrated access to genomic and transcriptomic data from Eucalyptus is needed. EUCANEXT is a database that analyses and integrates publicly available Eucalyptus molecular data, such as the E. grandis genome assembly and predicted genes, ESTs from several species and digital gene expression from 26 RNA-Seq libraries. The database has been implemented on a Fedora Linux machine running MySQL and Apache, while Perl CGI was used for the web interfaces. EUCANEXT provides a user-friendly web interface for easy access and analysis of publicly available molecular data from Eucalyptus species. This integrated database allows for complex searches by gene name, keyword or sequence similarity and is publicly accessible at http://www.lge.ibi.unicamp.br/eucalyptusdb. Through EUCANEXT, users can perform complex analyses to identify genes related to traits of interest using RNA-Seq libraries and tools for differential expression analysis. Moreover, the entire bioinformatics pipeline described here, including the database schema and Perl scripts, is readily available and can be applied to any genomic and transcriptomic project, regardless of the organism. Database URL: http://www.lge.ibi.unicamp.br/eucalyptusdb PMID:29220468
Lowering the Barrier for Standards-Compliant and Discoverable Hydrological Data Publication
NASA Astrophysics Data System (ADS)
Kadlec, J.
2013-12-01
The growing need for sharing and integrating hydrological and climate data across multiple organizations has resulted in the development of distributed, services-based, standards-compliant hydrological data management and data hosting systems. The problem with these systems is complicated set-up and deployment: many existing systems assume that the data publisher has remote-desktop access to a locally managed server and experience with computer network setup. For corporate websites, shared web hosting services with limited root access provide an inexpensive, dynamic web presence solution using the Linux, Apache, MySQL and PHP (LAMP) software stack. In this paper, we hypothesize that a web hosting service provides an optimal, low-cost solution for hydrological data hosting. We propose a software architecture for a standards-compliant, lightweight and easy-to-deploy hydrological data management system that can be deployed on the majority of existing shared internet web hosting services. The architecture and design are validated by developing HydroServer Lite: a PHP- and MySQL-based hydrological data hosting package that is fully standards-compliant and compatible with the Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI) hydrologic information system. It is already being used for the management of field data collection by students of the McCall Outdoor Science School in Idaho. For testing, the HydroServer Lite software was installed on multiple free and low-cost web hosting sites, including GoDaddy, Bluehost and 000webhost. The number of steps required to set up the server is compared with the number of steps required to set up other standards-compliant hydrologic data hosting systems, including THREDDS, istSOS and MapServer SOS.
Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.
Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S
2016-01-01
A voluminous information is available on karyological studies of fishes; however, limited efforts were made for compilation and curation of the available karyological data in a digital form. 'Fish Karyome' database was the preliminary attempt to compile and digitize the available karyological information on finfishes belonging to the Indian subcontinent. But the database had limitations since it covered data only on Indian finfishes with limited search options. Perceiving the feedbacks from the users and its utility in fish cytogenetic studies, the Fish Karyome database was upgraded by applying Linux, Apache, MySQL and PHP (pre hypertext processor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information over the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species supported with 253 published articles and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc. supported by fish and karyotype images through interactive tools. It also enables the users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome number in model organisms, protocol for chromosome preparation and allied techniques and glossary of cytogenetic terms. A data submission facility has also been provided through data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome. © The Author(s) 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Moreau, N.; Dubernet, M. L.
2006-07-01
Basecol is a combination of a website (using PHP and HTML) and a MySQL database concerning molecular ro-vibrational transitions induced by collisions with atoms or molecules. This database has been created in view of the scientific preparation of the Heterodyne Instrument for the Far-Infrared on board the Herschel Space Observatory (HSO). Basecol offers an access to numerical and bibliographic data through various output methods such as ASCII, HTML or VOTable (which is a first step towards a VO compliant system). A web service using Apache Axis has been developed in order to provide a direct access to data for external applications.
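VOTable output makes such data easy to consume from scripts. Below is a sketch using astropy's VOTable parser; the query URL and its parameters are hypothetical, since the abstract does not give the actual Basecol endpoint syntax.

```python
# Fetch a (hypothetical) VOTable result and load it as an astropy Table.
from urllib.request import urlretrieve
from astropy.io.votable import parse_single_table

url = "https://basecol.example/query?molecule=CO&collider=H2&format=votable"  # hypothetical
filename, _ = urlretrieve(url, "basecol_co_h2.xml")

table = parse_single_table(filename).to_table()   # astropy Table
print(table.colnames)   # e.g. transition labels, temperature, rate coefficient
```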
Nuclear Data Online Services at Peking University
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, T.S.; Guo, Z.Y.; Ye, W.G.
2005-05-24
The Institute of Heavy Ion Physics at Peking University has developed a new nuclear data online services software package. Through the web site (http://ndos.nst.pku.edu.cn), it offers online access to main relational nuclear databases: five evaluated neutron libraries (BROND, CENDL, ENDF, JEF, JENDL), the ENSDF library, the EXFOR library, the IAEA photonuclear library and the charged particle data of the FENDL library. This software allows the comparison and graphic representations of the different data sets. The computer programs of this package are based on the Linux implementation of PHP and the MySQL software.
Nuclear Data Online Services at Peking University
NASA Astrophysics Data System (ADS)
Fan, T. S.; Guo, Z. Y.; Ye, W. G.; Liu, W. L.; Liu, T. J.; Liu, C. X.; Chen, J. X.; Tang, G. Y.; Shi, Z. M.; Huang, X. L.; Chen, J. E.
2005-05-01
The Institute of Heavy Ion Physics at Peking University has developed a new nuclear data online services software package. Through the web site (http://ndos.nst.pku.edu.cn), it offers online access to main relational nuclear databases: five evaluated neutron libraries (BROND, CENDL, ENDF, JEF, JENDL), the ENSDF library, the EXFOR library, the IAEA photonuclear library and the charged particle data of the FENDL library. This software allows the comparison and graphic representations of the different data sets. The computer programs of this package are based on the Linux implementation of PHP and the MySQL software.
NOAO observing proposal processing system
NASA Astrophysics Data System (ADS)
Bell, David J.; Gasson, David; Hartman, Mia
2002-12-01
Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online. The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.
Teaching a laboratory-intensive online introductory electronics course
NASA Astrophysics Data System (ADS)
Markes, Mark
2008-03-01
Most current online courses provide little or no hands-on laboratory content. This talk describes the development of, and initial experiences with, an introductory online electronics course with significant hands-on laboratory content. The course is delivered using a Linux-based Apache web server, a Darwin Streaming Server, a SMART Board interactive whiteboard, SMART Notebook software and a video camcorder. The laboratory uses primarily the Global Specialties PB-505 trainer and a Tenma 20 MHz oscilloscope, which are provided to the students for the duration of the course and then returned. Testing is performed using the Blackboard course management software.
Construction of Database for Pulsating Variable Stars
NASA Astrophysics Data System (ADS)
Chen, B. Q.; Yang, M.; Jiang, B. W.
2011-07-01
A database of pulsating variable stars has been constructed so that Chinese astronomers can study variable stars conveniently. The database currently includes about 230,000 variable stars in the Galactic bulge, the LMC and the SMC, observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and light curves in the database by the right ascension and declination of an object. More data will be incorporated into the database.
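The positional search described here reduces to a range query on right ascension and declination. A minimal sketch follows, with SQLite standing in for MySQL and hypothetical table and column names.

```python
import math
import sqlite3

def box_search(conn, ra, dec, radius_deg=0.1):
    """Box query around (ra, dec) in degrees; a production system would
    refine the box with a true angular-distance cut."""
    # The RA window widens with |dec| because RA circles shrink by cos(dec).
    dra = radius_deg / max(math.cos(math.radians(dec)), 1e-6)
    return conn.execute(
        "SELECT star_id, ra, dec FROM variable_stars "
        "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra - dra, ra + dra, dec - radius_deg, dec + radius_deg)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable_stars (star_id TEXT, ra REAL, dec REAL)")
conn.execute("INSERT INTO variable_stars VALUES ('OGLE-LMC-CEP-0001', 80.1, -69.2)")
print(box_search(conn, ra=80.0, dec=-69.2, radius_deg=0.2))
```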
SearchGUI: An open-source graphical user interface for simultaneous OMSSA and X!Tandem searches.
Vaudel, Marc; Barsnes, Harald; Berven, Frode S; Sickmann, Albert; Martens, Lennart
2011-03-01
The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com), for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissive Apache2 license, SearchGUI is supported on Windows, Linux and OSX. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize
2010-01-01
Background Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. Results In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. Conclusions CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu. PMID:20946609
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yung, J; Stefan, W; Reeve, D
2015-06-15
Purpose: Phantom measurements allow the performance of magnetic resonance (MR) systems to be evaluated. The American Association of Physicists in Medicine (AAPM) Report No. 100, Acceptance Testing and Quality Assurance Procedures for MR Imaging Facilities, the American College of Radiology (ACR) MR Accreditation Program phantom testing, and the ACR MRI quality control (QC) program documents help to outline specific tests for establishing system performance baselines as well as system stability over time. Analyzing and processing tests from multiple systems can be time-consuming for medical physicists. Besides determining whether tests are within predetermined limits or criteria, monitoring longitudinal trends can also help prevent costly downtime of systems during clinical operation. In this work, a semi-automated QC program was developed to analyze and record measurements in a database that allows easy access to historical data. Methods: Image analysis was performed on 27 different MR systems of 1.5T and 3.0T field strengths from GE and Siemens. Recommended measurements involved the ACR MRI Accreditation Phantom, spherical homogeneous phantoms, and a phantom with a uniform hole pattern. Measurements assessed geometric accuracy and linearity, position accuracy, image uniformity, signal, noise, ghosting, transmit gain, center frequency, and magnetic field drift. The program was designed with open source tools, employing Linux, Apache, the MySQL database and the Python programming language for the front and back ends. Results: Processing time for each image is <2 seconds. Figures are produced to show the regions of interest (ROIs) used for analysis. Historical data can be reviewed to compare previous years' data and to inspect for trends. Conclusion: An MRI quality assurance and QC program is necessary for maintaining high-quality, ACR-accredited MR programs. A reviewable database of phantom measurements assists medical physicists with the processing and monitoring of large datasets. Longitudinal data can reveal trends that, although within passing criteria, indicate underlying system issues.
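As one concrete example of the measurements listed, image uniformity on a homogeneous phantom is commonly scored as the percent integral uniformity, PIU = 100 × (1 − (Smax − Smin)/(Smax + Smin)). The Python sketch below follows that definition; the square smoothing window is a simplification of the small circular search ROIs the ACR procedure prescribes.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def piu(image, center, radius, probe=5):
    """Percent integral uniformity over a circular ROI.
    Smax/Smin are means over small windows, not single noisy pixels."""
    local_mean = uniform_filter(image.astype(float), size=2 * probe + 1)
    yy, xx = np.indices(image.shape)
    inside = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    vals = local_mean[inside]
    smax, smin = vals.max(), vals.min()
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))

# Synthetic homogeneous-phantom slice: flat signal plus Gaussian noise.
rng = np.random.default_rng(0)
img = 1000.0 + rng.normal(0, 5, (256, 256))
print(f"PIU = {piu(img, center=(128, 128), radius=80):.1f}%")  # ACR pass: >=87.5% below 3T
```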
The Open Data Repository's Data Publisher
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.
2015-12-01
Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A, Mars Science Laboratory Investigations and University of Arizona Geosciences.
Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize.
Kelley, Rowena Y; Gresham, Cathy; Harper, Jonathan; Bridges, Susan M; Warburton, Marilyn L; Hawkins, Leigh K; Pechanova, Olga; Peethambaran, Bela; Pechan, Tibor; Luthe, Dawn S; Mylroie, J E; Ankala, Arunkanth; Ozkan, Seval; Henry, W B; Williams, W P
2010-10-07
Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu.
The Ensembl Web Site: Mechanics of a Genome Browser
Stalker, James; Gibbins, Brian; Meidl, Patrick; Smith, James; Spooner, William; Hotz, Hans-Rudolf; Cox, Antony V.
2004-01-01
The Ensembl Web site (http://www.ensembl.org/) is the principal user interface to the data of the Ensembl project, and currently serves >500,000 pages (∼2.5 million hits) per week, providing access to >80 GB (gigabyte) of data to users in more than 80 countries. Built atop an open-source platform comprising Apache/mod_perl and the MySQL relational database management system, it is modular, extensible, and freely available. It is being actively reused and extended in several different projects, and has been downloaded and installed in companies and academic institutions worldwide. Here, we describe some of the technical features of the site, with particular reference to its dynamic configuration that enables it to handle disparate data from multiple species. PMID:15123591
The Ensembl Web site: mechanics of a genome browser.
Stalker, James; Gibbins, Brian; Meidl, Patrick; Smith, James; Spooner, William; Hotz, Hans-Rudolf; Cox, Antony V
2004-05-01
The Ensembl Web site (http://www.ensembl.org/) is the principal user interface to the data of the Ensembl project, and currently serves >500,000 pages (approximately 2.5 million hits) per week, providing access to >80 GB (gigabyte) of data to users in more than 80 countries. Built atop an open-source platform comprising Apache/mod_perl and the MySQL relational database management system, it is modular, extensible, and freely available. It is being actively reused and extended in several different projects, and has been downloaded and installed in companies and academic institutions worldwide. Here, we describe some of the technical features of the site, with particular reference to its dynamic configuration that enables it to handle disparate data from multiple species.
Introduction to an Open Source Internet-Based Testing Program for Medical Student Examinations
2009-01-01
The author developed a freely available open source internet-based testing program for medical examination. PHP and JavaScript were used as the programming languages and PostgreSQL as the database management system on an Apache web server and Linux operating system. The system approach was that a super user inputs the items, each school administrator inputs the examinees' information, and examinees access the system. The examinee's score is displayed immediately after the examination, along with an item analysis. The set-up of the system, beginning with installation, is described. This may help medical professors to easily adopt an internet-based testing system for medical education. PMID:20046457
Introduction to an open source internet-based testing program for medical student examinations.
Lee, Yoon-Hwan
2009-12-20
The author developed a freely available open source internet-based testing program for medical examination. PHP and JavaScript were used as the programming languages and PostgreSQL as the database management system on an Apache web server and Linux operating system. The system approach was that a super user inputs the items, each school administrator inputs the examinees' information, and examinees access the system. The examinee's score is displayed immediately after the examination, along with an item analysis. The set-up of the system, beginning with installation, is described. This may help medical professors to easily adopt an internet-based testing system for medical education.
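The item analysis mentioned in both records typically reports, per item, a difficulty (proportion correct) and a discrimination (point-biserial correlation of the item with the rest of the test). The following is a self-contained illustration of both statistics; the program's exact formulas are not given in the abstract, so this is a standard textbook version.

```python
import math

def point_biserial(item, rest):
    """Correlation between a 0/1 item and the rest-of-test score."""
    n = len(item)
    mean_r = sum(rest) / n
    sd_r = math.sqrt(sum((x - mean_r) ** 2 for x in rest) / n)
    p = sum(item) / n
    if sd_r == 0 or p in (0.0, 1.0):
        return 0.0
    mean1 = sum(r for x, r in zip(item, rest) if x) / (n * p)  # mean among correct
    return (mean1 - mean_r) / sd_r * math.sqrt(p / (1 - p))

def item_analysis(responses):
    """responses: list of per-examinee lists of 0/1 item scores."""
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]
    out = []
    for i in range(n_items):
        item = [r[i] for r in responses]
        p = sum(item) / len(item)                      # difficulty
        rest = [t - x for t, x in zip(totals, item)]   # total minus this item
        out.append((p, point_biserial(item, rest)))
    return out

scores = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 1]]
for i, (p, rpb) in enumerate(item_analysis(scores), 1):
    print(f"item {i}: difficulty={p:.2f}, discrimination={rpb:.2f}")
```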
The Data Acquisition System of the Stockholm Educational Air Shower Array
NASA Astrophysics Data System (ADS)
Hofverberg, P.; Johansson, H.; Pearce, M.; Rydstrom, S.; Wikstrom, C.
2005-12-01
The Stockholm Educational Air Shower Array (SEASA) project is deploying an array of plastic scintillator detector stations on school roofs in the Stockholm area. Signals from GPS satellites are used to time-synchronise signals from the widely separated detector stations, allowing cosmic-ray air showers to be identified and studied. A low-cost and highly scalable data acquisition system has been produced using embedded Linux processors which communicate station data to a central server running a MySQL database. Air shower data can be visualised in real time using a Java applet client. It is also possible to query the database and manage detector stations from the client. In this paper, the design and performance of the system are described.
Construction of the Database for Pulsating Variable Stars
NASA Astrophysics Data System (ADS)
Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei
2012-01-01
A database of pulsating variable stars has been constructed to support the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, LMC and SMC, observed over a period of about 10 years by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided for searching the photometric data and light curves in the database by the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-08-09
Sophia Daemon Version 12 contains the code that is exclusively used by the sophiad application. It runs as a service on a Linux host and analyzes network traffic obtained from libpcap, producing a network fingerprint based on hosts and channels. Sophia Daemon Version 12 can, if desired by the user, produce alerts when its fingerprint changes. It can receive data from another Sophia Daemon or raw packet data, and it can output data to another Sophia Daemon Version 12, OglNet Version 12 or MySQL. Sophia Daemon Version 12 runs in a passive, real-time manner that allows it to be used on a SCADA network. Its network fingerprint is designed to be applicable to SCADA networks rather than general IT networks.
Schacht Hansen, M; Dørup, J
2001-01-01
The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.
Hansen, Michael Schacht
2001-01-01
Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946
Survey of MapReduce frame operation in bioinformatics.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
2014-07-01
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in the next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as the future works on parallel computing in bioinformatics. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
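The canonical MapReduce pattern for sequence data is easy to sketch with Hadoop Streaming, where the mapper and reducer are ordinary scripts reading standard input. The k-mer counter below is an illustration of the paradigm, not taken from the survey; the file name is hypothetical.

```python
#!/usr/bin/env python3
# kmer_mr.py -- one file acting as Hadoop Streaming mapper or reducer.
import sys
from itertools import groupby

K = 8          # k-mer length (illustrative choice)
TAB = "\t"

def mapper():
    """Emit (kmer, 1) for every k-mer of every read on stdin."""
    for line in sys.stdin:
        read = line.strip().upper()
        for i in range(len(read) - K + 1):
            print(read[i:i + K] + TAB + "1")

def reducer():
    """Sum counts per k-mer; Streaming delivers input sorted by key."""
    for kmer, rows in groupby(sys.stdin, key=lambda l: l.split(TAB)[0]):
        total = sum(int(r.split(TAB)[1]) for r in rows)
        print(kmer + TAB + str(total))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The same pair can be tested locally with `cat reads.txt | python kmer_mr.py map | sort | python kmer_mr.py reduce`; on a cluster it would be submitted through the Hadoop Streaming jar with its `-mapper`, `-reducer`, `-input` and `-output` arguments.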
BioPepDB: an integrated data platform for food-derived bioactive peptides.
Li, Qilin; Zhang, Chao; Chen, Hongjun; Xue, Jitong; Guo, Xiaolei; Liang, Ming; Chen, Ming
2018-03-12
Food-derived bioactive peptides play critical roles in regulating most biological processes and have considerable biological, medical and industrial importance. However, a large amount of active peptide data, including sequences, functions, sources, commercial product information, references and other information, is poorly integrated. BioPepDB is a searchable database of food-derived bioactive peptides and their related articles, including more than four thousand bioactive peptide entries. Moreover, BioPepDB provides modules for prediction and hydrolysis simulation for discovering novel peptides. It can serve as a reference database for investigating the function of different bioactive peptides. BioPepDB is available at http://bis.zju.edu.cn/biopepdbr/ . The web page utilises Apache, PHP5 and MySQL to provide the user interface for accessing the database and predicting novel peptides. The database itself is operated on a specialised server.
Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki
2009-10-01
Recently, lifestyle-related diseases have become an object of public concern, and at the same time people are becoming more health-conscious. We assume that insufficient circulation of knowledge about dietary habits is an essential factor behind lifestyle-related diseases. This paper focuses on everyday meals, close to our daily life, and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed with a Web-based frontend and provides multi-user services and menu-information sharing capabilities like those of social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (a scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied, with the problem understood as multidimensional 0-1 integer programming.
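A minimal sketch of that 0-1 formulation: a menu is a binary vector over candidate dishes, and a genetic algorithm searches for the selection closest to nutrient targets. The dish table, targets and GA parameters below are invented for illustration, not taken from the paper.

```python
import random

random.seed(1)
#          name          kcal  protein_g   (made-up values)
DISHES = [("rice", 250, 4), ("miso soup", 60, 4), ("grilled fish", 200, 22),
          ("salad", 80, 2), ("tofu", 110, 10), ("tempura", 450, 12),
          ("natto", 100, 8), ("fruit", 90, 1)]
TARGET = {"kcal": 700, "protein": 30}   # one meal's targets (illustrative)

def fitness(menu):
    """Negative deviation from targets; higher is better."""
    kcal = sum(d[1] for d, g in zip(DISHES, menu) if g)
    prot = sum(d[2] for d, g in zip(DISHES, menu) if g)
    return -(abs(kcal - TARGET["kcal"]) + 10 * abs(prot - TARGET["protein"]))

def evolve(pop_size=40, gens=100, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in DISHES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DISHES))     # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g ^ (random.random() < p_mut) for g in child])  # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([d[0] for d, g in zip(DISHES, best) if g], fitness(best))
```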
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
2010-01-01
Background Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. Description An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Conclusions Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms. PMID:21210976
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics.
Taylor, Ronald C
2010-12-21
Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employs Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms.
Ban, Nobuhiko; Takahashi, Fumiaki; Ono, Koji; Hasegawa, Takayuki; Yoshitake, Takayasu; Katsunuma, Yasushi; Sato, Kaoru; Endo, Akira; Kai, Michiaki
2011-07-01
A web-based dose computation system, WAZA-ARI, is being developed for patients undergoing X-ray CT examinations. The system is implemented in Java on a Linux server running Apache Tomcat. Users choose scanning options and input parameters via a web browser over the Internet. Dose coefficients, which were calculated in a Japanese adult male phantom (JM phantom), are retrieved upon user request and are summed over the scan range specified by the user to estimate a normalised dose. Tissue doses are finally computed based on the radiographic exposure (mA s) and the pitch factor. While dose coefficients are currently available only for a limited number of CT scanner models, the system has achieved a high degree of flexibility and scalability without the use of commercial software.
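The computation described reduces to a small amount of arithmetic, sketched below in Python. The coefficient values are invented, and the exact scaling by mAs and pitch (here a simple multiply and divide) is an assumption for illustration rather than WAZA-ARI's published formula.

```python
# Hedged sketch of the dose computation described above: per-slice dose
# coefficients are summed over the chosen scan range, then scaled by the
# tube current-time product (mAs) and the pitch factor. The coefficients
# and the exact scaling are illustrative assumptions, not WAZA-ARI's code.

def tissue_dose(coefficients, scan_start, scan_end, mAs, pitch):
    """Estimate a tissue dose (mGy) from per-slice normalised coefficients."""
    normalised = sum(coefficients[scan_start:scan_end + 1])  # mGy per mAs
    return normalised * mAs / pitch

# Hypothetical per-slice coefficients for one tissue along the body axis.
lung_coeff = [0.0, 0.01, 0.04, 0.09, 0.09, 0.05, 0.01, 0.0]
print(tissue_dose(lung_coeff, scan_start=2, scan_end=5, mAs=120, pitch=1.375))
```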
Khan, Mohd Shoaib; Gupta, Amit Kumar; Kumar, Manoj
2016-01-01
To develop a computational resource for viral epigenomic methylation profiles across diverse diseases. Methylation patterns of Epstein-Barr virus and hepatitis B virus genomic regions are provided through a web platform developed using the open-source Linux-Apache-MySQL-PHP (LAMP) bundle together with the programming and scripting languages HTML, JavaScript and Perl. A comprehensive and integrated web resource, ViralEpi v1.0, is developed, providing a well-organized compendium of methylation events and statistical analyses associated with several diseases. Additionally, it facilitates a 'Viral EpiGenome Browser' for a user-friendly browsing experience using the JavaScript-based JBrowse. This web resource would be helpful for the research community engaged in studying epigenetic biomarkers for appropriate prognosis and diagnosis of diseases and their various stages.
A future Outlook: Web based Simulation of Hydrodynamic models
NASA Astrophysics Data System (ADS)
Islam, A. S.; Piasecki, M.
2003-12-01
Despite recent advances in presenting simulation results as 3D graphs or animation contours, the modeling user community still faces shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with a standard vocabulary for exchanging simulation results from different numerical models, insufficient descriptions of data (metadata), the lack of robust search and retrieval tools for data, and difficulties in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain on the WWW and run a number of models through multi-user interfaces. Firstly, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The Extensible Markup Language (XML) is used to publish this metadata via the Resource Description Framework (RDF). A specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define the vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta Object Facility (MOF). The knowledge-based system acts as a meta model for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data are stored in an object-relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open source software and technology such as Java Servlet and JSP, the Apache web server, the Tomcat Servlet Engine, PostgreSQL databases, the Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. It also uses international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML. The final web-based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
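The abstract's metadata-publishing step (XML plus RDF) is easy to picture with a few triples. The paper uses the Jena Java API; the sketch below shows the equivalent idea with Python's rdflib instead, and the namespace, property names and values are invented for illustration.

```python
# The paper publishes ISO 19115-style metadata as RDF/XML via the Jena Java
# API; this sketch shows the same idea with Python's rdflib. The vocabulary
# (namespace and property names) is hypothetical, made up for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

HYDRO = Namespace("http://example.org/hydro-metadata#")  # hypothetical vocabulary

g = Graph()
g.bind("hydro", HYDRO)
run = URIRef("http://example.org/simulations/run-42")
g.add((run, HYDRO.modelName, Literal("tidal-estuary-2D")))
g.add((run, HYDRO.spatialExtent, Literal("39.9N,75.1W to 40.1N,74.9W")))
g.add((run, HYDRO.timeStep, Literal(30)))  # seconds

print(g.serialize(format="xml"))  # RDF/XML, ready for an RDQL/RQL query layer
```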
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Ronald C.
Bioinformatics researchers are increasingly confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employs Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date.
Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A
2006-11-23
Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/.
Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A
2006-01-01
Background Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. Results In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. Conclusion JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/. PMID:17123449
Geena 2, improved automated analysis of MALDI/TOF mass spectra.
Romano, Paolo; Profumo, Aldo; Rocco, Mattia; Mangerini, Rosa; Ferri, Fabio; Facchiano, Angelo
2016-03-02
Mass spectrometry (MS) is producing high volumes of data supporting oncological sciences, especially for translational research. Most of the related elaborations can be carried out by combining existing tools at different levels, but little is currently available for the automation of the fundamental steps. For the analysis of MALDI/TOF spectra, a number of pre-processing steps are required, including joining of isotopic abundances for a given molecular species, normalization of signals against an internal standard, background noise removal, averaging multiple spectra from the same sample, and aligning spectra from different samples. In this paper, we present Geena 2, a public software tool for the automated execution of these pre-processing steps for MALDI/TOF spectra. Geena 2 has been developed in a Linux-Apache-MySQL-PHP web development environment, with scripts in PHP and Perl. Input and output are managed as simple formats that can be consumed by any database system and spreadsheet software. Input data may also be stored in a MySQL database. Processing methods are based on original heuristic algorithms which are introduced in the paper. Three simple and intuitive web interfaces are available: the Standard Search Interface, which allows complete control over all parameters; the Bright Search Interface, which leaves the user the possibility to tune parameters for the alignment of spectra; and the Quick Search Interface, which limits the number of parameters to a minimum by using default values for the majority of parameters. Geena 2 has been utilized, in conjunction with a statistical analysis tool, in three published experimental works: a proteomic study on the effects of long-term cryopreservation on the low molecular weight fraction of the serum proteome, and two retrospective serum proteomic studies, one on the risk of developing breast cancer in patients affected by gross cystic disease of the breast (GCDB) and the other on the identification of a predictor of breast cancer mortality following breast cancer surgery, whose results were validated by ELISA, a completely independent method. Geena 2 is a public tool for the automated pre-processing of MS data originated by MALDI/TOF instruments, with a simple and intuitive web interface. It is now under active development for the inclusion of further filtering options and for the adoption of standard formats for MS spectra.
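Two of the pre-processing steps listed above, normalisation against an internal standard and averaging of replicate spectra, are sketched below in Python. The peak lists, the internal-standard m/z and the tolerance are invented, and the code is a simplified stand-in for Geena 2's heuristic algorithms, not a reproduction of them.

```python
# Hedged sketch of two pre-processing steps named above (normalisation
# against an internal standard, averaging replicate spectra). Peak lists
# and parameters are invented; Geena 2's actual heuristics differ.

def normalise(spectrum, standard_mz, tolerance=0.5):
    """Scale intensities so the internal-standard peak has intensity 1.0."""
    ref = max(i for mz, i in spectrum if abs(mz - standard_mz) <= tolerance)
    return [(mz, i / ref) for mz, i in spectrum]

def average_replicates(spectra):
    """Average intensities of replicate spectra sampled on the same m/z grid."""
    n = len(spectra)
    return [(points[0][0], sum(p[1] for p in points) / n)
            for points in zip(*spectra)]

rep1 = [(1000.1, 50.0), (1046.5, 200.0), (1500.3, 80.0)]
rep2 = [(1000.1, 40.0), (1046.5, 180.0), (1500.3, 100.0)]
cleaned = average_replicates([normalise(r, standard_mz=1046.5) for r in (rep1, rep2)])
print(cleaned)  # averaged, standard-normalised peak list
```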
NASA Astrophysics Data System (ADS)
Ogle, G.; Bode, C.; Fung, I.
2010-12-01
The Keck HydroWatch Project is a multidisciplinary project devoted to understanding how water interacts with the atmosphere, vegetation, soil, and fractured bedrock. It is experimenting with novel techniques to monitor and trace water pathways through these media, including the development of an intensive wireless sensor network, in the Angelo Coast Range and Sagehen Reserves in California. The sensor time-series data are being supplemented with periodic campaigns experimenting with sampling and tracing techniques, including water chemistry, stable isotope analysis, electrical resistivity tomography (ERT), and neutron probes. Mechanistic and statistical modeling is being performed with these datasets. One goal of the HydroWatch project is to prototype technologies for intensive sampling that can be upscaled to the watershed scale. The Berkeley Sensor Database was designed to manage the large volumes of heterogeneous data coming from this sensor network. The system is based on the Observations Data Model (ODM) developed by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). Due to the need for open-source software, UC Berkeley ported the ODM to a LAMP system (Linux, Apache, MySQL, Perl). As of August 2010, the Berkeley Sensor Database contains 33 million measurements from 1200 devices, with several thousand new measurements being added each hour. Data for this research are being collected from a wide variety of equipment. Some of this equipment is experimental and subject to constant modification, while other devices are industry standards. Well pressure transducers, sap flow sensors, experimental microclimate motes, standard weather stations, and multiple rock and soil moisture sensors are some examples. While the Hydrologic Information System (HIS) and the ODM are optimized for data interoperability, they are not focused on the facility management and data quality control which occur at a complex research site. In this presentation, we describe our implementation of the ODM, and the modifications we made to the ODM schema to include incident reports, the concept of 'stations', the reuse and moving of equipment, and NASA data quality levels. The HydroWatch researchers' data uses vary radically, so we implemented a number of different accessors to the data, from real-time graphing during storms, to direct SQL queries for automated analysis, to full data dumps for heavy statistical modeling.
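The direct-SQL access path mentioned above is easy to illustrate against an ODM-style schema. The sketch below uses column names from the CUAHSI ODM DataValues table but a deliberately simplified, in-memory sqlite3 stand-in for the project's MySQL database; the site and variable identifiers are invented.

```python
# Sketch of an ODM-style time-series query such as the accessors described
# above might issue; column names follow the CUAHSI ODM DataValues table,
# but the schema is a simplified stand-in (sqlite3 instead of MySQL).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE DataValues (
    ValueID INTEGER PRIMARY KEY, DataValue REAL, LocalDateTime TEXT,
    SiteID INTEGER, VariableID INTEGER, QualityControlLevelID INTEGER)""")
conn.executemany(
    "INSERT INTO DataValues VALUES (?, ?, ?, ?, ?, ?)",
    [(1, 0.21, "2010-08-01 00:00", 7, 3, 1),
     (2, 0.24, "2010-08-01 00:15", 7, 3, 1)])

# Soil-moisture series for one (hypothetical) station and variable.
rows = conn.execute(
    """SELECT LocalDateTime, DataValue FROM DataValues
       WHERE SiteID = ? AND VariableID = ? ORDER BY LocalDateTime""", (7, 3))
print(rows.fetchall())
```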
Secure web book to store structural genomics research data.
Manjasetty, Babu A; Höppner, Klaus; Mueller, Uwe; Heinemann, Udo
2003-01-01
Recently established collaborative structural genomics programs aim at significantly accelerating the crystal structure analysis of proteins. These large-scale projects require efficient data management systems to ensure seamless collaboration between different groups of scientists working towards the same goal. Within the Berlin-based Protein Structure Factory, the synchrotron X-ray data collection and the subsequent crystal structure analysis tasks are located at BESSY, a third-generation synchrotron source. To organize file-based communication and data transfer at the BESSY site of the Protein Structure Factory, we have developed the web-based BCLIMS, the BESSY Crystallography Laboratory Information Management System. BCLIMS is a relational data management system which is powered by MySQL as the database engine and Apache HTTP as the web server. The database interface routines are written in the Python programming language. The software is freely available to academic users. Here we describe the storage, retrieval and manipulation of laboratory information, mainly pertaining to the synchrotron X-ray diffraction experiments and the subsequent protein structure analysis, using BCLIMS.
EVpedia: a community web portal for extracellular vesicles research.
Kim, Dae-Kyum; Lee, Jaewook; Kim, Sae Rom; Choi, Dong-Sic; Yoon, Yae Jin; Kim, Ji Hyun; Go, Gyeongyun; Nhung, Dinh; Hong, Kahye; Jang, Su Chul; Kim, Si-Hyun; Park, Kyong-Su; Kim, Oh Youn; Park, Hyun Taek; Seo, Ji Hye; Aikawa, Elena; Baj-Krzyworzeka, Monika; van Balkom, Bas W M; Belting, Mattias; Blanc, Lionel; Bond, Vincent; Bongiovanni, Antonella; Borràs, Francesc E; Buée, Luc; Buzás, Edit I; Cheng, Lesley; Clayton, Aled; Cocucci, Emanuele; Dela Cruz, Charles S; Desiderio, Dominic M; Di Vizio, Dolores; Ekström, Karin; Falcon-Perez, Juan M; Gardiner, Chris; Giebel, Bernd; Greening, David W; Gross, Julia Christina; Gupta, Dwijendra; Hendrix, An; Hill, Andrew F; Hill, Michelle M; Nolte-'t Hoen, Esther; Hwang, Do Won; Inal, Jameel; Jagannadham, Medicharla V; Jayachandran, Muthuvel; Jee, Young-Koo; Jørgensen, Malene; Kim, Kwang Pyo; Kim, Yoon-Keun; Kislinger, Thomas; Lässer, Cecilia; Lee, Dong Soo; Lee, Hakmo; van Leeuwen, Johannes; Lener, Thomas; Liu, Ming-Lin; Lötvall, Jan; Marcilla, Antonio; Mathivanan, Suresh; Möller, Andreas; Morhayim, Jess; Mullier, François; Nazarenko, Irina; Nieuwland, Rienk; Nunes, Diana N; Pang, Ken; Park, Jaesung; Patel, Tushar; Pocsfalvi, Gabriella; Del Portillo, Hernando; Putz, Ulrich; Ramirez, Marcel I; Rodrigues, Marcio L; Roh, Tae-Young; Royo, Felix; Sahoo, Susmita; Schiffelers, Raymond; Sharma, Shivani; Siljander, Pia; Simpson, Richard J; Soekmadji, Carolina; Stahl, Philip; Stensballe, Allan; Stępień, Ewa; Tahara, Hidetoshi; Trummer, Arne; Valadi, Hadi; Vella, Laura J; Wai, Sun Nyunt; Witwer, Kenneth; Yáñez-Mó, María; Youn, Hyewon; Zeidler, Reinhard; Gho, Yong Song
2015-03-15
Extracellular vesicles (EVs) are spherical bilayered proteolipids, harboring various bioactive molecules. Due to the complexity of the vesicular nomenclatures and components, online searches for EV-related publications and vesicular components are currently challenging. We present an improved version of EVpedia, a public database for EVs research. This community web portal contains a database of publications and vesicular components, identification of orthologous vesicular components, bioinformatic tools and a personalized function. EVpedia includes 6879 publications, 172 080 vesicular components from 263 high-throughput datasets, and has been accessed more than 65 000 times from more than 750 cities. In addition, about 350 members from 73 international research groups have participated in developing EVpedia. This free web-based database might serve as a useful resource to stimulate the emerging field of EV research. The web site was implemented in PHP, Java, MySQL and Apache, and is freely available at http://evpedia.info. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Cao, Han; Ng, Marcus C K; Jusoh, Siti Azma; Tai, Hio Kuan; Siu, Shirley W I
2017-09-01
α-Helical transmembrane proteins are the most important drug targets in rational drug development. However, solving the experimental structures of these proteins remains difficult, therefore computational methods to accurately and efficiently predict the structures are in great demand. We present an improved structure prediction method TMDIM based on Park et al. (Proteins 57:577-585, 2004) for predicting bitopic transmembrane protein dimers. Three major algorithmic improvements are introduction of the packing type classification, the multiple-condition decoy filtering, and the cluster-based candidate selection. In a test of predicting nine known bitopic dimers, approximately 78% of our predictions achieved a successful fit (RMSD <2.0 Å) and 78% of the cases are better predicted than the two other methods compared. Our method provides an alternative for modeling TM bitopic dimers of unknown structures for further computational studies. TMDIM is freely available on the web at https://cbbio.cis.umac.mo/TMDIM. Website is implemented in PHP, MySQL and Apache, with all major browsers supported.
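The success criterion quoted above (RMSD < 2.0 Å) is straightforward to compute once candidate and reference structures are superposed. The Python sketch below evaluates invented decoy coordinates against an invented reference; TMDIM's actual decoy filtering and cluster-based selection involve more conditions than this.

```python
# Hedged sketch of the RMSD success criterion quoted above: decoys within
# 2.0 Angstrom of the reference pass. Coordinates are assumed already
# superposed; the values here are invented for illustration.
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length coordinate lists."""
    n = len(coords_a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / n)

reference = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.5, 0.0)]
decoys = {"decoy1": [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (3.1, 0.4, 0.1)],
          "decoy2": [(2.0, 2.0, 2.0), (4.0, 2.0, 2.0), (6.0, 2.5, 2.0)]}
fits = {name: rmsd(reference, c) for name, c in decoys.items()}
print({name: r for name, r in fits.items() if r < 2.0})  # successful fits only
```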
An open source, web based, simple solution for seismic data dissemination and collaborative research
NASA Astrophysics Data System (ADS)
Diviacco, Paolo
2005-06-01
Collaborative research and data dissemination in the field of geophysical exploration need network tools that can access large amounts of data from anywhere, using any PC or workstation. Simple solutions based on a combination of Open Source software can be developed to address such requests, exploiting the possibilities offered by web technologies while avoiding the costs and inflexibility of commercial systems. A viable solution consists of MySQL for data storage and retrieval, CWP/SU and GMT for data visualisation, and a scripting layer driven by PHP that allows users to access the system via an Apache web server. In the light of the experience of building the on-line archive of seismic data of the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), we describe the solutions and methods adopted, with a view to stimulating both networked collaborative research at institutions similar to ours and the development of different applications.
TMDIM: an improved algorithm for the structure prediction of transmembrane domains of bitopic dimers
NASA Astrophysics Data System (ADS)
Cao, Han; Ng, Marcus C. K.; Jusoh, Siti Azma; Tai, Hio Kuan; Siu, Shirley W. I.
2017-09-01
α-Helical transmembrane proteins are the most important drug targets in rational drug development. However, solving the experimental structures of these proteins remains difficult, therefore computational methods to accurately and efficiently predict the structures are in great demand. We present an improved structure prediction method TMDIM based on Park et al. (Proteins 57:577-585, 2004) for predicting bitopic transmembrane protein dimers. Three major algorithmic improvements are introduction of the packing type classification, the multiple-condition decoy filtering, and the cluster-based candidate selection. In a test of predicting nine known bitopic dimers, approximately 78% of our predictions achieved a successful fit (RMSD <2.0 Å) and 78% of the cases are better predicted than the two other methods compared. Our method provides an alternative for modeling TM bitopic dimers of unknown structures for further computational studies. TMDIM is freely available on the web at https://cbbio.cis.umac.mo/TMDIM. Website is implemented in PHP, MySQL and Apache, with all major browsers supported.
PathVisio-Faceted Search: an exploration tool for multi-dimensional navigation of large pathways
Fried, Jake Y.; Luna, Augustin
2013-01-01
Purpose: The PathVisio-Faceted Search plugin helps users explore and understand complex pathways by overlaying experimental data and data from webservices, such as Ensembl BioMart, onto diagrams drawn using formalized notations in PathVisio. The plugin then provides a filtering mechanism, known as a faceted search, to find and highlight diagram nodes (e.g. genes and proteins) of interest based on imported data. The tool additionally provides a flexible scripting mechanism to handle complex queries. Availability: The PathVisio-Faceted Search plugin is compatible with PathVisio 3.0 and above. PathVisio is compatible with Windows, Mac OS X and Linux. The plugin, documentation, example diagrams and Groovy scripts are available at http://PathVisio.org/wiki/PathVisioFacetedSearchHelp. The plugin is free, open-source and licensed by the Apache 2.0 License. Contact: augustin@mail.nih.gov or jakeyfried@gmail.com PMID:23547033
EXP-PAC: providing comparative analysis and storage of next generation gene expression data.
Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe
2012-07-01
Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store these data is becoming a key issue. In response we have developed EXP-PAC, a web-based software package for storage, management and analysis of gene expression and sequence data. Unique to this package are SQL-based querying of gene expression data sets, distributed normalization of raw gene expression data, and analysis of gene expression data across experiments and species. This package has been populated with lactation data in the international milk genomic consortium web portal (http://milkgenomics.org/). Source code is also available which can be hosted on a Windows, Linux or Mac APACHE server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.
BOLDMirror: a global mirror system of DNA barcode data.
Liu, D; Liu, L; Guo, G; Wang, W; Sun, Q; Parani, M; Ma, J
2013-11-01
DNA barcoding is a novel concept for taxonomic identification using short, specific genetic markers and has been applied to study a large number of eukaryotes. The huge amount of data generated by DNA barcoding requires well-organized information systems. Besides the Barcode of Life Data system (BOLD) established in Canada, a mirror system is also important for the international barcode of life project (iBOL). For this purpose, we developed BOLDMirror, a global mirror system of DNA barcode data. It is open-sourced and can run on the LAMP (Linux + Apache + MySQL + PHP) environment. BOLDMirror has data synchronization, data representation and statistics modules, and also provides space to store user operation history. BOLDMirror can be accessed at http://www.boldmirror.net and several countries have used it to set up their own DNA barcoding sites. © 2012 John Wiley & Sons Ltd.
Development and implementation of a web-based system to study children with malnutrition.
Syed-Mohamad, Sharifah-Mastura
2009-01-01
To develop and implement a collective web-based system to monitor child growth in order to study children with malnutrition. The system was developed using prototyping system development methodology. The implementation was carried out using open-source technologies that include Apache Web Server, PHP scripting, and MySQL database management system. There were four datasets collected by the system: demographic data, measurement data, parent data, and food program data. The system was designed to be used by two groups of users, the clinics and the researchers. The Growth Monitor System was successfully developed and used for the study, "Geoinformation System (GIS) and Remote Sensing in Mapping of Children with Malnutrition." Data collection was implemented in public clinics from two districts in the state of Kelantan, Malaysia. The development of an integrated web-based system, Growth Monitor, for the study of children with malnutrition has been achieved. This system can be expanded to new partners who are involved in the study of children with malnutrition in other parts of Malaysia as well as other countries.
ExportAid: database of RNA elements regulating nuclear RNA export in mammals.
Giulietti, Matteo; Milantoni, Sara Armida; Armeni, Tatiana; Principato, Giovanni; Piva, Francesco
2015-01-15
Regulation of nuclear mRNA export or retention is carried out by RNA elements but the mechanism is not yet well understood. To understand the mRNA export process, it is important to collect all the involved RNA elements and their trans-acting factors. By hand-curated literature screening we collected, in ExportAid database, experimentally assessed data about RNA elements regulating nuclear export or retention of endogenous, heterologous or artificial RNAs in mammalian cells. This database could help to understand the RNA export language and to study the possible export efficiency alterations owing to mutations or polymorphisms. Currently, ExportAid stores 235 and 96 RNA elements, respectively, increasing and decreasing export efficiency, and 98 neutral assessed sequences. Freely accessible without registration at http://www.introni.it/ExportAid/ExportAid.html. Database and web interface are implemented in Perl, MySQL, Apache and JavaScript with all major browsers supported. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
TOPDOM: database of conservatively located domains and motifs in proteins.
Varga, Julia; Dobson, László; Tusnády, Gábor E
2016-09-01
The TOPDOM database, originally created as a collection of domains and motifs located consistently on the same side of the membranes in α-helical transmembrane proteins, has been updated and extended by taking into consideration consistently localized domains and motifs in globular proteins, too. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and predict topology in the case of transmembrane proteins, and by applying a thorough search for domains and motifs as well as utilizing the most up-to-date version of all source databases, we managed to reach a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage utilizes the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Sumarudin, A.; Ghozali, A. L.; Hasyim, A.; Effendi, A.
2016-04-01
Indonesian agriculture has great potential for development, but little of it is yet based on systematic data collection for soil or plants, even though soil data can be used to analyse soil fertility. We propose an e-agriculture system for soil monitoring. The system monitors soil status using wireless sensor motes that measure soil moisture, humidity and temperature. Each mote is built around a microcontroller with an XBee radio connection. Sensed data are sent to a single gateway in a star topology; the gateway is a mini personal computer attached to an XBee in coordinator mode, running an Apache server that stores the data in a MySQL database behind a web front end built with the Yii framework. The system has been implemented and can show soil status in real time. In testing, motes communicated over distances of up to 40 meters, with a mote battery lifetime of 7 hours at a minimum supply voltage of 7 volts. The system can help farmers monitor their soil and make soil-treatment decisions based on data, which can improve the quality of agricultural production and decrease management and farming costs.
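A gateway loop of the kind described (read a frame from the XBee coordinator's serial port, then persist it) can be sketched in a few lines of Python. The comma-separated frame format and table layout below are assumptions; on the real gateway, pyserial would supply the frames and a MySQL driver the storage (sqlite3 stands in so the sketch runs anywhere).

```python
# Hedged sketch of the gateway loop described above. The frame format
# 'mote_id,moisture,humidity,temperature' and the table layout are invented;
# sqlite3 stands in for MySQL so the sketch is self-contained.
import sqlite3
# import serial  # pyserial, used on the actual gateway

conn = sqlite3.connect("soil.db")
conn.execute("""CREATE TABLE IF NOT EXISTS soil_status
    (mote_id TEXT, moisture REAL, humidity REAL, temperature REAL)""")

def handle_frame(frame: str):
    """Parse one sensor frame and persist it."""
    mote_id, moisture, humidity, temperature = frame.strip().split(",")
    conn.execute("INSERT INTO soil_status VALUES (?, ?, ?, ?)",
                 (mote_id, float(moisture), float(humidity), float(temperature)))
    conn.commit()

# On the gateway: ser = serial.Serial("/dev/ttyUSB0", 9600)
# while True: handle_frame(ser.readline().decode())
handle_frame("mote-03,41.2,78.0,29.5")  # example frame
```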
Secure UNIX socket-based controlling system for high-throughput protein crystallography experiments.
Gaponov, Yurii; Igarashi, Noriyuki; Hiraki, Masahiko; Sasajima, Kumiko; Matsugaki, Naohiro; Suzuki, Mamoru; Kosuge, Takashi; Wakatsuki, Soichi
2004-01-01
A control system for high-throughput protein crystallography experiments has been developed based on multilevel secure (SSL v2/v3) UNIX sockets under the Linux operating system. The main stages of protein crystallography experiments (purification, crystallization, loop preparation, data collection, data processing) are handled by the software. All information necessary to perform protein crystallography experiments is stored in a relational database (MySQL), except the raw X-ray data, which are stored on a network file server. The system consists of several servers and clients. TCP/IP secure UNIX sockets with four predefined behaviors [(a) listening to a request followed by a reply, (b) sending a request and waiting for a reply, (c) listening to a broadcast message, and (d) sending a broadcast message] support communications between all servers and clients, allowing one to control experiments, view data, edit experimental conditions and perform data processing remotely. The interface software is well suited for developing well-organized control software with a hierarchical structure of different software units that pass and receive different types of information (Gaponov et al., 1998). All communication is divided into two parts: low and top levels. Large and complicated control tasks are split into several smaller ones, which can be processed by control clients independently. For communicating with experimental equipment (beamline optical elements, robots, specialized experimental equipment, etc.), the STARS server, developed at the Photon Factory, is used (Kosuge et al., 2002). The STARS server allows any application with an open socket to be connected with any other clients that control experimental equipment. The majority of the source code is written in C/C++. GUI modules of the system were built mainly using the Glade user interface builder for GTK+ and GNOME under the Red Hat Linux 7.1 operating system.
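Behaviours (a) and (b) above (request/reply between a listening server and a client) are sketched below with plain Python TCP sockets for brevity; the actual system wraps its sockets in SSL v2/v3 and is written in C/C++, and the port number and request string are invented.

```python
# Minimal sketch of behaviours (a) and (b) above, request/reply over a TCP
# socket, in Python for brevity; the actual system uses SSL-wrapped UNIX
# sockets in C/C++. The port and the request string are invented.
import socket
import threading
import time

def server():
    """Behaviour (a): listen to a request, then send a reply."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 5050))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"ack:{request}".encode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server thread time to bind and listen

# Behaviour (b): send a request and wait for the reply.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", 5050))
    cli.sendall(b"move_goniometer phi=90")
    print(cli.recv(1024).decode())  # ack:move_goniometer phi=90
```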
NASA Astrophysics Data System (ADS)
Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim
2010-05-01
The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations, processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use charter when a user visits for the first time; - Data access interface: a user-friendly tool allowing users to build a data extraction request by selecting various criteria like location, time, parameters... The request can concern local, satellite and model data. - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through one or the other of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
NASA Astrophysics Data System (ADS)
Park, Chan-Hee; Lee, Cholwoo
2016-04-01
The Raspberry Pi series comprises low-cost, smaller-than-credit-card-sized computers to which various operating systems, such as Linux and recently even Windows 10, have been ported. Thanks to mass production and rapid technological development, the price of the various sensors that can be attached to a Raspberry Pi has been dropping at an increasing speed. The device can therefore be an economical choice as a small portable computer for monitoring temporal hydrogeological data in the field. In this study, we present a Raspberry Pi system that measures the flow rate and temperature of groundwater on site, stores the measurements in a MySQL database, and produces interactive figures and tables, such as Google Charts online or Bokeh offline, for further monitoring and analysis. Since all the data are monitored over the Internet, any computer or mobile device can serve as a convenient monitoring tool. The measured data are further integrated with OpenGeoSys, a hydrogeological model that has also been ported to the Raspberry Pi series. This enables on-site hydrogeological modeling fed by temporal sensor data to meet various needs.
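The pipeline described (sample a sensor, store the reading, render an offline chart) is sketched below in Python. The read_flow_sensor() function is a placeholder for the real GPIO/sensor code, and sqlite3 stands in for the MySQL store; only the Bokeh calls reflect a toolkit actually named in the abstract.

```python
# Hedged sketch of the pipeline described above: log one flow reading, then
# render an offline Bokeh chart. read_flow_sensor() is a placeholder for the
# real sensor sampling code; sqlite3 stands in for the MySQL database.
import sqlite3
from datetime import datetime

from bokeh.io import output_file
from bokeh.plotting import figure, show

def read_flow_sensor():
    """Placeholder for sampling the real flow-rate sensor (litres/min)."""
    return 12.7

conn = sqlite3.connect("hydro.db")
conn.execute("CREATE TABLE IF NOT EXISTS flow (t TEXT, lpm REAL)")
conn.execute("INSERT INTO flow VALUES (?, ?)",
             (datetime.now().isoformat(), read_flow_sensor()))
conn.commit()

rows = conn.execute("SELECT t, lpm FROM flow ORDER BY t").fetchall()
output_file("flow.html")  # self-contained offline chart
p = figure(x_axis_type="datetime", title="Groundwater flow rate")
p.line([datetime.fromisoformat(t) for t, _ in rows], [lpm for _, lpm in rows])
show(p)
```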
compendiumdb: an R package for retrieval and storage of functional genomics data.
Nandal, Umesh K; van Kampen, Antoine H C; Moerland, Perry D
2016-09-15
Currently, the Gene Expression Omnibus (GEO) contains public data of over 1 million samples from more than 40 000 microarray-based functional genomics experiments. This provides a rich source of information for novel biological discoveries. However, unlocking this potential often requires retrieving and storing a large number of expression profiles from a wide range of different studies and platforms. The compendiumdb R package provides an environment for downloading functional genomics data from GEO, parsing the information into a local or remote database and interacting with the database using dedicated R functions, thus enabling seamless integration with other tools available in R/Bioconductor. The compendiumdb package is written in R, MySQL and Perl. Source code and binaries are available from CRAN (http://cran.r-project.org/web/packages/compendiumdb/) for all major platforms (Linux, MS Windows and OS X) under the GPLv3 license. p.d.moerland@amc.uva.nl Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Digital time stamping system based on open source technologies.
Miskinis, Rimantas; Smirnov, Dmitrij; Urba, Emilis; Burokas, Andrius; Malysko, Bogdan; Laud, Peeter; Zuliani, Francesco
2010-03-01
A digital time stamping system based on open source technologies (Linux Ubuntu, OpenTSA, OpenSSL, MySQL) is described in detail, including all important testing results. The system, called BALTICTIME, was developed under a project sponsored by the European Commission under Framework Programme 6. It was designed to meet the requirements imposed on systems for legal and accountable time stamping and to be applicable to the hardware commonly used by national time metrology laboratories. The BALTICTIME system is intended for the use of governmental and other institutions as well as private persons. Testing results demonstrate that the time stamps issued to the user by BALTICTIME and saved in BALTICTIME's archives (which implies that the time stamps are accountable) meet all the regulatory requirements. Moreover, BALTICTIME in its present implementation is able to issue more than 10 digital time stamps per second. The system can be enhanced if needed. The test version of the BALTICTIME service is free and available at http://baltictime.pfi.lt:8080/btws/ and http://baltictime.lnmc.lv:8080/btws/.
Comprehensive Routing Security Development and Deployment for the Internet
2015-02-01
feature enhancement and bug fixes. • MySQL : MySQL is a widely used and popular open source database package. It was chosen for database support in the...RPSTIR depends on several other open source packages. • MySQL : MySQL is used for the the local RPKI database cache. • OpenSSL: OpenSSL is used for...cryptographic libraries for X.509 certificates. • ODBC mySql Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for
Development of Human Face Literature Database Using Text Mining Approach: Phase I.
Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K
2018-06-01
The face is an important part of the human body by which an individual communicates in society. Its importance is highlighted by the fact that a person deprived of a face cannot sustain themselves in the living world. The number of experiments being performed and research papers being published on the human face has surged in the past few decades. Scientific disciplines conducting research on the human face include Medical Science, Anthropology, Information Technology (Biometrics, Robotics, Artificial Intelligence, etc.), Psychology, Forensic Science, Neuroscience, and others. This highlights the need to collect and manage data concerning the human face so that public and free access to it can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the form of a database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hyper Text Markup Language and Cascading Style Sheets, the back end has been developed using the hypertext preprocessor (PHP), and JavaScript has been used as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system. The XAMPP (cross-platform, Apache, MySQL, PHP, Perl) open-source web application software has been used as the server. The database is still in its developmental phase; the current paper describes the initial steps of its creation and the work done to date.
Johnson, Z. P.; Eady, R. D.; Ahmad, S. F.; Agravat, S.; Morris, T; Else, J; Lank, S. M.; Wiseman, R. W.; O’Connor, D. H.; Penedo, M. C. T.; Larsen, C. P.
2012-01-01
Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo. PMID:22080300
YODA++: A proposal for a semi-automatic space mission control
NASA Astrophysics Data System (ADS)
Casolino, M.; de Pascale, M. P.; Nagni, M.; Picozza, P.
YODA++ is a proposal for a semi-automated data handling and analysis system for the PAMELA space experiment. The core routines have been developed to process a stream of raw data downlinked from the Resurs DK1 satellite (housing PAMELA) to the ground station in Moscow. Raw data consist of scientific data complemented by housekeeping information. Housekeeping information will be analyzed within a short time from download (1 h) in order to monitor the status of the experiment and to plan upcoming mission acquisitions. A prototype for the data visualization will run on an Apache Tomcat web application server, providing an off-line analysis tool accessible through a browser, together with part of the code for system maintenance. Data-retrieval development is in the production phase, while a GUI for human-friendly monitoring and a JavaServerPages/JavaServerFaces (JSP/JSF) web application facility are in a preliminary phase. On a longer timescale (1-3 h from download) scientific data are analyzed. The data storage core will be a mix of CERN's ROOT file structure and MySQL as a relational database. YODA++ is currently being used in the integration and testing on ground of PAMELA data.
A Web-Based Information System for Field Data Management
NASA Astrophysics Data System (ADS)
Weng, Y. H.; Sun, F. S.
2014-12-01
A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts in the middle tier, and the MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and plot them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format, which is both human-readable and machine-readable, and thus ready for reuse.
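The XML export step is the most mechanical part of such a system; the Python sketch below turns one field record into the kind of human- and machine-readable document the abstract describes. The element and attribute names are invented, not the system's actual schema.

```python
# Sketch of the XML export step described above; the element and attribute
# names are illustrative assumptions, not the system's actual schema.
import xml.etree.ElementTree as ET

record = {"station": "FS-12", "lat": 41.07, "lon": -81.51,
          "lithology": "sandstone", "strike": 215, "dip": 12}

obs = ET.Element("observation", station=record["station"])
ET.SubElement(obs, "location", lat=str(record["lat"]), lon=str(record["lon"]))
ET.SubElement(obs, "lithology").text = record["lithology"]
ET.SubElement(obs, "bedding", strike=str(record["strike"]), dip=str(record["dip"]))

ET.indent(obs)  # pretty-print (Python 3.9+)
print(ET.tostring(obs, encoding="unicode"))
```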
NASA Astrophysics Data System (ADS)
Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro
2018-02-01
Management of water resources based on a Geographic Information System (GIS) can provide substantial benefits for water availability planning. Monitoring potential water levels is needed in the development, agriculture, energy and other sectors. In this research, a water-resource information system is developed using a real-time, web-based GIS concept for monitoring the potential water level of an area by applying a rule-based system method. A GIS consists of hardware, software, and a database. Following the web-based GIS architecture, this study uses a set of networked computers running the Apache web server and the PHP programming language with a MySQL database. An ultrasound wireless sensor system is used as the water-level data input; it also includes time and geographic location information. The GIS maps the five sensor locations. Sensor readings are processed through a rule-based system to determine the potential water level of the area. The resulting water-level monitoring information can be displayed on thematic maps by overlaying more than one layer, as tables generated from the database, and as graphs based on the timing of events and the water-level values.
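The rule-based step can be pictured as an ordered list of threshold rules mapping a reading to a potential-water-level class, as in the Python sketch below. The thresholds and class names are invented for illustration; the abstract does not publish the paper's actual rule base.

```python
# Hedged sketch of the rule-based step described above: map a sensor reading
# to a potential-water-level class. Thresholds and class names are invented.

RULES = [  # (predicate, class label), evaluated in order
    (lambda level_m: level_m < 0.5, "low potential"),
    (lambda level_m: level_m < 1.5, "medium potential"),
    (lambda level_m: True, "high potential"),
]

def classify(level_m: float) -> str:
    """Return the label of the first rule whose predicate matches."""
    for predicate, label in RULES:
        if predicate(level_m):
            return label

for sensor, level in {"S1": 0.3, "S2": 1.1, "S3": 2.4}.items():
    print(sensor, "->", classify(level))
```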
Johnson, Z P; Eady, R D; Ahmad, S F; Agravat, S; Morris, T; Else, J; Lank, S M; Wiseman, R W; O'Connor, D H; Penedo, M C T; Larsen, C P; Kean, L S
2012-04-01
Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo.
A Web-based telemedicine system for diabetic retinopathy screening using digital fundus photography.
Wei, Jack C; Valentino, Daniel J; Bell, Douglas S; Baker, Richard S
2006-02-01
The purpose was to design and implement a Web-based telemedicine system for diabetic retinopathy screening using digital fundus cameras and to make the software publicly available through Open Source release. The process of retinal imaging and case reviewing was modeled to optimize workflow and guide the implementation of the computer system. The Web-based system was built on Java Servlet and Java Server Pages (JSP) technologies. Apache Tomcat was chosen as the JSP engine, while MySQL was used as the main database and the Laboratory of Neuro Imaging (LONI) Image Storage Architecture, from LONI at UCLA, as the platform for image storage. For security, all data transmissions were carried over encrypted Internet connections such as Secure Socket Layer (SSL) and HyperText Transfer Protocol over SSL (HTTPS). User logins were required and access to patient data was logged for auditing. The system was deployed at the Hubert H. Humphrey Comprehensive Health Center and the Martin Luther King/Drew Medical Center of the Los Angeles County Department of Health Services. Within 4 months, 1500 images of more than 650 patients were taken at Humphrey's Eye Clinic and successfully transferred to King/Drew's Department of Ophthalmology. This study demonstrates an effective architecture for remote diabetic retinopathy screening.
BRISK--research-oriented storage kit for biology-related data.
Tan, Alan; Tripp, Ben; Daley, Denise
2011-09-01
In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. Such studies typically involve demographic and clinical data, in addition to results from numerous genomic studies (omics studies) such as gene expression, eQTL, genome-wide association and methylation studies. These complex data structures present numerous challenges, hence the need for data integration platforms that can handle them. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. The Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and a demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. denise.daley@hli.ubc.ca.
CAGEd-oPOSSUM: motif enrichment analysis from CAGE-derived TSSs.
Arenillas, David J; Forrest, Alistair R R; Kawaji, Hideya; Lassmann, Timo; Wasserman, Wyeth W; Mathelier, Anthony
2016-09-15
With the emergence of large-scale Cap Analysis of Gene Expression (CAGE) datasets from individual labs and the FANTOM consortium, one can now analyze the cis-regulatory regions associated with gene transcription at an unprecedented level of refinement. By coupling transcription factor binding site (TFBS) enrichment analysis with CAGE-derived genomic regions, CAGEd-oPOSSUM can identify TFs that act as key regulators of genes involved in specific mammalian cell and tissue types. The web tool allows for the analysis of CAGE-derived transcription start sites (TSSs) either provided by the user or selected from ∼1300 mammalian samples from the FANTOM5 project, with pre-computed TFBS predicted with JASPAR TF binding profiles. The tool helps power insights into the regulation of genes through the study of the specific usage of TSSs within specific cell types and/or under specific conditions. The CAGEd-oPOSSUM web tool is implemented in Perl, MySQL and Apache and is available at http://cagedop.cmmt.ubc.ca/CAGEd_oPOSSUM. Contacts: anthony.mathelier@ncmm.uio.no or wyeth@cmmt.ubc.ca. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Implementation of electronic logbook for trainees of general surgery in Thailand.
Aphinives, Potchavit
2013-01-01
All trainees are required to keep a record of their surgical skills and experience throughout the training period in a logbook format. A paper-based logbook has several limitations; therefore, an electronic logbook was introduced to replace it. The electronic logbook program was developed in November 2005 and designed as a web-based application built on PHP scripts running under the Apache web server with a MySQL database. Only simplified and essential data, such as hospital number, diagnosis, surgical procedure, and pathological findings, are recorded. The electronic logbook databases between academic years 2006 and 2011 were analyzed. The number of annually recorded surgical procedures gradually increased from 41,214 procedures in 2006 to 66,643 procedures in 2011. Around one-third of all records were not verified by attending staff: 27.59% (2006), 31.69% (2007), 18.06% (2008), 28.42% (2009), 30.18% (2010), and 31.41% (2011). In the academic year 2011, the three most common procedural groups were the colon, rectum & anus group, the appendix group, and the vascular group, respectively. Advantages of the electronic logbook included more efficient data access, an increased ability to monitor trainees and trainers, and analysis of procedural varieties among the training institutes.
Ni, Ming; Ye, Fuqiang; Zhu, Juanjuan; Li, Zongwei; Yang, Shuai; Yang, Bite; Han, Lu; Wu, Yongge; Chen, Ying; Li, Fei; Wang, Shengqi; Bo, Xiaochen
2014-12-01
Numerous public microarray datasets are valuable resources for the scientific communities. Several online tools have made great steps to use these data by querying related datasets with users' own gene signatures or expression profiles. However, dataset annotation and result exhibition still need to be improved. ExpTreeDB is a database that allows for queries on human and mouse microarray experiments from Gene Expression Omnibus with gene signatures or profiles. Compared with similar applications, ExpTreeDB pays more attention to dataset annotations and result visualization. We introduced a multiple-level annotation system to depict and organize original experiments. For example, a tamoxifen-treated cell line experiment is hierarchically annotated as 'agent→drug→estrogen receptor antagonist→tamoxifen'. Consequently, retrieved results are exhibited by an interactive tree-structured graphics, which provide an overview for related experiments and might enlighten users on key items of interest. The database is freely available at http://biotech.bmi.ac.cn/ExpTreeDB. Web site is implemented in Perl, PHP, R, MySQL and Apache. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
MetReS, an Efficient Database for Genomic Applications.
Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc
2018-02-01
MetReS (Metabolic Reconstruction Server) is a genomic database that is shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study started with the relational database, MySQL, which is the current database server used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop, which is widely used for large-scale data processing. We found that this framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and the processing needs increase by several orders of magnitude, as is expected to happen in the near future.
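The contrast drawn above between MySQL and Hadoop rests on the map/reduce processing pattern. The sketch below simulates that pattern in-process for a Biblio-MetReS-style gene-mention count; the documents and gene names are invented, and real Hadoop would distribute the same two steps across a cluster.

```python
# Map step: emit one (gene, 1) pair per mention in each document.
# Reduce step: sum the pairs per gene.
from collections import Counter
from itertools import chain

docs = [
    "pfk1 interacts with pfk2 in glycolysis",
    "hxk1 phosphorylates glucose upstream of pfk1",
]
genes = {"pfk1", "pfk2", "hxk1"}

def mapper(doc):
    return [(word, 1) for word in doc.split() if word in genes]

def reducer(pairs):
    counts = Counter()
    for gene, n in pairs:
        counts[gene] += n
    return counts

print(reducer(chain.from_iterable(mapper(d) for d in docs)))
# -> Counter({'pfk1': 2, 'pfk2': 1, 'hxk1': 1})
```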
[Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
2015-09-01
To build a comprehensive database of laryngeal cancer-related genes and miRNAs by collecting and analyzing them: unlike current biological information databases with complex and unwieldy structures, the database focuses on the themes of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL as the database and PHP as the web coding language, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer-related genes, 243 proteins and 26 miRNAs, together with detailed information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer-relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed, and it is maintained and updated regularly. The database of laryngeal cancer-related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
CAGEd-oPOSSUM: motif enrichment analysis from CAGE-derived TSSs
Arenillas, David J.; Forrest, Alistair R. R.; Kawaji, Hideya; Lassmann, Timo; Wasserman, Wyeth W.; Mathelier, Anthony
2016-01-01
With the emergence of large-scale Cap Analysis of Gene Expression (CAGE) datasets from individual labs and the FANTOM consortium, one can now analyze the cis-regulatory regions associated with gene transcription at an unprecedented level of refinement. By coupling transcription factor binding site (TFBS) enrichment analysis with CAGE-derived genomic regions, CAGEd-oPOSSUM can identify TFs that act as key regulators of genes involved in specific mammalian cell and tissue types. The webtool allows for the analysis of CAGE-derived transcription start sites (TSSs) either provided by the user or selected from ∼1300 mammalian samples from the FANTOM5 project with pre-computed TFBS predicted with JASPAR TF binding profiles. The tool helps power insights into the regulation of genes through the study of the specific usage of TSSs within specific cell types and/or under specific conditions. Availability and Implementation: The CAGEd-oPOSSUM web tool is implemented in Perl, MySQL and Apache and is available at http://cagedop.cmmt.ubc.ca/CAGEd_oPOSSUM. Contacts: anthony.mathelier@ncmm.uio.no or wyeth@cmmt.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27334471
Standard Port-Visit Cost Forecasting Model for U.S. Navy Husbanding Contracts
2009-12-01
Protocol (HTTP) server. 2. MySQL. An open-source database. 3. PHP. A common scripting language used for Web development. E. IMPLEMENTATION OF...Inc. (2009). MySQL Community Server (Version 5.1) [Software]. Available from http://dev.mysql.com/downloads/ The PHP Group (2009). PHP (Version...Logistics Services MySQL My Structured Query Language NAVSUP Navy Supply Systems Command NC Non-Contract Items NPS Naval Postgraduate
Blind Seer: A Scalable Private DBMS
2014-05-01
searchable index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower...than MySQL, although some queries are costlier). We support a rich query set, including searching on arbitrary boolean formulas on keywords and ranges...index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower than MySQL
FBIS: A regional DNA barcode archival & analysis system for Indian fishes.
Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar
2012-01-01
DNA barcoding is a new tool for taxon recognition and classification of biological organisms, based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed a Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP on a Linux platform to (a) store and manage sequence acquisitions, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system in fish molecular taxonomy, phylogeny and genomics. The database is available for free at http://mail.nbfgr.res.in/fbis/
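The "estimate genetic divergence" step can be illustrated with the simplest distance measure. A minimal sketch using the uncorrected p-distance between two aligned COI fragments; FBIS may well apply a model-corrected distance such as Kimura two-parameter, and the sequences below are invented.

```python
# Uncorrected p-distance: fraction of differing sites between two aligned
# sequences, skipping alignment gaps. (Sequences are invented.)
def p_distance(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    diffs = sum(1 for x, y in pairs if x != y)
    return diffs / len(pairs)

seq1 = "ATGGCACTATTCTACAT-GCT"
seq2 = "ATGGCTCTGTTCTACATAGCT"
print(f"p-distance = {p_distance(seq1, seq2):.3f}")   # -> 0.100
```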
Zabolotskikh, I B; Musaeva, T S; Denisova, E A
2012-01-01
The aim was to estimate the efficiency of the APACHE II, APACHE III, SAPS II, SAPS III and SOFA scales in obstetric patients with severe sepsis. A retrospective analysis of medical records was performed for 186 pregnant women with pulmonary sepsis, 40 women with urosepsis and 66 puerperas with abdominal sepsis. The average age of the women was 26.7 (22.4-34.5) years. In the population of puerperas with abdominal sepsis, the APACHE II, APACHE III, SAPS 2, SAPS 3 and SOFA scales showed good calibration; however, high resolution was observed only for APACHE III, SAPS 3 and SOFA (AUROC 0.95, 0.93 and 0.92, respectively). In pregnant women with urosepsis, the APACHE III and SOFA scales provided a qualitative prognosis; their resolution considerably exceeded that of APACHE II, SAPS 2 and SAPS 3 (AUROC 0.73, 0.74 and 0.79, respectively). In pregnant women with pulmonary sepsis, the APACHE II scale was inapplicable because of poor calibration (χ2 = 13.1; p < 0.01), and the other scales (APACHE III, SAPS 2, SAPS 3, SOFA) showed insufficient resolution (AUROC < 0.9). Assessment of the prognostic power of the scoring scales showed that APACHE III, SAPS 3 and SOFA can be used for mortality prognosis in puerperas with abdominal sepsis; in pregnant women with urosepsis, only APACHE III and SOFA; and in pulmonary sepsis, SAPS 3 and APACHE III only when additional clinical information is available.
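The "resolution" figures quoted above are areas under the ROC curve. A minimal sketch of that statistic in its rank-based (Mann-Whitney) form: the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor, with ties counted as half. Scores and outcomes below are invented.

```python
# AUROC as the pairwise win rate of non-survivor scores over survivor scores.
def auroc(scores, outcomes):
    """outcomes: 1 = died, 0 = survived."""
    died = [s for s, o in zip(scores, outcomes) if o == 1]
    lived = [s for s, o in zip(scores, outcomes) if o == 0]
    wins = sum(1.0 if d > l else 0.5 if d == l else 0.0
               for d in died for l in lived)
    return wins / (len(died) * len(lived))

scores = [25, 14, 30, 8, 19, 22, 11, 27]   # e.g. severity-score points
outcomes = [1, 0, 1, 0, 1, 0, 0, 1]
print(f"AUROC = {auroc(scores, outcomes):.2f}")   # -> 0.94
```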
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-26
... Mountain Apache Tribe of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde... Tribe of the Fort Apache Reservation, Arizona; and Yavapai-Apache Nation of the Camp Verde Indian...-Apache Nation of the Camp Verde Indian Reservation, Arizona. Other credible lines of evidence, including...
Hsu, Kuo-Yao; Tsai, Yun-Fang; Huang, Chu-Ching; Yeh, Wen-Ling; Chang, Kai-Ping; Lin, Chen-Chun; Chen, Ching-Yen; Lee, Hsiu-Lan
2018-06-11
Smoking tobacco, drinking alcohol, and chewing betel quid are health-risk behaviors for several diseases, such as cancer, cardiovascular disease, and diabetes, with severe impacts on health. However, health care providers often have limited time to assess clients' behaviors regarding smoking tobacco, drinking alcohol, and chewing betel quid and intervene, if needed. The objective of this study was to develop a Web-based survey system; determine the rates of tobacco-smoking, alcohol-drinking, and betel-quid-chewing behaviors; and estimate the efficiency of the system (time to complete the survey). Patients and their family members or friends were recruited from gastrointestinal medical-surgical, otolaryngology, orthopedics, and rehabilitation clinics or wards at a medical center in northern Taiwan. Data for this descriptive, cross-sectional study were extracted from a large series of research studies. A Web-based survey system was developed using a Linux, Apache, MySQL, PHP stack solution. The Web survey was set up to include four questionnaires: the Chinese-version Fagerstrom Tolerance Questionnaire, the Chinese-version Alcohol Use Disorders Identification Test, the Betel Nut Dependency Scale, and a sociodemographic form with several chronic diseases. After the participants completed the survey, the system automatically calculated their score, categorized their risk level for each behavior, and immediately presented and explained their results. The system also recorded the time each participant took to complete the survey. Of 782 patient participants, 29.6% were addicted to nicotine, 13.3% were hazardous, harmful, or dependent alcohol drinkers, and 1.5% were dependent on chewing betel quid. Of 425 family or friend participants, 19.8% were addicted to nicotine, 5.6% were hazardous, harmful, or dependent alcohol drinkers, and 0.9% were dependent on chewing betel quid. Regarding the mean time to complete the survey, patients took 7.9 minutes (SD 3.0; range 3-20) and family members or friends took 7.7 minutes (SD 2.8; range 3-18). Most of the participants completed the survey within 5-10 minutes. The Web-based survey was easy to self-administer. Health care providers can use this Web-based survey system to save time in assessing these risk behaviors in clinical settings. All smokers had mild-to-severe nicotine addiction, and 5.6%-12.3% of patients and their family members or friends were at risk of alcohol dependence. Considering that these three behaviors, particularly in combination, dramatically increase the risk of esophageal cancer, appropriate and convenient interventions are necessary for preserving public health in Taiwan. ©Kuo-Yao Hsu, Yun-Fang Tsai, Chu-Ching Huang, Wen-Ling Yeh, Kai-Ping Chang, Chen-Chun Lin, Ching-Yen Chen, Hsiu-Lan Lee. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 11.06.2018.
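The automatic scoring step described above reduces to summing item scores and mapping the total onto a risk band. A minimal sketch follows, assuming illustrative bands rather than the validated thresholds of the Fagerstrom, AUDIT, or Betel Nut Dependency instruments.

```python
# Sum questionnaire item scores, then return the highest band reached.
def categorize(total, bands):
    """bands: ascending (minimum_score, label) pairs."""
    label = bands[0][1]
    for minimum, name in bands:
        if total >= minimum:
            label = name
    return label

risk_bands = [(0, "low risk"), (8, "hazardous"),
              (16, "harmful"), (20, "possible dependence")]
answers = [3, 2, 1, 0, 2, 1, 0, 1, 2, 0]   # ten invented item scores
total = sum(answers)
print(total, "->", categorize(total, risk_bands))   # -> 12 -> hazardous
```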
Collaborative Cyberinfrastructure: Crowdsourcing of Knowledge and Discoveries (Invited)
NASA Astrophysics Data System (ADS)
Gay, P.
2013-12-01
The design and implementation of programs to crowdsource science presents a unique set of challenges to system architects, programmers, and designers. In this presentation, one solution, CosmoQuest's Citizen Science Builder (CSB), will be discussed. CSB combines a clean user interface with a powerful back end to allow the quick design and deployment of citizen science sites that meet the needs of both the casual Joe Public and the detail-driven Albert Professional. This talk gives an overview of the software and discusses the results of usability and accuracy testing with both citizen and professional scientists. The software is designed to run on one or more Linux systems running the Apache web server with MySQL and PHP. The interface is HTML5 and relies on javascript and AJAX to provide a dynamic interactive experience. CosmoQuest currently runs on Amazon Web Services and uses VBulletin for logins. The public-facing aspects of CSB provide a uniform experience that allows citizen scientists to use a simple set of tools to achieve a diversity of tasks. This interface presents users with a large view window for data and a toolbar reminiscent of MS Word or Adobe Photoshop, with tools for drawing circles or segmented lines, flagging features from a dropdown menu, or marking specific objects with a set marker. The toolbar also allows users to select checkboxes describing the image as a whole. In addition to the viewer and toolbar, volunteers can also access tooltips, examples, and a video tutorial. The scientist interface for CSB gives the science team the ability to prioritize images, download results, and create comparison data to validate volunteer data, and also provides access to downloadable tools for doing data analysis. Both these interfaces are controlled through a simple set of config files, although some tasks require customization of the controlling javascript. These are used to point the software at YouTube tutorials, graphics, and the correct toolsets. The only part of the interface requiring direct CSB administrator attention is the uploading of new images/movies onto the server and the uploading of meta-data about the data into the database. This step must be customized for each unique data set. Initial research shows that professionals using the software to annotate images - marking craters on the moon to be specific - are as accurate with CSB as they are with their favourite professional software. It also shows that the results of members of the public are within error of the results of the professionals, with roughly the same level of error in each group and across many crater scales. Results of interviews with volunteers about their ease in moving between interfaces for different projects, and their response to the aesthetics of the site, will also be discussed during this presentation.
Urgent Virtual Machine Eviction with Enlightened Post-Copy
2015-12-01
memory is in use, almost all of which is by Memcached. MySQL: The VMs run MySQL 5.6, and the clients execute OLTPBenchmark [3] using the Twitter...workload with scale factor of 960. The VMs are each allocated 16 cores and 30 GB of memory, and MySQL is configured with a 16 GB buffer pool in memory. The...operation mix for 5 minutes as a warm-up. At the time of migration, MySQL uses approximately 17 GB of memory, and almost all of the 30 GB memory is
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-24
... Apache Tribe of the Fort Apache Reservation, Arizona; and the Yavapai-Apache Nation of the Camp Verde... of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation...
Petaminer: Using ROOT for efficient data storage in MySQL database
NASA Astrophysics Data System (ADS)
Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.
2010-04-01
High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to Petabytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
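The performance argument above, that column-oriented TTrees beat row-oriented TAG tables, comes down to how much data a selection must read. A minimal sketch with invented attributes: the query touches only the columns it names, instead of every full row.

```python
# Row store: every row must be fetched even if only a few attributes matter.
rows = [
    (1, 3.2, 17, "2009-06-01"),
    (2, 8.9, 4, "2009-06-01"),
    (3, 5.5, 21, "2009-06-02"),
    (4, 0.7, 2, "2009-06-02"),
]

# Column store: one array per attribute; unused columns stay untouched.
cols = {
    "event_id": [r[0] for r in rows],
    "pt":       [r[1] for r in rows],
    "ntracks":  [r[2] for r in rows],
}

# "SELECT event_id WHERE pt > 3 AND ntracks > 10" reads 3 of 4 columns.
selected = [eid for eid, pt, nt in zip(cols["event_id"], cols["pt"],
                                       cols["ntracks"])
            if pt > 3 and nt > 10]
print(selected)   # -> [1, 3]
```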
A Business Case Study of Open Source Software
2001-07-01
LinuxPPC, www.linuxppc.com; MandrakeSoft Linux-Mandrake, www.linux-mandrake.com/en/; CLE Project CLE, cle.linux.org.tw/CLE/e_index.shtml; Red Hat...en; Coyote Linux, www2.vortech.net/coyte/coyte.htm; MNIS, www.mnis.fr; Data-Portal, www.data-portal.com; Mr O's Linux Emporium, www.ouin.com; DLX Linux, www.wu... [Figure 11. Worldwide New Linux Shipments (Client and Server), 1998-1999, shipments in millions. Source: IDC, 2000.] 3.2.2 Market
DIBS: a repository of disordered binding sites mediating interactions with ordered proteins.
Schad, Eva; Fichó, Erzsébet; Pancsa, Rita; Simon, István; Dosztányi, Zsuzsanna; Mészáros, Bálint
2018-02-01
Intrinsically Disordered Proteins (IDPs) mediate crucial protein-protein interactions, most notably in signaling and regulation. As their importance is increasingly recognized, the detailed analyses of specific IDP interactions opened up new opportunities for therapeutic targeting. Yet, large scale information about IDP-mediated interactions in structural and functional details are lacking, hindering the understanding of the mechanisms underlying this distinct binding mode. Here, we present DIBS, the first comprehensive, curated collection of complexes between IDPs and ordered proteins. DIBS not only describes by far the highest number of cases, it also provides the dissociation constants of their interactions, as well as the description of potential post-translational modifications modulating the binding strength and linear motifs involved in the binding. Together with the wide range of structural and functional annotations, DIBS will provide the cornerstone for structural and functional studies of IDP complexes. DIBS is freely accessible at http://dibs.enzim.ttk.mta.hu/. The DIBS application is hosted by Apache web server and was implemented in PHP. To enrich querying features and to enhance backend performance a MySQL database was also created. dosztanyi@caesar.elte.hu or bmeszaros@caesar.elte.hu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
Wen, Can-Hong; Ou, Shao-Min; Guo, Xiao-Bo; Liu, Chen-Feng; Shen, Yan-Bo; You, Na; Cai, Wei-Hong; Shen, Wen-Jun; Wang, Xue-Qin; Tan, Hai-Zhu
2017-12-12
Breast cancer is a high-risk heterogeneous disease with myriad subtypes and complicated biological features. The Cancer Genome Atlas (TCGA) breast cancer database provides researchers with large-scale genome and clinical data via web portals and FTP services. Researchers can gain new insights into their related fields and evaluate experimental discoveries with TCGA. However, it is difficult for researchers with little database or bioinformatics experience to access and operate on TCGA because of its complex data formats and diverse files. For ease of use, we built the breast cancer (B-CAN) platform, which enables data customization, data visualization, and a private data center. The B-CAN platform runs on an Apache server and interacts with the backing MySQL database via PHP. Users can customize data based on their needs by combining tables from the original TCGA database and selecting variables from each table. The private data center is applicable to private data and two types of customized data. A key feature of B-CAN is that it provides single-table and multiple-table displays. Customized data with one barcode corresponding to many records, as well as processed customized data, are allowed in the multiple-table display. B-CAN is an intuitive and highly efficient data-sharing platform.
The Western Apache home: landscape management and failing ecosystems
Seth Pilsk; Jeanette C. Cassa
2005-01-01
The traditional Western Apache home lies largely within the Madrean Archipelago. The natural resources of the region make up the basis of the Apache home and culture. Profound landscape changes in the region have occurred over the past 150 years. A survey of traditional Western Apache place names documents many of these changes. An analysis of the history and Apache...
Maintaining Multimedia Data in a Geospatial Database
2012-09-01
at PostgreSQL and MySQL as spatial databases was offered. Given their results, as each database produced result sets from zero to 100,000, it was...excelled given multiple conditions. A different look at PostgreSQL and MySQL as spatial databases was offered. Given their results, as each database... MySQL ... B. BENCHMARKING DATA RETRIEVED FROM TABLE
Venkataraman, Ramesh; Gopichandran, Vijayaprasad; Ranganathan, Lakshmi; Rajagopal, Senthilkumar; Abraham, Babu K; Ramakrishnan, Nagarajan
2018-01-01
Background: Mortality prediction in the Intensive Care Unit (ICU) setting is complex, and there are several scoring systems utilized for this process. The Acute Physiology and Chronic Health Evaluation (APACHE) II has been the most widely used scoring system, although the more recent APACHE IV is considered an updated and advanced prediction model. However, these two systems may not give similar mortality predictions. Objectives: The aim of this study is to compare the mortality prediction ability of APACHE II and APACHE IV scoring systems among patients admitted to a tertiary care ICU. Methods: In this prospective longitudinal observational study, APACHE II and APACHE IV scores of ICU patients were computed using an online calculator. The outcome of the ICU admissions for all the patients was collected as discharged or deceased. The data were analyzed to compare the discrimination and calibration of the mortality prediction ability of the two scores. Results: Out of the 1670 patients' data analyzed, the area under the receiver operating characteristic curve of the APACHE II score was 0.906 (95% confidence interval [CI], 0.890–0.992), and that of the APACHE IV score was 0.881 (95% CI, 0.862–0.890). The mean predicted mortality rate of the study population as given by the APACHE II scoring system was 44.8 ± 26.7 and as given by the APACHE IV scoring system was 29.1 ± 28.5. The observed mortality rate was 22.4%. Conclusions: The APACHE II and IV scoring systems have comparable discrimination ability, but the calibration of APACHE IV seems to be better than that of APACHE II. There is a need to recalibrate the scales with weights derived from the Indian population. PMID:29910542
Venkataraman, Ramesh; Gopichandran, Vijayaprasad; Ranganathan, Lakshmi; Rajagopal, Senthilkumar; Abraham, Babu K; Ramakrishnan, Nagarajan
2018-05-01
Mortality prediction in the Intensive Care Unit (ICU) setting is complex, and there are several scoring systems utilized for this process. The Acute Physiology and Chronic Health Evaluation (APACHE) II has been the most widely used scoring system, although the more recent APACHE IV is considered an updated and advanced prediction model. However, these two systems may not give similar mortality predictions. The aim of this study is to compare the mortality prediction ability of APACHE II and APACHE IV scoring systems among patients admitted to a tertiary care ICU. In this prospective longitudinal observational study, APACHE II and APACHE IV scores of ICU patients were computed using an online calculator. The outcome of the ICU admissions for all the patients was collected as discharged or deceased. The data were analyzed to compare the discrimination and calibration of the mortality prediction ability of the two scores. Out of the 1670 patients' data analyzed, the area under the receiver operating characteristic curve of the APACHE II score was 0.906 (95% confidence interval [CI], 0.890-0.992), and that of the APACHE IV score was 0.881 (95% CI, 0.862-0.890). The mean predicted mortality rate of the study population as given by the APACHE II scoring system was 44.8 ± 26.7 and as given by the APACHE IV scoring system was 29.1 ± 28.5. The observed mortality rate was 22.4%. The APACHE II and IV scoring systems have comparable discrimination ability, but the calibration of APACHE IV seems to be better than that of APACHE II. There is a need to recalibrate the scales with weights derived from the Indian population.
FBIS: A regional DNA barcode archival & analysis system for Indian fishes
Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar
2012-01-01
DNA barcoding is a new tool for taxon recognition and classification of biological organisms, based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed a Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP on a Linux platform to (a) store and manage sequence acquisitions, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system in fish molecular taxonomy, phylogeny and genomics. Availability: The database is available for free at http://mail.nbfgr.res.in/fbis/ PMID:22715304
LCG MCDB—a knowledgebase of Monte-Carlo simulated events
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.
2008-02-01
In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, the modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make these sophisticated MC event samples available to various physics groups. All the data in MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summary. Program title: LCG Monte-Carlo Data Base. Catalogue identifier: ADZX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public Licence. No. of lines in distributed program, including test data, etc.: 30 129. No. of bytes in distributed program, including test data, etc.: 216 943. Distribution format: tar.gz. Programming language: Perl. Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb. Operating system: Scientific Linux CERN 3/4. RAM: 1 073 741 824 bytes (1 Gb). Classification: 9. External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod auth external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional). Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for investigations either in SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made publicly available and equipped with comprehensive documentation, cross checks of both the samples themselves and the physical models applied can be sped up. Some event samples require substantial computing resources to prepare, so central storage of the samples prevents wasting researcher time and computing resources on preparing the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web server (http://mcdb.cern.ch). All event samples are kept on tape at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit their own articles. Restrictions: The software is adapted to solve the problems described in the article; there are no additional restrictions. Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system.
Different external storages with large capacity can be used to keep the files. The WEB Content Management System provides all of the necessary interfaces for the authors of the files, end-users and administrators. Running time: Real time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.
NASA Astrophysics Data System (ADS)
Joyce, M.; Ramirez, P.; Boustani, M.; Mattmann, C. A.; Khudikyan, S.; McGibbney, L. J.; Whitehall, K. D.
2014-12-01
Apache Open Climate Workbench (OCW; https://climate.apache.org/) is a Top-Level Project at the Apache Software Foundation that aims to provide a suite of tools for performing climate science evaluations using model outputs from a multitude of different sources (ESGF, CORDEX, U.S. NCA, NARCCAP) with remote sensing data from NASA, NOAA, and other agencies. Apache OCW is the second NASA project to become a Top-Level Project at the Apache Software Foundation. It grew out of the Jet Propulsion Laboratory's (JPL) Regional Climate Model Evaluation System (RCMES) project, a collaboration between JPL and the University of California, Los Angeles' Joint Institute for Regional Earth System Science and Engineering (JIFRESSE). Apache OCW provides scientists and developers with tools for data manipulation, metrics for dataset comparisons, and a visualization suite. In addition to a powerful low-level API, Apache OCW also supports a web application for quick, browser-controlled evaluations, a command line application for local evaluations, and a virtual machine for isolated experimentation with minimal setup. This talk will look at the difficulties and successes of moving a closed community research project out into the wild world of open source. We'll explore the growing pains Apache OCW went through to become a Top-Level Project at the Apache Software Foundation as well as the benefits gained by opening up development to the broader climate and computer science communities.
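A minimal sketch of the kind of dataset-comparison metric OCW exposes, mean bias and RMSE between a model field and a reference field, written against plain Python lists rather than the actual OCW API; the temperature values are invented.

```python
# Two elementary evaluation metrics: mean bias and root-mean-square error.
from math import sqrt

def bias(model, ref):
    return sum(m - r for m, r in zip(model, ref)) / len(ref)

def rmse(model, ref):
    return sqrt(sum((m - r) ** 2 for m, r in zip(model, ref)) / len(ref))

model_t = [288.1, 290.4, 286.9, 291.2]   # model gridpoint temperatures (K)
obs_t = [287.6, 291.0, 287.3, 290.5]     # reference observations (K)
print(f"bias = {bias(model_t, obs_t):+.2f} K, "
      f"rmse = {rmse(model_t, obs_t):.2f} K")
```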
SeqLib: a C ++ API for rapid BAM manipulation, sequence alignment and sequence assembly
Wala, Jeremiah; Beroukhim, Rameen
2017-01-01
We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. Availability and Implementation: SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. Contact: jwala@broadinstitue.org; rameen@broadinstitute.org PMID:28011768
IPeak: An open source tool to combine results from multiple MS/MS search engines.
Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun
2015-09-01
Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, which is implemented in JAVA and can work on all three major operating system platforms: Windows, Linux/Unix and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output, and also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SeqLib: a C ++ API for rapid BAM manipulation, sequence alignment and sequence assembly.
Wala, Jeremiah; Beroukhim, Rameen
2017-03-01
We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. jwala@broadinstitue.org; rameen@broadinstitute.org. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
2004-03-01
with MySQL. This choice was made because MySQL is open source. Any significant database engine such as Oracle or MS-SQL or even MS Access can be used... Figure 6. The DoD vs. Commercial Life Cycle...necessarily be interested in SCADA network security 13. MySQL (Database server) – This station represents a typical data server for a web page
Methods to Secure Databases Against Vulnerabilities
2015-12-01
for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references...using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL, MongoDB, and Datastax (for Cassandra...tiers in Python and Java. Problem MySQL MongoDB Cassandra 1. Injection a. Tautologies Vulnerable Vulnerable Not Vulnerable b. Illegal query
Analysis and Development of a Web-Enabled Planning and Scheduling Database Application
2013-09-01
establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for...development, develop, design, process, re-engineering, reengineering, MySQL, structured query language, SQL, myPHPadmin. 15. NUMBER OF PAGES 107 16...relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of
NASA Astrophysics Data System (ADS)
Çay, M. Taşkin
Recently the ATLAS suite (Kurucz) was ported to the LINUX OS (Sbordone et al.). Users of the suite who are unfamiliar with LINUX need some basic information to use these versions. This paper is a quick overview of, and introduction to, the LINUX OS. The reader is highly encouraged to own a book on the LINUX OS for comprehensive use. Although the subjects and examples in this paper are for general use, they are intended to help with installing and running the ATLAS suite.
Sharma, Parichit; Mantri, Shrikant S
2014-01-01
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis.
Sharma, Parichit; Mantri, Shrikant S.
2014-01-01
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis. PMID:24979410
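The script-creation step that WImpiBLAST automates can be pictured as generating a Torque/PBS job file. A minimal sketch follows; the resource figures, file paths, and the blastall-style mpiblast flags are assumptions rather than the tool's actual output, so consult the mpiBLAST documentation for the real option names.

```python
# Emit a Torque/PBS job script that launches mpiBLAST under MPI.
def make_pbs_script(job, nodes, ppn, db, query, out):
    return f"""#!/bin/bash
#PBS -N {job}
#PBS -l nodes={nodes}:ppn={ppn}
#PBS -j oe
cd $PBS_O_WORKDIR
mpirun -np {nodes * ppn} mpiblast -p blastp -d {db} -i {query} -o {out}
"""

script = make_pbs_script("annot01", nodes=4, ppn=8,
                         db="nr", query="proteins.fasta", out="hits.txt")
with open("annot01.pbs", "w") as fh:   # submitted later with: qsub annot01.pbs
    fh.write(script)
print(script)
```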
Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries by Several Orders of Magnitude
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stockinger, Kurt; Rotem, Doron; Shoshani, Arie
2006-01-28
FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.
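The idea behind the speedup is easy to demonstrate. A minimal sketch of equality-encoded bitmap indexing, without FastBit's compression: one bitmap per distinct column value, with boolean predicates answered by bitwise AND/OR; the column values are invented, not actual Enron fields.

```python
# Build one bitmap (a Python int used as a bit array) per distinct value.
def build_index(column):
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

sender = ["lay-k", "skilling-j", "lay-k", "fastow-a"]
folder = ["inbox", "sent", "sent", "inbox"]
by_sender, by_folder = build_index(sender), build_index(folder)

# WHERE sender = 'lay-k' AND folder = 'sent' -> one bitwise AND.
hits = by_sender["lay-k"] & by_folder["sent"]
rows = [r for r in range(len(sender)) if hits >> r & 1]
print(rows)   # -> [2]
```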
Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model
2014-04-01
time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest
Learning Asset Technology Integration Support Tool Design Document
2010-05-11
language known as Hypertext Preprocessor (PHP) and by MySQL – a relational database management system that can also be used for content management. It...Requirements The LATIST tool will be implemented utilizing a WordPress platform with MySQL as the database. Also the LATIST system must effectively work...MySQL. When designing the LATIST system there are several considerations which must be accounted for in the working prototype. These include: • DAU
Geologic influences on Apache trout habitat in the White Mountains of Arizona
Jonathan W. Long; Alvin L. Medina
2006-01-01
Geologic variation has important influences on habitat quality for species of concern, but it can be difficult to evaluate due to subtle variations, complex terminology, and inadequate maps. To better understand habitat of the Apache trout (Onchorhynchus apache or O. gilae apache Miller), a threatened endemic species of the White...
Curriculum Program for the Apache Language.
ERIC Educational Resources Information Center
Whiteriver Public Schools, AZ.
These curriculum materials from the Whiteriver (Arizona) Elementary School consist of--(1) an English-Apache word list of some of the most commonly used words in Apache, 29p.; (2) a list of enclitics with approximate or suggested meanings and illustrations of usage, 5 p.; (3) an illustrated chart of Apache vowels and consonants, various written…
Multi-Resolution Playback of Network Trace Files
2015-06-01
a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries. 4.4 Flow Generator Chapter 3
NASA Astrophysics Data System (ADS)
Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi
2015-08-01
AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of raw observational data covering the whole electromagnetic spectrum, and data processing that demands professional skills beyond the abilities of an individual or even a small team. AIRE-Linux, a specially designed Linux distribution delivered to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA and HEASoft, will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use exactly what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and computing-intensive tasks for astronomers. Currently, a beta version of AIRE-Linux is ready for download and testing.
The Jicarilla Apaches. A Study in Survival.
ERIC Educational Resources Information Center
Gunnerson, Dolores A.
Focusing on the ultimate fate of the Cuartelejo and/or Paloma Apaches known in archaeological terms as the Dismal River people of the Central Plains, this book is divided into 2 parts. The early Apache (1525-1700) and the Jicarilla Apache (1700-1800) tribes are studied in terms of their: persistent cultural survival, social/political adaptability,…
Establishment of Kawasaki disease database based on metadata standard.
Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk
2016-07-01
Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed through the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical data records and genomic samples of 1283 patients from 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets totaling 482 samples and whole-exome sequencing data of 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. © The Author(s) 2016. Published by Oxford University Press.
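Recording clinical data as common data elements, as described above, makes records mechanically checkable. A minimal sketch in the spirit of ISO/IEC 11179 follows; the element names and constraints are invented, not the actual KDD dictionary.

```python
# Each common data element carries a type plus permissible values or a range.
CDES = {
    "fever_days": {"type": int, "range": (0, 60)},
    "ivig_response": {"type": str, "values": {"responsive", "non-responsive"}},
    "coronary_aneurysm": {"type": bool},
}

def validate(record):
    errors = []
    for name, rule in CDES.items():
        if name not in record:
            errors.append(f"missing element: {name}")
            continue
        value = record[name]
        if not isinstance(value, rule["type"]):
            errors.append(f"{name}: expected {rule['type'].__name__}")
        elif "range" in rule and not rule["range"][0] <= value <= rule["range"][1]:
            errors.append(f"{name}: out of range")
        elif "values" in rule and value not in rule["values"]:
            errors.append(f"{name}: not a permissible value")
    return errors

print(validate({"fever_days": 6, "ivig_response": "responsive",
                "coronary_aneurysm": False}))   # -> []
```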
2013-01-01
Background: Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods to assist professional end users in the field of knowledge discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, there are so many diverse methods and methodologies available that, for biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods, it can be very difficult to select the most appropriate method for their particular research problem. Results: A web application, called KNODWAT (KNOwledge Discovery With Advanced Techniques), has been developed using Java on the Spring Framework 3.1 and following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as MySQL Server. For frontend functionality and styling, Twitter Bootstrap was used, as well as jQuery for interactive user interface operations. Conclusions: The framework presented is user-centric, highly extensible and flexible. Since it enables methods for testing using existing data to assess suitability and performance, it is especially suitable for inexperienced biomedical researchers who are new to the field of knowledge discovery and data mining. For testing purposes two algorithms, CART and C4.5, were implemented using the WEKA data mining framework. PMID:23763826
Holzinger, Andreas; Zupan, Mario
2013-06-13
Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods to assist professional end users in the field of knowledge discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, there are so many diverse methods and methodologies available that, for biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods, it can be very difficult to select the most appropriate method for their particular research problem. A web application, called KNODWAT (KNOwledge Discovery With Advanced Techniques), has been developed using Java on the Spring Framework 3.1 and following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as MySQL Server. For frontend functionality and styling, Twitter Bootstrap was used, as well as jQuery for interactive user interface operations. The framework presented is user-centric, highly extensible and flexible. Since it enables methods for testing using existing data to assess suitability and performance, it is especially suitable for inexperienced biomedical researchers who are new to the field of knowledge discovery and data mining. For testing purposes two algorithms, CART and C4.5, were implemented using the WEKA data mining framework.
footprintDB: a database of transcription factors with annotated cis elements and binding interfaces.
Sebastian, Alvaro; Contreras-Moreira, Bruno
2014-01-15
Traditional and high-throughput techniques for determining transcription factor (TF) binding specificities are generating large volumes of data of uneven quality, which are scattered across individual databases. FootprintDB integrates some of the most comprehensive freely available libraries of curated DNA binding sites and systematically annotates the binding interfaces of the corresponding TFs. The first release contains 2422 unique TF sequences, 10 112 DNA binding sites and 3662 DNA motifs. A survey of the included data sources, organisms and TF families was performed together with the proprietary database TRANSFAC, finding that footprintDB has a similar coverage of multicellular organisms, while also containing bacterial regulatory data. A search engine has been designed that drives the prediction of DNA motifs for input TFs, or conversely of TF sequences that might recognize input regulatory sequences, by comparison with database entries. Such predictions can also be extended to a single proteome chosen by the user, and results are ranked in terms of interface similarity. Benchmark experiments with bacterial, plant and human data were performed to measure the predictive power of footprintDB searches, which were able to correctly recover 10, 55 and 90% of the tested sequences, respectively. Correctly predicted TFs had a higher interface similarity than the average, confirming its diagnostic value. Web site implemented in PHP, Perl, MySQL and Apache. Freely available from http://floresta.eead.csic.es/footprintdb.
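The binding-site predictions that footprintDB serves rest on scoring sequences against position weight matrices. A minimal sketch of that core operation, sliding a PWM along a DNA sequence and reporting the best window; the 4-position matrix is invented, not a real footprintDB or JASPAR motif.

```python
# Log-odds PWM scan with a +1 pseudocount against a uniform background.
from math import log2

counts = {  # position frequency matrix: base -> counts at 4 positions
    "A": [8, 0, 1, 7], "C": [0, 9, 1, 1],
    "G": [1, 0, 7, 1], "T": [1, 1, 1, 1],
}
total = sum(counts[b][0] for b in "ACGT")   # sequences behind the matrix

def pwm_score(site):
    score = 0.0
    for i, base in enumerate(site):
        freq = (counts[base][i] + 1) / (total + 4)
        score += log2(freq / 0.25)
    return score

seq = "TTACGATTACGT"
best = max(range(len(seq) - 3), key=lambda i: pwm_score(seq[i:i + 4]))
print(seq[best:best + 4], round(pwm_score(seq[best:best + 4]), 2))  # -> ACGA
```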
Wen, Can-Hong; Ou, Shao-Min; Guo, Xiao-Bo; Liu, Chen-Feng; Shen, Yan-Bo; You, Na; Cai, Wei-Hong; Shen, Wen-Jun; Wang, Xue-Qin; Tan, Hai-Zhu
2017-01-01
Breast cancer is a high-risk heterogeneous disease with myriad subtypes and complicated biological features. The Cancer Genome Atlas (TCGA) breast cancer database provides researchers with large-scale genome and clinical data via web portals and FTP services. Researchers can gain new insights into their related fields and evaluate experimental discoveries with TCGA. However, it is difficult for researchers with little database or bioinformatics experience to access and operate on TCGA because of its complex data formats and diverse files. For ease of use, we built the breast cancer (B-CAN) platform, which enables data customization, data visualization, and a private data center. The B-CAN platform runs on an Apache server and interacts with the backing MySQL database via PHP. Users can customize data based on their needs by combining tables from the original TCGA database and selecting variables from each table. The private data center is applicable to private data and two types of customized data. A key feature of B-CAN is that it provides single-table and multiple-table displays. Customized data with one barcode corresponding to many records, as well as processed customized data, are allowed in the multiple-table display. B-CAN is an intuitive and highly efficient data-sharing platform. PMID:29312567
Spectrum Savings from High Performance Recording and Playback Onboard the Test Article
2013-02-20
execute within a Windows 7 environment, and data is recorded on SSDs. The underlying database is implemented using MySQL. Figure 1 illustrates the... MySQL database. This is effectively the time at which the recorded data are available for retransmission. CPU and Memory utilization were collected...17.7% MySQL avg. 3.9% EQDR Total avg. 21.6% Table 1 CPU Utilization with 260 Mbits/sec Load The difference between the total System CPU (27.8
Flexible Decision Support in Device-Saturated Environments
2003-10-01
also output tuples to a remote MySQL or Postgres database. 3.3 GUI The GUI allows the user to pose queries using SQL and to display query...DatabaseConnection.java – handles connections to an external database (such as MySQL or Postgres). • Debug.java – contains the code for printing out Debug messages...also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival and the GUI can query those results
Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher
NASA Technical Reports Server (NTRS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.
2017-01-01
Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogenous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software to allow them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use and implement software tool that will allow researchers to create and publish database templates and related data. The end product will facilitate both human-readable interfaces (web-based with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com) which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer where individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that will set a compatibility standard for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research. Research: As part of this effort, a number of ongoing pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine what standards are viable for sharing these types of data, from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.tdwg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences, where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.
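The master/derived template idea lends itself to a simple relational sketch: a derived database presents the shared master fields first (the compatibility standard) and then its own extensions. All identifiers below are invented for illustration; ODR's real Symfony-based schema differs.

```php
<?php
// Sketch of master templates plus per-researcher custom fields.
$pdo = new PDO('mysql:host=localhost;dbname=odr', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$pdo->exec('CREATE TABLE IF NOT EXISTS master_field (
                id INT AUTO_INCREMENT PRIMARY KEY,
                template VARCHAR(64), name VARCHAR(64), datatype VARCHAR(16))');
$pdo->exec('CREATE TABLE IF NOT EXISTS custom_field (
                id INT AUTO_INCREMENT PRIMARY KEY,
                derived_db VARCHAR(64), name VARCHAR(64), datatype VARCHAR(16))');

// List the shared (master) fields first, then the local extensions;
// the shared block is what keeps derivative databases inter-compatible.
$stmt = $pdo->prepare(
    "SELECT name, datatype, 'master' AS origin FROM master_field WHERE template = :t
     UNION ALL
     SELECT name, datatype, 'custom' FROM custom_field WHERE derived_db = :d");
$stmt->execute([':t' => 'AHED', ':d' => 'my_minerals']);

foreach ($stmt as $f) {
    printf("%-20s %-10s (%s)\n", $f['name'], $f['datatype'], $f['origin']);
}
```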
NASA Astrophysics Data System (ADS)
Thubaasini, P.; Rusnida, R.; Rohani, S. M.
This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It offers some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux as used to build the virtual architectural walkthrough, and to develop a proof of concept based on the results obtained through this project. In addition, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between Windows- and Linux-based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux. The performance of Linux is measured against three main criteria: frame rate, image quality, and mouse motion.
NASA Astrophysics Data System (ADS)
Sonoda, Jun; Yamaki, Kota
We have developed an automatic Live Linux rebuilding system for science and engineering education, such as information processing education and numerical analysis. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install/uninstall packages and to enable/disable init daemons. When rebuilding a Live Linux CD with our system, the number of required operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing class at our college. In a questionnaire survey of the 43 students who used the Live Linux CD, about 80 percent found it useful. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.
Gajic, Ognjen; Afessa, Bekele
2012-01-01
Background: There are few comparisons among the most recent versions of the major adult ICU prognostic systems (APACHE [Acute Physiology and Chronic Health Evaluation] IV, Simplified Acute Physiology Score [SAPS] 3, Mortality Probability Model [MPM]0III). Only MPM0III includes resuscitation status as a predictor. Methods: We assessed the discrimination, calibration, and overall performance of the models in 2,596 patients in three ICUs at our tertiary referral center in 2006. For APACHE and SAPS, the analyses were repeated with and without inclusion of resuscitation status as a predictor variable. Results: Of the 2,596 patients studied, 283 (10.9%) died before hospital discharge. The areas under the curve (95% CI) of the models for prediction of hospital mortality were 0.868 (0.854-0.880), 0.861 (0.847-0.874), 0.801 (0.785-0.816), and 0.721 (0.704-0.738) for APACHE III, APACHE IV, SAPS 3, and MPM0III, respectively. The Hosmer-Lemeshow statistics for the models were 33.7, 31.0, 36.6, and 21.8 for APACHE III, APACHE IV, SAPS 3, and MPM0III, respectively. Each of the Hosmer-Lemeshow statistics generated P values < .05, indicating poor calibration. Brier scores for the models were 0.0771, 0.0749, 0.0890, and 0.0932, respectively. There were no significant differences between the discriminative ability or the calibration of APACHE or SAPS with and without “do not resuscitate” status. Conclusions: APACHE III and IV had similar discriminatory capability and both were better than SAPS 3, which was better than MPM0III. The calibrations of the models studied were poor. Overall, models with more predictor variables performed better than those with fewer. The addition of resuscitation status did not improve APACHE III or IV or SAPS 3 prediction. PMID:22499827
Su, Yingying; Wang, Miao; Liu, Yifei; Ye, Hong; Gao, Daiquan; Chen, Weibi; Zhang, Yunzhou; Zhang, Yan
2014-12-01
This study aimed to construct and assess a module-modified acute physiology and chronic health evaluation (MM-APACHE) II model, based on the disease-category-modified acute physiology and chronic health evaluation (DCM-APACHE) II model, for more accurate mortality prediction in neuro-intensive care units (N-ICUs). In total, 1686 patients entered this prospective study. Acute physiology and chronic health evaluation (APACHE) II scores of all patients on admission, as well as worst 24-, 48-, and 72-hour scores, were obtained. Neurological diagnosis on admission was classified into five categories: cerebral infarction, intracranial hemorrhage, neurological infection, spinal neuromuscular (SNM) disease, and other neurological diseases. The APACHE II scores of cerebral infarction, intracranial hemorrhage, and neurological infection patients were used for building the MM-APACHE II model. There were 1386 cases of cerebral infarction, intracranial hemorrhage, and neurological infection. Logistic regression showed that the 72-hour APACHE II score (Wald = 173.04, P < 0.001) and disease classification (Wald = 12.51, P = 0.02) were important predictors of hospital mortality. The MM-APACHE II model, built on the 72-hour APACHE II score and disease category, had good discrimination (area under the receiver operating characteristic curve (AU-ROC) = 0.830) and calibration (χ2 = 12.518, P = 0.20), and performed better than the Knaus APACHE II model (AU-ROC = 0.778). The APACHE II severity-of-disease classification system cannot provide accurate prognoses for all kinds of diseases. An MM-APACHE II model can accurately predict hospital mortality for cerebral infarction, intracranial hemorrhage, and neurological infection patients in the N-ICU.
PandASoft: Open Source Instructional Laboratory Administration Software
NASA Astrophysics Data System (ADS)
Gay, P. L.; Braasch, P.; Synkova, Y. N.
2004-12-01
PandASoft (Physics and Astronomy Software) is software for organizing and archiving a department's teaching resources and materials. An easy-to-use, secure interface allows faculty and staff to explore equipment inventories, see what laboratory experiments are available, find handouts, and track what has been used in different classes in the past. Divided into five sections: classes, equipment, laboratories, links, and media, its database cross-links materials, allowing users to see what labs are used with which classes, what media and equipment are used with which labs, or simply what equipment is lurking in which room. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. It is designed to allow users to easily customize the headers, footers and colors to blend with existing sites - no programming experience required. While initial data input is labor intensive, the system will save time later by allowing users to quickly answer questions related to what is in inventory, where it is located, how many are in stock, and where online they can learn more. It will also provide a central location for storing PDFs of handouts, and links to applets and cool sites at other universities. PandASoft comes with over 100 links to online resources pre-installed. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing computers and resources for this project.
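The cross-linking described above maps naturally onto join tables. The following PHP/MySQL sketch answers one of the questions the abstract mentions (which labs a class uses, and which equipment each lab needs); the schema is a guess for illustration, not PandASoft's actual tables.

```php
<?php
// Walk the class -> lab -> equipment cross-links for one course.
$pdo = new PDO('mysql:host=localhost;dbname=pandasoft', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$stmt = $pdo->prepare(
    'SELECT l.title AS lab, e.name AS equipment, e.room, e.quantity
       FROM class c
       JOIN class_lab cl ON cl.class_id = c.id      -- labs used by the class
       JOIN lab l        ON l.id = cl.lab_id
       JOIN lab_equip le ON le.lab_id = l.id        -- equipment used by each lab
       JOIN equipment e  ON e.id = le.equip_id
      WHERE c.code = :code');
$stmt->execute([':code' => 'PHYS-101']);

foreach ($stmt as $row) {
    printf("%s: %s (room %s, qty %d)\n",
           $row['lab'], $row['equipment'], $row['room'], $row['quantity']);
}
```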
Preparing a scientific manuscript in Linux: Today's possibilities and limitations.
Tchantchaleishvili, Vakhtang; Schmitto, Jan D
2011-10-22
An increasing number of scientists are enthusiastic about using free, open source software for their research. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow the preparation of a submission-ready scientific manuscript without the need for proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps in the preparation of a publication-ready scientific manuscript in a Linux-based operating system and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.
TICK: Transparent Incremental Checkpointing at Kernel Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrini, Fabrizio; Gioiosa, Roberto
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored, without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.
Potential performance bottleneck in Linux TCP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wenji; Crawford, Matt (Fermilab)
2006-12-01
TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.
Development of Innovative Design Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.S.; Park, C.O.
2004-07-01
Nuclear design analysis requires time-consuming and error-prone model-input preparation, code runs, output analysis, and quality assurance processes. To reduce human effort and improve design quality and productivity, the Innovative Design Processor (IDP) is being developed. The two basic principles of IDP are document-oriented design and web-based design. In document-oriented design, the designer writes a design document called an active document and feeds it to a special program, which automatically produces the final document with the complete analysis, tables, and plots. Active documents can be written with ordinary HTML editors or created automatically on the web, which is another framework of IDP. Using a proper mix of server-side and client-side programming under the LAMP (Linux/Apache/MySQL/PHP) environment, the design process on the web is modeled in a design-wizard style so that even a novice designer can produce the design document easily. This automation using the IDP is now being implemented for all reload designs of Korea Standard Nuclear Power Plant (KSNP) type PWRs. The introduction of this process will allow a large reduction in all KSNP reload design efforts and provide a platform for design and R&D tasks of KNFC. (authors)
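A toy version of the "active document" idea, to make the mechanism concrete: the designer writes HTML with placeholders, and a processor substitutes each one with a computed result so the final document emerges automatically. The tag syntax and stubbed values below are invented; the real IDP pipeline is not public.

```php
<?php
// An "active document": ordinary HTML plus {{name}} placeholders.
$active = '<h2>Reload Design</h2>
<p>Cycle length: {{cycle_length}} EFPD</p>
<p>Peak pin power: {{peak_power}}</p>';

// In the real system these values would come from design-code runs;
// here they are stubbed for illustration.
$results = [
    'cycle_length' => 495,
    'peak_power'   => 1.42,
];

// Replace every {{name}} placeholder with its computed value.
$final = preg_replace_callback('/\{\{(\w+)\}\}/',
    function ($m) use ($results) {
        return $results[$m[1]] ?? $m[0];   // leave unknown tags untouched
    },
    $active);

echo $final, "\n";
```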
ProteoWizard: open source software for rapid proteomics tools development.
Kessner, Darren; Chambers, Matt; Burke, Robert; Agus, David; Mallick, Parag
2008-11-01
The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library contains readers and writers of the mzML data format, which has been written using modern C++ techniques and design principles and supports a variety of platforms with native compilers. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge, at http://proteowizard.sourceforge.net. This website also provides code examples, and documentation. It is our hope the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged.
77 FR 51475 - Safety Zone; Apache Pier Labor Day Fireworks; Myrtle Beach, SC
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-24
...-AA00 Safety Zone; Apache Pier Labor Day Fireworks; Myrtle Beach, SC AGENCY: Coast Guard, DHS. ACTION... Atlantic Ocean in the vicinity of Apache Pier in Myrtle Beach, SC, during the Labor Day fireworks...
Preparing a scientific manuscript in Linux: Today's possibilities and limitations
2011-01-01
Background An increasing number of scientists are enthusiastic about using free, open source software for their research. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow the preparation of a submission-ready scientific manuscript without the need for proprietary software. Findings Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps in the preparation of a publication-ready scientific manuscript in a Linux-based operating system and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux. PMID:22018246
The Linux operating system: An introduction
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 3 2010-07-01 2010-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 3 2013-07-01 2013-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 3 2014-07-01 2014-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 3 2011-07-01 2011-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
40 CFR 52.150 - Yavapai-Apache Reservation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 3 2012-07-01 2012-07-01 false Yavapai-Apache Reservation. 52.150 Section 52.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Arizona § 52.150 Yavapai-Apache Reservation. (a...
Abstract of talk for Silicon Valley Linux Users Group
NASA Technical Reports Server (NTRS)
Clanton, Sam
2003-01-01
The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; and work in the Neuroengineering Lab with code IC, including an introduction to the extension-of-the-human-senses project, advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people wearing Neuroscan caps, and the progress that has been made and how Linux has helped.
A Reusable Framework for Regional Climate Model Evaluation
NASA Astrophysics Data System (ADS)
Hart, A. F.; Goodale, C. E.; Mattmann, C. A.; Lean, P.; Kim, J.; Zimdars, P.; Waliser, D. E.; Crichton, D. J.
2011-12-01
Climate observations are currently obtained through a diverse network of sensors and platforms that include space-based observatories, airborne and seaborne platforms, and distributed, networked, ground-based instruments. These global observational measurements are critical inputs to the efforts of the climate modeling community and can provide a corpus of data for use in analysis and validation of climate models. The Regional Climate Model Evaluation System (RCMES) is an effort currently being undertaken to address the challenges of integrating this vast array of observational climate data into a coherent resource suitable for performing model analysis at the regional level. Developed through a collaboration between the NASA Jet Propulsion Laboratory (JPL) and the UCLA Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), the RCMES uses existing open source technologies (MySQL, Apache Hadoop, and Apache OODT), to construct a scalable, parametric, geospatial data store that incorporates decades of observational data from a variety of NASA Earth science missions, as well as other sources into a consistently annotated, highly available scientific resource. By eliminating arbitrary partitions in the data (individual file boundaries, differing file formats, etc), and instead treating each individual observational measurement as a unique, geospatially referenced data point, the RCMES is capable of transforming large, heterogeneous collections of disparate observational data into a unified resource suitable for comparison to climate model output. This facility is further enhanced by the availability of a model evaluation toolkit which consists of a set of Python libraries, a RESTful web service layer, and a browser-based graphical user interface that allows for orchestration of model-to-data comparisons by composing them visually through web forms. This combination of tools and interfaces dramatically simplifies the process of interacting with and utilizing large volumes of observational data for model evaluation research. We feel that the RCMES is particularly appealing in that it represents a principled, reusable architectural approach rather than a one-off technological implementation. In fact, early RCMES prototypes have already utilized a variety of implementation technologies in an effort to address different performance and scalability concerns. This has been greatly facilitated by the fact that, at the architectural level, the RCMES is fundamentally domain agnostic. Strictly separating the data model from the implementation has enabled us to create a reusable architecture that we believe can be modified and configured to suit the demands of researchers in other domains.
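The abstract above mentions a RESTful web service layer in front of the data store. As a hedged illustration of what a client-side call against such a service might look like, the sketch below issues a bounding-box query with PHP's cURL bindings; the endpoint URL and its query parameters are hypothetical, not RCMES's documented API.

```php
<?php
// Request observation points for one parameter inside a regional box.
$query = http_build_query([
    'datasetId' => 'TRMM',          // observational dataset (illustrative)
    'parameter' => 'precipitation',
    'latMin' => 30, 'latMax' => 45, // regional bounding box
    'lonMin' => -125, 'lonMax' => -110,
]);
$url = 'http://rcmes.example.org/api/points?' . $query;

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return body as a string
$body = curl_exec($ch);
curl_close($ch);

// Each returned point is a geospatially referenced measurement.
$points = json_decode($body, true);
printf("retrieved %d observation points\n", count($points ?? []));
```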
Biology and distribution of Lutzomyia apache as it relates to VSV
USDA-ARS?s Scientific Manuscript database
Phlebotomine sand flies are vectors of bacteria, parasites, and viruses. Lutzomyia apache was incriminated as a vector of vesicular stomatitis viruses(VSV)due to overlapping ranges of the sand fly and outbreaks of VSV. I report on newly discovered populations of L. apache in Wyoming from Albany and ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... DEPARTMENT OF THE INTERIOR National Park Service [NPS-WASO-NAGPRA-12186; 2200-1100-665] Notice of Inventory Completion: U.S. Department of Agriculture, Forest Service, Apache-Sitgreaves National Forests.... ACTION: Notice. SUMMARY: The U.S. Department of Agriculture (USDA), Forest Service, Apache-Sitgreaves...
75 FR 57290 - Notice of Inventory Completion: University of Colorado Museum, Boulder, CO
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-20
...; Winnemucca Indian Colony of Nevada; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona... of Oklahoma; Susanville Indian Rancheria, California; and Yavapai-Apache Nation of the Camp Verde...; Winnemucca Indian Colony of Nevada; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona...
KinderApache Song and Dance Project.
ERIC Educational Resources Information Center
Shanklin, M. Trevor; Paciotto, Carla; Prater, Greg
This paper describes activities and evaluation of the KinderApache Song and Dance Project, piloted in a kindergarten class in Cedar Creek (Arizona) on the White Mountain Apache Reservation. Introducing Native-language song and dance in kindergarten could help foster a sense of community and cultural pride and greater awareness of traditional…
75 FR 68607 - BP Canada Energy Marketing Corp. Apache Corporation; Notice for Temporary Waivers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RP11-1479-000] BP Canada Energy Marketing Corp. Apache Corporation; Notice for Temporary Waivers November 1, 2010. Take notice that on October 29, 2010, BP Canada Energy Marketing Corp. and Apache Corporation filed with the...
Escape from Albuquerque: An Apache Memorate.
ERIC Educational Resources Information Center
Greenfeld, Philip J.
2001-01-01
Clarence Hawkins, a White Mountain Apache, escaped from the Albuquerque Indian School around 1920. His 300-mile trip home, made with two other boys, exemplifies the reaction of many Indian youths to the American government's plans for cultural assimilation. The tale is told in the form of traditional Apache narrative. (TD)
CBD: a biomarker database for colorectal cancer.
Zhang, Xueli; Sun, Xiao-Feng; Cao, Yang; Ye, Benchen; Peng, Qiliang; Liu, Xingyun; Shen, Bairong; Zhang, Hong
2018-01-01
Colorectal cancer (CRC) biomarker database (CBD) was established based on 870 identified CRC biomarkers and their relevant information from 1115 original articles in PubMed published from 1986 to 2017. In this version of the CBD, CRC biomarker data were collected, sorted, displayed and analysed. With its credible contents, the CBD is a powerful and time-saving tool that provides more comprehensive and accurate information for further CRC biomarker research. The CBD was constructed under a MySQL server. HTML, PHP and JavaScript languages were used to implement the web interface, and Apache was selected as the HTTP server. All of these web operations were implemented under the Windows system. The CBD provides users with information on multiple individual biomarkers, categorized by the biological category, source and application of the biomarkers; the experimental methods, results, authors and publication resources; and the research region, average age of the cohort, gender, race, number of tumours, tumour location and stage. We only collect data from articles with clear and credible results proving that the biomarkers are useful in the diagnosis, treatment or prognosis of CRC. The CBD also provides a professional platform for researchers interested in CRC research to communicate, exchange research ideas and design further high-quality studies in CRC. They can submit their new findings to our database via the submission page and communicate with us in the CBD. Database URL: http://sysbio.suda.edu.cn/CBD/.
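A minimal sketch of the kind of filtered lookup the categorization above supports, assuming a plausible (hypothetical) `biomarker` table rather than the database's real layout:

```php
<?php
// Find prognosis biomarkers for a given tumour location.
$pdo = new PDO('mysql:host=localhost;dbname=cbd', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$stmt = $pdo->prepare(
    'SELECT name, category, source, application, pmid
       FROM biomarker
      WHERE application = :app AND tumour_location = :loc
      ORDER BY name');
$stmt->execute([':app' => 'prognosis', ':loc' => 'colon']);

foreach ($stmt as $b) {
    printf("%s [%s, %s] PMID:%s\n",
           $b['name'], $b['category'], $b['source'], $b['pmid']);
}
```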
Earthquake forecasting studies using radon time series data in Taiwan
NASA Astrophysics Data System (ADS)
Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong
2017-04-01
For a few decades, a growing number of studies have shown the usefulness of data in the field of seismogeochemistry, interpreted as geochemical precursory signals for impending earthquakes, and radon is identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous time series radon data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve the data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtration of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open source programming language, to carry out statistical computation on the data. To integrate the data with our working procedure, we use the popular open source web application stack AMP (Apache, MySQL, and PHP) to create a website that effectively presents and helps us manage the real-time database.
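The ingestion step of such an AMP (Apache, MySQL, PHP) real-time database might look like the sketch below: one station posts one timestamped reading together with an environmental covariate kept for later filtering. Table and field names are invented for illustration.

```php
<?php
// Store one radon reading from a monitoring station along a fault.
$pdo = new PDO('mysql:host=localhost;dbname=radon', 'station', 'secret',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$stmt = $pdo->prepare(
    'INSERT INTO reading (station_id, observed_at, radon_bq_m3, temperature_c)
     VALUES (:station, :ts, :radon, :temp)');

$stmt->execute([
    ':station' => 'HC-01',            // hypothetical station identifier
    ':ts'      => date('Y-m-d H:i:s'),
    ':radon'   => 5230.0,             // soil-gas radon concentration
    ':temp'    => 18.4,               // environmental covariate for filtering
]);
echo "stored reading for station HC-01\n";
```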
Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke
2013-01-15
High-accuracy mass values detected by high-resolution mass spectrometry enable prediction of elemental compositions and are thus used for metabolite annotation in metabolomic studies. Here, we report an application of a relational database that significantly improves the speed of elemental composition prediction. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates the redundant evaluations of the same formula that occur in repeated calculations with other tools. Compared with HR2, one of the fastest tools available, our database search times were at least 109 times shorter, and with a solid-state drive (SSD) they were 488 times shorter at 5 ppm mass tolerance and 1833 times shorter at 0.1 ppm. Even when the HR2 search was performed with 8 threads on a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD, respectively. These improvements were even greater on a low-spec Windows XP PC. We constructed a web service, 'MFSearcher', to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. sakurai@kazusa.or.jp. Supplementary data are available at Bioinformatics online.
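The core idea, a lookup of pre-calculated masses inside a ppm window instead of enumerating formulas at query time, reduces to a range query on an indexed column. The sketch below shows the ppm-to-Dalton arithmetic and the query shape; the schema is illustrative, not MFSearcher's.

```php
<?php
// Search pre-computed monoisotopic masses within a ppm tolerance.
$pdo = new PDO('mysql:host=localhost;dbname=mf', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$mass  = 180.06339;            // measured accurate mass (e.g. glucose)
$ppm   = 5.0;                  // mass tolerance in parts per million
$delta = $mass * $ppm / 1e6;   // convert ppm tolerance to Daltons

// BETWEEN on an indexed mass column is what makes the lookup fast:
// the work of generating candidate formulas was done ahead of time.
$stmt = $pdo->prepare(
    'SELECT formula, exact_mass FROM composition
      WHERE exact_mass BETWEEN :lo AND :hi
      ORDER BY ABS(exact_mass - :m)');
$stmt->execute([':lo' => $mass - $delta, ':hi' => $mass + $delta, ':m' => $mass]);

foreach ($stmt as $row) {
    printf("%-12s %.5f\n", $row['formula'], $row['exact_mass']);
}
```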
Deserno, Thomas M; Haak, Daniel; Brandenburg, Vincent; Deserno, Verena; Classen, Christoph; Specht, Paula
2014-12-01
Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alpha-numerical and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. In line with the study management tool already operating successfully at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen, with Hibernate and Gilead connecting a MySQL database management system. Image and signal data integration and processing are supported by the Apache Commons FileUpload library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated for the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology, as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open source software, it may assist improved data collection and analysis of rare diseases in the near future.
MFIB: a repository of protein complexes with mutual folding induced by binding.
Fichó, Erzsébet; Reményi, István; Simon, István; Mészáros, Bálint
2017-11-15
It is commonplace that intrinsically disordered proteins (IDPs) are involved in crucial interactions in the living cell. However, the study of protein complexes formed exclusively by IDPs is hindered by the lack of data, and such analyses remain sporadic. Systematic studies have benefited other types of protein-protein interactions, paving a way from basic science to therapeutics; yet these efforts require reliable datasets that are currently lacking for synergistically folding complexes of IDPs. Here we present the Mutual Folding Induced by Binding (MFIB) database, the first systematic collection of complexes formed exclusively by IDPs. MFIB contains an order of magnitude more data than any dataset used in corresponding studies and offers wide coverage of known IDP complexes in terms of flexibility, oligomeric composition and protein function from all domains of life. The included complexes are grouped using a hierarchical classification and are complemented with structural and functional annotations. MFIB is backed by a firm development team and infrastructure, and together with possible future community collaboration it will provide the cornerstone for structural and functional studies of IDP complexes. MFIB is freely accessible at http://mfib.enzim.ttk.mta.hu/. The MFIB application is hosted by an Apache web server and was implemented in PHP. To enrich querying features and to enhance backend performance, a MySQL database was also created. simon.istvan@ttk.mta.hu, meszaros.balint@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
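A small sketch of how the oligomeric-composition grouping might be browsed through the PHP/MySQL backend the abstract names; the `complex` table and its columns are assumptions for illustration only.

```php
<?php
// List complexes of one oligomeric composition with their classification.
$pdo = new PDO('mysql:host=localhost;dbname=mfib', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$stmt = $pdo->prepare(
    'SELECT entry_id, name, n_chains, classification
       FROM complex
      WHERE composition = :comp      -- e.g. homodimer, heterotetramer
      ORDER BY entry_id');
$stmt->execute([':comp' => 'homodimer']);

foreach ($stmt as $c) {
    printf("%s  %s (%d chains, %s)\n",
           $c['entry_id'], $c['name'], $c['n_chains'], $c['classification']);
}
```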
CBD: a biomarker database for colorectal cancer
Zhang, Xueli; Sun, Xiao-Feng; Ye, Benchen; Peng, Qiliang; Liu, Xingyun; Shen, Bairong; Zhang, Hong
2018-01-01
Colorectal cancer (CRC) biomarker database (CBD) was established based on 870 identified CRC biomarkers and their relevant information from 1115 original articles in PubMed published from 1986 to 2017. In this version of the CBD, CRC biomarker data were collected, sorted, displayed and analysed. With its credible contents, the CBD is a powerful and time-saving tool that provides more comprehensive and accurate information for further CRC biomarker research. The CBD was constructed under a MySQL server. HTML, PHP and JavaScript languages were used to implement the web interface, and Apache was selected as the HTTP server. All of these web operations were implemented under the Windows system. The CBD provides users with information on multiple individual biomarkers, categorized by the biological category, source and application of the biomarkers; the experimental methods, results, authors and publication resources; and the research region, average age of the cohort, gender, race, number of tumours, tumour location and stage. We only collect data from articles with clear and credible results proving that the biomarkers are useful in the diagnosis, treatment or prognosis of CRC. The CBD also provides a professional platform for researchers interested in CRC research to communicate, exchange research ideas and design further high-quality studies in CRC. They can submit their new findings to our database via the submission page and communicate with us in the CBD. Database URL: http://sysbio.suda.edu.cn/CBD/ PMID:29846545
BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources.
Lim, Jeongheui; Kim, Sang-Yoon; Kim, Sungmin; Eo, Hae-Seok; Kim, Chang-Bae; Paek, Woon Kee; Kim, Won; Bhak, Jong
2009-12-03
DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly now that DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. We have built a general DNA barcode data processing system, BioBarcode, with open source software, serving as a general-purpose database and server. It uses the MySQL RDBMS 5.0, BLAST2, and the Apache httpd server. An exemplary BioBarcode database has around 11,300 specimen entries (including GenBank data) and registers biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance of DNA sequence analyses. Asia has a very high degree of biodiversity, and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community, such as standardization, deposition, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system: http://www.asianbarcode.org.
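Since the stack combines a web layer with BLAST2, the identification step presumably shells out to the legacy BLAST command line. The sketch below shows that pattern with PHP; the paths and database name are invented, and the exact options used by the real server are unknown (the `blastall` flags shown are standard legacy NCBI BLAST usage).

```php
<?php
// BLAST an unknown barcode (e.g. COI) sequence against a local database.
$query = '/tmp/unknown_specimen.fasta';             // hypothetical input path
$db    = '/data/barcodes/asian_barcode_db';         // formatted with formatdb

// -m 8 asks legacy BLAST for tabular output, one hit per line.
$cmd = sprintf('blastall -p blastn -d %s -i %s -e 1e-20 -m 8',
               escapeshellarg($db), escapeshellarg($query));
$output = shell_exec($cmd);

// Report the top five hits.
foreach (array_slice(explode("\n", trim((string)$output)), 0, 5) as $line) {
    // tabular columns: query, subject, %identity, length, ..., e-value, bitscore
    $f = explode("\t", $line);
    if (count($f) >= 12) {
        printf("match %s  identity %s%%  e-value %s\n", $f[1], $f[2], $f[10]);
    }
}
```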
Cellular Consequences of Telomere Shortening in Histologically Normal Breast Tissues
2013-09-01
using the open source, Java-based image analysis software package ImageJ (http://rsb.info.nih.gov/ij/) and a custom designed plugin (“Telometer...Tabulated data were stored in a MySQL (http://www.mysql.com) database and viewed through Microsoft Access (Microsoft Corp.). Statistical Analysis For
ERIC Educational Resources Information Center
Arnold, Adele R.
Among the Native Americans, few tribes were as warlike as the Apaches of the Southwest. The courage and ferocity of Apache warriors like Geronimo, Cochise, Victorio, and Mangas Coloradas is legendary. Based on a true story, this book is about an Apache boy who was captured by an enemy tribe and sold to a white man. Carlos Gentile, a photographer…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott... Tribe of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian... Camp Verde Indian Reservation, Arizona; Yavapai-Prescott Tribe of the Yavapai Reservation, Arizona; and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott... of the Fort Apache Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation... Camp Verde Indian Reservation, Arizona; Yavapai-Prescott Tribe of the Yavapai Reservation, Arizona; and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott...; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; and Yavapai-Prescott Tribe of the... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott...
Go-Gii-Ya [A Jicarilla Apache Religious Celebration].
ERIC Educational Resources Information Center
Pesata, Levi; And Others
Developed by utilizing only Jicarilla Apache people as resources to preserve the authenticity of the material and information, this booklet presents information on the Jicarilla Apache celebration of "Go-gii-ya". "Go-gii-ya" is a religious feast and ceremony held annually over a three-day period which climaxes on the fifteenth…
Evidence of sexually dimorphic introgression in Pinaleno Mountain Apache trout
Porath, M.T.; Nielsen, J.L.
2003-01-01
The high-elevation headwater streams of the Pinaleno Mountains support small populations of threatened Apache trout Oncorhynchus apache that were stocked following the chemical removal of nonnative salmonids in the 1960s. A fisheries survey to assess population composition, growth, and size structure confirmed angler reports of infrequent occurrences of Oncorhynchus spp. exhibiting the external morphological characteristics of both Apache trout and rainbow trout O. mykiss. Nonlethal tissue samples were collected from 50 individuals in the headwaters of each stream. Mitochondrial DNA (mtDNA) sequencing and amplification of nuclear microsatellite loci were used to determine the levels of genetic introgression by rainbow trout in Apache trout populations at these locations. Sexually dimorphic introgression from the spawning of male rainbow trout with female Apache trout was detected using mtDNA and microsatellites. Estimates of the degree of hybridization based on three microsatellite loci were 10-88%. The use of nonlethal DNA genetic analyses can supplement information obtained from standard survey methods and be useful in assessing the relative importance of small and sensitive populations with a history of nonnative introductions.
The Apache OODT Project: An Introduction
NASA Astrophysics Data System (ADS)
Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.
2012-12-01
Apache OODT is a science data system framework, born over the past decade, with hundreds of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components: those that deal with information integration from existing science data repositories and archives, which themselves have business processes and models already in use for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access and retrieval. The other suite of services and components within Apache OODT handles population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration, are all part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.
Building CHAOS: An Operating System for Livermore Linux Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garlick, J E; Dunlap, C M
2003-02-21
The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.
Development of an Autonomous Navigation Technology Test Vehicle
2004-08-01
as an independent thread on processors using the Linux operating system. The computer hardware selected for the nodes that host the MRS threads...communications system design. Linux was chosen as the operating system for all of the single board computers used on the Mule. Linux was specifically...used for system analysis and development. The simple realization of multi-thread processing and inter-process communications in Linux made it a
Determination of habitat requirements for Apache Trout
Petre, Sally J.; Bonar, Scott A.
2017-01-01
The Apache Trout Oncorhynchus apache, a salmonid endemic to east-central Arizona, is currently listed as threatened under the U.S. Endangered Species Act. Establishing and maintaining recovery streams for Apache Trout and other endemic species requires determination of their specific habitat requirements. We built upon previous studies of Apache Trout habitat by defining both stream-specific and generalized optimal and suitable ranges of habitat criteria in three streams located in the White Mountains of Arizona. Habitat criteria were measured at the time thought to be most limiting to juvenile and adult life stages, the summer base flow period. Based on the combined results from three streams, we found that Apache Trout use relatively deep (optimal range = 0.15–0.32 m; suitable range = 0.032–0.470 m) pools with slow stream velocities (suitable range = 0.00–0.22 m/s), gravel or smaller substrate (suitable range = 0.13–2.0 [Wentworth scale]), overhead cover (suitable range = 26–88%), and instream cover (large woody debris and undercut banks were occupied at higher rates than other instream cover types). Fish were captured at cool to moderate temperatures (suitable range = 10.4–21.1°C) in streams with relatively low maximum seasonal temperatures (optimal range = 20.1–22.9°C; suitable range = 17.1–25.9°C). Multiple logistic regression generally confirmed the importance of these variables for predicting the presence of Apache Trout. All measured variables except mean velocity were significant predictors in our model. Understanding habitat needs is necessary in managing for persistence, recolonization, and recruitment of Apache Trout. Management strategies such as fencing areas to restrict ungulate use and grazing and planting native riparian vegetation might favor Apache Trout persistence and recolonization by providing overhead cover and large woody debris to form pools and instream cover, shading streams and lowering temperatures.
A Photographic Essay of Apache Chiefs and Warriors, Volume 2-Part B.
ERIC Educational Resources Information Center
Barkan, Gerald; Jacobs, Ben
As part of a series designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay describing forts, Indian agents, and Apache chiefs, warriors, and scouts of the 19th century. Accompanying each picture is a brief historical-biographical narrative. Focus is on Apache resistance to the reservation.…
ERIC Educational Resources Information Center
Hammond, Vanessa Lea; Watson, P. J.; O'Leary, Brian J.; Cothran, D. Lisa
2009-01-01
Hopelessness is central to prominent mental health problems within American Indian (AI) communities. Apaches living on a reservation in Arizona responded to diverse expressions of hope along with Hopelessness, Personal Self-Esteem, and Collective Self-Esteem scales. An Apache Hopefulness Scale expressed five themes of hope and correlated…
ERIC Educational Resources Information Center
Cwik, Mary F.; Barlow, Allison; Tingey, Lauren; Larzelere-Hinton, Francene; Goklish, Novalene; Walkup, John T.
2011-01-01
Objective: To describe characteristics and correlates of nonsuicidal self-injury (NSSI) among the White Mountain Apache Tribe. NSSI has not been studied before in American Indian samples despite associated risks for suicide, which disproportionately affect American Indian youth. Method: Apache case managers collected data through a tribally…
2001-09-01
Readily Available: Linux has been copyrighted under the terms of the GNU General Public License (GPL) [1]. This is a license written by the Free...GNOME and KDE. d. Portability: Linux is highly compatible with many common operating systems. For...using suitable libraries, Linux is able to run programs written for other operating systems. [Ref. 8] [1] The GNU Project is coordinated by the
Khwannimit, Bodin; Bhurayanontachai, Rungsun; Vattanavanit, Veerapong
2017-06-01
Recently, the Sepsis Severity Score (SSS) was constructed to predict mortality in sepsis patients. The aim of this study was to compare the performance of the SSS with the Acute Physiology and Chronic Health Evaluation (APACHE) II-IV, Simplified Acute Physiology Score (SAPS) II, and SAPS 3 scores in predicting hospital outcome in sepsis patients. A retrospective analysis was conducted in the medical intensive care unit of a tertiary university hospital. A total of 913 patients were enrolled; 476 of these patients (52.1%) had septic shock. The median SSS was 80 (range 20-137). The SSS presented good discrimination, with an area under the receiver operating characteristic curve (AUC) of 0.892. However, the AUC of the SSS did not differ significantly from that of APACHE II (P = 0.07), SAPS II (P = 0.06), or SAPS 3 (P = 0.11). The APACHE IV score showed the best discrimination, with an AUC of 0.948, and the best overall performance, with a Brier score of 0.096. The AUC of the APACHE IV score was statistically greater than those of the SSS, APACHE II, SAPS II, and SAPS 3 (P <0.0001 for all) and APACHE III (P = 0.0002). The calibration of all scores was poor, with Hosmer-Lemeshow goodness-of-fit H test P values <0.05. The SSS provided discrimination as good as the APACHE II, SAPS II, and SAPS 3 scores. However, the APACHE IV score had the best discrimination and overall performance in our sepsis patients. The SSS needs to be adapted and modified with new parameters to improve its performance.
Novel Advancements in Internet-Based Real Time Data Technologies
NASA Technical Reports Server (NTRS)
Myers, Gerry; Welch, Clara L. (Technical Monitor)
2002-01-01
AZ Technology has been working with the MSFC Ground Systems Department to find ways to make it easier for remote experimenters (RPIs) to monitor their International Space Station (ISS) payloads in real time from anywhere using standard, familiar devices. AZ Technology was awarded an SBIR Phase I grant to research the technologies behind, and advancements in, distributing live ISS data across the Internet. That research resulted in a product called "EZStream", which is in use on several ISS-related projects. Although the initial implementation is geared toward ISS, the architecture and lessons learned are applicable to other space-related programs. This paper presents the high-level architecture and components that make up EZStream. A combination of commercial-off-the-shelf (COTS) and custom components was used, and their interaction is discussed. The server is powered by Apache's Jakarta-Tomcat web server/servlet engine. User accounts are maintained in a MySQL database. Both Tomcat and MySQL are open source products. When used for ISS, EZStream pulls live data directly from NASA's Telescience Resource Kit (TReK) API. TReK parses the ISS data stream into individual measurement parameters and performs on-the-fly engineering unit conversion and range checking before passing the data to EZStream for distribution. TReK is provided by NASA at no charge to ISS experimenters. By using a combination of well established open source, NASA-supplied, and AZ Technology-developed components, operations using EZStream are robust and economical. Security over the Internet is a major concern on most space programs. This paper describes how EZStream provides for secure connection to, and transmission of, space-related data over the public Internet. Display pages that show sensitive data can be placed under access control by EZStream; users are required to log in before being allowed to pull up those web pages. To enhance security, the EZStream client/server data transmissions can be encrypted to preclude interception. EZStream was developed to make use of a host of standard platforms and protocols, each of which is discussed in detail in this paper. The EZStream server is written as Java servlets, allowing different platforms (i.e., Windows, Unix, Linux, Mac) to host the server portion. The EZStream client component is written in two flavors: JavaBean and ActiveX. The JavaBean component is used to develop Java applet displays; the ActiveX component is used for developing ActiveX-based displays. Remote user devices are covered, including web browsers on PCs and scaled-down displays for PDAs and smart cell phones. As mentioned, the interaction between EZStream (web/data server) and TReK (data source) is covered as it relates to ISS. EZStream is being enhanced to receive and parse binary data streams directly, making EZStream beneficial to both the ISS International Partners and non-NASA applications (i.e., factory floor monitoring). The options for developing client-side display web pages are addressed, along with the development of tools to allow creation of display web pages by non-programmers.
NASA Astrophysics Data System (ADS)
Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Grigore, A.; Ionita, C.; Delort, C.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Von Haller, B.; Alice Collaboration
2014-04-01
ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the detectors through 500 dedicated optical links at an aggregated and sustained rate of up to 10 gigabytes per second and stores them at up to 2.5 gigabytes per second. The infoLogger is the log system which centrally collects the messages issued by the thousands of processes running on the DAQ machines. It makes it possible to report errors on the fly and to keep a trace of runtime execution for later investigation. More than 500,000 messages are stored every day in a MySQL database, in a structured table keeping track of 16 indexing fields (e.g. time, host, user, ...) for each message. The total amount of logs for 2012 exceeds 75 GB of data and 150 million rows. We present in this paper the architecture and implementation of this distributed logging system, consisting of a client programming API, local data collector processes, a central server, and interactive human interfaces. We review the operational experience during the 2012 run, in particular the actions taken to ensure shifters receive manageable and relevant content from the main log stream. Finally, we present the performance of this log system and future evolutions.
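A minimal sketch of the structured-message pattern described above, using Python's standard sqlite3 module in place of MySQL; the real infoLogger table has 16 indexing fields, and the field and function names here are illustrative.

```python
# Sketch of the structured-log idea using stdlib sqlite3 in place of MySQL;
# the real infoLogger table has 16 indexing fields, only a few shown here.
import sqlite3, time, socket, getpass

db = sqlite3.connect("infologger.db")
db.execute("""CREATE TABLE IF NOT EXISTS messages (
    timestamp REAL, hostname TEXT, username TEXT,
    severity  TEXT, facility TEXT, message  TEXT)""")

def log(severity, facility, message):
    """Append one structured message, as a client API might."""
    db.execute("INSERT INTO messages VALUES (?, ?, ?, ?, ?, ?)",
               (time.time(), socket.gethostname(), getpass.getuser(),
                severity, facility, message))
    db.commit()

log("ERROR", "readout", "link 117 timed out")
# Indexed fields make the main log stream filterable for shifters:
rows = db.execute("SELECT timestamp, message FROM messages "
                  "WHERE severity = 'ERROR' ORDER BY timestamp DESC")
print(rows.fetchall())
```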
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.
2004-05-12
An integrated controlling system and a unified database for high-throughput protein crystallography experiments have been developed. The main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored in a MySQL relational database, except raw X-ray data, which are stored in a central data server. The database contains four mutually linked hierarchical trees describing protein crystals, data collection, and experimental data processing. A database editor was designed and developed; it supports the basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using a secure SSL connection with secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beamline, NW12, at the Photon Factory Advanced Ring for general user experiments.
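The four predefined sending and receiving behaviors over TCP/IP sockets suggest a type-tagged message frame. A minimal Python sketch, with invented behavior codes and a length-prefixed header, follows.

```python
# Sketch of a type-tagged, length-prefixed message frame over TCP, in the
# spirit of the "predefined sending and receiving behaviors" described above.
# The four behavior codes are invented for illustration.
import socket, struct

CMD_SET_CONDITIONS, CMD_ACQUIRE, CMD_VIEW, CMD_PROCESS = range(4)
HEADER = struct.Struct("!II")  # (behavior code, payload length), network order

def send_message(sock: socket.socket, behavior: int, payload: bytes) -> None:
    sock.sendall(HEADER.pack(behavior, len(payload)) + payload)

def recv_message(sock: socket.socket) -> tuple[int, bytes]:
    header = sock.recv(HEADER.size, socket.MSG_WAITALL)
    behavior, length = HEADER.unpack(header)
    return behavior, sock.recv(length, socket.MSG_WAITALL)
```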
Base-By-Base: single nucleotide-level analysis of whole viral genome alignments.
Brodie, Ryan; Smith, Alex J; Roper, Rachel L; Tcherepanov, Vasily; Upton, Chris
2004-07-14
With ever increasing numbers of closely related virus genomes being sequenced, it has become desirable to be able to compare two genomes at a level more detailed than gene content because two strains of an organism may share the same set of predicted genes but still differ in their pathogenicity profiles. For example, detailed comparison of multiple isolates of the smallpox virus genome (each approximately 200 kb, with 200 genes) is not feasible without new bioinformatics tools. A software package, Base-By-Base, has been developed that provides visualization tools to enable researchers to 1) rapidly identify and correct alignment errors in large, multiple genome alignments; and 2) generate tabular and graphical output of differences between the genomes at the nucleotide level. Base-By-Base uses detailed annotation information about the aligned genomes and can list each predicted gene with nucleotide differences, display whether variations occur within promoter regions or coding regions and whether these changes result in amino acid substitutions. Base-By-Base can connect to our mySQL database (Virus Orthologous Clusters; VOCs) to retrieve detailed annotation information about the aligned genomes or use information from text files. Base-By-Base enables users to quickly and easily compare large viral genomes; it highlights small differences that may be responsible for important phenotypic differences such as virulence. It is available via the Internet using Java Web Start and runs on Macintosh, PC and Linux operating systems with the Java 1.4 virtual machine.
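The core nucleotide-level comparison is straightforward to illustrate. Below is a minimal Python sketch of a per-position difference report over two aligned sequences (not Base-By-Base's actual code).

```python
# Minimal sketch of nucleotide-level comparison of two aligned sequences,
# the kind of tabular difference report Base-By-Base generates.
def nucleotide_diffs(ref: str, qry: str):
    """Yield (1-based position, ref base, query base) for each difference."""
    assert len(ref) == len(qry), "sequences must come from one alignment"
    for pos, (r, q) in enumerate(zip(ref, qry), start=1):
        if r != q:                       # '-' marks an alignment gap (indel)
            yield pos, r, q

ref = "ATGCCGTA-TGA"
qry = "ATGACGTACTGA"
for pos, r, q in nucleotide_diffs(ref, qry):
    print(f"pos {pos}: {r} -> {q}")
```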
Wolf Testing: Open Source Testing Software
NASA Astrophysics Data System (ADS)
Braasch, P.; Gay, P. L.
2004-12-01
Wolf Testing is software for easily creating and editing exams. Wolf Testing allows the user to create an exam from a database of questions, view it on screen, and easily print it along with the corresponding answer guide. The questions can be multiple choice, short answer, long answer, or true and false varieties. This software can be accessed securely from any location, allowing the user to easily create exams from home. New questions, which can include associated pictures, can be added through a web-interface. After adding in questions, they can be edited, deleted, or duplicated into multiple versions. Long-term test creation is simplified, as you are able to quickly see what questions you have asked in the past and insert them, with or without editing, into future tests. All tests are archived in the database. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. The secure interface keeps students out, and allows you to decide who can create tests and who can edit information already in the database. Tests can be output as either html with pictures or rich text without pictures, and there are plans to add PDF and MS Word formats as well. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing incentive to start this project, computers and resources to complete this project, and inspiration for the project's name. We would also like to thank Dr. Ronald Newburgh for his assistance in beta testing.
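Assembling an exam from a question bank reduces to a filtered random selection. Wolf Testing uses PHP and MySQL; the sketch below uses Python's built-in sqlite3 so it runs self-contained, and the schema is hypothetical.

```python
# Sketch of assembling an exam from a question bank. Wolf Testing uses
# PHP/MySQL; sqlite3 keeps this example self-contained, and the schema
# is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE questions (id INTEGER PRIMARY KEY, qtype TEXT, "
           "body TEXT, answer TEXT)")
db.executemany("INSERT INTO questions (qtype, body, answer) VALUES (?, ?, ?)", [
    ("multiple_choice", "Which planet is largest? (a) Mars (b) Jupiter", "b"),
    ("true_false", "The Sun is a star.", "true"),
    ("short_answer", "Name the closest star to Earth.", "the Sun"),
])

def draw(qtype, n):
    """Pick n random questions of one variety for the exam."""
    return db.execute("SELECT body, answer FROM questions "
                      "WHERE qtype = ? ORDER BY RANDOM() LIMIT ?",
                      (qtype, n)).fetchall()

exam = draw("multiple_choice", 1) + draw("true_false", 1)
for i, (body, _) in enumerate(exam, 1):
    print(f"Q{i}. {body}")
answer_guide = [ans for _, ans in exam]   # corresponding answer guide
```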
NASA Technical Reports Server (NTRS)
Muniz, R.; Hochstadt, J.; Boelke J.; Dalton, A.
2011-01-01
The Content Documents are created and managed under the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e. AIX, Linux, Solaris and Windows). Before the OSI can be created, the team must create a Content Document, which provides the information of a workstation or server, with the list of all the software that is to be installed on it and also the set where the hardware belongs; this can be, for example, the LDS, the ADS or the FR-1. The objective of this project is to create a user-interface Web application that can manage the information of the Content Documents, with all the correct validations and filters for administrator purposes. For this project we used one of the best tools in agile application development, Ruby on Rails, which helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is remarkable how much a student can learn in a real-world project in just fifteen weeks: OOP features with the Ruby language, user-interface management with HTML and CSS, associations and queries with gems, managing databases and running a MySQL server, running shell commands from the command prompt, and building Web frameworks with Rails.
A computational genomics pipeline for prokaryotic sequencing projects.
Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King
2010-08-01
New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.
Use of Graph Database for the Integration of Heterogeneous Biological Data.
Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young
2017-03-01
Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
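The contrast driving these results is between join-based and traversal-based query shapes. Below is a sketch on a toy gene/disease/drug schema (all names hypothetical; sqlite3 stands in for MySQL, and the Cypher is shown as text only).

```python
# Contrast of the two query shapes from the comparison above, on a toy
# gene/disease/drug schema (names hypothetical). sqlite3 stands in for MySQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE gene_disease (gene TEXT, disease TEXT);
CREATE TABLE drug_target  (drug TEXT, gene TEXT);
INSERT INTO gene_disease VALUES ('TP53', 'glioma');
INSERT INTO drug_target  VALUES ('drugX', 'TP53');
""")

# Relational form: relationships are reconstructed with join statements,
# the pattern that became latent in MySQL as queries grew more complex.
sql = """SELECT dt.drug
         FROM gene_disease gd JOIN drug_target dt ON gd.gene = dt.gene
         WHERE gd.disease = ?"""
print(db.execute(sql, ("glioma",)).fetchall())

# Graph form: the same question as a Neo4j Cypher traversal (shown only as
# text; running it needs a Neo4j server and driver).
cypher = """MATCH (d:Drug)-[:TARGETS]->(g:Gene)-[:ASSOCIATED_WITH]->
            (dis:Disease {name: 'glioma'}) RETURN d.name"""
```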
ERIC Educational Resources Information Center
Gonzalez-Santin, Edwin, Comp.
This curriculum manual provides 8 days of training for child protective services (CPS) personnel (social workers and administrators) working in the White Mountain Apache tribal community. Each of the first seven units in the manual contains a brief description of contents, course objectives, time required, key concepts, possible discussion topics,…
Tuning Linux to meet real time requirements
NASA Astrophysics Data System (ADS)
Herbel, Richard S.; Le, Dang N.
2007-04-01
There is a desire to use Linux in military systems. Customers are asking contractors to use open source to the maximum extent possible in contracts, and Linux is probably the best choice of operating system to meet this need: it is widely used, it is free, it is royalty free and, best of all, it is completely open source. However, there is a problem. Linux was not originally built to be a real-time operating system; there are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which builds a microkernel underneath Linux; the microkernel handles all interrupts and then passes them up to the Linux operating system. This ensures good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar type of interface; however, support for the PowerPC platform, which is widely used in the real-time embedded community, was stated as "recovering" [2], so it is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real-time requirements of an embedded system.
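Kernel tuning is only half of a real-time deployment; user-space processes typically also request a real-time scheduling class, lock their memory, and pin themselves to a CPU. A Linux-only Python sketch of those standard knobs follows (requires root; this illustrates common practice, not the paper's specific method).

```python
# Common user-space companions to a tuned kernel: a real-time scheduling
# class and locked memory. Linux-only, needs root. This illustrates standard
# practice, not the specific kernel tuning described in the paper.
import ctypes, os

# Promote this process to SCHED_FIFO so it preempts normal tasks.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

# Lock current and future pages into RAM to avoid page-fault latencies.
MCL_CURRENT, MCL_FUTURE = 1, 2
libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    raise OSError(ctypes.get_errno(), "mlockall failed")

# Pin to one CPU so the scheduler never migrates the real-time loop.
os.sched_setaffinity(0, {0})
```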
Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual
2005-02-01
on a PC utilizing the KDE desktop that comes with Red Hat Linux . The default desktop for most Red Hat Linux installations is the GNOME desktop. The...SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA...middleware, Xerces for the XML parser, and Red Hat Linux for the Operating System. The software is referred to as, Open Radio Communication
Investigating the Limitations of Advanced Design Methods through Real World Application
2016-03-31
36 War Room Laptop Display ( MySQL , JMP 9 Pro, 64-bit Windows) Georgia Tech Secure Collaborative Visualization Environment ( MySQL , JMP 9 Pro...investigate expanding the EA for VC3ATS • Would like to consider both an expansion of the use of current Java -based BPM approach and other potential EA
Evolution of the LBT Telemetry System
NASA Astrophysics Data System (ADS)
Summers, K.; Biddick, C.; De La Peña, M. D.; Summers, D.
2014-05-01
The Large Binocular Telescope (LBT) Telescope Control System (TCS) records about 10GB of telemetry data per night. Additionally, the vibration monitoring system records about 9GB of telemetry data per night. Through 2013, we have amassed over 6TB of Hierarchical Data Format (HDF5) files and almost 9TB in a MySQL database of TCS and vibration data. The LBT telemetry system, in its third major revision since 2004, provides the mechanism to capture and store this data. The telemetry system has evolved from a simple HDF file system with MySQL stream definitions within the TCS, to a separate system using a MySQL database system for the definitions and data, and finally to no database use at all, using HDF5 files.
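Appending telemetry samples to an extensible HDF5 dataset captures the flavor of the current file-based design. Below is a sketch assuming the third-party h5py package, with a hypothetical stream name and record layout.

```python
# Sketch of the file-based telemetry store: an extensible HDF5 dataset
# appended sample by sample. Assumes the third-party h5py package; the
# stream name and record layout are hypothetical.
import h5py
import numpy as np

record = np.dtype([("time", "f8"), ("azimuth", "f8"), ("elevation", "f8")])

with h5py.File("telemetry.h5", "a") as f:
    stream = f.require_dataset("mount/position", shape=(0,),
                               maxshape=(None,), dtype=record, chunks=True)

    def append(sample):
        stream.resize(stream.shape[0] + 1, axis=0)  # grow by one record
        stream[-1] = sample

    append((1398902400.0, 181.25, 44.9))
```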
ERIC Educational Resources Information Center
Axelrod, Melissa; de Garcia, Jule Gomez; Lachler, Jordan
2003-01-01
Reports on the progress of a project to produce a dictionary of the Jicarilla Apache language. Jicarilla, an Eastern Apachean language, is spoken on the Jicarilla Apache reservation in northern New Mexico. The project has revealed much about the role of literacy in language standardization and in speaker empowerment. Suggests that many parallels…
A Photographic Essay of Apache Children in Early Times, Volume 2-Part C.
ERIC Educational Resources Information Center
Thompson, Doris; Jacobs, Ben
As part of a series of guides designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay on life of the Apache child from 1880 to the early 20th century. Each of the 12 photographs is accompanied by an historical narrative which describes one or more cultural aspects of Apache childhood.…
Linux thin-client conversion in a large cardiology practice: initial experience.
Echt, Martin P; Rosen, Jordan
2004-01-01
Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.
NASA Astrophysics Data System (ADS)
Chugh, Saryu; Arivu Selvan, K.; Nadesh, RK
2017-11-01
Numerous harmful factors influence the working of the human body, such as hypertension, smoking, obesity and inappropriate medication, which cause many distinct diseases such as diabetes, thyroid disorders, strokes and coronary disease. Environmental conditions are also a contributing cause of coronary disease. Diagnosis requires gathering large amounts of data, and Apache Spark is well suited to analyzing it: it offers several advantages, being fast in large part because it uses in-memory processing. Apache Spark runs on a distributed environment and splits the data into batches, giving a high throughput rate. The use of data mining techniques in the diagnosis of coronary disease has been examined exhaustively, showing acceptable levels of precision. Decision trees, neural networks and gradient boosting are among the techniques available on Apache Spark that help in analyzing the information.
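A sketch of the gradient-boosting case on Spark follows, assuming pyspark is installed and using hypothetical heart-disease column names.

```python
# Sketch of the gradient-boosting approach on Spark (pyspark assumed
# installed; the heart-disease column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier

spark = SparkSession.builder.appName("heart-disease").getOrCreate()
df = spark.read.csv("heart.csv", header=True, inferSchema=True)

features = VectorAssembler(
    inputCols=["age", "blood_pressure", "cholesterol", "smoking"],
    outputCol="features")
train, test = features.transform(df).randomSplit([0.8, 0.2], seed=42)

model = GBTClassifier(labelCol="disease", featuresCol="features").fit(train)
accuracy = (model.transform(test)
            .where("prediction = disease").count() / test.count())
print(f"holdout accuracy: {accuracy:.2f}")
```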
ERIC Educational Resources Information Center
Ove, Robert S.; Stockel, H. Henrietta
In 1948, a young and naive Robert Ove arrived at Whitetail, on the Mescalero Apache Reservation, to teach at the Bureau of Indian Affairs day school. Living there were the Chiricahua Apaches--descendants of Geronimo and the survivors of nearly 30 years of incarceration by the U.S. government. With help from Indian historian H. Henrietta Stockel,…
Nakhoda, Shazia; Zimrin, Ann B; Baer, Maria R; Law, Jennie Y
2017-04-01
Hypertriglyceridemic (HTG) pancreatitis carries significant morbidity and mortality and often requires intensive care unit (ICU) admission. Therapeutic plasma exchange (TPE) rapidly lowers serum triglyceride (TG) levels; however, evidence supporting TPE for HTG pancreatitis is lacking. Ten patients admitted to the ICU for HTG pancreatitis underwent TPE at our institution from 2005-2015. We retrospectively calculated the Acute Physiology and Chronic Health Evaluation II (APACHE II) score at the time of initial TPE and again after the final TPE session to assess the impact of triglyceride apheresis on morbidity and mortality associated with HTG pancreatitis. All 10 patients had a rapid reduction in TG level after TPE, but only 5 had improvement in their APACHE II score. The median APACHE II score decreased from 19 to 17 after TPE, corresponding to an 8% and 9% decrease in median predicted non-operative and post-operative mortality, respectively. The APACHE II score did not differ statistically before and after TPE implementation in our patient group (p = 0.39). TPE is a clinically useful tool to rapidly lower TG levels, but its impact on the mortality of HTG pancreatitis as assessed by the APACHE II score remains uncertain.
Use of APACHE II and SAPS II to predict mortality for hemorrhagic and ischemic stroke patients.
Moon, Byeong Hoo; Park, Sang Kyu; Jang, Dong Kyu; Jang, Kyoung Sool; Kim, Jong Tae; Han, Yong Min
2015-01-01
We studied the applicability of the Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in patients admitted to the intensive care unit (ICU) with acute stroke and compared the results with the Glasgow Coma Scale (GCS) and National Institutes of Health Stroke Scale (NIHSS). We also conducted a comparative study of accuracy for predicting hemorrhagic and ischemic stroke mortality. Between January 2011 and December 2012, ischemic or hemorrhagic stroke patients admitted to the ICU were included in the study. APACHE II- and SAPS II-predicted mortalities were compared using a calibration curve, the Hosmer-Lemeshow goodness-of-fit test, and the receiver operating characteristic (ROC) curve, and the results were compared with the GCS and NIHSS. Overall, 498 patients were included in this study. The observed mortality was 26.3%, whereas APACHE II- and SAPS II-predicted mortalities were 35.12% and 35.34%, respectively. The mean GCS and NIHSS scores were 9.43 and 21.63, respectively. The calibration curve was close to the line of perfect prediction. The ROC curve showed a slightly better prediction of mortality for APACHE II in hemorrhagic stroke patients and for SAPS II in ischemic stroke patients. The GCS and NIHSS were inferior in predicting mortality in both patient groups. Although both the APACHE II and SAPS II systems can be used to measure performance in the neurosurgical ICU setting, the accuracy of APACHE II in hemorrhagic stroke patients and of SAPS II in ischemic stroke patients was superior.
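The ROC comparison reduces to computing the area under the curve for each score's predictions against observed mortality. A sketch assuming scikit-learn, with toy arrays in place of the study's data, follows.

```python
# Sketch of the discrimination comparison: area under the ROC curve for two
# severity scores against observed mortality (scikit-learn assumed; the toy
# arrays are hypothetical stand-ins for APACHE II / SAPS II predictions).
import numpy as np
from sklearn.metrics import roc_auc_score

died           = np.array([0, 0, 1, 0, 1, 1, 0, 1])   # observed outcome
apache_ii_pred = np.array([.1, .2, .7, .3, .6, .9, .2, .5])
saps_ii_pred   = np.array([.2, .1, .6, .2, .8, .7, .3, .6])

print("APACHE II AUC:", roc_auc_score(died, apache_ii_pred))
print("SAPS II   AUC:", roc_auc_score(died, saps_ii_pred))
```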
Donahoe, Laura; McDonald, Ellen; Kho, Michelle E; Maclennan, Margaret; Stratford, Paul W; Cook, Deborah J
2009-01-01
Given their clinical, research, and administrative purposes, scores on the Acute Physiology and Chronic Health Evaluation (APACHE) II should be reliable, whether calculated by health care personnel or a clinical information system. To determine reliability of APACHE II scores calculated by a clinical information system and by health care personnel before and after a multifaceted quality improvement intervention. APACHE II scores of 37 consecutive patients admitted to a closed, 15-bed, university-affiliated intensive care unit were collected by a research coordinator, a database clerk, and a clinical information system. After a quality improvement intervention focused on health care personnel and the clinical information system, the same methods were used to collect data on 32 consecutive patients. The research coordinator and the clerk did not know each other's scores or the information system's score. The data analyst did not know the source of the scores until analysis was complete. APACHE II scores obtained by the clerk and the research coordinator were highly reliable (intraclass correlation coefficient, 0.88 before vs 0.80 after intervention; P = .25). No significant changes were detected after the intervention; however, compared with scores of the research coordinator, the overall reliability of APACHE II scores calculated by the clinical information system improved (intraclass correlation coefficient, 0.24 before intervention vs 0.91 after intervention, P < .001). After completion of a quality improvement intervention, health care personnel and a computerized clinical information system calculated sufficiently reliable APACHE II scores for clinical, research, and administrative purposes.
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
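Read as described, first-level customization keeps the original model's predicted probability and refits its logit against observed outcomes. Below is a sketch of that recalibration under this reading, assuming scikit-learn and hypothetical data.

```python
# Sketch of first-level customization as described above: keep the original
# APACHE probability, refit its logit against observed outcomes by logistic
# regression (scikit-learn assumed; data hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

p_apache = np.array([0.05, 0.20, 0.40, 0.65, 0.80, 0.10, 0.55, 0.90])
died     = np.array([0,    0,    1,    1,    1,    0,    0,    1   ])

logit = np.log(p_apache / (1 - p_apache)).reshape(-1, 1)
custom = LogisticRegression().fit(logit, died)    # learns new intercept/slope

p_customized = custom.predict_proba(logit)[:, 1]  # recalibrated predictions
print(np.round(p_customized, 2))
```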
2006-09-01
work-horse for this thesis. He spent hours writing some of the more tedious code, and as much time helping me learn C++ and Linux . He was always there...compared with C++, and the need to use Linux as the operating system, the filter was coded using C++ and KDevelop [28] in SUSE LINUX Professional 9.2 [42...The driving factor for using Linux was the operating system’s ability to access the serial ports in a reliable fashion. Under the original MATLAB® and
Open discovery: An integrated live Linux platform of Bioinformatics tools.
Vetrivel, Umashankar; Pilla, Kalabharath
2008-01-01
Historically, live Linux distributions for Bioinformatics have paved the way for a portable, platform-independent Bioinformatics workbench. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.
BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources
2009-01-01
Background: DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly now that DNA sequencing technology is cheaply available. There are many nations in Asia with abundant biodiversity resources that need to be mapped and registered in databases. Results: We have built a general DNA barcode data processing system, BioBarcode, with open source software; it is a general-purpose database and server. It uses MySQL RDBMS 5.0, BLAST2, and the Apache httpd server. An exemplary BioBarcode database has around 11,300 specimen entries (including GenBank data) and registers biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance of DNA sequence analyses. Conclusion: Asia has a very high degree of biodiversity, and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community, such as standardization, deposition, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system: http://www.asianbarcode.org. PMID:19958506
Profile-IQ: Web-based data query system for local health department infrastructure and activities.
Shah, Gulzar H; Leep, Carolyn J; Alexander, Dayna
2014-01-01
To demonstrate the use of National Association of County & City Health Officials' Profile-IQ, a Web-based data query system, and how policy makers, researchers, the general public, and public health professionals can use the system to generate descriptive statistics on local health departments. This article is a descriptive account of an important health informatics tool based on information from the project charter for Profile-IQ and the authors' experience and knowledge in design and use of this query system. Profile-IQ is a Web-based data query system that is based on open-source software: MySQL 5.5, Google Web Toolkit 2.2.0, Apache Commons Math library, Google Chart API, and Tomcat 6.0 Web server deployed on an Amazon EC2 server. It supports dynamic queries of National Profile of Local Health Departments data on local health department finances, workforce, and activities. Profile-IQ's customizable queries provide a variety of statistics not available in published reports and support the growing information needs of users who do not wish to work directly with data files for lack of staff skills or time, or to avoid a data use agreement. Profile-IQ also meets the growing demand of public health practitioners and policy makers for data to support quality improvement, community health assessment, and other processes associated with voluntary public health accreditation. It represents a step forward in the recent health informatics movement of data liberation and use of open source information technology solutions to promote public health.
QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.
Thibodeau, Asa; Márquez, Eladio J; Luo, Oscar; Ruan, Yijun; Menghi, Francesca; Shin, Dong-Guk; Stitzel, Michael L; Vera-Licona, Paola; Ucar, Duygu
2016-06-01
Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and Hi-C, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and a MySQL database; the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/.
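The network representation itself is compact to sketch: anchors become nodes, contacts become edges, and direct or indirect targets fall out of shortest-path queries. Below is a sketch assuming the networkx package, with hypothetical loci.

```python
# Sketch of the network view of chromatin interactions: anchors as nodes,
# ChIA-PET/Hi-C contacts as edges (networkx assumed; loci hypothetical).
import networkx as nx

interactions = [("chr1:1000-2000", "chr1:50000-51000"),
                ("chr1:50000-51000", "chr1:90000-91000")]
net = nx.Graph(interactions)
net.nodes["chr1:1000-2000"]["gene"] = "GENE_A"   # user-provided annotation

target = "chr1:1000-2000"
direct = set(net.neighbors(target))
indirect = set(nx.single_source_shortest_path_length(net, target, cutoff=2)) \
           - direct - {target}
print("direct:", direct, "indirect:", indirect)
```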
Ko, Gary T; So, Wing-Yee; Tong, Peter C; Le Coguiec, Francois; Kerr, Debborah; Lyubomirsky, Greg; Tamesis, Beaver; Wolthers, Troels; Nan, Jennifer; Chan, Juliana
2010-05-13
The Joint Asia Diabetes Evaluation (JADE) Program is a web-based program incorporating a comprehensive risk engine, care protocols, and clinical decision support to improve ambulatory diabetes care. The JADE Program uses information technology to facilitate healthcare professionals to create a diabetes registry and to deliver an evidence-based care and education protocol tailored to patients' risk profiles. With written informed consent from participating patients and care providers, all data are anonymized and stored in a databank to establish an Asian Diabetes Database for research and publication purpose. The JADE electronic portal (e-portal: http://www.jade-adf.org) is implemented as a Java application using the Apache web server, the mySQL database and the Cocoon framework. The JADE e-portal comprises a risk engine which predicts 5-year probability of major clinical events based on parameters collected during an annual comprehensive assessment. Based on this risk stratification, the JADE e-portal recommends a care protocol tailored to these risk levels with decision support triggered by various risk factors. Apart from establishing a registry for quality assurance and data tracking, the JADE e-portal also displays trends of risk factor control at each visit to promote doctor-patient dialogues and to empower both parties to make informed decisions. The JADE Program is a prototype using information technology to facilitate implementation of a comprehensive care model, as recommended by the International Diabetes Federation. It also enables health care teams to record, manage, track and analyze the clinical course and outcomes of people with diabetes.
SLIMS--a user-friendly sample operations and inventory management system for genotyping labs.
Van Rossum, Thea; Tripp, Ben; Daley, Denise
2010-07-15
We present the Sample-based Laboratory Information Management System (SLIMS), a powerful and user-friendly open source web application that provides all members of a laboratory with an interface to view, edit and create sample information. SLIMS aims to simplify common laboratory tasks with tools such as a user-friendly shopping cart for subjects, samples and containers that easily generates reports, shareable lists and plate designs for genotyping. Further key features include customizable data views, database change-logging and dynamically filled pre-formatted reports. Along with being feature-rich, SLIMS' power comes from being able to handle longitudinal data from multiple time-points and biological sources. This type of data is increasingly common from studies searching for susceptibility genes for common complex diseases that collect thousands of samples generating millions of genotypes and overwhelming amounts of data. LIMSs provide an efficient way to deal with this data while increasing accessibility and reducing laboratory errors; however, professional LIMS are often too costly to be practical. SLIMS gives labs a feasible alternative that is easily accessible, user-centrically designed and feature-rich. To facilitate system customization, and utilization for other groups, manuals have been written for users and developers. Documentation, source code and manuals are available at http://genapha.icapture.ubc.ca/SLIMS/index.jsp. SLIMS was developed using Java 1.6.0, JSPs, Hibernate 3.3.1.GA, DB2 and mySQL, Apache Tomcat 6.0.18, NetBeans IDE 6.5, Jasper Reports 3.5.1 and JasperSoft's iReport 3.5.1.
Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.
Muin, Michael; Fontelo, Paul
2006-11-03
The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
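The backend's conversation with E-Utilities can be sketched directly against the public ESearch endpoint (a real NCBI service; the query term is arbitrary).

```python
# Sketch of the backend idea: query NCBI E-Utilities (the same public
# service PubMed Interact's PHP engine calls) and read back PMIDs.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {"db": "pubmed", "term": "asthma genetics", "retmax": 5,
          "retmode": "json"}

with urllib.request.urlopen(f"{BASE}?{urllib.parse.urlencode(params)}") as r:
    result = json.load(r)["esearchresult"]

print("total hits:", result["count"])   # preview of result counts
print("first PMIDs:", result["idlist"])
```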
Observations of the larval stages of Diceroprocta apache Davis (Homoptera: Tibicinidae)
Ellingson, A.R.; Andersen, D.C.; Kondratieff, B.C.
2002-01-01
Diceroprocta apache Davis is a locally abundant cicada in the riparian woodlands of the southwestern United States. While its ecological importance has often been hypothesized, very little is known of its specific life history. This paper presents preliminary information on life history of D. apache from larvae collected in the field at seasonal intervals as well as a smaller number of reared specimens. Morphological development of the fore-femoral comb closely parallels growth through distinct size classes. The data indicate the presence of five larval instars in D. apache. Development times from greenhouse-reared specimens suggest a 3-4 year life span and overlapping broods were present in the field. Sex ratios among pre-emergent larvae suggest the asynchronous emergence of sexes.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2002-12-19
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2003-04-22
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-06
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano Superlattice Technology, Inc.; Order of Suspension of... current and accurate information concerning the securities of BluePoint Linux Software Corp. because it...
Kernel-based Linux emulation for Plan 9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minnich, Ronald G.
2010-09-01
CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
The Research on Linux Memory Forensics
NASA Astrophysics Data System (ADS)
Zhang, Jun; Che, ShengBing
2018-03-01
Memory forensics is a branch of computer forensics. It does not depend on the operating system API, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
Growth and survival of Apache Trout under static and fluctuating temperature regimes
Recsetar, Matthew S.; Bonar, Scott A.; Feuerbacher, Olin
2014-01-01
Increasing stream temperatures have important implications for arid-region fishes. Little is known about effects of high water temperatures that fluctuate over extended periods on Apache Trout Oncorhynchus gilae apache, a federally threatened species of southwestern USA streams. We compared survival and growth of juvenile Apache Trout held for 30 d in static temperatures (16, 19, 22, 25, and 28°C) and fluctuating diel temperatures (±3°C from 16, 19, 22 and 25°C midpoints and ±6°C from 19°C and 22°C midpoints). Lethal temperature for 50% (LT50) of the Apache Trout under static temperatures (mean [SD] = 22.8 [0.6]°C) was similar to that of ±3°C diel temperature fluctuations (23.1 [0.1]°C). Mean LT50 for the midpoint of the ±6°C fluctuations could not be calculated because survival in the two treatments (19 ± 6°C and 22 ± 6°C) was not below 50%; however, it probably was also between 22°C and 25°C because the upper limb of a ±6°C fluctuation on a 25°C midpoint is above critical thermal maximum for Apache Trout (28.5–30.4°C). Growth decreased as temperatures approached the LT50. Apache Trout can survive short-term exposure to water temperatures with daily maxima that remain below 25°C and midpoint diel temperatures below 22°C. However, median summer stream temperatures must remain below 19°C for best growth and even lower if daily fluctuations are high (≥12°C).
2001-09-01
100 miles southwest of Melrose AFR near Ruidoso , New Mexico. The Jicarilla Apache Reservation is 195 miles northwest of the range. The Comanche Tribe...of the MOAs near Ruidoso , New Mexico. The Jicarilla Apache Reservation is about 150 miles northwest of the MOAs; and the Comanche Reservation is in...and Comanche. The Mescalero Apache Reservation is located approximately 25 miles south of VRs-100/125 near Ruidoso , New Mexico. The Jicarilla
Almog, Yaniv; Perl, Yael; Novack, Victor; Galante, Ori; Klein, Moti; Pencina, Michael J.; Douvdevani, Amos
2014-01-01
Aim: The aim of the current study is to assess the mortality prediction accuracy of the circulating cell-free DNA (CFD) level at admission, measured by a new simplified method. Materials and Methods: CFD levels were measured by a direct fluorescence assay in severe sepsis patients on intensive care unit (ICU) admission. In-hospital and/or twenty-eight-day all-cause mortality was the primary outcome. Results: Of 108 patients with a median APACHE II of 20, 32.4% died in hospital or by 28 days. CFD levels were higher in decedents: median 3469.0 vs. 1659 ng/ml, p<0.001. In a multivariable model, APACHE II score and CFD (quartiles) were significantly associated with mortality: odds ratios of 1.05, p = 0.049 and 2.57, p<0.001 per quartile, respectively. The C-statistic was 0.79 for CFD and 0.68 for APACHE II. Integrated discrimination improvement (IDI) analyses showed that the CFD and CFD+APACHE II score models had better discriminatory ability than the APACHE II score alone. Conclusions: The CFD level assessed by a new, simple fluorometric assay is an accurate predictor of acute mortality among ICU patients with severe sepsis. Comparison of CFD to the APACHE II score and procalcitonin (PCT) suggests that CFD has the potential to improve clinical decision making. PMID:24955978
Vasilyeva, I V; Shvirev, S L; Arseniev, S B; Zarubina, T V
2013-01-01
The aim of the present study is to assess the feasibility and validity of using the prognostic scales ISS-RTS-TRISS, PRISM, APACHE II and PTS for automated calculation in decision support when treating children with severe mechanical trauma. These scales are used in the Hospital Information System (HIS) MEDIALOG. The retrospective study was conducted using clinical and physiological data collected at admission and during the first 24 hours of hospitalization in 166 patients. The PRISM, APACHE II and ISS-RTS-TRISS scales were used for calculating the severity of injury and for predicting death outcomes; the PTS scale was used for evaluating the severity index only. Our research has shown that ISS-RTS-TRISS has excellent discrimination ability and that the PRISM and APACHE II prognostic scales have acceptable discrimination ability; moreover, they all have significant calibration ability. The PTS scale has acceptable discrimination ability. It has been shown that automated calculation of the ISS-RTS-TRISS, PRISM, APACHE II and PTS scales is useful for assessing outcomes in children with severe mechanical trauma.
NASA Astrophysics Data System (ADS)
Kaur, Jagreet; Singh Mann, Kulwinder, Dr.
2018-01-01
AI in healthcare is needed to bring real, actionable and individualized insights in real time to patients and doctors to support treatment decisions. We need a patient-centred platform for integrating EHR data, patient data, prescriptions, monitoring, and clinical research data. This paper proposes a generic architecture for enabling an AI-based healthcare analytics platform using the open source technologies Apache Beam, Apache Flink, Apache Spark, Apache NiFi, Kafka, Tachyon, GlusterFS, and the NoSQL stores Elasticsearch and Cassandra. This paper shows the importance of applying AI-based predictive and prescriptive analytics techniques in the health sector. The system will be able to extract useful knowledge that helps in decision making and medical monitoring in real time through intelligent process analysis and big data processing.
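One concrete path through such a stack is a Kafka topic feeding Spark Structured Streaming. Below is a sketch assuming pyspark with the Spark-Kafka connector, and hypothetical broker and topic names.

```python
# Sketch of one path through the proposed stack: vital-sign events arriving
# on a Kafka topic, consumed by Spark Structured Streaming (pyspark and the
# Spark-Kafka connector assumed; broker and topic names hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vitals-stream").getOrCreate()

vitals = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "patient-vitals")
          .load()
          .selectExpr("CAST(value AS STRING) AS event"))

# Continuous console sink; a real deployment would score events with an
# ML model and write alerts to a store such as Elasticsearch or Cassandra.
query = vitals.writeStream.format("console").start()
query.awaitTermination()
```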
Real-time data collection in Linux: a case study.
Finney, S A
2001-05-01
Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
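The paper's timing claims rest on measuring how far scheduled events land from their deadlines. A stdlib-only Python sketch of that measurement idea follows (this is not the paper's MIDI program).

```python
# The flavor of timing test described above: request a 10 ms period and
# measure how far each wakeup lands from its deadline (stdlib only; this
# is not the paper's MIDI program, just the measurement idea).
import time

PERIOD_NS = 10_000_000      # 10 ms
deadline = time.monotonic_ns() + PERIOD_NS
latencies_us = []

for _ in range(500):
    while time.monotonic_ns() < deadline:   # busy-wait to the deadline
        pass
    latencies_us.append((time.monotonic_ns() - deadline) / 1000)
    deadline += PERIOD_NS

print(f"max latency: {max(latencies_us):.1f} us, "
      f"mean: {sum(latencies_us) / len(latencies_us):.1f} us")
```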
T-Check in Technologies for Interoperability: Web Services and Security--Single Sign-On
2007-12-01
following tools: • Apache Tomcat 6.0—a Java Servlet container to host the Web services and a simple Web client application [Apache 2007a] • Apache Axis...Eclipse. Eclipse – an open development platform. http://www.eclipse.org/ (2007) [Hunter 2001] Hunter, Jason. Java Servlet Programming, 2nd Edition...Citation SAML 1.1 Java Toolkit SAML Ping Identity’s SAML-1.1 implementation [SourceID 2006] OpenSAML SAML An open source implementation of SAML 1.1
2009-12-01
forward-looking infrared FOV field-of-view HDU helmet display unit HMD helmet-mounted display IHADSS Integrated Helmet and Display...monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display ( HMD ) in the British Army’s Apache AH Mk 1 attack helicopter has any...Integrated Helmet and Display Sighting System, IHADSS, Helmet-mounted display, HMD , Apache helicopter, Visual performance UNCLAS UNCLAS UNCLAS SAR 96
Apache sharply expands western Egypt acreage position
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-02-10
Apache Corp. became Egypt's second largest acreage holder with the acquisition of Mobil Corp.'s nonoperating interests in three western desert exploration concessions covering a combined 7.7 million gross acres. Apache assumed a 50% contractor interest in the Repsol SA-operated East Bahariya concession, a 33% contractor interest in the Repsol-operated West Mediterranean Block 1 concession, and a 24% contractor interest in the Royal Dutch/Shell-operated Northeast Abu Gharadig concession. The concessions carry a total drilling obligation of 11 wells over the next 3 years.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-31
...NMFS received an application from Apache Alaska Corporation (Apache) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to a proposed 3D seismic survey in Cook Inlet, Alaska, between March 1, 2014, and December 31, 2014. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS requests comments on its proposal to issue an IHA to Apache to take, by Level B harassment only, five species of marine mammals during the specified activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, TW; Adelberger, Eric G.; Battat, J.
2008-01-01
A next-generation lunar laser ranging apparatus using the 3.5 m telescope at the Apache Point Observatory in southern New Mexico has begun science operation. APOLLO (the Apache Point Observatory Lunar Laser-ranging Operation) has achieved one-millimeter range precision to the moon, which should lead to approximately one-order-of-magnitude improvements in the precision of several tests of fundamental properties of gravity. We briefly motivate the scientific goals, and then give a detailed discussion of the APOLLO instrumentation.
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance, and computer-supported DNA analysis remains a time-intensive task. In this paper we explore the potential of a new in-memory database technology, SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, the performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means there is high potential in the new in-memory concepts, leading to further developments of DNA analysis procedures in the future.
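The exact-search task itself is simple to state: report every position at which a read occurs in the reference. A pure-Python sketch of that operation follows.

```python
# Pure-Python sketch of the exact-search task the stored procedures perform:
# report every position of a read within a reference sequence.
def exact_matches(reference: str, read: str):
    """Yield 0-based start positions of every exact occurrence."""
    pos = reference.find(read)
    while pos != -1:
        yield pos
        pos = reference.find(read, pos + 1)   # allow overlapping hits

reference = "ACGTACGTGACGT"
print(list(exact_matches(reference, "ACGT")))  # [0, 4, 9]
```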
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
Lectindb: a plant lectin database.
Chandra, Nagasuma R; Kumar, Nirmal; Jeyakani, Justin; Singh, Desh Deepak; Gowda, Sharan B; Prathima, M N
2006-10-01
Lectins, a class of carbohydrate-binding proteins, are now widely recognized to play a range of crucial roles in many cell-cell recognition events triggering several important cellular processes. They encompass different members that are diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities, and specificities, as well as in their larger biological roles and potential applications. It is not surprising, therefore, that the vast amount of experimental data on lectins available in the literature is so diverse that it becomes difficult and time-consuming, if not impossible, to comprehend the advances in various areas and obtain the maximum benefit. To achieve an effective use of all the data toward understanding the function and possible applications of lectins, an organization of these seemingly independent data into a common framework is essential. An integrated knowledge base (Lectindb, http://nscdb.bic.physics.iisc.ernet.in) together with appropriate analytical tools has therefore been developed, initially for plant lectins, by collating and integrating diverse data. The database has been implemented using MySQL on a Linux platform and web-enabled using PERL-CGI and Java tools. Data for each lectin pertain to taxonomic, biochemical, domain architecture, molecular sequence, and structural details, as well as carbohydrate and hence blood group specificities. Extensive links have also been provided for relevant bioinformatics resources and analytical tools. The availability of diverse data integrated into a common framework is expected to be of high value not only for basic studies in lectin biology but also for pursuing several applications in biotechnology, immunology, and clinical practice using these molecules.
A computational genomics pipeline for prokaryotic sequencing projects
Kislyuk, Andrey O.; Katz, Lee S.; Agrawal, Sonia; Hagen, Matthew S.; Conley, Andrew B.; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C.; Sammons, Scott A.; Govil, Dhwani; Mair, Raydel D.; Tatti, Kathleen M.; Tondella, Maria L.; Harcourt, Brian H.; Mayer, Leonard W.; Jordan, I. King
2010-01-01
Motivation: New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. Results: We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. Availability and implementation: The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems. Contact: king.jordan@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20519285
A Tour of Big Data, Open Source Data Management Technologies from the Apache Software Foundation
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2012-12-01
The Apache Software Foundation, a non-profit foundation charged with disseminating open source software for the public good, provides a suite of data management technologies for distributed archiving, data ingestion, data dissemination, processing, triage and a host of other functionalities that are becoming critical in the Big Data regime. Apache is the world's largest open source software organization, boasting over 3000 developers from around the world, all contributing to some of the most pervasive technologies in use today, from the HTTPD web server that powers a majority of Internet web sites to the Hadoop technology that is now projected to be over a $1B industry. Apache data management technologies are emerging as de facto off-the-shelf components for searching, distributing, processing and archiving key science data sets, from the geophysical, space and planetary domains all the way to biomedicine. In this talk, I will give a virtual tour of the Apache Software Foundation, its meritocracy and governance structure, and also its key big data technologies that organizations can take advantage of today to save cost, schedule, and resources in implementing their Big Data needs. I'll illustrate the Apache technologies in the context of several national priority projects, including the U.S. National Climate Assessment (NCA) and the International Square Kilometre Array (SKA) project, which are stretching the boundaries of volume, velocity, complexity, and other key Big Data dimensions.
FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi.
Sapes, Jordi; Solsona, Francesc
2016-02-06
Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the Linux operating system, among them the Raspberry Pi motherboard. Embedding devices in Raspberry-Linux systems is thus a goal in making competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux.
Joint Battlespace Infosphere: Information Management Within a C2 Enterprise
2005-06-01
using. In version 1.2, we support both MySQL and Oracle as underlying implementations where the XML metadata schema is mapped into relational tables in...Identity Servers, Role-Based Access Control, and Policy Representation – Databases: Oracle, MySQL, TigerLogic, Berkeley XML DB...Instrumentation Services...converted to SQL for execution. Invocations are then forwarded to the appropriate underlying IOR core components that have the responsibility of issuing
Factors Leading to Effectiveness and Satisfaction in Civil Engineer Information Systems
2008-03-01
recently acquired MySQL in 2008 shortly after Oracle failed to acquire MySQL in 2007. For more information on policy implications concerning the use...individual level serves as the pertinent outcome variable and is used to evaluate and compare information systems in this study. Researchers have found...interim work information management system used by the Civil Engineer Operations Flight. The functions served by this system date back to the late
Large-scale Graph Computation on Just a PC
2014-05-01
edges for several vertices simultaneously). We compared the performance of GraphChi-DB to Neo4j using their Java API (we discuss MySQL comparison in the...4.7.6 Comparison to RDBMS (MySQL)...4.7.7 Summary of the...Windows method, GraphChi. The C++ implementation has circa 8,000 lines of code. We have also developed a Java version of GraphChi, but it does not
Interactive DataBase of Cosmic Ray Anisotropy (DB A10)
NASA Astrophysics Data System (ADS)
Asipenka, A.S.; Belov, A.V.; Eroshenko, E.F.; Klepach, E.G.; Oleneva, V.A.; Yake, V.G.
Data on the hourly means of cosmic ray density and anisotropy derived by the GSM method over the period 1957-2006 have been introduced into a MySQL database. This format allows access to the data both locally and over the Internet. Using a combination of the scripting language PHP and the MySQL database, an Internet project was created that gives users access to data on CR anisotropy in different formats (http://cr20.izmiran.ru/AnisotropyCR/main.htm/). The PHP/MySQL combination provides fast data retrieval even over the Internet, since a request and the subsequent processing of the data are carried out on the project server. Storing the cosmic ray variation data in MySQL makes it possible to construct queries of different structures, extends the variety of ways the data can be presented, and allows the data to be matched to other systems and used in other projects.
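A minimal sketch of the kind of server-side request the PHP front end might issue is shown below as a parameterized JDBC query in Java; the table name cr_hourly and its columns are invented for illustration and are not the project's actual schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AnisotropyQuery {
    // Prints hourly cosmic ray density values for a date range.
    // Table and column names are illustrative guesses, not the real schema.
    public static void printDensity(Connection con, String from, String to)
            throws SQLException {
        String sql = "SELECT obs_hour, density FROM cr_hourly "
                   + "WHERE obs_hour BETWEEN ? AND ? ORDER BY obs_hour";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, from);   // e.g. "1957-01-01 00:00:00"
            ps.setString(2, to);     // e.g. "1957-12-31 23:00:00"
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s  %.3f%n",
                            rs.getTimestamp(1), rs.getDouble(2));
                }
            }
        }
    }
}
```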
Mortality in Code Blue; can APACHE II and PRISM scores be used as markers for prognostication?
Bakan, Nurten; Karaören, Gülşah; Tomruk, Şenay Göksu; Keskin Kayalar, Sinem
2018-03-01
Code blue (CB) is an emergency call system developed to respond to cardiac and respiratory arrest in hospitals. However, no scoring system has been reported in the literature that can predict mortality in CB procedures. In this study, we retrospectively analyzed CB calls to investigate the effectiveness of estimated APACHE II and PRISM scores in the prediction of mortality in patients assessed by the CB team. We retrospectively examined 1195 patients who were evaluated by the CB team at our hospital between 2009 and 2013. The demographic data of the patients, diagnosis and relevant departments, reasons for CB, cardiopulmonary resuscitation duration, mortality calculated from the APACHE II and PRISM scores, and the actual mortality rates were retrospectively recorded from CB notification forms and the hospital database. In all age groups, there was a significant difference between the actual mortality rate and the expected mortality rate as estimated using APACHE II and PRISM scores in CB calls (p<0.05); the actual mortality rate was significantly lower than the expected mortality. APACHE II and PRISM scores with the available parameters will not help predict mortality in CB procedures. Therefore, novel scoring systems using different parameters are needed.
Andersen, Douglas C.
1994-01-01
Apache cicada (Homoptera: Cicadidae: Diceroprocta apache Davis) densities were estimated to be 10 individuals/m2 within a closed-canopy stand of Fremont cottonwood (Populus fremontii) and Goodding willow (Salix gooddingii) in a revegetated site adjacent to the Colorado River near Parker, Arizona. Coupled with data drawn from the literature, I estimate that up to 1.3 cm (13 l/m2) of water may be added to the upper soil layers annually through the feeding activities of cicada nymphs. This is equivalent to 12% of the annual precipitation received in the study area. Apache cicadas may have significant effects on ecosystem functioning via effects on water transport and thus act as a critical-link species in this southwest desert riverine ecosystem. Cicadas emerged later within the cottonwood-willow stand than in relatively open saltcedar-mesquite stands; this difference in temporal dynamics would affect their availability to several insectivorous bird species and may help explain the birds' recent declines. Resource managers in this region should be sensitive to the multiple and strong effects that Apache cicadas may have on ecosystem structure and functioning.
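The water figure is a direct unit conversion, which can be checked as follows; combined with the stated 12% share, it implies annual precipitation of roughly 11 cm/yr at the site, a figure inferred here rather than stated in the abstract:

```latex
1.3\,\mathrm{cm} \times 1\,\mathrm{m}^2 = 0.013\,\mathrm{m}^3 = 13\,\mathrm{l/m^2},
\qquad
\frac{13\,\mathrm{l/m^2}}{0.12} \approx 108\,\mathrm{l/m^2} \approx 10.8\,\mathrm{cm/yr}.
```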
VIRmiRNA: a comprehensive resource for experimentally validated viral miRNAs and their targets.
Qureshi, Abid; Thakur, Nishant; Monga, Isha; Thakur, Anamika; Kumar, Manoj
2014-01-01
Viral microRNAs (miRNAs) regulate the gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host-virus interactions and the pathogenesis of viral diseases. Lately, miRNAs have also shown potential as important targets for the development of novel antiviral therapeutics. Although several repositories of miRNAs and their targets are available for human and other organisms in the literature, a dedicated resource on viral miRNAs and their targets has been lacking. Therefore, we have developed a comprehensive viral miRNA resource harboring information on 9133 entries in three subdatabases. This includes 1308 experimentally validated miRNA sequences, with their isomiRs, encoded by 44 viruses in the viral miRNA database 'VIRmiRNA', and 7283 of their target genes in 'VIRmiRTar'. Additionally, there is information on 542 antiviral miRNAs encoded by the host against 24 viruses in the antiviral miRNA database 'AVIRmir'. The web interface was developed using the Linux-Apache-MySQL-PHP (LAMP) software bundle. User-friendly browse, search, advanced search and useful analysis tools are also provided on the web interface. VIRmiRNA is the first specialized resource of experimentally proven virus-encoded miRNAs and their associated targets. This database would enhance the understanding of viral/host gene regulation and may also prove beneficial in the development of antiviral therapeutics. Database URL: http://crdd.osdd.net/servers/virmirna. © The Author(s) 2014. Published by Oxford University Press.
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.
Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen
2013-03-01
Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
Jentzer, Jacob C; Bennett, Courtney; Wiley, Brandon M; Murphree, Dennis H; Keegan, Mark T; Gajic, Ognjen; Wright, R Scott; Barsness, Gregory W
2018-03-10
Optimal methods of mortality risk stratification in patients in the cardiac intensive care unit (CICU) remain uncertain. We evaluated the ability of the Sequential Organ Failure Assessment (SOFA) score to predict mortality in a large cohort of unselected patients in the CICU. Adult patients admitted to the CICU from January 1, 2007, to December 31, 2015, at a single tertiary care hospital were retrospectively reviewed. SOFA scores were calculated daily, and Acute Physiology and Chronic Health Evaluation (APACHE)-III and APACHE-IV scores were calculated on CICU day 1. Discrimination of hospital mortality was assessed using area under the receiver-operator characteristic curve (AUC) values. We included 9961 patients, with a mean age of 67.5±15.2 years; all-cause hospital mortality was 9.0%. Day 1 SOFA score predicted hospital mortality with an AUC value of 0.83; AUC values were similar for APACHE-III and APACHE-IV predicted mortality (P>0.05). Mean and maximum SOFA scores over multiple CICU days had greater discrimination for hospital mortality (P<0.01). Patients with an increasing SOFA score from day 1 to day 2 had higher mortality. Patients with a day 1 SOFA score <2 were at low risk of mortality. Increasing tertiles of day 1 SOFA score predicted higher long-term mortality (P<0.001 by log-rank test). The day 1 SOFA score has good discrimination for short-term mortality in unselected patients in the CICU, comparable to APACHE-III and APACHE-IV. Advantages of the SOFA score over APACHE include simplicity, improved discrimination using serial scores, and prediction of long-term mortality. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Source Code Analysis Laboratory (SCALe)
2012-04-01
Versus Flagged Nonconformities (FNC): Software System, TP/FNC Ratio — Mozilla Firefox version 2.0, 6/12 (50%); Linux kernel version 2.6.15, 10/126 (8%)...is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular...servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux. A
PDS4: Harnessing the Power of Generate and Apache Velocity
NASA Astrophysics Data System (ADS)
Padams, J.; Cayanan, M.; Hardman, S.
2018-04-01
The PDS4 Generate Tool is a Java-based command-line tool developed by the Cartography and Imaging Sciences Nodes (PDSIMG) for generating PDS4 XML labels, from Apache Velocity templates and input metadata.
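The underlying Velocity idiom is straightforward: a template containing $placeholders is merged against a context of metadata values. The sketch below shows the generic Apache Velocity API rather than the Generate Tool itself; the template file label.vm and the lid key are invented for illustration.

```java
import java.io.StringWriter;
import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class LabelDemo {
    public static void main(String[] args) {
        VelocityEngine engine = new VelocityEngine();
        engine.init();                            // default configuration

        // label.vm is a hypothetical template, e.g. containing
        // <logical_identifier>$lid</logical_identifier>
        Template template = engine.getTemplate("label.vm");

        // The context supplies the input metadata referenced by $lid.
        VelocityContext context = new VelocityContext();
        context.put("lid", "urn:nasa:pds:example:data:product_1");

        StringWriter out = new StringWriter();
        template.merge(context, out);             // substitute metadata values
        System.out.println(out);                  // the generated XML label
    }
}
```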
Improving Block-level Efficiency with scsi-mq
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caldwell, Blake A
2015-01-01
Current generation solid-state storage devices are exposing new bottlenecks in the SCSI and block layers of the Linux kernel, where IO throughput is limited by lock contention, inefficient interrupt handling, and poor memory locality. To address these limitations, the Linux kernel block layer underwent a major rewrite with the blk-mq project to move from a single request queue to a multi-queue model. The Linux SCSI subsystem rework to make use of this new model, known as scsi-mq, has been merged into the Linux kernel, and work is underway for dm-multipath support in the upcoming Linux 4.0 kernel. These pieces were necessary to make use of the multi-queue block layer in a Lustre parallel filesystem with high-availability requirements. We undertook adding support for the 3.18 kernel to Lustre with scsi-mq and dm-multipath patches to evaluate the potential of these efficiency improvements. In this paper we evaluate the block-level performance of scsi-mq with backing storage hardware representative of an HPC-targeted Lustre filesystem. Our findings show that SCSI write request latency is reduced by as much as 13.6%. Additionally, when profiling the CPU usage of our prototype Lustre filesystem, we found that CPU idle time increased by a factor of 7 with Linux 3.18 and blk-mq as compared to a standard 2.6.32 Linux kernel. Our findings demonstrate increased efficiency of the multi-queue block layer even with the disk-based caching storage arrays used in existing parallel filesystems.
MicroRNA Gene Regulatory Networks in Peripheral Nerve Sheath Tumors
2013-09-01
3.0 hierarchical clustering of both the X and the Y-axis using Centroid linkage. The resulting clustered matrices were visualized using Java Treeview...To score potential ceRNA interactions, the 54979 human interactions were loaded into a MySQL database and when the user selects a given mRNA all...on the fly using PHP interactions with MySQL in a similar fashion as previously described in our publicly available databases such as sarcoma
Quantifying Uncertainty in Expert Judgment: Initial Results
2013-03-01
lines of source code were added in. C++ = 32%; JavaScript = 29%; XML = 15%; C = 7%; CSS = 7%; Java = 5%; Other = 5%; LOC = 927,266...much total effort in person-years has been spent on this project? MySQL, the most popular Open Source SQL...as MySQL, Oracle, PostgreSQL, MS SQL Server, ODBC, or Interbase. Features include email reminders, iCal/vCal import/export, remote subscriptions
An External Independent Validation of APACHE IV in a Malaysian Intensive Care Unit.
Wong, Rowena S Y; Ismail, Noor Azina; Tan, Cheng Cheng
2015-04-01
Intensive care unit (ICU) prognostic models are predominantly used in more developed nations such as the United States, Europe and Australia. They are not as popular in Southeast Asian countries due to cost and technology considerations. The purpose of this study is to evaluate the suitability of the acute physiology and chronic health evaluation (APACHE) IV model in a single-centre Malaysian ICU. A prospective study was conducted at the single-centre ICU in Hospital Sultanah Aminah (HSA), Malaysia. External validation of APACHE IV involved a cohort of 916 patients who were admitted in 2009. Model performance was assessed through its calibration and discrimination abilities. A first-level customisation using a logistic regression approach was also applied to improve model calibration. APACHE IV exhibited good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.78. However, the model's overall fit was poor, as indicated by the Hosmer-Lemeshow goodness-of-fit test (Ĉ = 113, P < 0.001). The predicted in-ICU mortality rate (28.1%) was significantly higher than the actual in-ICU mortality rate (18.8%). Model calibration improved after applying first-level customisation (Ĉ = 6.39, P = 0.78), although discrimination was not affected. APACHE IV is not suitable for application in the HSA ICU without further customisation. The model's lack of fit in the Malaysian study is attributed to differences in baseline characteristics between the HSA ICU and APACHE IV datasets. Other possible factors could be differences in clinical practice and in the quality and services of the health care systems of Malaysia and the United States.
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R
2012-01-01
This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Wang, Shengyun; Chen, Dechang
2015-02-01
To investigate the correlation of procalcitonin (PCT) and C-reactive protein (CRP) with the acute physiology and chronic health evaluation II (APACHE II) score and the sequential organ failure assessment (SOFA) score, and to investigate the value of PCT and CRP in assessing prognosis in patients with sepsis. Clinical data of patients admitted to the intensive care unit (ICU) of Changzheng Hospital Affiliated to the Second Military Medical University from January 2011 to June 2014 were retrospectively analyzed. 201 sepsis patients who received PCT and CRP tests and evaluation of APACHE II and SOFA scores were enrolled. The values of PCT, CRP, APACHE II score and SOFA score were compared between survivors (n = 136) and non-survivors (n = 65). The values of PCT and CRP among groups with different APACHE II scores and SOFA scores were compared. The relationships between PCT, CRP and the APACHE II and SOFA scores were analyzed by Spearman correlation analysis. A receiver operating characteristic (ROC) curve was plotted to assess the value of PCT and CRP for the prognosis of patients with sepsis. Compared with the survivor group, the values of PCT [μg/L: 11.03 (19.17) vs. 1.39 (2.61), Z = -4.572, P < 0.001], APACHE II score (19.16±5.32 vs. 10.01±3.88, t = -13.807, P < 0.001) and SOFA score (9.66±4.28 vs. 4.27±3.19, t = -9.993, P < 0.001) in the non-survivor group were significantly increased, but the value of CRP was not significantly different between the two groups [mg/L: 75.22 (110.94) vs. 56.93 (100.75), Z = -0.731, P = 0.665]. The values of PCT were significantly correlated with the APACHE II and SOFA scores (r1 = 0.373, r2 = 0.392, both P < 0.001), but the values of CRP were not (r1 = -0.073, P1 = 0.411; r2 = -0.106, P2 = 0.282). The values of PCT rose significantly as the APACHE II and SOFA scores became higher, but the value of CRP did not. When the APACHE II score was 0-10, 11-20, and > 20, the value of PCT was 1.45 (2.62), 1.96 (9.04), and 7.41 (28.9) μg/L, respectively, and the value of CRP was 57.50 (83.40), 59.00 (119.70), and 77.60 (120.00) mg/L, respectively. When the SOFA score was 0-5, 6-10, and > 10, the value of PCT was 1.43 (3.09), 3.41 (9.75), and 5.43 (29.60) μg/L, respectively, and the value of CRP was 49.30 (86.20), 76.00 (108.70), and 75.60 (118.10) mg/L, respectively. There was a significant difference in PCT between any two groups with different APACHE II and SOFA scores (P < 0.05 or P < 0.01), but no significant differences in CRP were found. The area under the ROC curve (AUC) of PCT for prognosis was significantly greater than that of CRP [0.872 (95% confidence interval 0.811-0.943) vs. 0.512 (95% confidence interval 0.427-0.612), P < 0.001]. When the cut-off value of PCT was 3.36 μg/L, the sensitivity was 66.8% and the specificity was 45.4%. When the cut-off value of CRP was 44.50 mg/L, the sensitivity was 82.2% and the specificity was 80.3%. Compared with CRP, PCT was more strongly correlated with the APACHE II and SOFA scores. PCT can be a better indicator for evaluating the degree of severity, and also the prognosis, of sepsis patients.
A public HTLV-1 molecular epidemiology database for sequence management and data mining.
Araujo, Thessika Hialla Almeida; Souza-Brito, Leandro Inacio; Libin, Pieter; Deforche, Koen; Edwards, Dustin; de Albuquerque-Junior, Antonio Eduardo; Vandamme, Anne-Mieke; Galvao-Castro, Bernardo; Alcantara, Luiz Carlos Junior
2012-01-01
It is estimated that 15 to 20 million people are infected with the human T-cell lymphotropic virus type 1 (HTLV-1). At present, there are more than 2,000 unique HTLV-1 isolate sequences published. A central database to aggregate sequence information from a range of epidemiological aspects including HTLV-1 infections, pathogenesis, origins, and evolutionary dynamics would be useful to scientists and physicians worldwide. As described here, we have developed a database that collects and annotates sequence data and can be accessed through a user-friendly search interface. The HTLV-1 Molecular Epidemiology Database website is available at http://htlv1db.bahia.fiocruz.br/. All data were obtained from publications available at GenBank or through contact with the authors. The database was developed using Apache Webserver 2.1.6 and the MySQL database management system. The webpage interfaces were developed in HTML, with server-side scripting written in PHP. The HTLV-1 Molecular Epidemiology Database is hosted on the Gonçalo Moniz/FIOCRUZ Research Center server. There are currently 2,457 registered sequences, with 2,024 (82.37%) of those sequences representing unique isolates. Of these sequences, 803 (39.67%) contain information about clinical status (TSP/HAM, 17.19%; ATL, 7.41%; asymptomatic, 12.89%; other diseases, 2.17%; and no information, 60.32%). Further, 7.26% of sequences contain information on patient gender, while 5.23% of sequences provide the age of the patient. The HTLV-1 Molecular Epidemiology Database retrieves and stores annotated HTLV-1 proviral sequences from clinical, epidemiological, and geographical studies. The collected sequences and related information are now accessible on a publicly available and user-friendly website. This open-access database will support clinical research and vaccine development related to viral genotype.
2010-01-01
Background The Joint Asia Diabetes Evaluation (JADE) Program is a web-based program incorporating a comprehensive risk engine, care protocols, and clinical decision support to improve ambulatory diabetes care. Methods The JADE Program uses information technology to facilitate healthcare professionals to create a diabetes registry and to deliver an evidence-based care and education protocol tailored to patients' risk profiles. With written informed consent from participating patients and care providers, all data are anonymized and stored in a databank to establish an Asian Diabetes Database for research and publication purposes. Results The JADE electronic portal (e-portal: http://www.jade-adf.org) is implemented as a Java application using the Apache web server, the MySQL database and the Cocoon framework. The JADE e-portal comprises a risk engine which predicts the 5-year probability of major clinical events based on parameters collected during an annual comprehensive assessment. Based on this risk stratification, the JADE e-portal recommends a care protocol tailored to these risk levels, with decision support triggered by various risk factors. Apart from establishing a registry for quality assurance and data tracking, the JADE e-portal also displays trends of risk factor control at each visit to promote doctor-patient dialogue and to empower both parties to make informed decisions. Conclusions The JADE Program is a prototype using information technology to facilitate implementation of a comprehensive care model, as recommended by the International Diabetes Federation. It also enables health care teams to record, manage, track and analyze the clinical course and outcomes of people with diabetes. PMID:20465815
QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks
Thibodeau, Asa; Márquez, Eladio J.; Luo, Oscar; Ruan, Yijun; Shin, Dong-Guk; Stitzel, Michael L.; Ucar, Duygu
2016-01-01
Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database, and the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/. PMID:27336171
Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches
Muin, Michael; Fontelo, Paul
2006-01-01
Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
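The E-utilities side of such an application is an ordinary HTTP GET that returns XML. The paper's backend is PHP; purely for illustration, the sketch below issues the same kind of ESearch call from Java 11's built-in HTTP client (the query term and retmax value are arbitrary).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ESearchDemo {
    public static void main(String[] args) throws Exception {
        // NCBI ESearch returns the PubMed IDs matching a query as XML.
        String url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
                   + "?db=pubmed&term=telemedicine&retmax=5";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());  // raw XML containing <Id> elements
    }
}
```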
Li, Min; Dong, Xiang-yu; Liang, Hao; Leng, Li; Zhang, Hui; Wang, Shou-zhi; Li, Hui; Du, Zhi-Qiang
2017-05-20
Effective management and analysis of precisely recorded phenotypic traits are important components of the selection and breeding of superior livestock. Over two decades, we divergently selected chicken lines for abdominal fat content at Northeast Agricultural University (Northeast Agricultural University High and Low Fat, NEAUHLF), and collected a large volume of phenotypic data related to the investigation of the molecular genetic basis of adipose tissue deposition in broilers. To effectively and systematically store, manage and analyze these phenotypic data, we built the NEAUHLF Phenome Database (NEAUHLFPD). NEAUHLFPD includes the following phenotypic records: pedigree (generations 1-19) and 29 phenotypes, such as body sizes and weights, carcass traits and their corresponding rates. The design and construction strategy of NEAUHLFPD were as follows: (1) Framework design. We used Apache as our web server, MySQL and Navicat as database management tools, and PHP as the HTML-embedded language to create a dynamic interactive website. (2) Structural components. The main interface provides a detailed introduction to the composition and function of the database and the index buttons of its basic structure. The functional modules of NEAUHLFPD have two main components: the first module is the physical storage space for phenotypic data, in which operations on the data can be carried out, such as indexing, filtering, range-setting and searching; the second module performs the calculation of basic descriptive statistics, where data filtered from the database can be used to compute basic statistical parameters with simultaneous conditional sorting. NEAUHLFPD can be used to effectively store and manage not only phenotypic but also genotypic and genomic data, which can facilitate further investigation of the molecular genetic basis of chicken adipose tissue growth and development and expedite the selection and breeding of broilers with low fat content.
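The descriptive statistics computed by the second module are standard. As a minimal sketch of that computation, in Java rather than the site's PHP and with invented names, filtered phenotype values could be summarized as follows (assumes at least two values).

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;

public class PhenotypeStats {
    // Summarizes a list of filtered phenotype values (e.g. abdominal fat
    // weights) with the usual descriptive statistics: n, min, max, mean, SD.
    public static void describe(List<Double> values) {
        DoubleSummaryStatistics s = values.stream()
                .mapToDouble(Double::doubleValue)
                .summaryStatistics();
        double mean = s.getAverage();
        double variance = values.stream()
                .mapToDouble(x -> (x - mean) * (x - mean))
                .sum() / (s.getCount() - 1);              // sample variance
        System.out.printf("n=%d min=%.2f max=%.2f mean=%.2f sd=%.2f%n",
                s.getCount(), s.getMin(), s.getMax(), mean, Math.sqrt(variance));
    }
}
```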
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... Ranger, Lakeside Ranger District, Apache-Sitgreaves National Forests, c/o TEC Inc., 514 Via de la Valle... to other papers serving areas affected by this proposal: Tucson Citizen, Sierra Vista Herald, Nogales...
4. APACHE INDIAN LABORER WITH TEAM AND SCRAPER WORKING ON ...
4. APACHE INDIAN LABORER WITH TEAM AND SCRAPER WORKING ON THE POWER CANAL LINE FOUR MILES ABOVE LIVINGSTONE, ARIZONA Photographer: Walter J. Lubken, June 14, 1906 - Roosevelt Power Canal & Diversion Dam, Parallels Salt River, Roosevelt, Gila County, AZ
Knaus, W. A.; Draper, E. A.; Wagner, D. P.
1991-01-01
The APACHE III data base reflects the disease, physiologic status, and outcome data from 17,400 ICU patients at 40 hospitals, 26 of which were randomly selected to be representative of geographic region, bed size, and teaching status. This provides a nationally representative standard for measuring several important aspects of ICU performance. Results from the study have now been used to develop an automated information system to provide real-time information about expected ICU patient outcome, length of stay, production cost, and ICU performance. The information system provides several new capabilities to ICU clinicians and clinic and hospital administrators. Among the system's capabilities are: the ability to compare local ICU performance against predetermined criteria; the ability to forecast nursing requirements; and the ability to make both individual and group patient outcome predictions. The system also provides improved administrative support by tracking ICU charges at the point of origin, and reduces staff workload by eliminating the requirement for several manually maintained logs and patient lists. APACHE III has the capability to electronically interface with and utilize data already captured in existing hospital information systems, automated laboratory information systems, and patient monitoring systems. APACHE III will also be completely integrated with several CIS vendors' products. PMID:1807779
Better prognostic marker in ICU - APACHE II, SOFA or SAP II!
Naqvi, Iftikhar Haider; Mahmood, Khalid; Ziaullaha, Syed; Kashif, Syed Mohammad; Sharif, Asim
2016-01-01
This study was designed to determine the comparative efficacy of different scoring systems in assessing the prognosis of critically ill patients. This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU), Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. The predicted mortality of APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model. A total of 96 patients with equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97±8.53) was higher than in survivors (15.82±8.79), a statistically significant difference (p<0.001). The average SOFA score in non-survivors (9.68±4.88) was higher than in survivors (5.63±3.63), a statistically significant difference (p<0.001). The average SAP II score in non-survivors (53.71±19.05) was higher than in survivors (30.18±16.24), a statistically significant difference (p<0.001). All three tested scoring models (APACHE II, SAP II and SOFA) would be accurate enough for a general description of our ICU patients. APACHE II showed better calibration and discrimination power than SAP II and SOFA.
2014-06-01
central location. Each of the SQLite databases is converted and stored in one MySQL database, and the pcap files are parsed to extract call information...from the specific communications applications used during the experiment. This extracted data is then stored in the same MySQL database. With all...rhythm of the event. Figure 3 demonstrates the application usage over the course of the experiment for the EXDIR. As seen, the EXDIR spent the majority
Evans, Philip; Wolf, Bob
2005-01-01
Corporate leaders seeking to boost growth, learning, and innovation may find the answer in a surprising place: the Linux open-source software community. Linux is developed by an essentially volunteer, self-organizing community of thousands of programmers. Most leaders would sell their grandmothers for workforces that collaborate as efficiently, frictionlessly, and creatively as the self-styled Linux hackers. But Linux is software, and software is hardly a model for mainstream business. The authors have, nonetheless, found surprising parallels between the anarchistic, caffeinated, hirsute world of Linux hackers and the disciplined, tea-sipping, clean-cut world of Toyota engineering. Specifically, Toyota and Linux operate by rules that blend the self-organizing advantages of markets with the low transaction costs of hierarchies. In place of markets' cash and contracts and hierarchies' authority are rules about how individuals and groups work together (with rigorous discipline); how they communicate (widely and with granularity); and how leaders guide them toward a common goal (through example). Those rules, augmented by simple communication technologies and a lack of legal barriers to sharing information, create rich common knowledge, the ability to organize teams modularly, extraordinary motivation, and high levels of trust, which radically lowers transaction costs. Low transaction costs, in turn, make it profitable for organizations to perform more and smaller transactions--and so increase the pace and flexibility typical of high-performance organizations. Once the system achieves critical mass, it feeds on itself. The larger the system, the more broadly shared the knowledge, language, and work style. The greater individuals' reputational capital, the louder the applause and the stronger the motivation. The success of Linux is evidence of the power of that virtuous circle. Toyota's success is evidence that it is also powerful in conventional companies.
Elan4/SPARC V9 Cross Loader and Dynamic Linker
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabien Lebaillif-Delamare; Fabrizio Petrini
2004-10-25
The Elan4/SPARC V9 Cross Loader and Dynamic Linker is part of the Linux system software that allows the dynamic loading and linking of user code on the network interface Quadrics QsNETII, also called Elan4. Elan4 uses a thread processor based on the assembly instruction set of the SPARC V9. All this software is integrated as a Linux kernel module in the Linux 2.6.5 release.
2015-06-01
examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully...memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills...disable 1287 1000 1000 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 1310 1000 1000 /usr/lib/pulseaudio/pulse/gconf-helper 1350
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
...; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; and Zuni Tribe of the Zuni... Band of Paiutes); San Juan Southern Paiute Tribe of Arizona; Yavapai- Apache Nation of the Camp Verde...
De Oliveira, T; Miller, R; Tarin, M; Cassol, S
2003-01-01
Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based Linux interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains GenBank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from http://www.bioafrica.net.
Rathnakar, Surag Kajoor; Vishnu, Vikram Hubbanageri; Muniyappa, Shridhar; Prasath, Arun
2017-02-01
Acute Pancreatitis (AP) is one of the common conditions encountered in the emergency room. The course of the disease ranges from a mild form to a severe acute form. Most episodes are mild and subside spontaneously within 3 to 5 days. In contrast, in Severe Acute Pancreatitis (SAP), which occurs in around 15-20% of all cases, mortality can range from 10% to 85% across various centres and countries. In such a situation we need an indicator which can predict the outcome of an attack, as severe or mild, as early as possible, and such an indicator should be sensitive and specific enough to be trusted. The PANC-3 score is such a scoring system for predicting the outcome of an attack of AP. To assess the accuracy and predictive ability of the PANC-3 scoring system compared with APACHE II in predicting severity in an attack of AP, this prospective study was conducted on 82 patients admitted with a diagnosis of pancreatitis. Investigations to evaluate PANC-3 and APACHE II were done on all patients, and the PANC-3 and APACHE II scores were calculated. The PANC-3 score has a sensitivity of 82.6% and a specificity of 77.9%; the test had a Positive Predictive Value (PPV) of 0.59 and a Negative Predictive Value (NPV) of 0.92. The sensitivity of APACHE II in predicting SAP was 91.3% and the specificity was 96.6%, with a PPV of 0.91 and an NPV of 0.96. Our study shows that PANC-3 can be used to predict the severity of pancreatitis as efficiently as APACHE II. The interpretation of PANC-3 does not need expertise and it can be applied at the time of admission, which is an advantage over classical scoring systems.
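To relate the reported figures: PPV and NPV follow from sensitivity (Se), specificity (Sp) and disease prevalence p by Bayes' rule. With the PANC-3 values Se = 0.826 and Sp = 0.779, the reported PPV of 0.59 and NPV of 0.92 are reproduced for a SAP prevalence of p ≈ 0.28; that prevalence is inferred here for the check, not stated in the abstract.

```latex
\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}, \qquad
\mathrm{NPV} = \frac{Sp \,(1 - p)}{Sp \,(1 - p) + (1 - Se)\, p}
```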
Significance of blood pressure variability in patients with sepsis.
Pandey, Nishant Raj; Bian, Yu-Yao; Shou, Song-Tao
2014-01-01
This study was undertaken to observe the characteristics of blood pressure variability (BPV) in sepsis and to investigate changes in blood pressure and their value in assessing the severity of illness in patients with sepsis. Blood parameters, APACHE II score, and 24-hour ambulatory BP were analyzed in 89 patients with sepsis. In patients with APACHE II score >19, the values of systolic blood pressure variability (SBPV), diastolic blood pressure variability (DBPV), non-dipper percentage, cortisol (COR), lactate (LAC), platelet count (PLT) and glucose (GLU) were significantly higher than in those with APACHE II score ≤19 (P<0.05), whereas the differences in procalcitonin (PCT), white blood cell (WBC), creatinine (Cr), PaO2, C-reactive protein (CRP), adrenocorticotropic hormone (ACTH) and tumor necrosis factor α (TNF-α) were not statistically significant (P>0.05). Correlation analysis showed that APACHE II scores correlated significantly with SBPV and DBPV (P<0.01, r=0.732 and P<0.01, r=0.762). SBPV and DBPV were correlated with COR (P=0.018 and r=0.318; P=0.008 and r=0.353, respectively). However, SBPV and DBPV were not correlated with TNF-α, IL-10, and PCT (P>0.05). Logistic regression analysis of SBPV, DBPV, APACHE II score, and LAC was used to predict prognosis in terms of survival and non-survival rates. The receiver operating characteristic (ROC) curve showed that DBPV was a better predictor of survival rate, with an AUC value of 0.890; the AUCs of SBPV, APACHE II score, and LAC were 0.746, 0.831 and 0.915, respectively. The values of SBPV, DBPV and non-dipper percentage are higher in patients with sepsis. DBPV and SBPV can be used to predict the survival rate of patients with sepsis.
Mortality Probability Model III and Simplified Acute Physiology Score II
Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams
2009-01-01
Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
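Assuming the standard definition, the coefficient of determination quoted for these LOS models compares squared prediction errors against the variance of the observed LOS around its mean, so that R2 = 0.422 means the recalibrated APACHE IV model explains about 42% of that variance:

```latex
R^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}
```

Here y_i is the observed and ŷ_i the predicted ICU LOS for patient i, and ȳ is the mean observed LOS.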
Vinnik, Y S; Dunaevskaya, S S; Antufrieva, D A
2015-01-01
The aim of the study was to evaluate the diagnostic value of the specific and nonspecific scoring systems (Tolstoy-Krasnogorov score, Ranson, BISAP, Glasgow, MODS 2, APACHE II and CTSI) used in urgent pancreatology for estimating the severity of acute pancreatitis and the status of the patient. 1550 case reports of patients who received inpatient surgical treatment at the Road Clinical Hospital at the station Krasnoyarsk from 2009 till 2013 were analyzed. The diagnosis of severe acute pancreatitis and its complications was determined based on anamnestic data, physical examination, clinical indexes, ultrasonic examination and computed tomography angiography. The specific and nonspecific scores (the Tolstoy-Krasnogorov, Ranson, Glasgow, BISAP, MODS 2, APACHE II and CTSI scoring systems) were used for estimating the severity of acute pancreatitis and the patient's general condition. The effectiveness of these scoring systems was determined based on several parameters: accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV). The most valuable score for estimating the severity of acute pancreatitis is BISAP (Se 98.10%); for estimating organ failure, MODS 2 (Sp 100%, PPV 100%) and APACHE II (Sp 100%, PPV 100%); for detecting signs of pancreatonecrosis, CTSI (Sp 100%, NPV 100%); for estimating the need for intensive care, MODS 2 (Sp 100%, PPV 100%, NPV 96.29%) and APACHE II (Sp 100%, PPV 100%, NPV 97.21%); for predicting lethality, MODS 2 (Se 100%, Sp 98.14%, NPV 100%) and APACHE II (Se 95.00%, NPV 99.86%). The most effective scores for estimating the severity of acute pancreatitis are the Tolstoy-Krasnogorov, Ranson, Glasgow and BISAP systems. The high specificity and positive predictive value of the MODS 2 and APACHE II scoring systems allow their use in clinical practice.
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R
2011-01-01
This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
ERIC Educational Resources Information Center
Arizona Univ., Tucson. Coll. of Medicine.
Designed to provide health services for American Indians living on rurally isolated reservations, the Arizona TeleMedicine Project proposes to link Phoenix and Tucson medical centers, via a statewide telecommunications system, with the Hopi, San Carlos Apache, Papago, Navajo, and White Mountain Apache reservations. Advisory boards are being…
25 CFR 183.1 - What is the purpose of this part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Introduction... Tribe Water Settlement Act (the Act), Public Law 102-575, 106 Stat. 4748, that requires regulations to administer the Trust Fund, and the Lease Fund established by the Act. ...
DOT National Transportation Integrated Search
2016-03-03
This report summarizes the observations and findings of an interagency transportation assistance group (TAG) convened to discuss the long-term future of Arizona State Route 88, also known as the Apache Trail, a historic road on the Tonto Nation...
An expression database for roots of the model legume Medicago truncatula under salt stress
2009-01-01
Background Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. Description The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities like mapping probe sets to the genome of M. truncatula and In-Silico PCR were implemented with the BLAT software suite and are also available through the MtED database. Conclusion MtED was built with the PHP scripting language and a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/. PMID:19906315
An expression database for roots of the model legume Medicago truncatula under salt stress.
Li, Daofeng; Su, Zhen; Dong, Jiangli; Wang, Tao
2009-11-11
Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities like mapping probe sets to the genome of M. truncatula and In-Silico PCR were implemented with the BLAT software suite and are also available through the MtED database. MtED was built with the PHP scripting language and a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.
NATIONAL GEOSCIENCE DATA REPOSITORY SYSTEM PHASE III: IMPLEMENTATION AND OPERATION OF THE REPOSITORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcus Milling
2001-10-01
The NGDRS has attained 72% of its targeted goal for core and cuttings transfers; over 12 million linear feet of cores and cuttings, in addition to large numbers of paleontological samples, are now available for public use. Additionally, large-scale transfers of seismic data have been evaluated, but based on the recommendation of the NGDRS steering committee, cores have been given priority because of the vast scale of the seismic data problem relative to the available funding. The rapidly changing industry conditions have required that the primary core and cuttings preservation strategy evolve as well. Additionally, the NGDRS clearinghouse is evaluating the viability of transferring seismic data covering the western shelf of the Florida Gulf Coast. AGI remained actively involved in assisting the National Research Council with background materials and presentations for their panel convened to study the data preservation issue. A final report of the panel is expected in early 2002. GeoTrek has been ported to Linux and MySQL, ensuring a purely open-source version of the software. This effort is key in ensuring the long-term viability of the software so that it can continue basic operation regardless of specific funding levels. Work has commenced on a major revision of GeoTrek, using the open-source MapServer project and its related MapScript language. This effort will address a number of key technology issues that appear to be rising for 2002, including the discontinuation of the use of Java in future Microsoft operating systems. Discussions have been held regarding establishing potential new public data repositories, with hope for final determination in 2002.
NATIONAL GEOSCIENCE DATA REPOSITORY SYSTEM PHASE III: IMPLEMENTATION AND OPERATION OF THE REPOSITORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcus Milling
2003-04-01
The NGDRS has facilitated the transfer of 85% of the cores, cuttings, and other data identified as available for transfer to the public sector. Over 12 million linear feet of cores and cuttings, along with large numbers of paleontological samples, are now available for public use. To date, with industry contributions for program operations and data transfers, the NGDRS project has realized a 6.5 to 1 return on investment to Department of Energy funds. Large-scale transfers of seismic data have been evaluated, but based on the recommendation of the NGDRS steering committee, cores have been given priority because of the vast scale of the seismic data problem relative to the available funding. The rapidly changing industry conditions have required that the primary core and cuttings preservation strategy evolve as well. Additionally, the NGDRS clearinghouse is evaluating the viability of transferring seismic data covering the western shelf of the Florida Gulf Coast. AGI remains actively involved in working to realize the vision of the National Research Council's report on geoscience data preservation. GeoTrek has been ported to Linux and MySQL, ensuring a purely open-source version of the software. This effort is key in ensuring long-term viability of the software so that it can continue basic operation regardless of specific funding levels. Work has commenced on a major revision of GeoTrek, using the open-source MapServer project and its related MapScript language. This effort will address a number of key technology issues that appear to be arising for 2002, including the discontinuation of the use of Java in future Microsoft operating systems. Discussions have been held regarding establishing potential new public data repositories, with hope for final determination in 2002.
NATIONAL GEOSCIENCE DATA REPOSITORY SYSTEM PHASE III: IMPLEMENTATION AND OPERATION OF THE REPOSITORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcus Milling
2003-10-01
The NGDRS has facilitated the transfer of 85% of the cores, cuttings, and other data identified as available for transfer to the public sector. Over 12 million linear feet of cores and cuttings, along with large numbers of paleontological samples, are now available for public use. To date, with industry contributions for program operations and data transfers, the NGDRS project has realized a 6.5 to 1 return on investment to Department of Energy funds. Large-scale transfers of seismic data have been evaluated, but based on the recommendation of the NGDRS steering committee, cores have been given priority because of the vast scale of the seismic data problem relative to the available funding. The rapidly changing industry conditions have required that the primary core and cuttings preservation strategy evolve as well. Additionally, the NGDRS clearinghouse is evaluating the viability of transferring seismic data covering the western shelf of the Florida Gulf Coast. AGI remains actively involved in working to realize the vision of the National Research Council's report on geoscience data preservation. GeoTrek has been ported to Linux and MySQL, ensuring a purely open-source version of the software. This effort is key in ensuring long-term viability of the software so that it can continue basic operation regardless of specific funding levels. Work has continued on a major revision of GeoTrek, using the open-source MapServer project and its related MapScript language. This effort will address a number of key technology issues that appear to be arising for 2003, including the discontinuation of the use of Java in future Microsoft operating systems. The recent donation of BPAmoco's Houston core facility to the Texas Bureau of Economic Geology has provided substantial short-term relief of the space constraints for public repository space.
NATIONAL GEOSCIENCE DATA REPOSITORY SYSTEM PHASE III: IMPLEMENTATION AND OPERATION OF THE REPOSITORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcus Milling
2002-10-01
The NGDRS has facilitated the transfer of 85% of the cores, cuttings, and other data identified as available for transfer to the public sector. Over 12 million linear feet of cores and cuttings, along with large numbers of paleontological samples, are now available for public use. To date, with industry contributions for program operations and data transfers, the NGDRS project has realized a 6.5 to 1 return on investment to Department of Energy funds. Large-scale transfers of seismic data have been evaluated, but based on the recommendation of the NGDRS steering committee, cores have been given priority because of the vast scale of the seismic data problem relative to the available funding. The rapidly changing industry conditions have required that the primary core and cuttings preservation strategy evolve as well. Additionally, the NGDRS clearinghouse is evaluating the viability of transferring seismic data covering the western shelf of the Florida Gulf Coast. AGI remains actively involved in working to realize the vision of the National Research Council's report on geoscience data preservation. GeoTrek has been ported to Linux and MySQL, ensuring a purely open-source version of the software. This effort is key in ensuring long-term viability of the software so that it can continue basic operation regardless of specific funding levels. Work has commenced on a major revision of GeoTrek, using the open-source MapServer project and its related MapScript language. This effort will address a number of key technology issues that appear to be arising for 2002, including the discontinuation of the use of Java in future Microsoft operating systems. Discussions have been held regarding establishing potential new public data repositories, with hope for final determination in 2002.
Falcone, Emmanuela; Grandoni, Luca; Garibaldi, Francesca; Manni, Isabella; Filligoi, Giancarlo; Piaggio, Giulia; Gurtner, Aymone
2016-01-01
miRNAs are potent regulators of gene expression and modulate multiple cellular processes in physiology and pathology. Deregulation of miRNA expression has been found in various cancer types; thus, miRNAs may be potential targets for cancer therapy. However, the mechanisms through which miRNAs are regulated in cancer remain unclear. Therefore, the identification of transcription factor-miRNA crosstalk is one of the most topical aspects of the study of miRNA regulation. In the present study we describe the development of fast and user-friendly software, named infinity, able to find the presence of DNA matrices, such as binding sequences for transcription factors, on ~65 kb (kilobases) of 939 human miRNA genomic sequences simultaneously. Of note, the power of this software has been validated in vivo by performing chromatin immunoprecipitation assays on a subset of new in silico identified target sequences (CCAAT) for the transcription factor NF-Y on colon cancer deregulated miRNA loci. Moreover, for the first time, we have demonstrated that NF-Y, through its CCAAT binding activity, regulates the expression of miRNA-181a, -181b, -21, -17, -130b, and -301b in colon cancer cells. The infinity software that we have developed is a powerful tool to uncover new TF/miRNA regulatory networks. Infinity was implemented in pure Java using the Eclipse framework, and runs on Linux and MS Windows machines with a MySQL database. The software is freely available on the web at https://github.com/bio-devel/infinity. The website is implemented in JavaScript, PHP and HTML with all major browsers supported.
Rahman, Mahabubur; Watabe, Hiroshi
2018-05-01
Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges, we developed a flexible, transparent, and secure infrastructure, named MIRA (Molecular Imaging Repository and Analysis), implemented primarily in the Python programming language with a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach facilitates on-the-fly access to all of MIRA's features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform to accelerate advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
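The archiving core the MIRA abstract describes, Python in front of a MySQL metadata store on Linux, can be illustrated with a short sketch. The table layout below is hypothetical (the abstract does not publish MIRA's schema), and sqlite3 stands in for MySQL:

```python
# Minimal sketch of the archiving pattern MIRA describes: content-hash an
# uploaded image file, then record it in a relational metadata store so it
# can be searched, audited, and shared. All names here are illustrative.
import hashlib
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

db = sqlite3.connect("mira_sketch.db")
db.execute("""CREATE TABLE IF NOT EXISTS images (
    sha256     TEXT PRIMARY KEY,  -- content hash: detects duplicates, audits integrity
    filename   TEXT,
    modality   TEXT,              -- e.g. PET, SPECT, optical
    uploaded   TEXT
)""")

def archive(path, modality):
    """Register one image file; returns its content hash."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    db.execute(
        "INSERT OR IGNORE INTO images VALUES (?, ?, ?, ?)",
        (digest, Path(path).name, modality,
         datetime.now(timezone.utc).isoformat()),
    )
    db.commit()
    return digest
```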
Stone, Paul; Cossette, P.M.
2000-01-01
The Apache Canyon 7.5-minute quadrangle is located in southwestern California about 55 km northeast of Santa Barbara and 65 km southwest of Bakersfield. This report presents the results of a geologic mapping investigation of the Apache Canyon quadrangle that was carried out in 1997-1999 as part of the U.S. Geological Survey's Southern California Areal Mapping Project. This quadrangle was chosen for study because it is in an area of complex, incompletely understood Cenozoic stratigraphy and structure of potential importance for regional tectonic interpretations, particularly those involving the San Andreas fault located just northwest of the quadrangle and the Big Pine fault about 10 km to the south. In addition, the quadrangle is notable for its well-exposed sequences of folded Neogene nonmarine strata including the Caliente Formation of Miocene age from which previous workers have collected and described several biostratigraphically significant land-mammal fossil assemblages. During the present study, these strata were mapped in detail throughout the quadrangle to provide an improved framework for possible future paleontologic investigations. The Apache Canyon quadrangle is in the eastern part of the Cuyama 30-minute by 60-minute quadrangle and is largely part of an erosionally dissected terrain known as the Cuyama badlands at the east end of Cuyama Valley. Most of the Apache Canyon quadrangle consists of public lands in the Los Padres National Forest.
Survival of Apache Trout eggs and alevins under static and fluctuating temperature regimes
Recsetar, Matthew S.; Bonar, Scott A.
2013-01-01
Increased stream temperatures due to global climate change, livestock grazing, removal of riparian cover, reduction of stream flow, and urbanization will have important implications for fishes worldwide. Information exists that describes the effects of elevated water temperatures on fish eggs, but less information is available on the effects of fluctuating water temperatures on egg survival, especially those of threatened and endangered species. We tested the posthatch survival of eyed eggs and alevins of Apache Trout Oncorhynchus gilae apache, a threatened salmonid, in static temperatures of 15, 18, 21, 24, and 27°C, and also in treatments with diel fluctuations of ±3°C around those temperatures. The LT50 for posthatch survival of Apache Trout eyed eggs and alevins was 17.1°C for static temperatures treatments and 17.9°C for the midpoints of ±3°C fluctuating temperature treatments. There was no significant difference in survival between static temperatures and fluctuating temperatures that shared the same mean temperature, yet there was a slight difference in LT50s. Upper thermal tolerance of Apache Trout eyed eggs and alevins is much lower than that of fry to adult life stages (22–23°C). Information on thermal tolerance of early life stages (eyed egg and alevin) will be valuable to those restoring streams or investigating thermal tolerances of imperiled fishes.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
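The space-filling-curve-based data handling mentioned above relies on the fact that points ordered along such a curve keep spatial neighbors close together in memory and on disk, which aids compression and scalable I/O. As an illustration of the general technique only, not the paper's implementation, here is a minimal Morton (Z-order) ordering in Python:

```python
# Sort atom positions along a Morton (Z-order) space-filling curve so that
# spatially nearby atoms land near each other in the output stream.
# The grid cell size and bit depth are illustrative assumptions.

def morton3d(ix, iy, iz, bits=10):
    """Interleave the bits of three integer grid coordinates."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def spatial_sort(atoms, cell=1.0):
    """Sort (x, y, z) atom positions along the Morton curve."""
    return sorted(atoms, key=lambda p: morton3d(
        int(p[0] / cell), int(p[1] / cell), int(p[2] / cell)))

print(spatial_sort([(9.0, 1.0, 0.0), (0.5, 0.4, 0.1), (0.6, 0.5, 0.2)]))
```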
Lutzomyia (Helcocyrtomyia) Apache Young and Perkins (Diptera: Psychodidae) feeds on reptiles
USDA-ARS?s Scientific Manuscript database
Phlebotomine sand flies are vectors of bacteria, parasites, and viruses. In the western USA a sand fly, Lutzomyia apache Young and Perkins, was initially associated with epizootics of vesicular stomatitis virus (VSV), because sand flies were trapped at sites of an outbreak. Additional studies indica...
ERIC Educational Resources Information Center
Pono, Filomena P.; And Others
The Jicarilla Apache people celebrate a young girl's coming of age by having a feast called "Keesda". Derived from the Spanish word "fiesta", "Keesda" is a Jicarilla Apache word meaning "feast". This feast is held for four days, usually during the summer months. However, it may be held at any time during the…
RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul D.
2014-06-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.
RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
2015-10-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.
Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.
Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William
2018-05-08
Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.
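The workflow the authors endorse, packaging a tool and its environment into a self-contained image and invoking it identically on any host, can be sketched in a few lines. The image and command names below are hypothetical; only standard `docker run` options are assumed:

```python
# Minimal sketch of running an analysis step inside a Linux container so
# the software environment is fixed and the run is reproducible.
import subprocess

def run_containerized(image, command, data_dir):
    """Run `command` inside `image`, mounting data_dir read-only at /data."""
    return subprocess.run(
        ["docker", "run", "--rm",        # remove the container afterwards
         "-v", f"{data_dir}:/data:ro",   # mount input data read-only
         image] + command,
        check=True,
    )

# e.g. run_containerized("example/seg-tool:1.0", ["segment", "/data/scan.nii"], "/tmp/scans")
```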
Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, K.; Iskra, K.; Naik, H.
2011-05-01
We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory', an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
A General Purpose High Performance Linux Installation Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachsmann, Alf
2002-06-17
With more and more and larger and larger Linux clusters, the question arises how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
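The chain described above (PXE boot, DHCP/TFTP, then kickstart over NFS) is largely driven by small generated text files. As a hedged illustration, here is how per-node PXELINUX entries pointing at node-specific kickstart files might be produced; the paths, kernel names, and NFS server are assumptions, not SLAC's actual configuration:

```python
# Sketch: generate a per-node PXELINUX config that boots the installer
# kernel and points kickstart at an NFS-exported configuration tree.
from pathlib import Path

TEMPLATE = """default install
label install
  kernel vmlinuz
  append initrd=initrd.img ks=nfs:{server}:/exports/ks/{host}.cfg
"""

def write_pxe_config(host, mac, server="10.0.0.1",
                     cfg_dir="/tftpboot/pxelinux.cfg"):
    # PXELINUX looks for a file named 01-<MAC, colons replaced by dashes>
    name = "01-" + mac.lower().replace(":", "-")
    Path(cfg_dir, name).write_text(TEMPLATE.format(server=server, host=host))

# e.g. write_pxe_config("node042", "00:0A:95:9D:68:16")
```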
Conservation priorities in the Apache Highlands ecoregion
Dale Turner; Rob Marshall; Carolyn A. F. Enquist; Anne Gondor; David F. Gori; Eduardo Lopez; Gonzalo Luna; Rafaela Paredes Aguilar; Chris Watts; Sabra Schwartz
2005-01-01
The Apache Highlands ecoregion incorporates the entire Madrean Archipelago/Sky Island region. We analyzed the current distribution of 223 target species and 26 terrestrial ecological systems there, and compared them with constraints on ecosystem integrity (e.g., road density) to determine the most efficient set of areas needed to maintain current biodiversity. The...
Recapturing the Past with Digital Imaging
ERIC Educational Resources Information Center
Gronseth, Susie
2008-01-01
Theodore Roosevelt School (TRS) is surrounded by culture and history. Located on the grounds of the former Fort Apache Army Post, TRS serves sixth- through eighth-grade native students, primarily from the White Mountain Apache Tribe. Tradition and culture are so much a part of the TRS students' background of experiences that teachers at the school…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
...; Hualapai Indian Tribe of the Hualapai Indian Reservation, Arizona; Jicarilla Apache Nation, New Mexico; Kaibab Band of Paiute Indians of the Kaibab Indian Reservation, Arizona; Kewa Pueblo, New Mexico (formerly the Pueblo of Santo Domingo); Mescalero Apache Tribe of the Mescalero Reservation, New Mexico...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
.... Apache Junction Public Library, 1177 N. Idaho Road, Apache Junction, Arizona 85219. Buckeye Public Library, 310 North 6th Street, Buckeye, Arizona 85326. Casa Grande Public Library, 449 North Dry Lake, Casa Grande, Arizona 85222. Gila Bend Public Library, 202 North Euclid Avenue, Gila Bend, Arizona 85337...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... Verde Indian Reservation, Arizona; Yavapai-Prescott Tribe of the Yavapai Reservation, Arizona; Ysleta...-Apache Nation of the Camp Verde Indian Reservation, Arizona; and Yavapai-Prescott Tribe of the Yavapai... Reservation, Arizona; Yavapai-Apache Nation of the Camp Verde Indian Reservation, Arizona; Yavapai-Prescott...
Forest resources of the Apache-Sitgreaves National Forest
Paul Rogers
2008-01-01
The Interior West Forest Inventory and Analysis (IWFIA) program of the USDA Forest Service, Rocky Mountain Research Station, as part of its national Forest Inventory and Analysis (FIA) duties, conducted forest resource inventories of the Southwestern Region (Region 3) National Forests. This report presents highlights of the Apache-Sitgreaves National Forest...
Context-Based Mobile Security Enclave
2012-09-01
... Application Programming Interfaces (APIs), which use Java compatible libraries based on Apache Harmony (an open source Java implementation developed by the Apache
Saad, Sameh; Mohamed, Naglaa; Moghazy, Amr; Ellabban, Gouda; El-Kamash, Soliman
2016-01-01
The Trauma and Injury Severity Score (TRISS) and Acute Physiology and Chronic Health Evaluation IV (APACHE IV) are accurate but complex. This study aimed to compare venous glucose, serum lactate, and base deficit in polytraumatized patients as simple parameters for predicting mortality, against TRISS and APACHE IV. This was a comparative cross-sectional study of 282 patients with polytrauma presenting to the Emergency Department (ED). The best cutoff value of the TRISS probability of survival score for prediction of mortality among polytraumatized patients was ≤90. APACHE IV demonstrated 67% sensitivity and 95% specificity (95% CI) at a cutoff point of 99. The best cutoff value of random blood sugar was >140 mg/dl, with 89% sensitivity and 49% specificity; base deficit was less than -5.6, with 64% sensitivity and 93% specificity; lactate was >2.6 mmol/L, with 92% sensitivity and 42% specificity. Venous glucose, serum lactate and base deficit are easy and rapid biochemical predictors of mortality in patients with polytrauma. These predictors could be used like TRISS and APACHE IV in predicting mortality.
Cwik, Mary F; Tingey, Lauren; Maschino, Alexandra; Goklish, Novalene; Larzelere-Hinton, Francene; Walkup, John; Barlow, Allison
2016-12-01
We evaluated the impact of a comprehensive, multitiered youth suicide prevention program among the White Mountain Apache of Arizona since its implementation in 2006. Using data from the tribally mandated Celebrating Life surveillance system, we compared the rates, numbers, and characteristics of suicide deaths and attempts from 2007 to 2012 with those from 2001 to 2006. The overall Apache suicide death rates dropped from 40.0 to 24.7 per 100 000 (38.3% decrease), and the rate among those aged 15 to 24 years dropped from 128.5 to 99.0 per 100 000 (23.0% decrease). The annual number of attempts also dropped from 75 (in 2007) to 35 individuals (in 2012). National rates remained relatively stable during this time, at 10 to 13 per 100 000. Although national rates remained stable or increased slightly, the overall Apache suicide death rates dropped following the suicide prevention program. The community surveillance system served a critical role in providing a foundation for prevention programming and evaluation.
Schein, M; Gecelter, G
1989-07-01
This study examined the prognostic value of the APACHE II scoring system in patients undergoing emergency operations for bleeding peptic ulcer. There were 96 operations for gastric ulcers and 58 for duodenal ulcers. The mean scores in survivors and in patients who died were 10.8 and 17.5 respectively. None of the 66 patients with an APACHE II score less than 11 died, while the mortality rate in those scored greater than 10 was 22 per cent. In patients scored greater than 10 non-resective procedures carried less risk of mortality than gastrectomy. The APACHE II score is useful when measuring the severity of the acute disease and predicting the outcome in these patients. If used in daily practice it may assist the surgeon in stratifying patients into a low-risk group (score less than 11) in which major operations are well tolerated and outcome is favourable and a high-risk group (score greater than 10) in which the risk of mortality is high and the performance of procedures of lesser magnitude is probably more likely to improve survival.
[Study for lung sound acquisition module based on ARM and Linux].
Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing
2011-07-01
An acquisition module with ARM and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can acquire human lung sounds reliably and effectively.
Berghmans, T; Paesmans, M; Sculier, J P
2004-04-01
To evaluate the effectiveness of a specific oncologic scoring system, the ICU Cancer Mortality model (ICM), in predicting hospital mortality in comparison to two general severity scores, the Acute Physiology and Chronic Health Evaluation (APACHE II) and the Simplified Acute Physiology Score (SAPS II). All 247 patients admitted for an acute medical complication over an 18-month period in an oncological medical intensive care unit were prospectively registered. Their data, including type of complication, vital status at discharge and cancer characteristics, as well as other variables necessary to calculate the three scoring systems, were retrospectively assessed. Observed in-hospital mortality was 34%. The predicted in-hospital mortality rate was 32% for APACHE II, 24% for SAPS II, and 28% for ICM. The goodness of fit was inadequate except for the ICM score. Comparison of the areas under the ROC curves revealed a better fit for ICM (area 0.79). The maximum correct classification rate was 72% for APACHE II, 74% for SAPS II and 77% for ICM. APACHE II and SAPS II were better at predicting outcome for survivors to hospital discharge, whereas ICM was better for non-survivors. Two variables independently predicted the risk of death during hospitalisation: ICM (OR=2.31) and SAPS II (OR=1.05). Severity scores were the only independent predictors of hospital mortality, and ICM was equivalent to APACHE II and SAPS II.
Que, Ri-sheng; Cao, Li-ping; Ding, Guo-ping; Hu, Jun-an; Mao, Ke-jie; Wang, Gui-feng
2010-05-01
To investigate the correlation of nitric oxide (NO) and other free radicals with the severity of acute pancreatitis (AP) and complicating systemic inflammatory response syndrome (SIRS). Fifty AP patients (24 with simple AP and 26 with AP complicated by SIRS) were involved in the study. Fifty healthy volunteers were included as controls. Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were evaluated, and plasma NO, plasma lipid peroxides, plasma vitamin E, plasma beta-carotene, whole-blood glutathione (GSH), and the activity of plasma GSH peroxidase were measured. Compared with the control group, APACHE II scores were higher in the AP group, and the SIRS group had the highest APACHE II scores (P < 0.005, P < 0.001, respectively). Plasma NO and plasma lipid peroxides increased with rising APACHE II scores, demonstrating a significant linear positive correlation (r = 0.618 and r = 0.577, respectively; P < 0.001). Plasma vitamin E, plasma beta-carotene, whole-blood GSH, and the activity of plasma GSH peroxidase decreased with rising APACHE II scores, demonstrating a significant linear negative correlation (r = -0.600, r = -0.609, r = -0.559, r = -0.592, respectively; P < 0.001). Nitric oxide and other free radicals take part in the aggravation of oxidative stress and oxidative injury and may play important roles in the pathogenesis of AP and SIRS. It may be valuable to measure free radicals to predict the severity of AP.
Wu, Xinkuan; Xie, Wei; Cheng, Yuelei; Guan, Qinglong
2016-03-01
The aim of the present study was to investigate the plasma levels of C-reactive protein (CRP) and copeptin, in addition to the Acute Physiology and Chronic Health Evaluation II (APACHE II) scores, in patients with acute organophosphorus pesticide poisoning (AOPP). A total of 100 patients with AOPP were included and divided into mild, moderate and severe groups according to AOPP diagnosis and classification standards. Blood samples were collected from all patients on days 1, 3 and 7 following AOPP. The concentrations of CRP and copeptin in the plasma were determined using enzyme-linked immunosorbent assay. All AOPP patients underwent APACHE II scoring, and the diagnostic value of these scores was analyzed using receiver operating characteristic curves (ROCs). On days 1, 3 and 7 after AOPP, the levels of CRP and copeptin increased in correlation with AOPP severity and were significantly higher compared with the control groups. Furthermore, elevated CRP and copeptin plasma levels were still detected in patients with severe AOPP on day 7, whereas these levels were reduced in patients with mild or moderate AOPP. APACHE II scores, blood lactate level, acetylcholine esterase level, twitch disappearance time, reactivating agent dose and inability to raise the head were the high-risk factors that affected the prognosis of AOPP. Patients with plasma CRP and copeptin levels higher than the median values had worse prognoses. The areas under the curve for the ROCs were 0.89, 0.75 and 0.72 for CRP levels, copeptin levels and APACHE II scores, respectively. Overall, the plasma contents of CRP and copeptin increased with the severity of AOPP. Therefore, the results of the present study suggest that CRP and copeptin levels and APACHE II scores may be used for the determination of AOPP severity and the prediction of AOPP prognosis.
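The ROC comparison reported here is straightforward to reproduce in outline: each candidate marker is scored against the observed outcomes and an area under the curve is computed. The sketch below uses scikit-learn with made-up placeholder numbers, not the study's data:

```python
# Compare candidate mortality predictors by area under the ROC curve.
# The outcome labels and marker values are invented placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

died = np.array([0, 0, 1, 0, 1, 1, 0, 1])          # 1 = poor outcome
predictors = {
    "CRP":       np.array([12, 30, 88, 25, 95, 70, 18, 90]),
    "copeptin":  np.array([4, 9, 20, 7, 25, 12, 6, 30]),
    "APACHE II": np.array([8, 14, 22, 11, 25, 15, 10, 21]),
}
for name, values in predictors.items():
    print(name, round(roc_auc_score(died, values), 2))
```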
Sathe, Prachee M; Bapat, Sharda N
2014-01-01
To assess the performance and utility of two mortality prediction models, the Acute Physiology and Chronic Health Evaluation II (APACHE II) and the Simplified Acute Physiology Score II (SAPS II), in a single Indian mixed tertiary intensive care unit (ICU). Secondary objectives were benchmarking and setting a baseline for research. In this observational cohort, the data needed to calculate both scores were prospectively collected for all consecutive admissions to the 28-bed ICU in the year 2011. After excluding readmissions, discharges within 24 h and age <18 years, the records of 1543 patients were analyzed using appropriate statistical methods. Both models overpredicted mortality in this cohort [standardized mortality ratio (SMR) 0.88 ± 0.05 and 0.95 ± 0.06 using APACHE II and SAPS II, respectively]. Patterns of predicted mortality had a strong association with true mortality (R² = 0.98 for APACHE II and R² = 0.99 for SAPS II). Both models performed poorly in formal Hosmer-Lemeshow goodness-of-fit testing [Chi-square = 12.8 (P = 0.03) for APACHE II, Chi-square = 26.6 (P = 0.001) for SAPS II] but showed good discrimination [area under the receiver operating characteristic curve 0.86 ± 0.013 SE (P < 0.001) and 0.83 ± 0.013 SE (P < 0.001) for APACHE II and SAPS II, respectively]. There were wide variations in the SMRs calculated for subgroups based on the International Classification of Diseases, 10th edition (standard deviation ± 0.27 for APACHE II and 0.30 for SAPS II). Lack of fit of the data to the models and the wide variation in SMRs across subgroups limit the utility of these models as tools for assessing quality of care and comparing performances of different units without customization. Considering comparable performance and simplicity of use, efforts should be made to adapt SAPS II.
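The standardized mortality ratio at the center of this study is simply observed deaths divided by the model's expected deaths, the sum of per-patient predicted probabilities. A minimal sketch, with illustrative numbers rather than study data:

```python
# Standardized mortality ratio: observed deaths / model-expected deaths,
# where expected deaths = sum of per-patient predicted death probabilities.
import numpy as np

observed_deaths = 3
predicted_prob = np.array([0.10, 0.45, 0.80, 0.25, 0.60, 0.90, 0.15])

smr = observed_deaths / predicted_prob.sum()
print(f"SMR = {smr:.2f}")  # < 1 means the model overpredicts mortality
```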
Lee, Young-Joo; Park, Chan-Hee; Yun, Jang-Woon; Lee, Young-Suk
2004-02-29
Procalcitonin (PCT) is a newly introduced marker of systemic inflammation and bacterial infection. A marked increase in circulating PCT level in critically ill patients has been related to the severity of illness and poor survival. The goal of this study was to compare the prognostic power of PCT with three other parameters, the arterial ketone body ratio (AKBR), the Acute Physiology, Age, Chronic Health Evaluation (APACHE) III score and the Multiple Organ Dysfunction Score (MODS), in differentiating between survivors and nonsurvivors of systemic inflammatory response syndrome (SIRS). The study was performed in 95 patients over 16 years of age who met the criteria for SIRS. PCT and AKBR were assayed in arterial blood samples. The APACHE III score and MODS were recorded after the first 24 hours of surgical ICU (SICU) admission and then daily for two weeks or until either discharge or death. The patients were divided into two groups, survivors (n=71) and nonsurvivors (n=24), according to ICU outcome. They were also divided into three groups according to the trend of PCT level: declining, increasing or no change. Significant differences between survivors and nonsurvivors were found in APACHE III score and MODS throughout the study period, but in PCT value only up to the 7th day and in AKBR only up to the 3rd day. First-day PCT values did not differ significantly between survivors and nonsurvivors. The areas under the receiver operating characteristic (ROC) curves for prediction of mortality by PCT, AKBR, APACHE III score and MODS were 0.690, 0.320, 0.915 and 0.913, respectively, on the admission day. In conclusion, PCT could have some use as a mortality predictor in SIRS patients but was less reliable than APACHE III score or MODS.
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort. Program summary: Program title: Grid[Way] Job Template Manager (version 1.0). Catalogue identifier: AEIE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Apache license 2.0. No. of lines in distributed program, including test data, etc.: 3545. No. of bytes in distributed program, including test data, etc.: 126 879. Distribution format: tar.gz. Programming language: Perl 5.8.5 and above. Computer: Any (tested on PC x86 and x86_64). Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6). RAM: 10 MB. Classification: 6.5. External routines: The GridWay Metascheduler [1]. Nature of problem: To parameterize and manage an application running on a grid or cluster. Solution method: Generation of job templates as a cross product of the input parameter sets, plus management of the job template files including job submission to the grid, control and information retrieval. Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information is delegated to the GridWay Metascheduler. Running time: From half a second in the simplest operation to a few minutes for thousands of exponential sampling parameters.
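The solution method quoted above, job templates generated as a cross product of the input parameter sets, maps directly onto a few lines of code. The sketch below (in Python rather than the tool's Perl, with an invented template format) shows the idea; real submission and control are delegated to GridWay:

```python
# Generate one job template per point in the cross product of all
# parameter sets. The file layout and template fields are illustrative.
from itertools import product
from pathlib import Path

TEMPLATE = """EXECUTABLE = my_app
ARGUMENTS  = --alpha {alpha} --beta {beta}
STDOUT_FILE = out.{index}
"""

def generate_templates(out_dir, **param_sets):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    names = sorted(param_sets)
    for index, values in enumerate(product(*(param_sets[n] for n in names))):
        params = dict(zip(names, values))
        Path(out_dir, f"job.{index}.jt").write_text(
            TEMPLATE.format(index=index, **params))

# 3 alphas x 2 betas -> 6 job templates
generate_templates("/tmp/sweep", alpha=[0.1, 0.5, 1.0], beta=[1, 2])
```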
PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.
Thomson, Robert C
2009-07-30
PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.
A System for Web-based Access to the HSOS Database
NASA Astrophysics Data System (ADS)
Lin, G.
Huairou Solar Observing Station's (HSOS) magnetogram and dopplergram are world-class instruments. Access to their data has been opened to the world, and Web-based access will provide a powerful, convenient tool for data searching and solar physics research. It is therefore necessary that our data be provided to users via the Web. In this presentation, the author describes the general design and programming construction of the system. The system is built with PHP and MySQL. The author also introduces the basic features of PHP and MySQL.
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.15 - Must the Tribe submit any reports?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Must the Tribe submit any reports? 183.15 Section 183.15... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Reports § 183.15 Must the Tribe submit any reports? Yes. The Tribe must submit the following reports after receiving...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true How can the Tribe spend funds? 183.8 Section 183.8 Indians... CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income distributed...
25 CFR 183.8 - How can the Tribe spend funds?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false How can the Tribe spend funds? 183.8 Section 183.8... SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Limitations § 183.8 How can the Tribe spend funds? (a) The Tribe must spend principal or income...
75 FR 20608 - Notice of Re-Designation of the Service Delivery Area for the Cowlitz Indian Tribe
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
..., Louisiana Grand Parish, LA, LaSalle Parish, LA, Rapides Parish, LA. Jicarilla Apache Nation, New Mexico... Mexico. NM. Miccosukee Tribe of Indians of Florida. Broward, FL, Collier, FL, Miami-Dade, FL, Hendry, FL.... Narragansett Indian Tribe of Rhode Island: Washington, RI. Navajo Nation, Arizona, New Mexico and Apache...
Nutrition Survey of White Mountain Apache Preschool Children.
ERIC Educational Resources Information Center
Owen, George M.; And Others
As part of a national study of the nutrition of preschool children, data were collected on 201 Apache children, 1 to 6 years of age, living on an Indian reservation in Arizona. This report reviews procedures and clinical findings, and gives an analysis of growth data including skeletal maturation, nutrient intakes and clinical biochemical data. In…
An assessment of the spatial extent and condition of grasslands in the Apache Highlands ecoregion
Carolyn A. F. Enquist; David F. Gori
2005-01-01
Grasslands in the Apache Highlands ecoregion have experienced dramatic changes. To assess and identify remaining native grasslands for conservation planning and management, we used a combination of expert consultation and field verification. Over two-thirds of native grasslands have experienced shrub encroachment. More than 30% of these may be restorable with...
Publications - GMC 397 | Alaska Division of Geological & Geophysical
Authors: Apache Corp., Alaska Division of Oil and Gas, and Weatherford Laboratories. Publication Date: Nov 2011. Citation: Apache Corp., Alaska Division of Oil and Gas, and Weatherford Laboratories, 2011, Porosity and... Files: gmc397.pdf (2.8 M), gmc397.zip (24.2 M). Keywords: Cook Inlet Basin; Oil and Gas; Permeability
A Photographic Essay of the San Carlos Apache Indians, Volume 2-Part A.
ERIC Educational Resources Information Center
Soto, Ed; And Others
As part of a series of guides designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay on the San Carlos Apache Reservation founded in the late 1800's and located in Arizona's Gila County. An historical narrative and discussion questions accompany each of the 12 photographs. Photographic…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcguckin, Theodore
2008-10-01
The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Redhat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation will explore the process of converting an operational Control Room environment from the HP-UX to Linux platform as well as the many hurdles that had to be overcome throughout the transition period (including a discussion of
Real Time Linux - The RTOS for Astronomy?
NASA Astrophysics Data System (ADS)
Daly, P. N.
The BoF was attended by about 30 participants and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables of standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running at > 30 kHz, 486-based oneshot tasks running at ~ 10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter peaking to ~ 13 μs (UP) and ~ 30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and makes it possible to have a fully functioning workstation coexisting with hard real time performance. The counterweights, i.e. the negatives, of lack of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access and the danger of ignorance of real time programming issues were also discussed. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads for this presentation.
The Mescalero Apaches. The Civilization of the American Indian Series.
ERIC Educational Resources Information Center
Sonnichsen, C. L.
The history of the Eastern Apache tribe called the Mescaleros is one of hardship and oppression alternating with wars of revenge. They were friendly to the Spaniards until victimized by them. They were also friendly to the white man until they were betrayed again. For three hundred years they fought the Spaniards and Mexicans. For forty more they…
25 CFR 183.9 - Can the Tribe request the principal of the Lease Fund?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Can the Tribe request the principal of the Lease Fund... AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Lease Fund Disposition Use of Principal and Income § 183.9 Can the Tribe request the...
A Photographic Essay of Apache Clothing, War Charms, and Weapons, Volume 2-Part D.
ERIC Educational Resources Information Center
Thompson, Doris; Jacobs, Ben
As part of a series of guides designed for instruction of American Indian children and youth, this resource guide constitutes a pictorial essay on Apache clothing, war charms, and weaponry. A brief historical introduction is followed by 21 question suggestions for classroom use. Each of the 12 photographic topics is accompanied by a descriptive…
Jeffrey F. Kelly; Deborah M. Finch
1999-01-01
We compared diversity, abundance and energetic condition of migrant landbirds captured in four different vegetation types in the Bosque del Apache National Wildlife Refuge. We found lower species diversity among migrants caught in exotic saltcedar vegetation than in native willow or cottonwood. In general, migrants were most abundant in agricultural edge and least...
The Apache Campaigns. Values in Conflict
1985-06-01
cultural aspects as land use, property ownership, criminal justice, religious faith, and family and group loyalty differed sharply. Conceptual...and emphasized the primary importance of family and group loyalties. Initially, the Apache and Frontier Army co-habited the Southwest peacefully. Then...guidance during my research and writing this year. For intellectual stimulation and timely encouragement, I particularly thank my Committee Chairman
Fallugia paradoxa (D. Don) Endl. ex Torr.: Apache-plume
Susan E. Meyer
2008-01-01
The genus Fallugia contains a single species - Apache-plume, F. paradoxa (D. Don) Endl. ex Torr. - found throughout the southwestern United States and northern Mexico. It occurs mostly on coarse soils on benches and especially along washes and canyons in both warm and cool desert shrub communities and up into the pinyon-juniper vegetation type. It is a sprawling, much-...
Restoration of Soldier Spring: an isolated habitat for native Apache trout
Jonathan W. Long; B. Mae Burnette; Alvin L. Medina; Joshua L. Parker
2004-01-01
Degradation of streams is a threat to the recovery of the Apache trout, an endemic fish of the White Mountains of Arizona. Historic efforts to improve trout habitat in the Southwest relied heavily on placement of in-stream log structures. However, the effects of structural interventions on trout habitat and populations have not been adequately evaluated. We treated an...
Solar Feasibility Study May 2013 - San Carlos Apache Tribe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapp, Jim; Duncan, Ken; Albert, Steve
2013-05-01
The San Carlos Apache Tribe (Tribe), in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe's 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.
Change Detection of Mobile LIDAR Data Using Cloud Computing
NASA Astrophysics Data System (ADS)
Liu, Kun; Boehm, Jan; Alis, Christian
2016-06-01
Change detection has long been a challenging problem although a lot of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend the voxel grid and Apache Spark together to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which allows fault tolerance and memory caching. These features can significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
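A minimal sketch of the voxel-grid idea in PySpark (the voxel size, toy point sets, and occupancy-difference rule are illustrative assumptions, not the paper's exact pipeline):

    # Hypothetical PySpark sketch: flag voxels whose point counts differ
    # between two scan epochs. Voxel size and sample points are assumed.
    from pyspark.sql import SparkSession

    VOXEL = 0.5  # voxel edge length in metres (assumed)

    def voxel_key(p):
        x, y, z = p
        return (int(x // VOXEL), int(y // VOXEL), int(z // VOXEL))

    spark = SparkSession.builder.appName("voxel-change-detection").getOrCreate()
    sc = spark.sparkContext

    epoch_a = sc.parallelize([(0.1, 0.2, 0.3), (1.6, 0.4, 0.2), (1.7, 0.4, 0.2)])
    epoch_b = sc.parallelize([(0.1, 0.2, 0.3), (1.6, 0.4, 0.2)])

    count_a = epoch_a.map(lambda p: (voxel_key(p), 1)).reduceByKey(lambda m, n: m + n)
    count_b = epoch_b.map(lambda p: (voxel_key(p), 1)).reduceByKey(lambda m, n: m + n)

    # A voxel is "changed" if its occupancy differs between the epochs.
    changed = (count_a.fullOuterJoin(count_b)
                      .filter(lambda kv: (kv[1][0] or 0) != (kv[1][1] or 0)))
    print(changed.collect())

Because the per-voxel counts are computed with reduceByKey, the comparison parallelizes across the cluster, which is the property the abstract credits to Spark.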
GEOMAGIA50: An archeointensity database with PHP and MySQL
NASA Astrophysics Data System (ADS)
Korhonen, K.; Donadini, F.; Riisager, P.; Pesonen, L. J.
2008-04-01
The GEOMAGIA50 database stores 3798 archeomagnetic and paleomagnetic intensity determinations dated to the past 50,000 years. It also stores details of the measurement setup for each determination, which are used for ranking the data according to prescribed reliability criteria. The ranking system aims to alleviate the data reliability problem inherent in this kind of data. GEOMAGIA50 is based on two popular open source technologies. The MySQL database management system is used for storing the data, whereas the functionality and user interface are provided by server-side PHP scripts. This technical brief gives a detailed description of GEOMAGIA50 from a technical viewpoint.
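GEOMAGIA50 itself is driven by server-side PHP; purely to illustrate the kind of parameterized MySQL query such a ranking interface issues, here is a client-side sketch in Python using PyMySQL, with invented table and column names:

    # Illustrative query against a GEOMAGIA50-style MySQL schema.
    # Table and column names are invented for this example; the real
    # service is implemented in server-side PHP, not this script.
    import pymysql

    conn = pymysql.connect(host="localhost", user="reader",
                           password="secret", database="geomagia")
    try:
        with conn.cursor() as cur:
            # Intensity determinations younger than 50,000 years whose
            # reliability rank clears an assumed threshold.
            cur.execute(
                "SELECT site, age_bp, intensity_ut, rank "
                "FROM determinations WHERE age_bp <= %s AND rank >= %s",
                (50000, 3),
            )
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()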
Linux Incident Response Volatile Data Analysis Framework
ERIC Educational Resources Information Center
McFadden, Matthew
2013-01-01
Cyber incident response is an increasingly emphasized subject area within information technology and cybersecurity, driven by the growing need to protect data. Ongoing threats pose many challenges and require new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…
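The abstract names no specific tooling; one step any such framework must take, snapshotting volatile state from /proc on a live Linux host, can be sketched as follows (the file list is one plausible subset, chosen for illustration):

    # Minimal sketch of volatile-data collection on a live Linux host.
    # A real framework would also hash, timestamp, and ship the output
    # to trusted storage before the evidence can change.
    import time

    SOURCES = ["/proc/uptime", "/proc/loadavg", "/proc/meminfo",
               "/proc/net/tcp", "/proc/modules"]

    def snapshot(paths):
        grabbed = {}
        for path in paths:
            try:
                with open(path) as fh:
                    grabbed[path] = fh.read()
            except OSError as err:   # some entries need privileges
                grabbed[path] = "ERROR: %s" % err
        return grabbed

    if __name__ == "__main__":
        stamp = time.strftime("%Y%m%dT%H%M%S")
        with open("volatile-%s.txt" % stamp, "w") as out:
            for path, text in snapshot(SOURCES).items():
                out.write("==== %s ====\n%s\n" % (path, text))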
Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters
2015-01-01
onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot...Hirokawa, D. Kubo, S. Suzuki, J. Meguro, and T. Suzuki. Small UAV for immediate hazard map generation. In AIAA Infotech@Aerospace Conf, May 2007.
Cross platform development using Delphi and Kylix
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, J.L.; Nishimura, H.; Timossi, C.
2002-10-08
A cross platform component for EPICS Simple Channel Access (SCA) has been developed for the use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.
25 CFR 183.10 - How can the Tribe use income from the Lease Fund?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 How can the Tribe use income from the Lease Fund? 183.10... DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Lease Fund Disposition Use of Principal and Income § 183.10 How can the Tribe use income from the Lease Fund...
ERIC Educational Resources Information Center
Velarde, Hubert
The statement by the President of the Jicarilla Apache Tribe emphasizes reservation problems that need to be examined. Presented at a 1972 Civil Rights Commission hearing on Indian Concerns, Velarde's statement listed employment, education, the administration of justice, water rights, and medical services as areas for investigation. (KM)
Ellingson, A.R.; Andersen, D.C.
2002-01-01
1. The hypothesis that the habitat-scale spatial distribution of the Apache cicada Diceroprocta apache Davis is unaffected by the presence of the invasive exotic saltcedar Tamarix ramosissima was tested using data from 205 1-m2 quadrats placed within the flood-plain of the Bill Williams River, Arizona, U.S.A. Spatial dependencies within and between cicada density and habitat variables were estimated using Moran's I and its bivariate analogue to discern patterns and associations at spatial scales from 1 to 30 m. 2. Apache cicadas were spatially aggregated in high-density clusters averaging 3 m in diameter. A positive association between cicada density, estimated by exuvial density, and the per cent canopy cover of a native tree, Goodding's willow Salix gooddingii, was detected in a non-spatial correlation analysis. No non-spatial association between cicada density and saltcedar canopy cover was detected. 3. Tests for spatial cross-correlation using the bivariate IYZ indicated the presence of a broad-scale negative association between cicada density and saltcedar canopy cover. This result suggests that large continuous stands of saltcedar are associated with reduced cicada density. In contrast, positive associations detected at spatial scales larger than individual quadrats suggested a spill-over of high cicada density from areas featuring Goodding's willow canopy into surrounding saltcedar monoculture. 4. Taken together and considered in light of the Apache cicada's polyphagous habits, the observed spatial patterns suggest that broad-scale factors such as canopy heterogeneity affect cicada habitat use more than host plant selection. This has implications for management of lower Colorado River riparian woodlands to promote cicada presence and density through maintenance or creation of stands of native trees as well as manipulation of the characteristically dense and homogeneous saltcedar canopies.
Khwannimit, Bodin
2008-01-01
The Logistic Organ Dysfunction (LOD) score is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. The data were collected prospectively on consecutive ICU admissions over a 24 month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC). Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic. The overall fit of the model was evaluated by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality was 20.9% in the ICU and 27.9% in the hospital. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination. The AUROC for the LOD and APACHE II were 0.860 [95% confidence interval (CI) = 0.838-0.882] and 0.898 (95% CI = 0.879-0.917), respectively. The LOD score had perfect calibration (Hosmer-Lemeshow goodness-of-fit H chi-square = 10, p = 0.44). However, the APACHE II had poor calibration (H chi-square = 75.69, p < 0.001). Brier scores for overall fit were 0.123 (95% CI = 0.107-0.141) for the LOD and 0.114 (95% CI = 0.098-0.132) for the APACHE II. Thus, the LOD score was found to be accurate for predicting hospital mortality in general critically ill patients in Thailand.
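Two of the three validation statistics used above, discrimination (AUROC) and overall fit (Brier score), have standard one-line implementations; a sketch with scikit-learn on invented data (the Hosmer-Lemeshow statistic is omitted, as it additionally requires grouping patients into risk deciles):

    # Illustrative computation of AUROC and Brier score on synthetic
    # outcomes; the numbers are made up, not from the study.
    from sklearn.metrics import roc_auc_score, brier_score_loss

    died      = [0, 0, 1, 0, 1, 1, 0, 1]                   # observed outcome
    predicted = [0.1, 0.2, 0.8, 0.3, 0.6, 0.9, 0.2, 0.7]   # score-derived risk

    print("AUROC:", roc_auc_score(died, predicted))
    print("Brier:", brier_score_loss(died, predicted))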
VijayGanapathy, Sundaramoorthy; Karthikeyan, VIlvapathy Senguttuvan; Sreenivas, Jayaram; Mallya, Ashwin; Keshavamurthy, Ramaiah
2017-11-01
Urosepsis implies clinically evident severe infection of the urinary tract with features of systemic inflammatory response syndrome (SIRS). We validate the role of a single Acute Physiology and Chronic Health Evaluation II (APACHE II) score at 24 hours after admission in predicting mortality in urosepsis. A prospective observational study was done in 178 patients admitted with urosepsis in the Department of Urology of a tertiary care institute from January 2015 to August 2016. Patients >18 years diagnosed with urosepsis using SIRS criteria, with positive urine or blood culture for bacteria, were included. At 24 hours after admission to the intensive care unit, the APACHE II score was calculated using 12 physiological variables, age and chronic health. The mean±standard deviation (SD) APACHE II score was 26.03±7.03; it was 24.31±6.48 in survivors and 32.39±5.09 in those who expired (p<0.001). Among patients undergoing surgery, the mean±SD score was higher in those who expired (30.74±4.85) than in survivors (24.30±6.54) (p<0.001). Receiver operating characteristic (ROC) analysis revealed an area under the curve (AUC) of 0.825, with a cutoff of 25.5 being 94.7% sensitive and 56.4% specific in predicting mortality. The mean±SD score in those undergoing surgery was 25.22±6.70, lower than in those who did not undergo surgery (28.44±7.49) (p=0.007). ROC analysis revealed an AUC of 0.760, with a cutoff of 25.5 being 94.7% sensitive and 45.6% specific in predicting mortality even after surgery. A single APACHE II score assessed at 24 hours after admission was able to predict morbidity, mortality, need for surgical intervention, length of hospitalization, treatment success and outcome in urosepsis patients.
Kaymak, Cetin; Sencan, Irfan; Izdes, Seval; Sari, Aydin; Yagmurdur, Hatice; Karadas, Derya; Oztuna, Derya
2018-04-01
The aim of this study was to evaluate intensive care unit (ICU) performance using risk-adjusted ICU mortality rates nationally, assessing patients who died or had been discharged from the ICU. For this purpose, this study analyzed Acute Physiology and Chronic Health Evaluation (APACHE) II and Sequential Organ Failure Assessment (SOFA) data, containing detailed clinical and physiological information and mortality of mixed critically ill patients in medical ICUs at secondary and tertiary referral hospitals in Turkey. A total of 690 adult intensive care units in Turkey were included in the study; 39.7% were secondary and 60.3% were tertiary ICUs. A total of 4188 patients were enrolled in this study. Intensive care units of ministry, university, and private hospitals were evaluated all over Turkey. During the study period, clinical data collected concurrently for each patient contained demographic details and the diagnostic category leading to ICU admission. APACHE II and SOFA scores following ICU admission were calculated and recorded. Patients were followed up for outcome data until death or ICU discharge. The mean age of patients was 68.8±19 years and 54% were male. The mean APACHE II score was 20±8.7. The ICU mortality rate was 46.3%, and the mean predicted mortality was 37.2% for APACHE II. The standardized mortality ratio was 1.28 (95% confidence interval: 1.21-1.31). There was a wide difference in outcome for patients admitted to different ICUs and in severity of illness using risk adjustment methods. The high mortality rate could be related to comorbid diseases, high mechanical ventilation rates and older ages.
Prognostic scores in cirrhotic patients admitted to a gastroenterology intensive care unit.
Freire, Paulo; Romãozinho, José M; Amaro, Pedro; Ferreira, Manuela; Sofia, Carlos
2011-04-01
Prognostic scores have been validated in cirrhotic patients admitted to general Intensive Care Units, but no assessment of these scores has been performed in cirrhotics admitted to specialized Gastroenterology Intensive Care Units (GICUs). The aim was to assess the prognostic accuracy of Acute Physiology and Chronic Health Evaluation (APACHE) II, Simplified Acute Physiology Score (SAPS) II, Sequential Organ Failure Assessment (SOFA), Model for End-stage Liver Disease (MELD) and Child-Pugh-Turcotte (CPT) in predicting GICU mortality in cirrhotic patients. The study involved 124 consecutive cirrhotic admissions to a GICU. Clinical data, prognostic scores and mortality were recorded. Discrimination was evaluated with the area under receiver operating characteristic curves (AUC). Calibration was assessed with the Hosmer-Lemeshow goodness-of-fit test. GICU mortality was 9.7%. Mean APACHE II, SAPS II, SOFA, MELD and CPT scores for survivors (13.6, 25.4, 3.5, 18.0 and 8.6, respectively) were significantly lower than those of non-survivors (22.0, 47.5, 10.1, 30.7 and 12.5, respectively) (p < 0.001). All the prognostic systems showed good discrimination, with AUC = 0.860, 0.911, 0.868, 0.897 and 0.914 for APACHE II, SAPS II, SOFA, MELD and CPT, respectively. Similarly, APACHE II, SAPS II, SOFA, MELD and CPT scores achieved good calibration, with p = 0.146, 0.120, 0.686, 0.267 and 0.120, respectively. The overall correctness of prediction was 81.9%, 86.1%, 93.3%, 90.7% and 87.7% for the APACHE II, SAPS II, SOFA, MELD and CPT scores, respectively. In cirrhotics admitted to a GICU, all the tested scores have good prognostic accuracy, with SOFA and MELD showing the greatest overall correctness of prediction.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
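For orientation, the job-submission interface that grew out of this design centers on the sbatch command; a minimal sketch of driving it from Python, assuming a present-day SLURM installation (this usage postdates the 2002 design note, and the resource numbers are placeholders):

    # Sketch: submit a trivial SLURM job from Python via sbatch.
    import subprocess

    result = subprocess.run(
        ["sbatch", "--nodes=2", "--ntasks=8", "--time=00:10:00",
         "--wrap", "srun hostname"],
        capture_output=True, text=True, check=True,
    )
    # sbatch prints e.g. "Submitted batch job 12345" on success.
    print(result.stdout.strip())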
Implementation of image transmission server system using embedded Linux
NASA Astrophysics Data System (ADS)
Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee
2005-12-01
In this paper, we implemented an image transmission server on an embedded system that is built for a specific purpose and is easy to install and move. Since the embedded system has lower capability than a PC, we had to reduce the computational load of baseline JPEG image compression and transmission. We used the Red Hat Linux 9.0 OS on the host PC and a target board based on embedded Linux. The image sequences are obtained from a camera attached to an FPGA (Field Programmable Gate Array) board with an Altera chip. For effectiveness, and to avoid constraints imposed by a particular vendor, we implemented the device driver as a kernel module.
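The paper's wire protocol is not given in the abstract; a minimal sketch of the transmission step, sending one JPEG frame over TCP with an assumed 4-byte length prefix, might look like:

    # Sketch: length-prefixed JPEG frame transfer over TCP. The framing
    # convention and addresses are assumptions, not the paper's protocol.
    import socket
    import struct

    def send_frame(host, port, jpeg_bytes):
        with socket.create_connection((host, port)) as sock:
            sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-frame")
            buf += chunk
        return buf

    def recv_frame(conn):
        (size,) = struct.unpack("!I", recv_exact(conn, 4))
        return recv_exact(conn, size)

The length prefix lets the receiver know exactly how many bytes belong to each compressed frame, which matters on a low-capability board that cannot afford to buffer and re-parse the stream.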
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology
Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen
2013-01-01
Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be among the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to separate the terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and large computing resources, a problem a number of researchers have concentrated on. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is well suited to improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM by a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
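The paper's implementation is not reproduced in the abstract; the Map/Reduce shape of DEM gridding can be sketched in Hadoop Streaming style as below (mean elevation per grid cell; the 1 m cell size and the "x y z" input format are assumptions):

    # Hadoop Streaming-style sketch of DEM gridding; mapper and reducer
    # are combined in one file for brevity and selected by an argument.
    # In a real job they would be passed as -mapper and -reducer scripts.
    import sys

    CELL = 1.0  # DEM grid spacing in metres (assumed)

    def mapper(lines):
        # Emit (cell key, elevation) for every LiDAR point.
        for line in lines:
            x, y, z = map(float, line.split())
            print("%d,%d\t%f" % (int(x // CELL), int(y // CELL), z))

    def reducer(lines):
        # Input arrives sorted by key; average elevations per cell.
        key, total, count = None, 0.0, 0
        for line in lines:
            cell, z = line.rstrip("\n").split("\t")
            if cell != key:
                if key is not None:
                    print("%s\t%f" % (key, total / count))
                key, total, count = cell, 0.0, 0
            total += float(z)
            count += 1
        if key is not None:
            print("%s\t%f" % (key, total / count))

    if __name__ == "__main__":
        (mapper if sys.argv[1:] == ["map"] else reducer)(sys.stdin)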
Linux Adventures on a Laptop. Computers in Small Libraries
ERIC Educational Resources Information Center
Roberts, Gary
2005-01-01
This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…
Drowning in PC Management: Could a Linux Solution Save Us?
ERIC Educational Resources Information Center
Peters, Kathleen A.
2004-01-01
Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…
78 FR 57648 - Notice of Issuance of Final Determination Concerning Video Teleconferencing Server
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... the Chinese-origin Video Board and the Filter Board, impart the essential character to the video... includes the codec; a network filter electronic circuit board (``Filter Board''); a housing case; a power... (``Linux software''). The Linux software allows the Filter Board to inspect each Ethernet packet of...
Impact of the Shodan Computer Search Engine on Internet-facing Industrial Control System Devices
2014-03-27
bridge implementation. The transparent bridge is designed using a Raspberry Pi configured with Linux iptables and bridge-utils to bridge the on-board...Ethernet card and a second USB Ethernet adapter. A Raspberry Pi is a credit-card-sized single-board computer running a version of Debian Linux. There
Chicks in Charge: Andrea Baker & Amy Daniels--Airport High School Media Center, Columbia, SC
ERIC Educational Resources Information Center
Library Journal, 2004
2004-01-01
This article briefly discusses two librarians' exploration of Linux. Andrea Baker and Amy Daniels were tired of telling their students that new technology items were not in the budget. They explored Linux, a free, open-source operating system that let them recycle older computers and run free software.
Diversifying the Department of Defense Network Enterprise with Linux
2010-03-01
Cyberspace, Cyberwar, Legacy, Inventory, Acquisition, Competitive Advantage, Coalition Communications, Ubiquitous, Strategic, Centricity, Kaizen, ISO, Outsource. CLASSIFICATION: Unclassified. Historically, the United States and its closest allies have grown increasingly reliant...control through the use of continuous improvement processes (Kaizen). In choosing the Linux client operating system, the move encourages open standards
Development of a portable Linux-based ECG measurement and monitoring system.
Tan, Tan-Hsu; Chang, Ching-Su; Huang, Yung-Fa; Chen, Yung-Fu; Lee, Cheng
2011-08-01
This work presents a portable Linux-based electrocardiogram (ECG) signal measurement and monitoring system. The proposed system consists of an ECG front end and an embedded Linux platform (ELP). The ECG front end digitizes 12-lead ECG signals acquired from electrodes and then delivers them to the ELP via a universal serial bus (USB) interface for storage, signal processing, and graphic display. The proposed system can be installed anywhere (e.g., offices, homes, healthcare centers and ambulances) to allow people to self-monitor their health conditions at any time. The proposed system also enables remote diagnosis via the Internet. Additionally, the system has a 7-in. interactive TFT-LCD touch screen that enables users to execute various functions, such as scaling single-lead or multiple-lead ECG waveforms. The effectiveness of the proposed system was verified using a commercial 12-lead ECG signal simulator and in vivo experiments. In addition to its portability, the proposed system is license-free, as Linux, an open-source operating system, was utilized during software development. The cost-effectiveness of the system significantly enhances its practical application for personal healthcare.
Managing a Real-Time Embedded Linux Platform with Buildroot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diamond, J.; Martin, K.
2015-01-01
Developers of real-time embedded software often need to build the operating system, kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite that varies from 3 to 20 megabytes, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.
NSTX-U Control System Upgrades
Erickson, K. G.; Gates, D. A.; Gerhardt, S. P.; ...
2014-06-01
The National Spherical Torus Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially the elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Red Hat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real-time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.
Pre-fire treatment effects and post-fire forest dynamics on the Rodeo-Chediski burn area, Arizona
Barbara A. Strom
2005-01-01
The 2002 Rodeo-Chediski fire was the largest wildfire in Arizona history at 189,000 ha (468,000 acres), and exhibited some of the most extreme fire behavior ever seen in the Southwest. Pre-fire fuel reduction treatments of thinning, timber harvesting, and prescribed burning on the White Mountain Apache Tribal lands (WMAT) and thinning on the Apache-Sitgreaves National...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchill, R. Michael
Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
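The binary-reading code is not shown in the abstract; a toy sketch of just the k-means step with Spark's ML library follows (the two velocity-space columns and their sample values are invented stand-ins for the particle distribution data):

    # Sketch: k-means clustering in PySpark on toy velocity-space samples.
    from pyspark.ml.clustering import KMeans
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("xgc1-kmeans-sketch").getOrCreate()

    # Invented stand-in for distribution-function samples: (v_par, v_perp).
    df = spark.createDataFrame(
        [(0.1, 0.2), (0.2, 0.1), (5.0, 5.1), (5.1, 4.9)],
        ["v_par", "v_perp"])

    features = VectorAssembler(inputCols=["v_par", "v_perp"],
                               outputCol="features").transform(df)
    model = KMeans(k=2, seed=1).fit(features)
    print(model.clusterCenters())   # centroids of the velocity-space clusters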
25 CFR 183.5 - What documents must the Tribe submit to request money from the Trust Fund?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 What documents must the Tribe submit to request money from the Trust Fund? 183.5 Section 183.5 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER USE AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Dispositio...
76 FR 72969 - Proclaiming Certain Lands as Reservation for the Fort Sill Apache Indian Tribe
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-28
... the Fort Sill Apache Tribe of Indians. FOR FURTHER INFORMATION CONTACT: Ben Burshia, Bureau of Indian... from a tangent which bears N. 89°56'18'' W., having a radius of 789.30 feet, a delta angle of 32... radius of 1096.00 feet, a delta angle of 39°58'50'', a chord which bears S. 77°15'43'' W., 749.36...
Satellite Imagery Production and Processing Using Apache Hadoop
NASA Astrophysics Data System (ADS)
Hill, D. V.; Werpy, J.
2011-12-01
The United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center Land Science Research and Development (LSRD) project has devised a method to fulfill its processing needs for Essential Climate Variable (ECV) production from the Landsat archive using Apache Hadoop. Apache Hadoop is the distributed processing technology at the heart of many large-scale processing solutions implemented at well-known companies such as Yahoo, Amazon, and Facebook. It is a proven framework that can be used to process petabytes of data on thousands of processors concurrently. It is a natural fit for producing satellite imagery and requires only a few simple modifications to serve the needs of science data processing. This presentation provides an invaluable learning opportunity and should be heard by anyone doing large-scale image processing today. The session will cover a description of the problem space, evaluation of alternatives, a feature set overview, configuration of Hadoop for satellite image processing, real-world performance results, tuning recommendations, and finally challenges and ongoing activities. It will also present how the LSRD project built a 102-core processing cluster with no financial hardware investment and achieved ten times the initial daily throughput requirements with a full-time staff of only one engineer. Satellite Imagery Production and Processing Using Apache Hadoop is presented by David V. Hill, Principal Software Architect for USGS LSRD.
Pietraszek-Grzywaczewska, Iwona; Bernas, Szymon; Łojko, Piotr; Piechota, Anna; Piechota, Mariusz
2016-01-01
Scoring systems in critical care patients are essential for predicting of the patient outcome and evaluating the therapy. In this study, we determined the value of the Acute Physiology and Chronic Health Evaluation II (APACHE II), Simplified Acute Physiology Score II (SAPS II), Sequential Organ Failure Assessment (SOFA) and Glasgow Coma Scale (GCS) scoring systems in the prediction of mortality in adult patients admitted to the intensive care unit (ICU) with severe purulent bacterial meningitis. We retrospectively analysed data from 98 adult patients with severe purulent bacterial meningitis who were admitted to the single ICU between March 2006 and September 2015. Univariate logistic regression identified the following risk factors of death in patients with severe purulent bacterial meningitis: APACHE II, SAPS II, SOFA, and GCS scores, and the lengths of ICU stay and hospital stay. The independent risk factors of patient death in multivariate analysis were the SAPS II score, the length of ICU stay and the length of hospital stay. In the prediction of mortality according to the area under the curve, the SAPS II score had the highest accuracy followed by the APACHE II, GCS and SOFA scores. For the prediction of mortality in a patient with severe purulent bacterial meningitis, SAPS II had the highest accuracy.
Afessa, B
2000-04-01
This study's aim was to determine the prognostic factors and to develop a triage system for intensive care unit (ICU) admission of patients with gastrointestinal bleeding (GIB). This prospective, observational study included 411 adults consecutively hospitalized for GIB. Each patient's selected clinical findings and laboratory values at presentation were obtained. The Acute Physiology and Chronic Health Evaluation (APACHE) II scores were calculated from the initial findings in the emergency department. Poor outcome was defined as recurrent GIB, emergency surgery, or death. The role of hepatic cirrhosis, APACHE II score, active GIB, end-organ dysfunction, and hypotension in predicting outcome was evaluated. Chi-square, Student's t, Mann-Whitney U, and logistic regression analysis tests were used for statistical comparisons. Poor outcome developed in 81 (20%) patients; 39 died, 23 underwent emergency surgery, and 47 rebled. End-organ dysfunction, active bleeding, hepatic cirrhosis, and high APACHE II scores were independent predictors of poor outcome, with odds ratios of 3.1, 3.1, 2.3, and 1.1, respectively. The ICU admission rate was 37%. High APACHE II score, active bleeding, end-organ dysfunction, and hepatic cirrhosis are independent predictors of poor outcome in patients with GIB and can be used in the triage of these patients for ICU admission.
Design and implementation of the first nationwide, web-based Chinese Renal Data System (CNRDS)
2012-01-01
Background In April 2010, with an endorsement from the Ministry of Health of the People's Republic of China, the Chinese Society of Nephrology launched the first nationwide, web-based prospective renal data registration platform, the Chinese Renal Data System (CNRDS), to collect structured demographic, clinical, and laboratory data for dialysis cases, as well as to establish a kidney disease database for researchers and policy makers. Methods The CNRDS program uses information technology to help healthcare professionals create a blood purification registry and deliver evidence-based care and education protocols tailored to chronic kidney disease, as well as an online forum for communication between nephrologists. The online portal https://www.cnrds.net is implemented as a Java web application using an Apache Tomcat web server and a MySQL database. All data are stored in a central databank to establish a Chinese renal database for research and publication purposes. Results Currently, over 270,000 clinical cases, including general patient information, diagnostics, therapies, medications, and laboratory tests, have been registered in CNRDS by 3,669 healthcare institutions qualified for hemodialysis therapy. At the 2011 annual blood purification forum of the Chinese Society of Nephrology, the CNRDS 2010 annual report was reviewed and accepted by the society members and government representatives. Conclusions CNRDS is the first national, web-based application for collecting and managing electronic medical records of patients on dialysis in China. It provides both an easily accessible platform for nephrologists to store and organize their patient data and a communication platform among participating doctors. Moreover, it is the largest database for treatment and patient care of end-stage renal disease (ESRD) patients in China, which will be beneficial for scientific research and epidemiological investigations aimed at improving the quality of life of such patients. Furthermore, it is a model nationwide disease registry, which could potentially be used for other diseases. PMID:22369692
SU-E-T-220: A Web-Based Research System for Outcome Analysis of NSCLC Treated with SABR.
Le, A; Yang, Y; Michalski, D; Heron, D; Huq, M
2012-06-01
To establish a web-based software system, an electronic patient record (ePR), to consolidate and evaluate clinical data, dose delivery and treatment outcomes for non-small cell lung cancer (NSCLC) patients treated with hypofractionated stereotactic ablative radiation therapy (SABR) across institutions. The trend of information technology in medical imaging and informatics is towards the development of an electronic patient record (ePR), in which all health and medical information for each patient is organized under the patient's name and identification number. The system was developed using WampServer, a package comprising the Apache web server, PHP and the MySQL database, to facilitate patient data input and management and the evaluation of patient clinical data and dose delivery across institutions using web technology. The data recorded in the database for each patient include pre-treatment clinical data, the treatment plan in DICOM-RT format and follow-up data. The pre-treatment data include demographics, pathology condition and cancer staging. The follow-up data include survival status, local tumor control and toxicity. The clinical data are entered into the system through web pages, while the treatment plan data are imported from the treatment planning system (TPS) using DICOM communication. The collection of data on NSCLC patients treated with SABR stored in the ePR is always accessible and can be retrieved and processed in the future. The core of the ePR is the database, which integrates all patient data in one location. The web-based DICOM-RT ePR system utilizes the current state-of-the-art medical informatics approach to investigate the combination and consolidation of patient data and outcome results. This will allow clinically driven data mining for dose distributions and the resulting treatment outcome, in connection with biological modeling of the treatment parameters, to quantify the efficacy of SABR in treating NSCLC patients. © 2012 American Association of Physicists in Medicine.
Design and implementation of the first nationwide, web-based Chinese Renal Data System (CNRDS).
Xie, Fengbo; Zhang, Dong; Wu, Jinzhao; Zhang, Yunfeng; Yang, Qing; Sun, Xuefeng; Cheng, Jing; Chen, Xiangmei
2012-02-28
In April 2010, with an endorsement from the Ministry of Health of the People's Republic of China, the Chinese Society of Nephrology launched the first nationwide, web-based prospective renal data registration platform, the Chinese Renal Data System (CNRDS), to collect structured demographic, clinical, and laboratory data for dialysis cases, as well as to establish a kidney disease database for researchers and policy makers. The CNRDS program uses information technology to help healthcare professionals create a blood purification registry and deliver evidence-based care and education protocols tailored to chronic kidney disease, as well as an online forum for communication between nephrologists. The online portal https://www.cnrds.net is implemented as a Java web application using an Apache Tomcat web server and a MySQL database. All data are stored in a central databank to establish a Chinese renal database for research and publication purposes. Currently, over 270,000 clinical cases, including general patient information, diagnostics, therapies, medications, and laboratory tests, have been registered in CNRDS by 3,669 healthcare institutions qualified for hemodialysis therapy. At the 2011 annual blood purification forum of the Chinese Society of Nephrology, the CNRDS 2010 annual report was reviewed and accepted by the society members and government representatives. CNRDS is the first national, web-based application for collecting and managing electronic medical records of patients on dialysis in China. It provides both an easily accessible platform for nephrologists to store and organize their patient data and a communication platform among participating doctors. Moreover, it is the largest database for treatment and patient care of end-stage renal disease (ESRD) patients in China, which will be beneficial for scientific research and epidemiological investigations aimed at improving the quality of life of such patients. Furthermore, it is a model nationwide disease registry, which could potentially be used for other diseases.
Adeniyi, D A; Wei, Z; Yang, Y
2018-01-30
A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction for some life-threatening ailments using a Chi-square case-based reasoning (χ2CBR) model. The proposed predictive engine is capable of reducing runtime and speeding up the execution process through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method, used for selecting the best features for the proposed χ2CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through an in-house developed PHP program hosted on a XAMPP/Apache HTTP server. The process of data acquisition and case-base development is implemented using MySQL. Performance comparison between our system and the NBY, ED-KNN, ANN, SVM, Random Forest and traditional CBR techniques shows that the quality of predictions produced by our system outperforms the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system are in most cases equal to or greater than 70%. Our results also show that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, faster, accurate and efficient risk-level prediction to both patients and physicians at any time, online and on a real-time basis.
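A sketch of the chi-square distance plus nearest-case retrieval idea follows (the cases, features, and labels are invented; the real system's FIBR feature selection and critical-value shortcut are omitted):

    # Sketch: chi-square distance and case-based retrieval. The case
    # base and query are invented; a real system would hold past
    # patient records and reuse the outcome of the nearest case.
    import numpy as np

    def chi2_distance(x, y, eps=1e-10):
        x, y = np.asarray(x, float), np.asarray(y, float)
        return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

    case_base = {
        "case-1": ([0.9, 0.1, 0.4], "high risk"),
        "case-2": ([0.2, 0.8, 0.3], "low risk"),
        "case-3": ([0.6, 0.5, 0.9], "medium risk"),
    }

    def predict(query):
        # Retrieve the stored case nearest to the query and reuse its label.
        name, (_, outcome) = min(
            case_base.items(),
            key=lambda item: chi2_distance(query, item[1][0]))
        return name, outcome

    print(predict([0.8, 0.2, 0.5]))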
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Fritz, John Floren
2013-08-27
Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.
2014-10-01
indication that not a single scanner was able to detect the rootkit as malicious or infected. SHA256 ...clear indication that not a single scanner was able to detect it as malicious, infected or associated with the Jynx2 rootkit. SHA256
Teaching Hands-On Linux Host Computer Security
ERIC Educational Resources Information Center
Shumba, Rose
2006-01-01
In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…
A PC parallel port button box provides millisecond response time accuracy under Linux.
Stewart, Neil
2006-02-01
For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
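On legacy x86 hardware the first parallel port's status register conventionally sits at base+1 (0x379 for base 0x378); a root-only sketch of polling it through /dev/port, with those conventional addresses assumed, is:

    # Sketch: read the parallel-port status register from user space on
    # Linux. Requires root and assumes the legacy base address 0x378;
    # a button wired to a status pin shows up as a flipped high bit.
    import os

    STATUS_REG = 0x378 + 1   # status register of the first parallel port

    fd = os.open("/dev/port", os.O_RDONLY)
    try:
        os.lseek(fd, STATUS_REG, os.SEEK_SET)
        value = os.read(fd, 1)[0]
        print("status bits: %08b" % value)
    finally:
        os.close(fd)

Polling this register in a tight loop is what makes millisecond accuracy possible: the port is read directly, with no keyboard or USB controller buffering in the path.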
Will Your Next Supercomputer Come from Costco?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farber, Rob
2007-04-15
A fun topic for April, and not an April Fools' joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronics vendor. Yes, it’s true. Just walk in and ask for a Sony PlayStation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.
ReSEARCH: A Requirements Search Engine: Progress Report 2
2008-09-01
and provides a convenient user interface for the search process. Ideally, the web application would be based on Tomcat, a free Java Servlet and JSP...Implementation issues Lucene Java is an Open Source project, available under the Apache License, which provides an accessible API for the development of...from the Apache Lucene website (Lucene-java Wiki, 2008). A search application developed with Lucene consists of the same two major components
ERIC Educational Resources Information Center
Fay, George E., Comp.
The Museum of Anthropology of the University of Northern Colorado (formerly known as Colorado State College) has assembled a large number of Indian tribal charters, constitutions, and by-laws to be reproduced as a series of publications. Included in this volume are the amended charter and constitution of the Jicarilla Apache Tribe, Dulce, New…
25 CFR 183.4 - How can the Tribe use the principal and income from the Trust Fund?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 How can the Tribe use the principal and income from the Trust Fund? 183.4 Section 183.4 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER USE AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND Trust Fund Disposition Use o...
25 CFR 183.3 - Does the American Indian Trust Fund Management Reform Act of 1994 apply to this part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 Does the American Indian Trust Fund Management Reform Act of 1994 apply to this part? 183.3 Section 183.3 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER USE AND DISTRIBUTION OF THE SAN CARLOS APACHE TRIBE DEVELOPMENT TRUST FUND AND SAN CARLOS APACHE TRIBE LEASE FUND...
Modeling methods for high-fidelity rotorcraft flight mechanics simulation
NASA Technical Reports Server (NTRS)
Mansur, M. Hossein; Tischler, Mark B.; Chaimovich, Menahem; Rosen, Aviv; Rand, Omri
1992-01-01
The cooperative effort being carried out under the agreements of the United States-Israel Memorandum of Understanding is discussed. Two different models of the AH-64 Apache helicopter, which differ in their approach to modeling the main rotor, are presented. The first model, the Blade Element Model for the Apache (BEMAP), was developed at Ames Research Center and is the only model of the Apache to employ a direct blade-element approach to calculating the coupled flap-lag motion of the blades and the rotor force and moment. The second model was developed at the Technion-Israel Institute of Technology and uses a harmonic approach to analyze the rotor. The approach allows two different levels of approximation, ranging from the 'first harmonic' (similar to a tip-path-plane model) to 'complete high harmonics' (comparable to a blade-element approach). The development of the two models is outlined and the two are compared using available flight test data.
Integrating Radar Image Data with Google Maps
NASA Technical Reports Server (NTRS)
Chapman, Bruce D.; Gibas, Sarah
2010-01-01
A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically, three Perl scripts that query the database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
Wang, Hao; Li, Zhong; Yin, Mei; Chen, Xiao-Mei; Ding, Shi-Fang; Li, Chen; Zhai, Qian; Li, Yuan; Liu, Han; Wu, Da-Wei
2015-04-01
Given the high mortality rates in elderly patients with septic shock, the early recognition of patients at greatest risk of death is crucial for the implementation of early intervention strategies. Serum lactate and N-terminal prohormone of brain natriuretic peptide (NT-proBNP) levels are often elevated in elderly patients with septic shock and are therefore important biomarkers of metabolic and cardiac dysfunction. We hypothesized that a risk stratification system that incorporates the Acute Physiology and Chronic Health Evaluation (APACHE) II score and lactate and NT-proBNP biomarkers would better predict mortality in geriatric patients with septic shock than the APACHE II score alone. A single-center prospective study was conducted from January 2012 to December 2013 in a 30-bed intensive care unit of a triservice hospital. The lactate area score was defined as the sum of the area under the curve of serial lactate levels measured during the 24 hours following admission divided by 24. The NT-proBNP score was assigned based on NT-proBNP levels measured at admission. The combined score was calculated by adding the lactate area and NT-proBNP scores to the APACHE II score. Multivariate logistic regression analyses and receiver operating characteristic curves were used to evaluate which variables and scoring systems served as the best predictors of mortality in elderly septic patients. A total of 115 patients with septic shock were included in the study. The overall 28-day mortality rate was 67.0%. When compared to survivors, nonsurvivors had significantly higher lactate area scores, NT-proBNP scores, APACHE II scores, and combined scores. In the multivariate regression model, the combined score, lactate area score, and mechanical ventilation were independent risk factors associated with death. Receiver operating characteristic curves indicated that the combined score had significantly greater predictive power when compared to the APACHE II score or the NT-proBNP score (P < .05). A combined score that incorporates the APACHE II score with early lactate area and NT-proBNP levels is a useful method for risk stratification in geriatric patients with septic shock. Copyright © 2014 Elsevier Inc. All rights reserved.
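The arithmetic behind the lactate area score as defined above (area under the serial-lactate curve over the first 24 hours, divided by 24) is a one-line trapezoidal integration; in this sketch the sample times and values are invented:

    # Sketch: lactate area score = AUC of serial lactate over 24 h / 24,
    # i.e. a time-weighted mean lactate. Sample data are invented.
    import numpy as np

    hours   = np.array([0, 6, 12, 18, 24])           # sampling times (h)
    lactate = np.array([4.0, 3.2, 2.5, 2.1, 1.8])    # lactate (mmol/L)

    lactate_area_score = np.trapz(lactate, hours) / 24.0
    print(round(lactate_area_score, 2))

    # The study's combined score then adds this and an NT-proBNP score
    # (assignment rule not given in the abstract) to the APACHE II score.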
2005-01-01
Introduction Risk prediction scores usually overestimate mortality in obstetric populations because mortality rates in this group are considerably lower than in others. Studies examining this effect were generally small and did not distinguish between obstetric and nonobstetric pathologies. We evaluated the performance of the Acute Physiology and Chronic Health Evaluation (APACHE) II model in obstetric admissions to critical care units contributing to the ICNARC Case Mix Programme. Methods All obstetric admissions were extracted from the ICNARC Case Mix Programme Database of 219,468 admissions to UK critical care units from 1995 to 2003 inclusive. Cases were divided into direct obstetric pathologies and indirect or coincidental pathologies, and compared with a control cohort of all women aged 16–50 years not included in the obstetric categories. The predictive ability of APACHE II was evaluated in the three groups. A prognostic model was developed for direct obstetric admissions to predict the risk for hospital mortality. A log-linear model was developed to predict the length of stay in the critical care unit. Results A total of 1452 direct obstetric admissions were identified, the most common pathologies being haemorrhage and hypertensive disorders of pregnancy. There were 278 admissions identified as indirect or coincidental and 22,938 in the nonpregnant control cohort. Hospital mortality rates were 2.2%, 6.0% and 19.6% for the direct obstetric group, the indirect or coincidental group, and the control cohort, respectively. Cox regression calibration analysis showed a reasonable fit of the APACHE II model for the nonpregnant control cohort (slope = 1.1, intercept = -0.1). However, the APACHE II model vastly overestimated mortality for obstetric admissions (mortality ratio = 0.25). Risk prediction modelling demonstrated that the Glasgow Coma Scale score was the best discriminator between survival and death in obstetric admissions. Conclusion This study confirms that APACHE II overestimates mortality in obstetric admissions to critical care units. This may be because of the physiological changes in pregnancy or the unique scoring profile of obstetric pathologies such as HELLP syndrome. It may be possible to recalibrate the APACHE II score for obstetric admissions or to devise an alternative score specifically for obstetric admissions.
A Real-Time Linux for Multicore Platforms
2013-12-20
under ARO support) to obtain a fully-functional OS for supporting real-time workloads on multicore platforms. This system, called LITMUS-RT... to be specified as plugin components. LITMUS-RT is open-source software... LITMUS-RT (LInux Testbed for MUltiprocessor Scheduling in Real-Time systems) allows different multiprocessor real-time scheduling and...
Linux OS Jitter Measurements at Large Node Counts using a BlueGene/L
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R; Tauferner, Mr. Andrew; Inglett, Mr. Todd
2010-01-01
We present experimental results for a coordinated scheduling implementation of the Linux operating system. Results were collected on an IBM Blue Gene/L machine at scales up to 16K nodes. Our results indicate coordinated scheduling was able to provide a dramatic improvement in scaling performance for two applications characterized as bulk synchronous parallel programs.
Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers
2013-09-01
solutions to virtualization include lightweight, user-level implementations on Linux operating systems, but these solutions are often dependent on a specific version of...
Interactivity vs. fairness in networked Linux systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wenji; Crawford, Matt; /Fermilab
In general, the Linux 2.6 scheduler can ensure fairness and provide excellent interactive performance at the same time. However, our experiments and mathematical analysis have shown that the current Linux interactivity mechanism tends to incorrectly categorize non-interactive network applications as interactive, which can lead to serious fairness or starvation issues. In the extreme, a single process can unjustifiably obtain up to 95% of the CPU! The root cause lies in two facts: (1) network packets arrive at the receiver independently and discretely, and the 'relatively fast' non-interactive network process might frequently sleep to wait for packet arrival; though each sleep lasts for a very short period of time, the wait-for-packet sleeps occur so frequently that they lead to interactive status for the process. (2) The current Linux interactivity mechanism provides the possibility that a non-interactive network process could receive a high CPU share and at the same time be incorrectly categorized as 'interactive.' In this paper, we propose and test a possible solution to address the interactivity vs. fairness problems. Experimental results have proved the effectiveness of the proposed solution.
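A toy numerical model (not the actual kernel code) makes the described failure mode concrete: if each wait-for-packet wakeup earns more scheduler credit than the preceding CPU burst costs, the credit saturates and a process consuming roughly 90% of the CPU still counts as interactive. All constants here are illustrative assumptions.

```python
MAX_SLEEP_AVG = 1000          # cap on accumulated sleep credit (illustrative)
INTERACTIVE_THRESHOLD = 700   # credit above which a task counts as interactive

def final_sleep_avg(run_ms, credit_per_sleep, cycles):
    """One cycle = run for run_ms, then briefly sleep waiting for a packet."""
    sleep_avg = 0
    for _ in range(cycles):
        sleep_avg = max(0, sleep_avg - run_ms)                        # debit for running
        sleep_avg = min(MAX_SLEEP_AVG, sleep_avg + credit_per_sleep)  # credit on wakeup
    return sleep_avg

# A receiver using ~90% of the CPU but sleeping briefly every cycle:
avg = final_sleep_avg(run_ms=9, credit_per_sleep=10, cycles=100000)
print(avg, "interactive" if avg >= INTERACTIVE_THRESHOLD else "not interactive")
```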
2014-12-01
An Investigation of Multiple Unmanned Aircraft Systems Control from the Cockpit of an AH-64 Apache Helicopter by Jamison S Hicks and David B... infantrymen, aircraft pilots, or dedicated UAS ground control station (GCS) operators. The purpose of the UAS is to allow for longer and more discrete...
Sam, Kishore Gnana; Kondabolu, Krishnakanth; Pati, Dipanwita; Kamath, Asha; Pradeep Kumar, G; Rao, Padma G M
2009-07-01
Self-poisoning with organophosphorus (OP) compounds is a major cause of morbidity and mortality across South Asian countries. To develop uniform and effective management guidelines, the severity of acute OP poisoning should be assessed through scientific methods and a clinical database should be maintained. A prospective descriptive survey was carried out to assess the utility of severity scales in predicting the outcome of 71 organophosphate (OP) and carbamate poisoning patients admitted during a one-year period at the Kasturba Hospital, Manipal, India. The Glasgow coma scale (GCS) scores, acute physiology and chronic health evaluation II (APACHE II) scores, predicted mortality rate (PMR) and Poisoning severity score (PSS) were estimated within 24 h of admission. Significant correlations (P < 0.05) between the PSS and the GCS, APACHE II and PMR scores were observed, with the PSS scores predicting mortality significantly (P ≤ 0.001). A total of 84.5% of patients improved after treatment, while 8.5% of the patients were discharged with severe morbidity. The mortality rate was 7.0%. Suicidal poisoning was observed to be the major cause (80.2%), while the other causes were occupational (9.1%), accidental (6.6%), homicidal (1.6%) and unknown (2.5%). This study highlights the application of clinical indices like the GCS, APACHE II, PMR and severity scores in predicting mortality, and these may be considered for planning standard treatment guidelines.
Zhou, Lianjie; Chen, Nengcheng; Chen, Zeqiang
2017-01-01
The efficient data access of streaming vehicle data is the foundation of analyzing, using and mining vehicle data in smart cities, which is an approach to understand traffic environments. However, the number of vehicles in urban cities has grown rapidly, reaching hundreds of thousands in number. Accessing the mass streaming data of vehicles is hard and takes a long time due to limited computation capability and backward modes. We propose an efficient streaming spatio-temporal data access based on Apache Storm (ESDAS) to achieve real-time streaming data access and data cleaning. As a popular streaming data processing tool, Apache Storm can be applied to streaming mass data access and real time data cleaning. By designing the Spout/bolt workflow of topology in ESDAS and by developing the speeding bolt and other bolts, Apache Storm can achieve the prospective aim. In our experiments, Taiyuan BeiDou bus location data is selected as the mass spatio-temporal data source. In the experiments, the data access results with different bolts are shown in map form, and the filtered buses’ aggregation forms are different. In terms of performance evaluation, the consumption time in ESDAS for ten thousand records per second for a speeding bolt is approximately 300 milliseconds, and that for MongoDB is approximately 1300 milliseconds. The efficiency of ESDAS is approximately three times higher than that of MongoDB. PMID:28394287
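The filtering step of such a topology is easy to picture outside Storm. The sketch below renders the "speeding bolt" logic in plain Python (ESDAS's actual bolts run inside an Apache Storm topology, typically as Java classes); the field layout and the 60 km/h threshold are assumptions for illustration.

```python
import math

SPEED_LIMIT_KMH = 60.0
_last_fix = {}  # bus_id -> (t_seconds, lat, lon)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def speeding_bolt(bus_id, t, lat, lon):
    """Emit (bus_id, speed_kmh) when a bus exceeds the limit, else None."""
    prev = _last_fix.get(bus_id)
    _last_fix[bus_id] = (t, lat, lon)
    if prev is None or t <= prev[0]:
        return None
    speed = haversine_km(prev[1], prev[2], lat, lon) / ((t - prev[0]) / 3600.0)
    return (bus_id, round(speed, 1)) if speed > SPEED_LIMIT_KMH else None
```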
Amaral Gonçalves Fusatto, Helena; Castilho de Figueiredo, Luciana; Ragonete Dos Anjos Agostini, Ana Paula; Sibinelli, Melissa; Dragosavac, Desanka
2018-01-01
The aim of this study was to identify pulmonary dysfunction and factors associated with prolonged mechanical ventilation, hospital stay, weaning failure and mortality in patients undergoing coronary artery bypass grafting with use of intra-aortic balloon pump (IABP). This observational study analyzed respiratory, surgical, clinical and demographic variables and related them to outcomes. We analyzed 39 patients with a mean age of 61.2 years. Pulmonary dysfunction, characterized by mildly impaired gas exchange, was present from the immediate postoperative period to the third postoperative day. Mechanical ventilation time was influenced by the use of IABP and PaO2/FiO2, female gender and smoking. Intensive care unit (ICU) stay was influenced by APACHE II score and use of IABP. Mortality was strongly influenced by APACHE II score, followed by weaning failure. Pulmonary dysfunction was present from the first to the third postoperative day. Mechanical ventilation time was influenced by female gender, smoking, duration of IABP use and PaO2/FiO2 on the first postoperative day. ICU stay was influenced by APACHE II score and duration of IABP. Mortality was influenced by APACHE II score, followed by weaning failure. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
Large-scale virtual screening on public cloud resources with Apache Spark.
Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola
2017-01-01
Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
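The overall pattern is straightforward to sketch with Spark's Python API, even though Spark-VS itself is a Scala project. The external dock_one command, the receptor file and the one-molecule-per-line library format below are invented placeholders; only the partition-and-dock structure reflects the approach described in the abstract.

```python
from pyspark import SparkContext

def dock_partition(mols):
    import subprocess
    for mol in mols:
        # Hypothetical docking tool: reads one molecule on stdin,
        # prints a docking score on stdout.
        out = subprocess.run(["dock_one", "--receptor", "target.pdbqt"],
                             input=mol, capture_output=True, text=True)
        yield (mol.split()[0], float(out.stdout.strip()))

sc = SparkContext(appName="docking-screen")
library = sc.textFile("hdfs:///libs/compounds.smi", minPartitions=512)
# Lower scores = better poses for most docking tools; keep the top 100.
top = library.mapPartitions(dock_partition).takeOrdered(100, key=lambda kv: kv[1])
sc.stop()
```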
X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications
NASA Astrophysics Data System (ADS)
Braga, P.; Henriques, L.; Zulianello, M.
2008-08-01
In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest-priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between the two systems is possible through a loosely coupled mechanism based on message queues. Currently only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be quickly developed or simply ported, reducing time-to-market and budget.
Continuous optical monitoring of a near-shore sea-water column
NASA Astrophysics Data System (ADS)
Bensky, T. J.; Neff, B.
2006-12-01
Cal Poly San Luis Obispo runs the Central Coast Marine Sciences Center, a south-facing, 1-km-long pier in San Luis Bay, on the west coast of California, midway between Los Angeles and San Francisco. The facility is secure and dedicated to marine science research. We have constructed an automated optical profiling system that collects sunlight samples, in half-foot increments, from a 30-foot vertical column of sea-water below the pier. Our implementation lowers a high-quality, optically pure fiber cable into the water at 30-minute intervals. Light collected by the submersed fiber aperture is routed to the pier surface, where it is spectrally analyzed using an Ocean Optics HR2000 spectrometer. The spectrometer instantly yields the spectrum of the light collected at a given depth. The "spectrum" here is light intensity as a function of wavelength between 200 and 1100 nm in increments of 0.1 nm. Each dive of the instrument takes approximately 80 seconds, lowers the fiber from the surface to a depth of 30 feet, and yields approximately 60 spectra, each taken at a successively greater depth. A computer logs each spectrum as a function of depth. From such data, we are able to extract total downward photon flux, quantify ocean color, and compute attenuation coefficients. The system is entirely autonomous, includes an integrated data browser, and can be checked on, or even controlled, over the Internet using a web browser. The computer runs Linux, data are logged directly to a MySQL database for easy extraction, and a PHP script ties the system together. Current work involves studying light-energy deposition trends and the effects of surface action on downward photon flux. This work has been funded by the Office of Naval Research (ONR) and the California Central Coast Research Park Initiative (C3RP).
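The logging path described above (spectrometer to Linux to MySQL) reduces to a small insert loop. The schema and connection details below are assumptions; the pier's real system drives this from PHP rather than Python.

```python
import datetime
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="pier",
                               password="...", database="optics")
cur = conn.cursor()

def log_spectrum(dive_id, depth_ft, wavelengths_nm, intensities):
    """Insert one depth's spectrum as (wavelength, intensity) rows."""
    rows = [(dive_id, datetime.datetime.utcnow(), depth_ft, w, i)
            for w, i in zip(wavelengths_nm, intensities)]
    cur.executemany(
        "INSERT INTO spectra (dive_id, taken_at, depth_ft, wavelength_nm, intensity)"
        " VALUES (%s, %s, %s, %s, %s)", rows)
    conn.commit()
```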
CancerLectinDB: a database of lectins relevant to cancer.
Damodaran, Deepa; Jeyakani, Justin; Chauhan, Alok; Kumar, Nirmal; Chandra, Nagasuma R; Surolia, Avadhesha
2008-04-01
The role of lectins in mediating cancer metastasis, apoptosis as well as various other signaling events has been well established in the past few years. Data on various aspects of the role of lectins in cancer is being accumulated at a rapid pace. The data on lectins available in the literature is so diverse, that it becomes difficult and time-consuming, if not impossible to comprehend the advances in various areas and obtain the maximum benefit. Not only do the lectins vary significantly in their individual functional roles, but they are also diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities and specificities as well as their potential applications. An organization of these seemingly independent data into a common framework is essential in order to achieve effective use of all the data towards understanding the roles of different lectins in different aspects of cancer and any resulting applications. An integrated knowledge base (CancerLectinDB) together with appropriate analytical tools has therefore been developed for lectins relevant for any aspect of cancer, by collating and integrating diverse data. This database is unique in terms of providing sequence, structural, and functional annotations for lectins from all known sources in cancer and is expected to be a useful addition to the number of glycan related resources now available to the community. The database has been implemented using MySQL on a Linux platform and web-enabled using Perl-CGI and Java tools. Data for individual lectins pertain to taxonomic, biochemical, domain architecture, molecular sequence and structural details as well as carbohydrate specificities. Extensive links have also been provided for relevant bioinformatics resources and analytical tools. Availability of diverse data integrated into a common framework is expected to be of high value for various studies on lectin cancer biology. CancerLectinDB can be accessed through http://proline.physics.iisc.ernet.in/cancerdb .
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
MaGnET: Malaria Genome Exploration Tool.
Sharman, Joanna L; Gerloff, Dietlind L
2013-09-15
The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive 'exploration-style' visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein-protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org. Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org. Supplementary data are available at Bioinformatics online.
Yuan, Shaoxin; Gao, Yusong; Ji, Wenqing; Song, Junshuai; Mei, Xue
2018-05-01
The aim of this study was to assess the ability of the acute physiology and chronic health evaluation II (APACHE II) score, the poisoning severity score (PSS), and the sequential organ failure assessment (SOFA) score combined with lactate (Lac) to predict mortality in Emergency Department (ED) patients poisoned with organophosphate. A retrospective review of 59 standards-compliant patients was carried out. Receiver operating characteristic (ROC) curves were constructed based on the APACHE II score, PSS, and SOFA score, with or without Lac, and the areas under the ROC curve (AUCs) were determined to assess predictive value. According to the SOFA-Lac (a combination of SOFA and Lac) classification standard, acute organophosphate pesticide poisoning (AOPP) patients were divided into low-risk and high-risk groups. Mortality rates were then compared between risk levels. Between survivors and non-survivors, there were significant differences in the APACHE II score, PSS, SOFA score, and Lac (all P < .05). The AUCs of the APACHE II score, PSS, and SOFA score were 0.876, 0.811, and 0.837, respectively. However, after combining with Lac, the AUCs were 0.922, 0.878, and 0.956, respectively. According to SOFA-Lac, the mortality of the high-risk group was significantly higher than that of the low-risk group (P < .05), and the patients in the non-survival group were all at high risk. These data suggest the APACHE II score, PSS, and SOFA score can all predict the prognosis of AOPP patients. For its simplicity and objectivity, the SOFA score is a superior predictor. Lac significantly improved the predictive abilities of the 3 scoring systems, especially for the SOFA score. The SOFA-Lac system effectively distinguished the high-risk group from the low-risk group. Therefore, the SOFA-Lac system is significantly better at predicting mortality in AOPP patients.
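The abstract does not say how the scores and lactate were combined; fitting a logistic regression on both and comparing AUCs, as sketched below with synthetic data, is one standard way to reproduce that kind of analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
died = rng.integers(0, 2, 59)                # illustrative outcomes, n = 59
sofa = rng.normal(8, 3, 59) + 3 * died       # synthetic SOFA scores
lac = rng.normal(2.5, 1.0, 59) + 1.5 * died  # synthetic lactate (mmol/L)

print("SOFA alone:", roc_auc_score(died, sofa))
X = np.column_stack([sofa, lac])
model = LogisticRegression().fit(X, died)
print("SOFA-Lac:  ", roc_auc_score(died, model.predict_proba(X)[:, 1]))
```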
Usefulness of Glycemic Gap to Predict ICU Mortality in Critically Ill Patients With Diabetes.
Liao, Wen-I; Wang, Jen-Chun; Chang, Wei-Chou; Hsu, Chin-Wang; Chu, Chi-Ming; Tsai, Shih-Hung
2015-09-01
Stress-induced hyperglycemia (SIH) has been independently associated with an increased risk of mortality in critically ill patients without diabetes. However, it is also necessary to consider preexisting hyperglycemia when investigating the relationship between SIH and mortality in patients with diabetes. We therefore assessed whether the gap between admission glucose and A1C-derived average glucose (ADAG) levels could be a predictor of mortality in critically ill patients with diabetes. We retrospectively reviewed the Acute Physiology and Chronic Health Evaluation II (APACHE-II) scores and clinical outcomes of patients with diabetes admitted to our medical intensive care unit (ICU) between 2011 and 2014. The glycosylated hemoglobin (HbA1c) levels were converted to the ADAG by the equation ADAG = (28.7 × HbA1c) - 46.7. We also used receiver operating characteristic (ROC) curves to determine the optimal cut-off value for the glycemic gap when predicting ICU mortality, and used the net reclassification improvement (NRI) to measure the improvement in prediction performance gained by adding the glycemic gap to the APACHE-II score. We enrolled 518 patients, of whom 87 (17.0%) died during their ICU stay. Nonsurvivors had significantly higher APACHE-II scores and glycemic gaps than survivors (P < 0.001). Critically ill patients with diabetes and a glycemic gap ≥80 mg/dL had significantly higher ICU mortality and adverse outcomes than those with a glycemic gap <80 mg/dL (P < 0.001). Incorporation of the glycemic gap into the APACHE-II score increased the discriminative performance for predicting ICU mortality by increasing the area under the ROC curve from 0.755 to 0.794 (NRI = 13.6%, P = 0.0013). The glycemic gap can be used to assess the severity and prognosis of critically ill patients with diabetes. The addition of the glycemic gap to the APACHE-II score significantly improved its ability to predict ICU mortality.
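The two formulas above translate directly into code; the 80 mg/dL cut-off is the one reported by the authors, while the example inputs are hypothetical.

```python
def adag_mg_dl(hba1c_percent):
    """A1C-derived average glucose: ADAG = 28.7 * HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

def glycemic_gap(admission_glucose_mg_dl, hba1c_percent):
    """Admission glucose minus the chronic average implied by HbA1c."""
    return admission_glucose_mg_dl - adag_mg_dl(hba1c_percent)

gap = glycemic_gap(admission_glucose_mg_dl=285, hba1c_percent=7.2)
print(round(gap, 1), "mg/dL:", "high risk" if gap >= 80 else "lower risk")
```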
[Prevalence of severe sepsis in intensive care units. A national multicentric study].
Dougnac, Alberto L; Mercado, Marcelo F; Cornejo, Rodrigo R; Cariaga, Mario V; Hernández, Glenn P; Andresen, Max H; Bugedo, Guillermo T; Castillo, Luis F
2007-05-01
Severe sepsis (SS) is the leading cause of death in Intensive Care Units (ICU). To study the prevalence of SS in Chilean ICUs, an observational, cross-sectional study using a predesigned written survey was done in all ICUs of Chile on April 21st, 2004. General hospital and ICU data, and the number of patients hospitalized in the hospital and in the ICU on the survey day, were recorded. Patients were followed for 28 days. Ninety-four percent of ICUs participated in the survey. The ICU occupation index was 66%. The mean age of patients was 57.7 ± 18 years and 59% were male; the APACHE II score was 15 ± 7.5 and the SOFA score was 6 ± 4. SS was the admission diagnosis of 94 of the 283 patients (33%), and 38 patients presented SS after admission. On the survey day, 112 patients fulfilled SS criteria (40%). APACHE II and SOFA scores were significantly higher in SS patients than in non-SS patients. The global case-fatality ratio at 28 days was 15.9% (45/283). The case-fatality ratio in patients with or without SS at the moment of the survey was 26.7% (30/112) and 8.7% (17/171), respectively (p < 0.05). Thirteen percent of patients who developed SS after admission died. Case-fatality ratios for patients with SS from Santiago and from other cities were similar, but the APACHE II score was significantly higher in patients from Santiago. In SS patients, the independent predictors of mortality were SS as cause of hospital admission and the APACHE II and SOFA scores. Ninety-nine percent of SS patients had a known sepsis focus (48% respiratory and 30% abdominal). Eighty-five percent of the patients who presented SS after admission had a respiratory focus. SS is highly prevalent in Chilean ICUs and represents the leading diagnosis at admission. SS as cause of hospitalization and the APACHE II and SOFA scores were independent predictors of mortality.
Linux containers for fun and profit in HPC
Priedhorsky, Reid; Randles, Timothy C.
2017-10-01
This article outlines options for user-defined software stacks from an HPC perspective. Here, we argue that a lightweight approach based on Linux containers is most suitable for HPC centers because it provides the best balance between maximizing service of user needs and minimizing risks. We also discuss how containers work and several implementations, including Charliecloud, our own open-source solution developed at Los Alamos.
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. For practical implementation, we perform sentiment analysis on live Twitter feeds, for each individual tweet. To analyze sentiments, we train our data model on SentiWordNet, a polarity-annotated WordNet sample from Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
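Although the paper's implementation is in Java, the per-tweet scoring idea can be sketched with Spark's Python API. The tiny polarity table below stands in for a model trained on SentiWordNet, and the input path is a placeholder.

```python
from pyspark import SparkContext

POLARITY = {"good": 0.6, "great": 0.8, "bad": -0.7, "awful": -0.9}

def tweet_score(text):
    """Sum word polarities; a positive result suggests positive sentiment."""
    return sum(POLARITY.get(w, 0.0) for w in text.lower().split())

sc = SparkContext(appName="tweet-sentiment")
tweets = sc.textFile("hdfs:///streams/tweets.txt")   # one tweet per line
scored = tweets.map(lambda t: (tweet_score(t), t))
for score, text in scored.take(5):
    print(score, text[:60])
sc.stop()
```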
Developer Initiation and Social Interactions in OSS: A Case Study of the Apache Software Foundation
2014-08-01
public interaction with the Apache Pluto community is on the mailing list in August 2006: Hello all, I'am John from the University [...], we are... developing the Prototype for the JSR 286. I hope that we can discuss the code [...] we have made and then develop new code for Pluto together [...], referring... to his and some of his fellow students' intentions to contribute to Pluto. John gets the attention of Pluto committers and is immediately welcomed as...
The customization of APACHE II for patients receiving orthotopic liver transplants
Moreno, Rui
2002-01-01
General outcome prediction models developed for use with large, multicenter databases of critically ill patients may not correctly estimate mortality if applied to a particular group of patients that was under-represented in the original database. The development of new diagnostic weights has been proposed as a method of adapting the general model – the Acute Physiology and Chronic Health Evaluation (APACHE) II in this case – to a new group of patients. Such customization must be empirically tested, because the original model cannot contain an appropriate set of predictive variables for the particular group. In this issue of Critical Care, Arabi and co-workers present the results of the validation of a modified model of the APACHE II system for patients receiving orthotopic liver transplants. The use of a highly heterogeneous database for which not all important variables were taken into account and of a sample too small to use the Hosmer–Lemeshow goodness-of-fit test appropriately makes their conclusions uncertain. PMID:12133174
VizieR Online Data Catalog: RefleX : X-ray-tracing code (Paltani+, 2017)
NASA Astrophysics Data System (ADS)
Paltani, S.; Ricci, C.
2017-11-01
We provide here the RefleX executable, for both Linux and MacOSX, together with the User Manual and an example script file and output file. Running (for instance) reflex_linux will produce the file reflex.out. Note that the results may differ slightly depending on the OS, because of slight differences in some implementations of numerical computations. The differences are scientifically meaningless. (5 data files).
Adaptive Multilevel Middleware for Object Systems
2006-12-01
the system at the system-call level or using the CORBA-standard Extensible Transport Framework (ETF). Transparent insertion is highly desirable from an... often as it needs to. This is remedied by using the real-time scheduling class in a stock Linux kernel. We used the sched_setscheduler system call (with... real-time scheduling class (SCHED_FIFO) for all the ML-NFD programs, later experiments with CPU load indicate that a stock Linux kernel is not...
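The snippet refers to the sched_setscheduler(2) system call with SCHED_FIFO. On Linux, Python exposes the same call, so a minimal use (requiring root or CAP_SYS_NICE) looks like this; the priority of 50 is an arbitrary example.

```python
import os

try:
    # pid 0 means the calling process itself.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
    print("now running under SCHED_FIFO, priority 50")
except PermissionError:
    print("real-time scheduling requires root or CAP_SYS_NICE")
```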
NAVO MSRC Navigator. Fall 2006
2006-01-01
UNIX Manual Pages: xdm (1x). 7. Buddenhagen, Oswald, "The KDM Handbook," KDE Documentation, http://docs.kde.org/development/en/kdebase/kdm/. 8... Linux Opteron cluster was recently determined through a series of simulations that employed both fixed and adaptive meshes. The fixed-mesh scalability... approximately eight in the total number of cells in the 3-D simulation. The fixed-mesh and AMR scalability results on the Linux Opteron cluster are...
Chen, Wan-Ling; Chen, Chin-Ming; Kung, Shu-Chen; Wang, Ching-Min; Lai, Chih-Cheng; Chao, Chien-Ming
2018-01-23
This retrospective cohort study investigated the outcomes and prognostic factors in nonagenarians (patients 90 years old or older) with acute respiratory failure. Between 2006 and 2016, all nonagenarians with acute respiratory failure requiring invasive mechanical ventilation (MV) were enrolled. Outcomes including in-hospital mortality and ventilator dependency were measured. A total of 173 nonagenarians with acute respiratory failure were admitted to the intensive care unit (ICU). A total of 56 patients died during the hospital stay and the rate of in-hospital mortality was 32.4%. Patients with higher APACHE (Acute Physiology and Chronic Health Evaluation) II scores (adjusted odds ratio [OR], 5.91; 95% CI, 1.55-22.45; p = 0.009, APACHE II scores ≥ 25 vs APACHE II scores < 15), use of vasoactive agents (adjusted OR, 2.67; 95% CI, 1.12-6.37; p = 0.03) and more organ dysfunction (adjusted OR, 11.13; 95% CI, 3.38-36.36, p < 0.001; ≥ 3 organ dysfunctions vs ≤ 1 organ dysfunction) were more likely to die. Among the 117 survivors, 25 (21.4%) patients became dependent on MV. Female gender (adjusted OR, 3.53; 95% CI, 1.16-10.76, p = 0.027) and poor consciousness level (adjusted OR, 4.98; 95% CI, 1.41-17.58, p = 0.013) were associated with MV dependency. In conclusion, the mortality rate of nonagenarians with acute respiratory failure was high, especially for those with higher APACHE II scores or more organ dysfunction.
Mica, Ladislav; Rufibach, Kaspar; Keel, Marius; Trentz, Otmar
2013-01-01
The early hemodynamic normalization of polytrauma patients may lead to better survival outcomes. The aim of this study was to assess the diagnostic quality of trauma and physiological scores from widely used scoring systems in polytrauma patients. In total, 770 patients with ISS > 16 who were admitted to a trauma center within the first 24 hours after injury were included in this retrospective study. The patients were subdivided into three groups: those who died on the day of admission, those who died within the first three days, and those who survived for longer than three days. ISS, NISS, APACHE II score, and prothrombin time were recorded at admission. The descriptive statistics for early death in polytrauma patients who died on the day of admission, 1-3 days after admission, and > 3 days after admission were: ISS of 41.0, 34.0, and 29.0, respectively; NISS of 50.0, 50.0, and 41.0, respectively; APACHE II score of 30.0, 25.0, and 15.0, respectively; and prothrombin time of 37.0%, 56.0%, and 84%, respectively. These data indicate that prothrombin time (AUC: 0.89) and APACHE II (AUC: 0.88) have the greatest prognostic utility for early death. The estimated densities of the scores may suggest a direction for resuscitative procedures in polytrauma patients. "Retrospektive Analysen in der Chirurgischen Intensivmedizin" StV01-2008.
Afessa, B; Kubilis, P S
2000-02-01
We conducted this study to describe the complications and validate the accuracy of previously reported prognostic indices in predicting the mortality of cirrhotic patients hospitalized for upper GI bleeding. This prospective, observational study included 111 consecutive hospitalizations of 85 cirrhotic patients admitted for GI bleeding. Data obtained included intensive care unit (ICU) admission status, Child-Pugh score, the development of systemic inflammatory response syndrome (SIRS), organ failure, and inhospital mortality. The performances of Garden's, Gatta's, and Acute Physiology and Chronic Health Evaluation (APACHE) II prognostic systems in predicting mortality were assessed. Patients' mean age was 48.7 yr, and the median APACHE II and Child-Pugh scores were 17 and 9, respectively. Their ICU admission rate was 71%. Organ failure developed in 57%, and SIRS in 46% of the patients. Nine patients had acute respiratory distress syndrome, and three patients had hepatorenal syndrome. The inhospital mortality was 21%. The APACHE II, Garden's, and Gatta's predicted mortality rates were 39%, 24%, and 20%, respectively, and their areas under the receiver operating characteristic curve (AUC) were 0.78, 0.70, and 0.71, respectively. The AUC for Child-Pugh score was 0.76. SIRS and organ failure develop in many patients with hepatic cirrhosis hospitalized for upper GI bleeding, and are associated with increased mortality. Although the APACHE II prognostic system overestimated the mortality of these patients, the receiver operating characteristic curves did not show significant differences between the various prognostic systems.
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) service empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g., presently the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems
2010-12-01
the software for reevaluation. Once the reevaluation process is completed, CERT provides the client a report detailing the software's conformance... Flagged Nonconformities (FNC), by software system and TP/FNC ratio: Mozilla Firefox version 2.0, 6/12 (50%); Linux kernel version 2.6.15, 10/126 (8%); Wine... inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with energy system software will help...
2012-06-14
the attacker. Thus, this race condition causes a privilege escalation. 2.2.5 Summary This section reviewed software exploitation of a Linux kernel... has led to increased targeting by malware writers. Android attacks have naturally sparked interest in researching protections for Android. This... release, Android 4.0 Ice Cream Sandwich. These rootkits focused on covert techniques to hide the presence of data used by an attacker to infect a...
The Ubuntu Chat Corpus for Multiparticipant Chat Analysis
2013-03-01
the #LINUX corpus (Elsner and Charniak 2010), and the #IPHONE/#PHYSICS/#PYTHON corpus (Adams 2008). For many... made publicly available, making it difficult to comparatively evaluate different techniques. Corpus Description: Ubuntu, a Linux-based operating... Kubuntu (Ubuntu with KDE) support #ubuntu-devel 2 112 074 12 140 53.7 2004-10-01 Developmental team coordination #ubuntu+1 1 621 680 26 805 52.6 2007-04-04...
[Making a low cost IPSec router on Linux and the assessment for practical use].
Amiki, M; Horio, M
2001-09-01
We installed Linux and FreeS/WAN on a PC/AT-compatible machine to make an IPSec router. We measured ping/ftp times within the university alone, and between the university and the external network. Between the university and the external network (the Internet), there were no differences. Therefore, we concluded that CPU load was not remarkable at low network speeds, because packets exchanged via the Internet are small, or the compression used by the VPN is more effective than the cost of encoding and decoding. On the other hand, within the university, the IPSec router's performance dropped about 20-30% compared with normal IP communication, but this is not a serious problem for practical use. Recently, VPN machines have become cheaper, but they do not function sufficiently to create a fundamental VPN environment. Therefore, if one wants a fundamental VPN environment at a low cost, we believe you should select a VPN router on Linux.
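A rough version of the measurement described above can be scripted as follows; host names are placeholders, and timing whole ping processes this way includes process start-up overhead, so it only approximates what the authors measured.

```python
import subprocess
import time

def mean_ping_ms(host, n=20):
    """Average wall-clock time of n single-echo pings to host."""
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        subprocess.run(["ping", "-c", "1", host],
                       capture_output=True, check=True)
        total += (time.perf_counter() - t0) * 1000
    return total / n

print("inside university:", mean_ping_ms("server.campus.example"))
print("via the Internet: ", mean_ping_ms("host.external.example"))
```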
LXtoo: an integrated live Linux distribution for the bioinformatics community.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
2012-07-19
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayan Ghosh, Jeff Hammond
OpenSHMEM is a community effort to unify and standardize the SHMEM programming model. MPI (Message Passing Interface) is a well-known community standard for parallel programming using distributed memory. The most recent release of MPI, version 3.0, was designed in part to support programming models like SHMEM. OSHMPI is an implementation of the OpenSHMEM standard using MPI-3 for the Linux operating system. It is the first implementation of SHMEM over MPI one-sided communication and has the potential to be widely adopted due to the portability and wide availability of Linux and MPI-3. OSHMPI has been tested on a variety of systems and implementations of MPI-3, including InfiniBand clusters using MVAPICH2 and SGI shared-memory supercomputers using MPICH. Current support is limited to Linux but may be extended to Apple OSX if there is sufficient interest. The code is open source via https://github.com/jeffhammond/oshmpi
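The idea of SHMEM over MPI-3 one-sided communication can be rendered in a few lines with mpi4py: a window plays the role of the symmetric heap, and a Put into a locked window mimics shmem_put. This is a conceptual sketch only (OSHMPI itself is C); run with, e.g., mpiexec -n 2.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Symmetric-heap analogue: a window of 10 doubles on every rank.
win = MPI.Win.Allocate(10 * 8, disp_unit=8, comm=comm)
local = np.frombuffer(win.tomemory(), dtype="d")
local[:] = 0.0
comm.Barrier()

if rank == 0:                 # roughly: shmem_double_put(dest, src, 10, 1)
    src = np.arange(10, dtype="d")
    win.Lock(1)               # exclusive lock on rank 1's window
    win.Put(src, 1)
    win.Unlock(1)
comm.Barrier()
if rank == 1:
    print("rank 1 received:", local)
win.Free()
```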
NASA Astrophysics Data System (ADS)
Dinkins, Matthew; Colley, Stephen
2008-07-01
Hardware and software specialized for real-time control reduce the timing jitter of executables when compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine if such a system would be capable of meeting the timing requirements for closed-loop operations. Investigations are performed on the effectiveness of tools designed to bring off-the-shelf system performance closer to that of specialized real-time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.
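The core measurement in such a study is simple to reproduce: request a fixed sleep, record how late each wakeup actually is, and examine the distribution of the lateness. The sketch below shows the method only; the period and sample count are arbitrary.

```python
import time

def wakeup_lateness_us(period_s=0.001, samples=1000):
    """How many microseconds late each requested sleep actually wakes up."""
    late = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(period_s)
        late.append((time.perf_counter() - t0 - period_s) * 1e6)
    return late

j = wakeup_lateness_us()
print(f"mean {sum(j) / len(j):.1f} us, worst {max(j):.1f} us")
```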
Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers
NASA Astrophysics Data System (ADS)
Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.
2014-12-01
There have been a few popular desktop tools used in the Earth Science community to validate science data. Because of the limitations on the capacity of desktop hardware, such as disk space and CPUs, those tools are not able to display large amounts of data from files. This poster describes an in-house developed, web-based tool built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high image throughput; (2) the software structure, which can incorporate new products and new requirements quickly; (3) the user interface, showing how users can manipulate the data and control how the images are displayed.
2010-03-31
General William T. Sherman, upon Crook's death, said he was, "the greatest Indian-fighter and manager the army of the United States ever had."
2010-10-01
Requirements: Application Server: BEA WebLogic Express 9.2 or higher; Java v5; Apache Struts v2; Hibernate v2; C3P0; SQL*Net client / JDBC; Database Server... designed for the desktop; an HTML and JavaScript browser-based front end designed for mobile smartphones; a Java-based framework utilizing Apache... Technology Requirements: the recommended technologies are as follows: Java Application (provides the backend application)...
Auxiliary Salvage Tow and Rescue: T-STAR
2011-08-01
These agencies also operate four ships of the T-ATF class (Fleet Ocean Tug): Catawba (T-ATF 168), Navajo (T-ATF 169), Sioux (T-ATF 171), and Apache (T-ATF 172). These ships were commissioned during the 1980s and... Bottles 1 0.6; Portable HP Air Plant 10'x18'x10' 1 40.2; 200 Amp Welder 2 0.4; Power Pack Unit 1 8.4; Salvage Equipment 400 Amp...
mod_bio: Apache modules for Next-Generation sequencing data.
Lindenbaum, Pierre; Redon, Richard
2015-01-01
We describe mod_bio, a set of modules for the Apache HTTP server that allows the users to access and query fastq, tabix, fasta and bam files through a Web browser. Those data are made available in plain text, HTML, XML, JSON and JSON-P. A javascript-based genome browser using the JSON-P communication technique is provided as an example of cross-domain Web service. https://github.com/lindenb/mod_bio. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
[Prediction of mortality in patients with acute hepatic failure].
Eremeeva, L F; Berdnikov, A P; Musaeva, T S; Zabolotskikh, I B
2013-01-01
The article deals with a study of 243 patients (from 18 to 65 years old) with acute hepatic failure. The purpose of the study was to evaluate the predictive capability of the severity scales APACHE III, SOFA, MODS and Child-Pugh, and to identify mortality predictors in patients with acute hepatic failure. Results: The best predictive ability in patients with acute hepatic failure and multiple organ failure was shown by the APACHE III and SOFA scales. The strongest mortality predictors were: serum creatinine > 132 µmol/L, fibrinogen < 1.4 g/L, Na < 129 mmol/L.
Predictive ability of the ISS, NISS, and APACHE II score for SIRS and sepsis in polytrauma patients.
Mica, L; Furrer, E; Keel, M; Trentz, O
2012-12-01
Systemic inflammatory response syndrome (SIRS) and sepsis as causes of multiple organ dysfunction syndrome (MODS) remain challenging to treat in polytrauma patients. In this study, the focus was set on widely used scoring systems to assess their diagnostic quality. A total of 512 patients (mean age: 39.2 ± 16.2, range: 16-88 years) who had an Injury Severity Score (ISS) ≥17 were included in this retrospective study. The patients were subdivided into four groups: no SIRS, slight SIRS, severe SIRS, and sepsis. The ISS, New Injury Severity Score (NISS), Acute Physiology and Chronic Health Evaluation II (APACHE II) scores, and prothrombin time were collected at admission. The Kruskal-Wallis test and χ²-test, multinomial regression analysis, and kernel density estimates were performed. Receiver operating characteristic (ROC) analysis is reported as the area under the curve (AUC). Data were considered as significant if p < 0.05. All variables were significantly different in all groups (p < 0.001). The odds ratio increased with increasing SIRS severity for NISS (slight vs. no SIRS, 1.06, p = 0.07; severe vs. no SIRS, 1.07, p = 0.04; and sepsis vs. no SIRS, 1.11, p = 0.0028) and APACHE II score (slight vs. no SIRS, 0.97, p = 0.44; severe vs. no SIRS, 1.08, p = 0.02; and sepsis vs. no SIRS, 1.12, p = 0.0028). ROC analysis revealed that the NISS (slight vs. no SIRS, AUC 0.61; severe vs. no SIRS, AUC 0.67; and sepsis vs. no SIRS, AUC 0.77) and APACHE II score (slight vs. no SIRS, AUC 0.60; severe vs. no SIRS, AUC 0.74; and sepsis vs. no SIRS, AUC 0.82) had the best predictive ability for SIRS and sepsis. Quick assessment with the NISS or APACHE II score could preselect possible candidates for sepsis following polytrauma and provide guidance in trauma surgeons' decision-making.
NASA Astrophysics Data System (ADS)
McGibbney, L. J.; Whitehall, K. D.; Mattmann, C. A.; Goodale, C. E.; Joyce, M.; Ramirez, P.; Zimdars, P.
2014-12-01
We detail how Apache Open Climate Workbench (OCW) (recently open sourced by NASA JPL) was adapted to facilitate an ongoing study of Mesoscale Convective Complexes (MCCs) in West Africa and their contributions within the weather-climate continuum as it relates to climate variability. More than 400 MCCs occur annually over various locations on the globe. In West Africa, approximately one-fifth of that total occur during the summer months (June-November) alone and are estimated to contribute more than 50% of the seasonal rainfall amounts. Furthermore, in general the non-discriminatory socio-economic geospatial distribution of these features correlates with currently and projected densely populated locations. As such, the convective nature of MCCs raises questions regarding their seasonal variability and frequency in current and future climates, amongst others. However, in spite of the formal observation criteria established for these features in 1980, these questions have remained comprehensively unanswered because of the untimely and subjective methods for identifying and characterizing MCCs, owing to data-handling limitations. The main outcome of this work therefore documents how a graph-based search algorithm was implemented on top of the OCW stack, with the ultimate goal of improving fully automated end-to-end identification and characterization of MCCs in high-resolution observational datasets. Apache OCW as an open source project was demonstrated from inception, and we display how it was again utilized to advance understanding and knowledge within the above domain. The project was born out of refactored code donated by NASA JPL from the Earth science community's Regional Climate Model Evaluation System (RCMES), a joint project between the Joint Institute for Regional Earth System Science and Engineering (JIFRESSE) and a scientific collaboration between the University of California at Los Angeles (UCLA) and NASA JPL. The Apache OCW project was then integrated back into the donor code with the aim of more efficiently powering that project. Notwithstanding, the object-oriented approach to creating a core set of libraries has scaled the usability of Apache OCW beyond climate model evaluation, as displayed in the MCC use case detailed herewith.
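The graph-based search mentioned above can be pictured as follows: cloud elements detected in consecutive frames become nodes, spatial overlap between frames becomes an edge, and long-lived connected components are MCC candidates. The overlap predicate and the 6-frame duration threshold below are illustrative assumptions, not the OCW implementation.

```python
import networkx as nx

def build_track_graph(frames, overlaps):
    """frames: {frame_index: [cloud_id, ...]};
    overlaps(a, b): True if node a (frame t) overlaps node b (frame t+1)."""
    g = nx.DiGraph()
    for t in sorted(frames)[:-1]:
        for a in frames[t]:
            for b in frames[t + 1]:
                if overlaps((t, a), (t + 1, b)):
                    g.add_edge((t, a), (t + 1, b))
    return g

def mcc_candidates(g, min_frames=6):
    """Yield components that persist across at least min_frames frames."""
    for comp in nx.weakly_connected_components(g):
        if len({t for t, _ in comp}) >= min_frames:
            yield sorted(comp)
```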
Hosseini, Seyed Hossein; Ayyasi, Mitra; Akbari, Hooshang; Heidari Gorji, Mohammad Ali
2016-01-01
Background Traumatic brain injury (TBI) is a common cause of mortality and disability worldwide. Choosing an appropriate diagnostic tool is critical in the early stage for appropriate decisions about primary diagnosis, medical care and prognosis. Objectives This study aimed to compare the Glasgow coma scale (GCS), the full outline of unresponsiveness (FOUR) and the acute physiology and chronic health evaluation (APACHE II) with respect to prediction of the mortality rate of patients with TBI admitted to the intensive care unit. Patients and Methods This diagnostic study was conducted on 80 patients with TBI in educational hospitals. The APACHE II, GCS and FOUR scores were recorded during the first 24 hours of admission. In this study, early mortality means death before 14 days and delayed mortality means death 15 days or more after admission to hospital. The collected data were analyzed using descriptive and inferential statistics. Results The mean age of the patients was 33.80 ± 12.60. Of a total of 80 patients with TBI, 16 (20%) were female and 64 (80%) male. The mortality rate was 15 (18.7%). The results showed no significant difference among the three tools. In the prediction of early mortality, the areas under the curve (AUCs) were 0.92 (95% CI, 0.81-0.97), 0.90 (95% CI, 0.74-0.94), and 0.96 (95% CI, 0.87-0.9) for FOUR, APACHE II and GCS, respectively. For delayed mortality, the AUCs were 0.89 (95% CI, 0.81-0.94), 0.94 (95% CI, 0.74-0.97) and 0.90 (95% CI, 0.87-0.95) for FOUR, APACHE II and GCS, respectively. Conclusions Considering that the GCS is easy to use and the FOUR can diagnose locked-in syndrome at similar subscale values, these two scales are superior to APACHE II in the prediction of early mortality, whereas APACHE II is more accurate in the prediction of delayed mortality. PMID:29696116
Developing and Benchmarking Native Linux Applications on Android
NASA Astrophysics Data System (ADS)
Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin
Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, which was presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are intended to develop only Java applications at the moment.
System Data Model (SDM) Source Code
2012-08-23
CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi- CC=$(CROSS_COMPILE)gcc CXX=$(CROSS_COMPILE)g++ AR... and flags to pass to it: LEX=flex LEXFLAGS=-B... ## The parser generator to invoke and flags to pass to it: YACC=bison YACCFLAGS... # Point to default PetaLinux root directory: ifndef ROOTDIR ROOTDIR=$(PETALINUX)/software/petalinux-dist endif PATH:=$(PATH...
Navigation/Prop Software Suite
NASA Technical Reports Server (NTRS)
Bruchmiller, Tomas; Tran, Sanh; Lee, Mathew; Bucker, Scott; Bupane, Catherine; Bennett, Charles; Cantu, Sergio; Kwong, Ping; Propst, Carolyn
2012-01-01
Navigation (Nav)/Prop software is used to support shuttle mission analysis, production, and some operations tasks. The Nav/Prop suite, containing configuration items (CIs), resides on IPS/Linux workstations. It features lifecycle documents and data files used for shuttle navigation and propellant analysis for all flight segments. This suite also includes trajectory server, archive server, and RAT software residing on MCC/Linux workstations. Navigation/Prop represents tool versions established during or after IPS Equipment Rehost-3 or after the MCC Rehost.
2015-04-01
report is to examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills... The skills amassed by incident handlers and investigators alike while using Volatility to examine Windows memory images will be of some help... bin/pulseaudio --start --log-target=syslog 1362 1000 1000 nautilus 1366 1000 1000 /usr/lib/pulseaudio/pulse/gconf-helper 1370 1000 1000 nm-applet
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 3 software suite can be compiled for the Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.
Porting and refurbishment of the WSS TNG control software
NASA Astrophysics Data System (ADS)
Caproni, Alessandro; Zacchei, Andrea; Vuerli, Claudio; Pucillo, Mauro
2004-09-01
The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, located on La Palma in the Canary Islands, developed at the beginning of the 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages the communications between the real-time systems (VME), different workstations and high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to the Linux environment started at the end of 2001. It aimed to refurbish the control software, introducing some of the new software technologies and languages available for free in the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes to the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion into the simulated control room of some HPs (to check the mixed environment); substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, like multi-threading, and the possibility to develop high-level WSS applications with almost every programming language that implements Berkeley sockets. A library to develop Java applications has also been created and tested.
Banderas-Bravo, María Esther; Arias-Verdú, Maria Dolores; Macías-Guarasa, Ines; Castillo-Lorente, Encarnación; Pérez-Costillas, Lucia; Gutierrez-Rodriguez, Raquel; Quesada-García, Guillermo; Rivera-Fernández, Ricardo
2017-01-01
Objectives. To evaluate the severity and mortality of patients admitted to the intensive care unit for poisoning, and to assess the applicability and predictive capacity of the prognostic scales most frequently used in the ICU. Methods. Multicentre study between 2008 and 2013 of all patients admitted for poisoning. Results. Data were obtained for 119 patients. The causes of poisoning were medication, 92 patients (77.3%); caustics, 11 (9.2%); and alcohol, 20 (16.8%). 78.3% were attempted suicides. Mean age was 44.42 ± 13.85 years. 72.5% had a Glasgow Coma Scale (GCS) score ≤ 8 points. ICU mortality was 5.9% and hospital mortality was 6.7%. Mortality from caustic poisoning was 54.5%, versus 1.9% for noncaustic poisoning (p < 0.001). After adjusting for SAPS-3 (OR: 1.19 (1.02–1.39)), the mortality of patients who had ingested caustics was far higher than that of the rest (OR: 560.34 (11.64–26973.83)). There was considerable discrepancy between the mortality predicted by SAPS-3 (26.8%) and that observed (6.7%) (Hosmer-Lemeshow test: H = 35.10; p < 0.001). The APACHE-II (7.57%) and APACHE-III (8.15%) predictions showed no such discrepancy. Conclusions. Admission to the ICU for poisoning is rare in our country. Medication is the most frequent cause, but mortality from caustic poisoning is higher. APACHE-II and APACHE-III provide adequate predictions of mortality, while SAPS-3 tends to overestimate it. PMID:28459061
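The Hosmer-Lemeshow statistic quoted above measures calibration by binning patients into deciles of predicted risk and comparing observed with expected deaths in each bin. A minimal sketch, assuming predicted probabilities and outcomes are already in hand (the arrays here are synthetic):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit: chi-square over risk deciles,
    conventionally compared to chi2 with groups - 2 degrees of freedom."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    H = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        n = len(idx)
        observed = y[idx].sum()
        expected = p[idx].sum()
        pbar = expected / n  # assumes no bin has pbar of exactly 0 or 1
        H += (observed - expected) ** 2 / (n * pbar * (1 - pbar))
    return H, chi2.sf(H, groups - 2)

# Synthetic example: well-calibrated predictions give a small H.
rng = np.random.default_rng(1)
p = rng.uniform(0.02, 0.5, 500)
y = rng.binomial(1, p)
H, pval = hosmer_lemeshow(y, p)
print(f"H = {H:.2f}, p = {pval:.3f}")
```

A large H with a small p-value, as reported for SAPS-3 above, indicates that predicted and observed mortality diverge across the risk strata.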
Cholongitas, E; Senzolo, M; Patch, D; Kwong, K; Nikolopoulou, V; Leandro, G; Shaw, S; Burroughs, A K
2006-04-01
Prognostic scores in an intensive care unit (ICU) evaluate outcomes, but derive from cohorts containing few cirrhotic patients. The aims were to evaluate 6-week mortality in cirrhotic patients admitted to an ICU and to compare general and liver-specific prognostic scores. A total of 312 consecutive cirrhotic patients were studied (65% alcoholic; mean age 49.6 years). Multivariable logistic regression was used to evaluate admission factors associated with survival. Child-Pugh, Model for End-stage Liver Disease (MELD), Acute Physiology and Chronic Health Evaluation (APACHE) II and Sequential Organ Failure Assessment (SOFA) scores were compared by receiver operating characteristic curves. The major indication for admission was respiratory failure (35.6%). Median (range) Child-Pugh, APACHE II, MELD and SOFA scores were 11 (5-15), 18 (0-44), 24 (6-40) and 11 (0-21), respectively; 65% (n = 203) died. Survival improved over time (P = 0.005). The multivariate model retained more failing organ systems (FOS) (mortality: <3 = 49.5%, ≥3 = 90%) and higher FiO2, lactate, urea and bilirubin, giving good discrimination [area under the receiver operating characteristic curve (AUC) = 0.83], similar to SOFA and MELD (AUC = 0.83 and 0.81, respectively) and superior to APACHE II and Child-Pugh (AUC = 0.78 and 0.72, respectively). Cirrhotics admitted to the ICU with ≥3 failing organ systems have 90% mortality. The Royal Free model discriminated well and contained key variables of organ function. SOFA and MELD were better predictors than APACHE II or Child-Pugh scores.
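Comparing prognostic scores by AUC, as done above, reduces to computing the area under each score's ROC curve against the observed outcome. A minimal sketch with scikit-learn on synthetic data (the score values and outcomes are illustrative, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 312  # cohort size from the abstract; the data below are synthetic

died = rng.binomial(1, 0.65, n)
# Fake admission scores, loosely correlated with outcome.
sofa = 8 + 4 * died + rng.normal(0, 3, n)
apache2 = 15 + 4 * died + rng.normal(0, 5, n)

# AUC measures discrimination: the probability that a randomly chosen
# non-survivor scores higher than a randomly chosen survivor.
for name, score in [("SOFA", sofa), ("APACHE II", apache2)]:
    print(f"{name}: AUC = {roc_auc_score(died, score):.2f}")
```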
ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature
McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry BF; Tipton, Keith F
2007-01-01
Background: We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. Description: The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy-to-use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. Conclusion: ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List. PMID:17662133
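Since the curated data live in a MySQL database and are also distributed as an SQL dump, a locally restored copy can be queried directly. A minimal sketch using the mysql-connector-python driver; the credentials and the table and column names (`entry`, `ec_num`, `accepted_name`) are hypothetical placeholders, as the abstract does not publish the schema.

```python
import mysql.connector

# Connection details for a locally restored copy of the SQL dump;
# host, credentials, and database name are placeholders.
conn = mysql.connector.connect(
    host="localhost", user="reader", password="secret",
    database="explorenz",
)
cur = conn.cursor()

# Hypothetical schema: look up an enzyme by its EC number.
cur.execute(
    "SELECT ec_num, accepted_name FROM entry WHERE ec_num = %s",
    ("1.1.1.1",),
)
for ec_num, name in cur.fetchall():
    print(ec_num, name)

cur.close()
conn.close()
```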
Wiley, Laura K.; Sivley, R. Michael; Bush, William S.
2013-01-01
Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist PMID:23894185
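The NCList (nested containment list) idea is to sort intervals so that any interval contained in another is pushed into its container's sublist; each resulting list is then free of containment, so both starts and ends are sorted and overlap queries can use binary search. A minimal pure-Python sketch of the data structure (the paper's implementation stores these lists in MySQL tables; this standalone version is illustrative only):

```python
from bisect import bisect_right

def build_nclist(intervals):
    """Build a nested containment list from (start, end) pairs.
    Sorting by (start asc, end desc) guarantees that a contained
    interval always directly follows one of its containers."""
    top, stack = [], []
    for s, e in sorted(intervals, key=lambda iv: (iv[0], -iv[1])):
        node = (s, e, [])
        while stack and e > stack[-1][1]:   # not contained: unwind
            stack.pop()
        (stack[-1][2] if stack else top).append(node)
        stack.append(node)
    return top

def query(nodes, qs, qe, hits=None):
    """Collect all intervals overlapping [qs, qe). Within one level no
    interval contains another, so the ends are sorted and we can
    binary-search the first candidate."""
    if hits is None:
        hits = []
    ends = [n[1] for n in nodes]
    i = bisect_right(ends, qs)              # first node with end > qs
    while i < len(nodes) and nodes[i][0] < qe:
        s, e, sub = nodes[i]
        hits.append((s, e))
        if sub:
            query(sub, qs, qe, hits)
        i += 1
    return hits

annotations = [(0, 100), (10, 40), (15, 20), (50, 90), (120, 130)]
ncl = build_nclist(annotations)
print(query(ncl, 18, 55))   # -> [(0, 100), (10, 40), (15, 20), (50, 90)]
```

Because each level is containment-free, a query touches only the nodes it overlaps plus one binary search per visited sublist, which is what makes range retrieval fast enough for whole-genome annotation.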
A MYSQL-BASED DATA ARCHIVER: PRELIMINARY RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Bickley; Christopher Slominski
2008-01-23
Following an evaluation of the archival requirements of the Jefferson Laboratory accelerator's user community, a prototyping effort was executed to determine if an archiver based on MySQL had sufficient functionality to meet those requirements. This approach was chosen because an archiver based on a relational database enables the development effort to focus on data acquisition and management, letting the database take care of storage, indexing and data consistency. It was clear from the prototype effort that there were no performance impediments to successful implementation of a final system. With our performance concerns addressed, the lab undertook the design and development of an operational system. The system is in its operational testing phase now. This paper discusses the archiver system requirements, some of the design choices and their rationale, and presents the acquisition, storage and retrieval performance.
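The archiver's core job maps naturally onto a relational schema: one table of channels and one append-only table of timestamped samples, with database indexes serving time-range retrievals. A minimal sketch of that layout, using Python's built-in sqlite3 so the example is self-contained (the production system described above uses MySQL, and the table, column, and channel names here are hypothetical):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE channel (
        id   INTEGER PRIMARY KEY,
        name TEXT UNIQUE NOT NULL
    );
    CREATE TABLE sample (
        channel_id INTEGER NOT NULL REFERENCES channel(id),
        stamp      REAL    NOT NULL,   -- seconds since the epoch
        value      REAL    NOT NULL
    );
    -- The composite index is what makes time-range retrievals cheap.
    CREATE INDEX idx_sample ON sample (channel_id, stamp);
""")

# Acquisition: append readings for an illustrative channel name.
conn.execute("INSERT INTO channel (name) VALUES ('BPM01.X')")
cid = conn.execute(
    "SELECT id FROM channel WHERE name = 'BPM01.X'"
).fetchone()[0]
now = time.time()
conn.executemany(
    "INSERT INTO sample (channel_id, stamp, value) VALUES (?, ?, ?)",
    [(cid, now + i, 0.1 * i) for i in range(10)],
)

# Retrieval: all samples for the channel within a time window.
rows = conn.execute(
    "SELECT stamp, value FROM sample "
    "WHERE channel_id = ? AND stamp BETWEEN ? AND ? ORDER BY stamp",
    (cid, now, now + 5),
).fetchall()
print(rows)
```

Delegating storage, indexing, and consistency to the database in this way is exactly the rationale the paper gives for choosing a relational backend over a custom file format.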