Sample records for web growth machine

  1. Large area sheet task. Advanced dendritic web growth development. [silicon films]

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Frantti, E.; Schruben, J.

    1981-01-01

    The development of a silicon dendritic web growth machine is discussed. Several refinements to the sensing and control equipment for melt replenishment during web growth are described and several areas for cost reduction in the components of the prototype automated web growth furnace are identified. A circuit designed to eliminate the sensitivity of the detector signal to the intensity of the reflected laser beam used to measure melt level is also described. A variable speed motor for the silicon feeder is discussed which allows pellet feeding to be accomplished at a rate programmed to match exactly the silicon removed by web growth.
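
    The feed-rate matching described here is essentially a mass balance: pellet feed must replace exactly the silicon volume leaving the melt as web. A minimal sketch of that calculation in Python, with all dimensions assumed for illustration rather than taken from the report:

```python
# Illustrative mass balance for melt replenishment during web growth.
# All dimensions are assumed, order-of-magnitude values, not from the report.
SILICON_DENSITY = 2.33  # g/cm^3, solid silicon

def pellet_feed_rate(width_cm, thickness_cm, pull_speed_cm_per_min):
    """Feed rate (g/min) that exactly replaces silicon removed as web."""
    volume_rate = width_cm * thickness_cm * pull_speed_cm_per_min  # cm^3/min
    return volume_rate * SILICON_DENSITY

# Example: a 4 cm wide, 150 um (0.015 cm) thick web pulled at 1.5 cm/min
print(round(pellet_feed_rate(4.0, 0.015, 1.5), 3), "g/min")  # ~0.21 g/min
```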

  2. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D. L.; Schruben, J.

    1982-01-01

    Thermal models were developed that accurately predict the thermally generated stresses in the web crystal which, if too high, cause the crystal to degenerate. The application of the modeling results to the design of low-stress experimental growth configurations will allow the growth of wider web crystals at higher growth velocities. A new experimental web growth machine was constructed. This facility includes all the features necessary for carrying out growth experiments under steady thermal conditions. Programmed growth initiation was developed to give reproducible crystal starts. Width control permits the growth of long ribbons at constant width. Melt level is controlled to 0.1 mm or better. Thus, the capability exists to grow long web crystals of constant width and thickness with little operator intervention, and web growth experiments can now be performed with growth variables controlled to a degree not previously possible.
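
    As a rough illustration of why these thermally generated stresses matter, the textbook expression for stress in a fully constrained sheet, sigma = E * alpha * dT, already gives tens of MPa for modest temperature nonuniformities. The sketch below uses assumed room-temperature constants for silicon and does not attempt the profile-dependent stress field that the report's models compute:

```python
# First-order indicator only: stress in a fully constrained sheet under a
# temperature change, sigma = E * alpha * dT. Constants are textbook
# room-temperature values for silicon, assumed for illustration.
E_SILICON = 160e9       # Pa, Young's modulus (orientation-averaged)
ALPHA_SILICON = 2.6e-6  # 1/K, linear thermal expansion coefficient

def constrained_thermal_stress_mpa(delta_t_kelvin):
    return E_SILICON * ALPHA_SILICON * delta_t_kelvin / 1e6

# A 50 K in-plane nonuniformity already implies roughly 20 MPa:
print(constrained_thermal_stress_mpa(50.0), "MPa")
```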

  3. Large area sheet task: Advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1981-01-01

    The growth of silicon dendritic web for photovoltaic applications was investigated. The application of a thermal model for calculating buckling stresses as a function of temperature profile in the web is discussed. Lid and shield concepts were evaluated to provide the data base for enhancing growth velocity. An experimental web growth machine which embodies in one unit the mechanical and electronic features developed in previous work was developed. In addition, evaluation of a melt level control system was begun, along with preliminary tests of an elongated crucible design. The economic analysis was also updated to incorporate some minor cost changes. The initial applications of the thermal model to a specific configuration gave results consistent with experimental observation in terms of the initiation of buckling vs. width for a given crystal thickness.

  4. FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN

    NASA Astrophysics Data System (ADS)

    Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando

    2014-06-01

    The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries, and pushing the development effort to a higher abstraction layer based on a scripting language, allows for a significant reduction in code maintenance in multi-platform environments compared with the C++ visualization plugins currently used. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.

  5. Web Mining: Machine Learning for Web Applications.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Chau, Michael

    2004-01-01

    Presents an overview of machine learning research and reviews methods used for evaluating machine learning systems. Ways that machine-learning algorithms were used in traditional information retrieval systems in the "pre-Web" era are described, and the field of Web mining and how machine learning has been used in different Web mining…

  6. A novel architecture for information retrieval system based on semantic web

    NASA Astrophysics Data System (ADS)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand the meaning of the documents. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which opens new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when such a system lacks sufficient knowledge, it returns a large number of meaningless results to users because of the huge volume of matching information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to decide whether a query should be posed to the keyword-based search engine or to the semantic search engine.

  7. Web-dendritic growth. [single crystal silicon ribbons for solar cells]

    NASA Technical Reports Server (NTRS)

    Hilborn, R. B.; Faust, J. W., Jr.; Rhodes, C.

    1977-01-01

    The effects of various machine design parameters on the growth of web dendritic silicon ribbon were investigated. Ribbons were grown up to lengths of one meter, with widths increasing linearly up to one cm at the point of termination of growth. Thermal data were collected and evaluated for actual seeding and growth with variations in parameters affecting heat loss. It was found that for suitable growth, the mechanical system should be very rigid and stable, and the tolerances and specifications of the quartz crucibles must be far tighter than normal quartz tolerances. The widening rates of the ribbons were found to be a function of the temperature gradient rather than the temperature differences alone. A twin spacing in the seed of 3 microns to 2 microns was found to be unfavorable for growth, whereas spacings of 0.9 microns to 2 microns and 8 microns to 2 microns were favorable. Thermal modeling studies of the effects of furnace design parameters on the temperature distributions in the melt and the growth of the dendritic web ribbon showed that the pull rate of the ribbon is strongly dependent on the temperature of the top thermal shield, the spacing between this shield and the melt, and the thickness of the growing web.

  8. Effect of Temporal Relationships in Associative Rule Mining for Web Log Data

    PubMed Central

    Mohd Khairudin, Nazli; Mustapha, Aida

    2014-01-01

    The advent of web-based applications and services has created diverse and voluminous web log data stored in web servers, proxy servers, client machines, and organizational databases. This paper investigates the effect of a temporal attribute in relational rule mining for web log data. We incorporated the characteristics of time in the rule mining process and analysed the effect of various temporal parameters. The rules generated from temporal relational rule mining are then compared against the rules generated from classical rule mining approaches such as the Apriori and FP-Growth algorithms. The results showed that by incorporating the temporal attribute via time, the number of rules generated is subsequently smaller but is comparable in terms of quality. PMID:24587757
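
    A sketch of the comparison the paper draws, classical Apriori over all sessions versus Apriori restricted to a time window, using the mlxtend library on made-up web-log sessions (the library choice and the data are illustrative, not the authors'):

```python
# Classical Apriori vs. Apriori restricted to a time window, on toy web-log
# sessions. Uses the mlxtend library; all data is hypothetical.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

# (hour of day, pages visited in one session)
sessions = [
    (9,  ["home", "search", "product"]),
    (10, ["home", "product"]),
    (21, ["home", "blog"]),
    (22, ["blog", "about"]),
]

def frequent_itemsets(sessions, min_support=0.5):
    transactions = [pages for _, pages in sessions]
    te = TransactionEncoder()
    onehot = te.fit(transactions).transform(transactions)
    df = pd.DataFrame(onehot, columns=te.columns_)
    return apriori(df, min_support=min_support, use_colnames=True)

print(frequent_itemsets(sessions))                                 # classical: all sessions
print(frequent_itemsets([s for s in sessions if 9 <= s[0] < 12]))  # temporal: morning window
```

    Restricting the transactions to a window before mining yields fewer itemsets, which mirrors the paper's finding that temporal rules are fewer but comparable in quality.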

  9. Topic Models for Link Prediction in Document Networks

    ERIC Educational Resources Information Center

    Kataria, Saurabh

    2012-01-01

    Recent explosive growth of interconnected document collections such as citation networks, network of web pages, content generated by crowd-sourcing in collaborative environments, etc., has posed several challenging problems for data mining and machine learning community. One central problem in the domain of document networks is that of "link…

  10. WebWatcher: Machine Learning and Hypertext

    DTIC Science & Technology

    1995-05-29

    WebWatcher: Machine Learning and Hypertext. Thorsten Joachims, Tom Mitchell, Dayne Freitag, and Robert Armstrong, School of Computer Science, Carnegie... an HTML page about machine learning in which we inserted a hyperlink to WebWatcher (line 6). The user follows this hyperlink and gets to a page which...

  11. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  12. A Framework for Finding and Summarizing Product Defects, and Ranking Helpful Threads from Online Customer Forums through Machine Learning

    ERIC Educational Resources Information Center

    Jiao, Jian

    2013-01-01

    The Internet has revolutionized the way users share and acquire knowledge. As important and popular Web-based applications, online discussion forums provide interactive platforms for users to exchange information and report problems. With the rapid growth of social networks and an ever increasing number of Internet users, online forums have…

  13. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    ERIC Educational Resources Information Center

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  14. repRNA: a web server for generating various feature vectors of RNA sequences.

    PubMed

    Liu, Bin; Liu, Fule; Fang, Longyun; Wang, Xiaolong; Chou, Kuo-Chen

    2016-02-01

    With the rapid growth of RNA sequences generated in the postgenomic age, it is highly desired to develop a flexible method that can generate various kinds of vectors to represent these sequences by focusing on their different features. This is because nearly all the existing machine-learning methods, such as SVM (support vector machine) and KNN (k-nearest neighbor), can only handle vectors but not sequences. To meet the increasing demands and speed up the genome analyses, we have developed a new web server, called "representations of RNA sequences" (repRNA). Compared with the existing methods, repRNA is much more comprehensive, flexible and powerful, as reflected by the following facts: (1) it can generate 11 different modes of feature vectors for users to choose according to their investigation purposes; (2) it allows users to select the features from 22 built-in physicochemical properties or even those defined by the users themselves; (3) the resultant feature vectors and the secondary structures of the corresponding RNA sequences can be visualized. The repRNA web server is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repRNA/.
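
    One of the simplest feature vectors of this kind is the normalized k-mer composition of a sequence. A minimal sketch of that idea follows; it does not reproduce any of repRNA's 11 feature modes:

```python
# Minimal sketch of one common RNA feature vector: normalized k-mer
# frequencies, turning a variable-length sequence into a fixed-length vector
# that SVM/KNN-style learners can consume.
from itertools import product

def kmer_vector(seq, k=2, alphabet="ACGU"):
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:
            counts[km] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

vec = kmer_vector("GGGAAACUUUCCC", k=2)
print(len(vec), vec[:4])  # 16-dimensional vector for k=2
```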

  15. Framework for Building Collaborative Research Environment

    DOE PAGES

    Devarakonda, Ranjeet; Palanisamy, Giriprakash; San Gil, Inigo

    2014-10-25

    A wide range of expertise and technologies is the key to solving some global problems. Semantic web technology can revolutionize the nature of how scientific knowledge is produced and shared. The semantic web is all about enabling machine-to-machine readability instead of routine human-human interaction. Carefully structured, machine-readable data is the key to enabling these interactions. Drupal is an example of one such toolset that can render all the functionalities of Semantic Web technology right out of the box. Drupal's content management system automatically stores the data in a structured format, enabling it to be machine readable. Within this paper, we will discuss how Drupal promotes collaboration in a research setting such as Oak Ridge National Laboratory (ORNL) and the Long Term Ecological Research Center (LTER) and how it is effectively using the Semantic Web in achieving this.

  16. Progress on big data publication and documentation for machine-to-machine discovery, access, and processing

    NASA Astrophysics Data System (ADS)

    Walker, J. I.; Blodgett, D. L.; Suftin, I.; Kunicki, T.

    2013-12-01

    High-resolution data for use in environmental modeling is increasingly becoming available at broad spatial and temporal scales. Downscaled climate projections, remotely sensed landscape parameters, and land-use/land-cover projections are examples of datasets that may exceed an individual investigation's data management and analysis capacity. To allow projects on limited budgets to work with many of these data sets, the burden of working with them must be reduced. The approach being pursued at the U.S. Geological Survey Center for Integrated Data Analytics uses standard self-describing web services that allow machine-to-machine data access and manipulation. These techniques have been implemented and deployed in production-level server-based Web Processing Services that can be accessed from a web application or scripted workflow. Data publication techniques that allow machine-interpretation of large collections of data have also been implemented for numerous datasets at U.S. Geological Survey data centers as well as partner agencies and academic institutions. Discovery of data services is accomplished using a method in which a machine-generated metadata record holds content, derived from the data's source web service, that is intended for human interpretation as well as machine interpretation. A distributed search application has been developed that demonstrates the utility of a decentralized search of data-owner metadata catalogs from multiple agencies. The integrated but decentralized system of metadata, data, and server-based processing capabilities will be presented. The design, utility, and value of these solutions will be illustrated with applied science examples and success stories. Datasets such as the EPA's Integrated Climate and Land Use Scenarios, USGS/NASA MODIS derived land cover attributes, and downscaled climate projections from several sources are examples of data this system includes. These and other datasets have been published as standard, self-describing web services that provide the ability to inspect and subset the data. This presentation will demonstrate this file-to-web service concept and how it can be used from script-based workflows or web applications.

  17. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 18, material safety data sheets, or engineering calculations. Wipe cleaning activities, such as using... continuous web cleaning machine subject to this subpart shall achieve compliance with the provisions of this... products, solvent cleaning machines used in the manufacture of narrow tubing, and continuous web cleaning...

  18. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 18, material safety data sheets, or engineering calculations. Wipe cleaning activities, such as using... continuous web cleaning machine subject to this subpart shall achieve compliance with the provisions of this... products, solvent cleaning machines used in the manufacture of narrow tubing, and continuous web cleaning...

  19. Creating Web-Based Scientific Applications Using Java Servlets

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Arnold, James O. (Technical Monitor)

    2001-01-01

    There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently. The application can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades to the application are simplified since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines. The servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper will discuss are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process will be demonstrated by building a web-based application to compute stagnation point heat transfer.
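
    The paper's examples are in Java; purely for consistency with the other sketches on this page, the same accept-input/compute/respond pattern is shown below with Python's standard library. The squaring "application" is a stand-in for the server-side computation:

```python
# The servlet pattern described above (accept client input, run the computation
# server-side, return the output), sketched with Python's standard library
# rather than Java, purely for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ComputeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        x = float(params.get("x", ["0"])[0])  # input from the client
        result = x * x                        # server-side "application"
        body = f"result={result}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)                # output back to the client

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ComputeHandler).serve_forever()
    # try: curl "http://localhost:8000/?x=3"
```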

  20. A Java-based enterprise system architecture for implementing a continuously supported and entirely Web-based exercise solution.

    PubMed

    Wang, Zhihui; Kiryu, Tohru

    2006-04-01

    Since machine-based exercise still uses local facilities, it is affected by time and place. We designed a web-based system architecture based on the Java 2 Enterprise Edition that can accomplish continuously supported machine-based exercise. In this system, exercise programs and machines are loosely coupled and dynamically integrated on the site of exercise via the Internet. We then extended the conventional health promotion model, which contains three types of players (users, exercise trainers, and manufacturers), by adding a new player: exercise program creators. Moreover, we developed a self-describing strategy to accommodate a variety of exercise programs and provide ease of use to users on the web. We illustrate our novel design with examples taken from our feasibility study on a web-based cycle ergometer exercise system. A biosignal-based workload control approach was introduced to ensure that users performed appropriate exercise alone.

  1. Background Equatorial Astronomical Measurements Focal Plane Assembly (Refurbished HI STAR SOUTH)

    DTIC Science & Technology

    1984-09-01

    Subassembly RPT41412 MOSFETs during assembly and test. The old and new designs are shown in Figure 7. The copper webs between the first and second and...machined in the remaining webs between the detector recesses and through a small hole drilled through the frame to connect the traces of all four...gold wirebond routed through a notch machined in the frame web between one of the detector recesses and the board recess. The sapphire support

  2. Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System

    DTIC Science & Technology

    2004-09-01

    docs.us.dell.com/support/edocs/systems/pe1650/en/it/index.htm> (20 August 2004) "HOWTO: Installing Web Services with Linux/Tomcat/Apache/Struts...configured as host machines with VMware and VNC running on a Linux RedHat 9 Kernel. An Apache-Tomcat web server was configured as the external interface to...1650, dual processor, blade servers were configured as host machines with VMware and VNC running on a Linux RedHat 9 Kernel. An Apache-Tomcat web

  3. Web-Based Machine Translation as a Tool for Promoting Electronic Literacy and Language Awareness

    ERIC Educational Resources Information Center

    Williams, Lawrence

    2006-01-01

    This article addresses a pervasive problem of concern to teachers of many foreign languages: the use of Web-Based Machine Translation (WBMT) by students who do not understand the complexities of this relatively new tool. Although networked technologies have greatly increased access to many language and communication tools, WBMT is still…

  4. A Web-Based Visualization and Animation Platform for Digital Logic Design

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi; Lu, Zheng; Huss, Sorin A.

    2015-01-01

    This paper presents a web-based education platform for the visualization and animation of the digital logic design process. This includes the design of combinatorial circuits using logic gates, multiplexers, decoders, and look-up-tables as well as the design of finite state machines. Various configurations of finite state machines can be selected…

  5. Our Policies, Their Text: German Language Students' Strategies with and Beliefs about Web-Based Machine Translation

    ERIC Educational Resources Information Center

    White, Kelsey D.; Heidrich, Emily

    2013-01-01

    Most educators are aware that some students utilize web-based machine translators for foreign language assignments, however, little research has been done to determine how and why students utilize these programs, or what the implications are for language learning and teaching. In this mixed-methods study we utilized surveys, a translation task,…

  6. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.
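
    A toy rendering of the core idea: define a kernel directly on non-vectorial instances (here, sets of ontology annotations) and hand the precomputed Gram matrix to an SVM. The data and the intersection kernel are illustrative choices, not the authors' decomposition framework:

```python
# Kernel method over non-vectorial instance data: each instance is a set of
# ontology annotations, and the set-intersection kernel (a valid positive
# semi-definite kernel) feeds an SVM via a precomputed Gram matrix.
import numpy as np
from sklearn.svm import SVC

instances = [
    {"Person", "worksAt:University", "topic:ML"},
    {"Person", "worksAt:University", "topic:Ontologies"},
    {"Organization", "locatedIn:EU"},
    {"Organization", "locatedIn:US"},
]
labels = [0, 0, 1, 1]  # e.g., person vs. organization

def set_intersection_kernel(a, b):
    return float(len(a & b))

gram = np.array([[set_intersection_kernel(a, b) for b in instances]
                 for a in instances])
clf = SVC(kernel="precomputed").fit(gram, labels)

test = {"Person", "topic:ML"}
k_test = np.array([[set_intersection_kernel(test, b) for b in instances]])
print(clf.predict(k_test))  # -> [0]
```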

  7. An Educational Tool for Browsing the Semantic Web

    ERIC Educational Resources Information Center

    Yoo, Sujin; Kim, Younghwan; Park, Seongbin

    2013-01-01

    The Semantic Web is an extension of the current Web where information is represented in a machine processable way. It is not separate from the current Web and one of the confusions that novice users might have is where the Semantic Web is. In fact, users can easily encounter RDF documents that are components of the Semantic Web while they navigate…

  8. The Semantic Web in Education

    ERIC Educational Resources Information Center

    Ohler, Jason

    2008-01-01

    The semantic web or Web 3.0 makes information more meaningful to people by making it more understandable to machines. In this article, the author examines the implications of Web 3.0 for education. The author considers three areas of impact: knowledge construction, personal learning network maintenance, and personal educational administration.…

  9. Realising the Full Potential of the Web.

    ERIC Educational Resources Information Center

    Berners-Lee, Tim

    1999-01-01

    Argues that the first phase of the Web is communication through shared knowledge. Predicts that the second side to the Web, yet to emerge, is that of machine-understandable information, with humans providing the inspiration and the intuition. (CR)

  10. 75 FR 34673 - Approval of the Clean Air Act, Section 112(l), Authority for Hazardous Air Pollutants: Air...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-18

    ... Halogenated Solvent Cleaning Machines: State of Rhode Island Department of Environmental Management AGENCY... machines in Rhode Island, except for continuous web cleaning machines. This approval would grant RI DEM the... Halogenated Solvent NESHAP for organic solvent cleaning machines and would make the Rhode Island Department of...

  11. Rapid Prediction of Bacterial Heterotrophic Fluxomics Using Machine Learning and Constraint Programming.

    PubMed

    Wu, Stephen Gang; Wang, Yuxuan; Jiang, Wu; Oyetunde, Tolutola; Yao, Ruilian; Zhang, Xuehong; Shimizu, Kazuyuki; Tang, Yinjie J; Bao, Forrest Sheng

    2016-04-01

    13C metabolic flux analysis (13C-MFA) has been widely used to measure in vivo enzyme reaction rates (i.e., metabolic flux) in microorganisms. Mining the relationship between environmental and genetic factors and metabolic fluxes hidden in existing fluxomic data will lead to predictive models that can significantly accelerate flux quantification. In this paper, we present a web-based platform MFlux (http://mflux.org) that predicts the bacterial central metabolism via machine learning, leveraging data from approximately 100 13C-MFA papers on heterotrophic bacterial metabolisms. Three machine learning methods, namely Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Decision Tree, were employed to study the sophisticated relationship between influential factors and metabolic fluxes. We performed a grid search of the best parameter set for each algorithm and verified their performance through 10-fold cross validations. SVM yields the highest accuracy among all three algorithms. Further, we employed quadratic programming to adjust flux profiles to satisfy stoichiometric constraints. Multiple case studies have shown that MFlux can reasonably predict fluxomes as a function of bacterial species, substrate types, growth rate, oxygen conditions, and cultivation methods. Due to the interest in studying model organisms under particular carbon sources, bias in the fluxome dataset may limit the applicability of machine learning models. This problem can be resolved after more papers on 13C-MFA are published for non-model species.
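
    A heavily simplified sketch of this pipeline shape: fit an SVM regressor from culture conditions to a flux value, then force the prediction into a feasible range as a stand-in for the quadratic-programming adjustment. Features, data, and the constraint are all invented for illustration:

```python
# Sketch of the MFlux idea: condition features -> flux prediction via SVM,
# followed by a feasibility adjustment. All numbers are made up; the real
# system trains on ~100 13C-MFA studies and uses quadratic programming.
import numpy as np
from sklearn.svm import SVR

# features: [growth_rate, oxygen_level, substrate_code]; target: one flux value
X = np.array([[0.1, 1.0, 0], [0.3, 1.0, 0], [0.2, 0.0, 1], [0.4, 0.5, 1]])
y = np.array([10.0, 30.0, 15.0, 35.0])

model = SVR(kernel="rbf", C=10.0).fit(X, y)
raw = model.predict(np.array([[0.25, 0.8, 0]]))

# Toy "stoichiometric adjustment": clip into a feasible range, standing in
# for the QP step described in the abstract.
feasible = np.clip(raw, 0.0, 100.0)
print(raw, feasible)
```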

  12. Rapid Prediction of Bacterial Heterotrophic Fluxomics Using Machine Learning and Constraint Programming

    PubMed Central

    Wu, Stephen Gang; Wang, Yuxuan; Jiang, Wu; Oyetunde, Tolutola; Yao, Ruilian; Zhang, Xuehong; Shimizu, Kazuyuki; Tang, Yinjie J.; Bao, Forrest Sheng

    2016-01-01

    13C metabolic flux analysis (13C-MFA) has been widely used to measure in vivo enzyme reaction rates (i.e., metabolic flux) in microorganisms. Mining the relationship between environmental and genetic factors and metabolic fluxes hidden in existing fluxomic data will lead to predictive models that can significantly accelerate flux quantification. In this paper, we present a web-based platform MFlux (http://mflux.org) that predicts the bacterial central metabolism via machine learning, leveraging data from approximately 100 13C-MFA papers on heterotrophic bacterial metabolisms. Three machine learning methods, namely Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Decision Tree, were employed to study the sophisticated relationship between influential factors and metabolic fluxes. We performed a grid search of the best parameter set for each algorithm and verified their performance through 10-fold cross validations. SVM yields the highest accuracy among all three algorithms. Further, we employed quadratic programming to adjust flux profiles to satisfy stoichiometric constraints. Multiple case studies have shown that MFlux can reasonably predict fluxomes as a function of bacterial species, substrate types, growth rate, oxygen conditions, and cultivation methods. Due to the interest in studying model organisms under particular carbon sources, bias in the fluxome dataset may limit the applicability of machine learning models. This problem can be resolved after more papers on 13C-MFA are published for non-model species. PMID:27092947

  13. Silicon dendritic web growth

    NASA Technical Reports Server (NTRS)

    Duncan, S.

    1984-01-01

    Technological goals for a silicon dendritic web growth program effort are presented. Principal objectives for this program include: (1) grow long web crystals from a continuously replenished melt; (2) develop temperature distributions in web and melt; (3) improve reproducibility of growth; (4) develop configurations for increased growth rates (width and speed); (5) develop new growth system components as required for improved growth; and (6) evaluate quality of web growth.

  14. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  15. An efficient scheme for automatic web pages categorization using the support vector machine

    NASA Astrophysics Data System (ADS)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages from the Internet within a fraction of a second. To achieve this goal, there is a requirement for efficient categorization of web page contents. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and a higher level of accuracy cannot be achieved using them. To achieve these goals, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keywords developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of keyword ids. Also, stemming of keywords and tag text is done to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy on different categories of web pages.
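
    A minimal sketch of the classification stage, TF-IDF text features plus a linear SVM via scikit-learn; the pages and categories are toy stand-ins, and the proposed scheme's DOM-based feature extraction and domain keyword weighting are omitted:

```python
# SVM-based web page categorization in its simplest form: TF-IDF features over
# page text plus a linear SVM. Toy data; the paper's scheme adds DOM-based
# features and domain-specific keyword weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

pages = [
    "laptop review battery cpu benchmark",
    "smartphone camera specs android release",
    "pasta recipe tomato basil olive oil",
    "cake recipe flour sugar oven baking",
]
categories = ["tech", "tech", "food", "food"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(pages, categories)
print(clf.predict(["gpu benchmark review"]))  # -> ['tech']
```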

  16. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Best face forward.

    PubMed

    Rayport, Jeffrey F; Jaworski, Bernard J

    2004-12-01

    Most companies serve customers through a broad array of interfaces, from retail sales clerks to Web sites to voice-response telephone systems. But while the typical company has an impressive interface collection, it doesn't have an interface system. That is, the whole set does not add up to the sum of its parts in its ability to provide service and build customer relationships. Too many people and too many machines operating with insufficient coordination (and often at cross-purposes) mean rising complexity, costs, and customer dissatisfaction. In a world where companies compete not on what they sell but on how they sell it, turning that liability into an asset is what separates winners from losers. In this adaptation of their forthcoming book by the same title, Jeffrey Rayport and Bernard Jaworski explain how companies must reengineer their customer interface systems for optimal efficiency and effectiveness. Part of that transformation, they observe, will involve a steady encroachment by machine interfaces into areas that have long been the sacred province of humans. Managers now have opportunities unprecedented in the history of business to use machines, not just people, to credibly manage their interactions with customers. Because people and machines each have their strengths and weaknesses, company executives must identify what people do best, what machines do best, and how to deploy them separately and together. Front-office reengineering subjects every current and potential service interface to an analysis of opportunities for substitution (using machines instead of people), complementarity (using a mix of machines and people), and displacement (using networks to shift physical locations of people and machines), with the twin objectives of compressing costs and driving top-line growth through increased customer value.

  19. Apparatus for silicon web growth of higher output and improved growth stability

    DOEpatents

    Duncan, Charles S.; Piotrowski, Paul A.

    1989-01-01

    This disclosure describes an apparatus to improve the web growth attainable from prior web growth configurations. This apparatus modifies the heat loss at the growth interface in a manner that minimizes thickness variations across the web, especially regions of the web adjacent to the two bounding dendrites. In the unmodified configuration, thinned regions of web, adjacent to the dendrites, were found to be the origin of crystal degradation which ultimately led to termination of the web growth. According to the present invention, thinning adjacent to the dendrites is reduced and the incidence of crystal degradation is similarly reduced.

  20. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    PubMed

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  1. Exploring the Further Integration of Machine Translation in English-Chinese Cross Language Information Access

    ERIC Educational Resources Information Center

    Wu, Dan; He, Daqing

    2012-01-01

    Purpose: This paper seeks to examine the further integration of machine translation technologies with cross language information access in providing web users the capabilities of accessing information beyond language barriers. Machine translation and cross language information access are related technologies, and yet they have their own unique…

  2. Semantic Web repositories for genomics data using the eXframe platform.

    PubMed

    Merrill, Emily; Corlosquet, Stéphane; Ciccarese, Paolo; Clark, Tim; Das, Sudeshna

    2014-01-01

    With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases very difficult. To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in Sparql Protocol and RDF Query Language (SPARQL) endpoint. Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate it with heterogeneous resources and make it interoperable with the vast Semantic Web of biomedical knowledge.
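
    The publish-and-query pattern described here can be sketched with the rdflib library; the vocabulary URIs below are hypothetical stand-ins, not eXframe's actual ontology mappings:

```python
# Experiment metadata as RDF triples, queried through SPARQL, using rdflib.
# The ex: vocabulary is a hypothetical stand-in for the ontology mappings
# described in the abstract.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/exframe/")
g = Graph()
g.add((EX.experiment1, RDF.type, EX.GenomicsExperiment))
g.add((EX.experiment1, EX.usesBiomaterial, Literal("liver tissue")))
g.add((EX.experiment1, EX.hasAssay, Literal("RNA-seq")))

results = g.query("""
    PREFIX ex: <http://example.org/exframe/>
    SELECT ?exp ?assay WHERE {
        ?exp a ex:GenomicsExperiment ;
             ex:hasAssay ?assay .
    }
""")
for row in results:
    print(row.exp, row.assay)
```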

  3. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    PubMed

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
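
    A sketch of using the VM's pieces together: query the local ChEMBL PostgreSQL database, then compute a property with RDKit. The connection details are assumed VM defaults, and the table and column names follow the public ChEMBL schema (molecule_dictionary and compound_structures joined on molregno):

```python
# Query the myChEMBL VM's local ChEMBL database with psycopg2, then compute a
# molecular weight with RDKit. Connection parameters are assumptions about the
# VM's defaults, not documented values.
import psycopg2
from rdkit import Chem
from rdkit.Chem import Descriptors

conn = psycopg2.connect(dbname="chembl", user="chembl", host="localhost")  # assumed
cur = conn.cursor()
cur.execute("""
    SELECT md.chembl_id, cs.canonical_smiles
    FROM molecule_dictionary md
    JOIN compound_structures cs ON md.molregno = cs.molregno
    LIMIT 5
""")
for chembl_id, smiles in cur.fetchall():
    mol = Chem.MolFromSmiles(smiles)
    if mol is not None:
        print(chembl_id, round(Descriptors.MolWt(mol), 1))
```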

  4. Web-dendritic ribbon growth

    NASA Technical Reports Server (NTRS)

    Hilborn, R. B., Jr.; Faust, J. W., Jr.

    1976-01-01

    A web furnace was constructed for pulling dendritic-web samples. The effect of changes in the furnace thermal geometry on the growth of dendritic-web was studied. Several attempts were made to grow primitive dendrites for use as the dendritic seed crystals for web growth and to determine the optimum twin spacing in the dendritic seed crystal for web growth. Mathematical models and computer programs were used to determine the thermal geometries in the susceptor, crucible melt, meniscus, and web. Several geometries were determined for particular furnace geometries and growth conditions. The information obtained was used in conjunction with results from the experimental growth investigations in order to achieve proper conditions for sustained pulling of two dendrite web ribbons. In addition, the facilities for obtaining the following data were constructed: twin spacing, dislocation density, web geometry, resistivity, majority charge carrier type, and minority carrier lifetime.

  5. Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training

    ERIC Educational Resources Information Center

    Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María

    2015-01-01

    This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…

  6. Challenges Facing the Semantic Web and Social Software as Communication Technology Agents in E-Learning Environments

    ERIC Educational Resources Information Center

    Olaniran, Bolanle A.

    2010-01-01

    The semantic web describes the process whereby information content is made available for machine consumption. With increased reliance on information communication technologies, the semantic web promises effective and efficient information acquisition and dissemination of products and services in the global economy, in particular, e-learning.…

  7. jORCA: easily integrating bioinformatics Web Services.

    PubMed

    Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo

    2010-02-15

    Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is made easier to support a wider range of users.

  8. Multipurpose Prepregging Machine

    NASA Technical Reports Server (NTRS)

    Johnston, N. J.; Wilkinson, Steven; Marchello, J. M.; Dixon, D.

    1995-01-01

    Machine designed and built for variety of uses involving coating or impregnating ("prepregging") fibers, tows, yarns, or webs or tapes made of such fibrous materials with thermoplastic or thermosetting resins. Prepreg materials produced used to make matrix/fiber composite materials. Comprises modules operated individually, sequentially, or simultaneously, depending on nature of specific prepreg material and prepregging technique used. Machine incorporates number of safety features.

  9. 40 CFR 63.463 - Batch vapor and in-line cleaning machine standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... described in § 63.463(e)(2)(ii). (2) Each cleaning machine shall have a freeboard ratio of 0.75 or greater... of 1.0, superheated vapor. 2 Freeboard refrigeration device, superheated vapor. 3 Working-mode cover...) or (2) of this section as appropriate. The owner or operator of a continuous web cleaning machine...

  10. Effect of Machining Parameters on Oxidation Behavior of Mild Steel

    NASA Astrophysics Data System (ADS)

    Majumdar, P.; Shekhar, S.; Mondal, K.

    2015-01-01

    This study aims to find a correlation between machining parameters, the resultant microstructure, and the isothermal oxidation behavior of lathe-machined mild steel in the temperature range of 660-710 °C. The tool rake angles "α" used were +20°, 0°, and -20°, and the cutting speeds used were 41, 232, and 541 mm/s. Under isothermal conditions, non-machined and machined mild steel samples follow parabolic oxidation kinetics with activation energies of 181 and ~400 kJ/mol, respectively. Exaggerated grain growth of the machined surface was observed, whereas the center part of the machined sample showed minimal grain growth during oxidation at higher temperatures. Grain growth on the surface was attributed to the reduction, during high-temperature oxidation, of strain energy that had accumulated in the sub-region of the machined surface during machining. It was also observed that the characteristic surface oxide controlled the oxidation behavior of the machined samples. This study clearly demonstrates the effect of equivalent strain, roughness, and grain size due to machining, and of subsequent grain growth, on the oxidation behavior of the mild steel.

  11. Design and Implementation of Distributed Crawler System Based on Scrapy

    NASA Astrophysics Data System (ADS)

    Fan, Yuhao

    2018-01-01

    At present, large-scale search engines only provide users with non-customized search services, and a single-machine web crawler cannot handle the crawling task at scale. In this paper, through study of the original Scrapy framework, we improve it by combining Scrapy and Redis, design and implement a distributed crawler system based on the Scrapy framework, and apply the Bloom Filter algorithm in the dupefilter module to reduce memory consumption. The movie information captured from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
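
    A minimal Scrapy spider of the kind described, with hypothetical selectors and start URL; the paper's system additionally distributes scheduling through Redis and replaces the duplicate filter with a Bloom filter, which this sketch omits:

```python
# Minimal Scrapy spider: crawl listing pages and yield structured items.
# The start URL and CSS selectors are hypothetical stand-ins for the Douban
# movie listing; Redis scheduling and the Bloom-filter dupefilter are omitted.
import scrapy

class MovieSpider(scrapy.Spider):
    name = "movies"
    start_urls = ["https://example.org/movies"]  # stand-in URL

    def parse(self, response):
        for movie in response.css("div.movie"):
            yield {
                "title": movie.css("span.title::text").get(),
                "rating": movie.css("span.rating::text").get(),
            }
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```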

  12. Semantic Annotations and Querying of Web Data Sources

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  13. Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data

    PubMed Central

    2013-01-01

    Background The World Wide Web has become a dissemination platform for scientific and non-scientific publications. However, most of the information remains locked up in discrete documents that are not always interconnected or machine-readable. The connectivity tissue provided by RDF technology has not yet been widely used to support the generation of self-describing, machine-readable documents. Results In this paper, we present our approach to the generation of self-describing machine-readable scholarly documents. We understand the scientific document as an entry point and interface to the Web of Data. We have semantically processed the full-text, open-access subset of PubMed Central. Our RDF model and resulting dataset make extensive use of existing ontologies and semantic enrichment services. We expose our model, services, prototype, and datasets at http://biotea.idiginfo.org/ Conclusions The semantic processing of biomedical literature presented in this paper embeds documents within the Web of Data and facilitates the execution of concept-based queries against the entire digital library. Our approach delivers a flexible and adaptable set of tools for metadata enrichment and semantic processing of biomedical documents. Our model delivers a semantically rich and highly interconnected dataset with self-describing content so that software can make effective use of it. PMID:23734622

  14. Proof of Concept Integration of a Single-Level Service-Oriented Architecture into a Multi-Domain Secure Environment

    DTIC Science & Technology

    2008-03-01

    Machine [29]. OC4J applications support Java Servlets, Web services, and the following J2EE-specific standards: Extensible Markup Language (XML...IMAP Internet Message Access Protocol, IP Internet Protocol, IT Information Technology, J2EE Java Enterprise Environment, JSR 168 Java ...LDAP), World Wide Web Distributed Authoring and Versioning (WebDav), Java Specification Request 168 (JSR 168), and Web Services for Remote

  15. Semantic Web repositories for genomics data using the eXframe platform

    PubMed Central

    2014-01-01

    Background With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases very difficult. Methods To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in Sparql Protocol and RDF Query Language (SPARQL) endpoint. Conclusions Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate it with heterogeneous resources and make it interoperable with the vast Semantic Web of biomedical knowledge. PMID:25093072

  16. SIP: A Web-Based Astronomical Image Processing Program

    NASA Astrophysics Data System (ADS)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.
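
    The kinds of operations SIP exposes can be sketched with astropy and numpy: load two FITS images, subtract one from the other, and gather statistics in a pixel box. The file names are placeholders:

```python
# Basic astronomical image processing steps of the kind SIP offers: image
# combination by subtraction and statistics within a pixel box. File names
# are placeholders.
import numpy as np
from astropy.io import fits

image = fits.getdata("target.fits").astype(float)
dark = fits.getdata("dark.fits").astype(float)

calibrated = image - dark           # image combination by subtraction
box = calibrated[100:120, 100:120]  # user-drawn statistics box
print("mean:", box.mean(), "std:", box.std(), "max:", box.max())

fits.writeto("calibrated.fits", calibrated, overwrite=True)
```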

  17. Reference Architecture for MNE 5 Technical System

    DTIC Science & Technology

    2007-05-30

    Outlines the reference architecture for the MNE 5 technical system, including a core set of applications expected to be available in most experiments (directories, web portal, and collaboration applications); message classification (XML, JMS, content level); metadata filtering and control of who can initiate services; web browsing; collaboration and messaging; border protection; and person- and machine-level audit logging of data objects, web services, and messages.

  18. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  19. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  20. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  1. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  2. GOTree Machine (GOTM): a web-based platform for interpreting sets of interesting genes using Gene Ontology hierarchies

    PubMed Central

    Zhang, Bing; Schmoyer, Denise; Kirov, Stefan; Snoddy, Jay

    2004-01-01

    Background Microarray and other high-throughput technologies are producing large sets of interesting genes that are difficult to analyze directly. Bioinformatics tools are needed to interpret the functional information in the gene sets. Results We have created a web-based tool for data analysis and data visualization for sets of genes called GOTree Machine (GOTM). This tool was originally intended to analyze sets of co-regulated genes identified from microarray analysis but is adaptable for use with gene sets from other high-throughput analyses. GOTree Machine generates a GOTree, a tree-like structure for navigating the Gene Ontology Directed Acyclic Graph for input gene sets. This system provides user-friendly data navigation and visualization. Statistical analysis helps users to identify the most important Gene Ontology categories for the input gene sets and suggests biological areas that warrant further study. GOTree Machine is available online at . Conclusion GOTree Machine has broad application in functional genomic, proteomic and other high-throughput methods that generate large sets of interesting genes; its primary purpose is to help users sort for interesting patterns in gene sets. PMID:14975175
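
    The statistical analysis GOTM performs is a category-enrichment test over GO terms. A minimal sketch of one standard formulation, the hypergeometric test, in Python with scipy; the counts are invented for illustration, and this is not GOTM's own code:

        from scipy.stats import hypergeom

        # Hypothetical counts: M genes on the array, n of them annotated to a
        # GO category, N genes in the input set, k of which fall in the category.
        M, n, N, k = 10000, 200, 150, 12

        # P(X >= k): probability of drawing at least k category genes by chance.
        p_value = hypergeom.sf(k - 1, M, n, N)
        print(f"enrichment p-value: {p_value:.3e}")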

  3. Advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Hopkins, R. H.

    1985-01-01

    A program to develop the technology of the silicon dendritic web ribbon growth process is examined. The effort is being concentrated on the area rate and quality requirements necessary to meet the JPL/DOE goals for terrestrial PV applications. Closed loop web growth system development and stress reduction for high area rate growth is considered.

  4. Untangling the Tangled Webs We Weave: A Team Approach to Cyberspace.

    ERIC Educational Resources Information Center

    Broidy, Ellen; And Others

    Working in a cooperative team environment across libraries and job classifications, librarians and support staff at the University of California at Irvine (UCI) have mounted several successful web projects, including two versions of the Libraries' home page, a virtual reference collection, and Science Library "ANTswer Machine." UCI's…

  5. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing.

    PubMed

    Park, Ki-Woong; Lee, Younho; Baek, Sung Hoon

    2017-08-08

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. These objects include mobile platforms such as smartphones, and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on these objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. In the evaluation, we compare our system against alternative approaches that implement this functionality using only the cellular or Wi-Fi interface (but not both), and show that in typical usage T-Wing consumes up to 15× less energy and is faster by an order of magnitude.

  6. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  7. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  8. Large-area sheet task: Advanced dendritic-web-growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Schruben, J.

    1983-01-01

    Thermally generated stresses in the growing web crystal were reduced. These stresses, which if too high cause the ribbon to degenerate, were reduced by a factor of three, resulting in the demonstrated growth of high-quality web crystals to widths of 5.4 cm. This progress was brought about chiefly by the application of thermal models to the development of low-stress growth configurations. A new temperature model was developed which can analyze the thermal effects of much more complex lid and top shield configurations than was possible with the old lumped shield model. Growth experiments which supplied input data such as actual shield temperature and melt levels were used to verify the modeling results. Desirable modifications in the melt level-sensing circuitry were made in the new experimental web growth furnace, and this furnace has been used to carry out growth experiments under steady-state conditions. New growth configurations were tested in long growth runs at Westinghouse AESD which produced wider, lower stress and higher quality web crystals than designs previously used.

  9. Searching to Translate and Translating to Search: When Information Retrieval Meets Machine Translation

    ERIC Educational Resources Information Center

    Ture, Ferhan

    2013-01-01

    With the adoption of web services in daily life, people have access to tremendous amounts of information, beyond any human's reading and comprehension capabilities. As a result, search technologies have become a fundamental tool for accessing information. Furthermore, the web contains information in multiple languages, introducing another barrier…

  10. Exploration of Web Users' Search Interests through Automatic Subject Categorization of Query Terms.

    ERIC Educational Resources Information Center

    Pu, Hsiao-tieh; Yang, Chyan; Chuang, Shui-Lung

    2001-01-01

    Proposes a mechanism that carefully integrates human and machine efforts to explore Web users' search interests. The approach consists of a four-step process: extraction of core terms; construction of subject taxonomy; automatic subject categorization of query terms; and observation of users' search interests. Research findings are proved valuable…

  11. Finding Those Missing Links

    ERIC Educational Resources Information Center

    Gunn, Holly

    2004-01-01

    In this article, the author stresses not to give up on a site when a URL returns an error message. Many web sites can be found by using strategies such as URL trimming, searching cached sites, site searching and searching the WayBack Machine. Methods and tips for finding web sites are contained within this article.

  12. The Semantic Web: From Representation to Realization

    NASA Astrophysics Data System (ADS)

    Thórisson, Kristinn R.; Spivack, Nova; Wissner, James M.

    A semantically-linked web of electronic information - the Semantic Web - promises numerous benefits including increased precision in automated information sorting, searching, organizing and summarizing. Realizing this requires significantly more reliable meta-information than is readily available today. It also requires a better way to represent information that supports unified management of diverse data and diverse manipulation methods: from basic keywords to various types of artificial intelligence, to the highest level of intelligent manipulation - the human mind. How this is best done is far from obvious. Relying solely on hand-crafted annotation and ontologies, or solely on artificial intelligence techniques, seems less likely to succeed than a combination of the two. In this paper we describe an integrated, complete solution to these challenges that has already been implemented and tested with hundreds of thousands of users. It is based on an ontological representational level we call SemCards that combines ontological rigour with flexible user interface constructs. SemCards are machine- and human-readable digital entities that allow non-experts to create and use semantic content, while empowering machines to better assist and participate in the process. SemCards enable users to easily create semantically-grounded data that in turn acts as examples for automation processes, creating a positive iterative feedback loop of metadata creation and refinement between user and machine. They provide a holistic solution to the Semantic Web, supporting powerful management of the full lifecycle of data, including its creation, retrieval, classification, sorting and sharing. We have implemented the SemCard technology on the Semantic Web site Twine.com, showing that the technology is indeed versatile and scalable. Here we present the key ideas behind SemCards and describe the initial implementation of the technology.

  13. CernVM WebAPI - Controlling Virtual Machines from the Web

    NASA Astrophysics Data System (ADS)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there is a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance on the user's computer, while at the same time relieving the user of the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we give an overview of this new technology, discuss its security features and examine some test cases where it is already in use.

  14. [Development of quality assurance/quality control web system in radiotherapy].

    PubMed

    Okamoto, Hiroyuki; Mochizuki, Toshihiko; Yokoyama, Kazutoshi; Wakita, Akihisa; Nakamura, Satoshi; Ueki, Heihachi; Shiozawa, Keiko; Sasaki, Koji; Fuse, Masashi; Abe, Yoshihisa; Itami, Jun

    2013-12-01

    Our purpose is to develop a QA/QC (quality assurance/quality control) web system using HTML (HyperText Markup Language) and a server-side scripting language, PHP (Hypertext Preprocessor), which can be useful as a tool to share information about QA/QC in radiotherapy. The system proposed in this study can be easily built in one's own institute, because HTML is easy to handle. There are two desired functions in a QA/QC web system: (i) to review the results of QA/QC for a radiotherapy machine, as well as the manuals and reports necessary for routinely performing radiotherapy, through this system; by disclosing the results, transparency can be maintained; (ii) to present the institute's own QA/QC protocol using pictures and movies for simplicity's sake, which can also serve as an educational tool for junior radiation technologists and medical physicists. By using this system, not only administrators but also all staff involved in radiotherapy can obtain information about the conditions and accuracy of treatment machines through the QA/QC web system.

  15. Automatic Control of Silicon Melt Level

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Stickel, W. B.

    1982-01-01

    A new circuit, when combined with a melt-replenishment system and melt-level sensor, offers continuous closed-loop automatic control of melt level during web growth. Installed on a silicon-web furnace, the circuit controls melt level to within 0.1 mm for as long as 8 hours. The circuit affords a greater area growth rate and higher web quality; automatic melt-level control also allows semiautomatic growth of web over long periods, which can greatly reduce costs.
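
    The closed-loop behavior described here can be sketched as a simple proportional-integral controller that turns the melt-level error into a pellet-feed command; the gains, units, and class name below are invented for illustration and are not from the record:

        class MeltLevelController:
            """Minimal PI-controller sketch: the measured melt level (e.g., from a
            reflected-laser sensor) is compared with a setpoint, and the output is
            a feed-rate command for the variable-speed pellet feeder."""

            def __init__(self, setpoint_mm, kp=5.0, ki=0.5, dt=1.0):
                self.setpoint = setpoint_mm
                self.kp, self.ki, self.dt = kp, ki, dt
                self.integral = 0.0

            def update(self, measured_mm):
                error = self.setpoint - measured_mm
                self.integral += error * self.dt
                # Feed-rate command in arbitrary units; positive means feed faster.
                return self.kp * error + self.ki * self.integral

        controller = MeltLevelController(setpoint_mm=12.0)
        print(controller.update(measured_mm=11.9))  # small positive correction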

  16. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    PubMed

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  17. WebBio, a web-based management and analysis system for patient data of biological products in hospital.

    PubMed

    Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin

    2011-08-01

    We selected HTML, PHP and JavaScript as the programming languages to build "WebBio", a web-based system for patient data of biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and runs on an Apache server on a Linux machine. WebBio provides functions for data management, searching and data analysis for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). There are two particular features in WebBio: (1) pharmacists can rapidly find out which patients used contaminated products, for medication safety, and (2) statistics charts for a specific product can be automatically generated, reducing pharmacists' workload. WebBio has successfully turned traditional paper work into web-based data management.

  18. Large area sheet task: Advanced Dendritic Web Growth Development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1981-01-01

    A melt level control system was implemented to provide stepless silicon feed rates from zero to rates exactly matching the silicon consumed during web growth. Bench tests of the unit were successfully completed and the system mounted in a web furnace for operational verification. Tests of long term temperature drift correction techniques were made; web width monitoring seems most appropriate for feedback purposes. A system to program the initiation of the web growth cycle was successfully tested. A low cost temperature controller was tested which functions as well as units four times as expensive.

  19. Machine Translation-Assisted Language Learning: Writing for Beginners

    ERIC Educational Resources Information Center

    Garcia, Ignacio; Pena, Maria Isabel

    2011-01-01

    The few studies that deal with machine translation (MT) as a language learning tool focus on its use by advanced learners, never by beginners. Yet, freely available MT engines (i.e. Google Translate) and MT-related web initiatives (i.e. Gabble-on.com) position themselves to cater precisely to the needs of learners with a limited command of a…

  20. ICTNET at Web Track 2012 Ad-hoc Task

    DTIC Science & Technology

    2012-11-01

    Describes ICTNET's participation in the TREC 2012 Web Track ad-hoc task. A retrieval model from prior work is used as the baseline this year, and learning to rank (LTR), which introduces machine learning to the retrieval ranking problem, is applied on top of it (cf. Freund et al., "An efficient boosting algorithm for combining preferences", The Journal of Machine Learning Research, 2003).

  1. Semantic similarity measures in the biomedical domain by leveraging a web search engine.

    PubMed

    Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching

    2013-07-01

    Various approaches to web-based semantic similarity measurement have been proposed, yet measuring the semantic similarity between two terms remains a challenging task. Traditional ontology-based methodologies are limited in that both concepts must reside in the same ontology tree(s); in practice, this assumption does not always hold. Corpus-based methodologies can overcome this limitation if the corpus is sufficiently large, and the web is an enormous and continuously growing corpus. A method of estimating semantic similarity is therefore proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. Features are extracted from the co-occurrence patterns of two given terms P and Q, by querying P, Q, and P AND Q, together with the web search hit counts of defined lexico-syntactic patterns. The similarity scores of the different patterns are evaluated, with support vector machines adapted for classification, to leverage the robustness of the semantic similarity measures. Experimental results are validated against two datasets (dataset 1 provided by A. Hliaoutakis; dataset 2 provided by T. Pedersen) and discussed. On dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. On dataset 2, the proposed method obtains the best correlation coefficient against physician scores (SNOMED-CT: 0.705; MeSH: 0.723) compared with other methods, although the correlation coefficients against coder scores (SNOMED-CT: 0.496; MeSH: 0.539) show the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to physicians' ratings. The study also provides a cornerstone investigation for extracting relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.
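
    One of the simpler co-occurrence measures that can be computed from such page counts is a WebJaccard-style score. A minimal Python sketch follows; the hit counts are invented, and the paper itself combines several lexico-syntactic patterns with an SVM rather than relying on a single measure:

        def web_jaccard(hits_p, hits_q, hits_pq):
            """WebJaccard similarity from search-engine page counts:
            sim = H(P AND Q) / (H(P) + H(Q) - H(P AND Q))."""
            denominator = hits_p + hits_q - hits_pq
            return hits_pq / denominator if denominator > 0 else 0.0

        # Hypothetical page counts for two biomedical terms and their conjunction.
        print(web_jaccard(hits_p=120000, hits_q=95000, hits_pq=8000))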

  2. Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach.

    PubMed

    Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav

    2016-08-01

    The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers in sharing CDS functionality are still present as a consequence of lack of expressiveness of services' interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web services technologies. To facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine interpretable ontologies that powers their interoperability and reuse. These ontologies provided unambiguous descriptions of CDS services properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to count on a lingua franca to explore them. Discovery and analysis of CDS services based on machine interpretable models was performed reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases to provide machine interpretable semantics to the CDS service description alleviating the challenges on interoperability and reuse. Linked Services allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    PubMed

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction-connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web-going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances, that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
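
    A minimal sketch of a client for such a RESTful JSON web API, in Python with the requests library; the base URL, route, query parameters, and response fields are hypothetical, since the record does not specify the platform's actual endpoints:

        import requests  # assumes the requests library is installed

        # Hypothetical route returning a JSON list of calculation records.
        resp = requests.get("https://example.org/api/v1/calculations",
                            params={"molecule": "caffeine", "limit": 5},
                            timeout=10)
        resp.raise_for_status()
        for calc in resp.json():  # assumed list-of-objects response shape
            print(calc.get("id"), calc.get("name"))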

  4. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE PAGES

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances, that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.

  5. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances, that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.

  6. Virtual Reality Simulations and Animations in a Web-Based Interactive Manufacturing Engineering Module

    ERIC Educational Resources Information Center

    Ong, S. K.; Mannan, M. A.

    2004-01-01

    This paper presents a web-based interactive teaching package that provides a comprehensive and conducive yet dynamic and interactive environment for a module on automated machine tools in the Manufacturing Division at the National University of Singapore. The use of Internet technologies in this teaching tool makes it possible to conjure…

  7. Supporting Open Access to European Academic Courses: The ASK-CDM-ECTS Tool

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.; Zervas, Panagiotis

    2013-01-01

    Purpose: This paper aims to present and evaluate a web-based tool, namely ASK-CDM-ECTS, which facilitates authoring and publishing on the web descriptions of (open) academic courses in machine-readable format using an application profile of the Course Description Metadata (CDM) specification, namely CDM-ECTS. Design/methodology/approach: The paper…

  8. Web-Based Distributed Simulation of Aeronautical Propulsion System

    NASA Technical Reports Server (NTRS)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  9. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing

    PubMed Central

    Baek, Sung Hoon

    2017-01-01

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. These objects include mobile platforms such as smartphones, and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on these objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. In the evaluation, we compare our system against alternative approaches that implement this functionality using only the cellular or Wi-Fi interface (but not both), and show that in typical usage T-Wing consumes up to 15× less energy and is faster by an order of magnitude. PMID:28786942

  10. PhD7Faster: predicting clones propagating faster from the Ph.D.-7 phage display peptide library.

    PubMed

    Ru, Beibei; 't Hoen, Peter A C; Nie, Fulei; Lin, Hao; Guo, Feng-Biao; Huang, Jian

    2014-02-01

    Phage display can rapidly discover peptides binding to any given target; thus, it has been widely used in basic and applied research. Each round of panning consists of two basic processes: selection and amplification. However, recent studies have shown that the amplification step decreases the diversity of phage display libraries because phage clones differ in propagation capacity. This may cause phages with a growth advantage, rather than specific affinity, to appear in the final experimental results. The peptides displayed by such phages are termed propagation-related target-unrelated peptides (PrTUPs). They would mislead further analysis and research if not removed. In this paper, we describe PhD7Faster, an ensemble predictor based on support vector machines (SVM) for predicting clones with a growth advantage from the Ph.D.-7 phage display peptide library. By using reduced dipeptide composition (ReDPC) as features, an accuracy (Acc) of 79.67% and a Matthews correlation coefficient (MCC) of 0.595 were achieved in 5-fold cross-validation. In addition, the SVM-based model was demonstrated to perform better than several representative machine learning algorithms. We anticipate that PhD7Faster can assist biologists to exclude potential PrTUPs and accelerate the finding of specific binders from the popular Ph.D.-7 library. The web server of PhD7Faster can be freely accessed at http://immunet.cn/sarotup/cgi-bin/PhD7Faster.pl.
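
    A rough Python/scikit-learn sketch of the general approach: encode each peptide as a dipeptide-composition vector and train an SVM classifier. The toy peptides and labels are invented, and PhD7Faster itself uses reduced dipeptide composition and an ensemble of SVMs rather than this plain version:

        from itertools import product
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        AMINO = "ACDEFGHIKLMNPQRSTVWY"
        DIPEPTIDES = ["".join(p) for p in product(AMINO, repeat=2)]  # 400 features

        def dipeptide_composition(seq):
            """Fraction of each of the 400 dipeptides in a peptide sequence."""
            counts = np.zeros(len(DIPEPTIDES))
            for i in range(len(seq) - 1):
                counts[DIPEPTIDES.index(seq[i:i + 2])] += 1
            return counts / max(len(seq) - 1, 1)

        # Toy 7-mer peptides labeled fast-propagating (1) or normal (0).
        peptides = ["ACDEFGH", "KLMNPQR", "STVWYAC", "GHIKLMN"]
        labels = [1, 0, 1, 0]
        X = np.array([dipeptide_composition(p) for p in peptides])
        print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, labels, cv=2).mean())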

  11. Evaluating the effects of trophic complexity on a keystone predator by disassembling a partial intraguild predation food web.

    PubMed

    Davenport, Jon M; Chalcraft, David R

    2012-01-01

    1. Many taxa can be found in food webs that differ in trophic complexity, but it is unclear how trophic complexity affects the performance of particular taxa. In pond food webs, larvae of the salamander Ambystoma opacum occupy the intermediate predator trophic position in a partial intraguild predation (IGP) food web and can function as keystone predators. Larval A. opacum are also found in simpler food webs lacking either top predators or shared prey. 2. We conducted an experiment where a partial IGP food web was simplified, and we measured the growth and survival of larval A. opacum in each set of food webs. Partial IGP food webs that had either a low abundance or high abundance of total prey were also simplified by independently removing top predators and/or shared prey. 3. Removing top predators always increased A. opacum survival, but removal of shared prey had no effect on A. opacum survival, regardless of total prey abundance. 4. Surprisingly, food web simplification had no effect on the growth of A. opacum when present in food webs with a low abundance of prey but had important effects on A. opacum growth in food webs with a high abundance of prey. Simplifying a partial IGP food web with a high abundance of prey reduced A. opacum growth when either top predators or shared prey were removed from the food web and the loss of top predators and shared prey influenced A. opacum growth in a non-additive fashion. 5. The non-additive response in A. opacum growth appears to be the result of supplemental prey availability augmenting the beneficial effects of top predators. Top predators had a beneficial effect on A. opacum populations by reducing the abundance of A. opacum present and thereby reducing the intensity of intraspecific competition. 6. Our study indicates that the effects of food web simplification on the performance of A. opacum are complex and depend on both how a partial IGP food web is simplified and how abundant prey are in the food web. These findings are important because they demonstrate how trophic complexity can create variation in the performance of intermediate predators that play important roles in temporary pond food webs. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.

  12. WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web

    NASA Astrophysics Data System (ADS)

    Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio

    1999-03-01

    We present here the key design concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR." The system is based on the World Wide Web, and the software components of WEBSLIDE mainly consist of a WWW server, browsers and small CGI scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that it is built with "off the shelf" products: no specially designed program is required, since any Netscape browser, one of the most popular browsers on the market, is sufficient. Another possibility is to use the system to implement "guided automatic tours" through several pages, or internal news bulletins on an Intranet: the company web server can broadcast relevant information to all employees' browsers.

  13. New Web Server - the Java Version of Tempest - Produced

    NASA Technical Reports Server (NTRS)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  14. A system framework of inter-enterprise machining quality control based on fractal theory

    NASA Astrophysics Data System (ADS)

    Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng

    2014-03-01

    In order to meet the quality control requirement of dynamic and complicated product machining processes among enterprises, a system framework of inter-enterprise machining quality control based on fractal was proposed. In this system framework, the fractal-specific characteristic of inter-enterprise machining quality control function was analysed, and the model of inter-enterprise machining quality control was constructed by the nature of fractal structures. Furthermore, the goal-driven strategy of inter-enterprise quality control and the dynamic organisation strategy of inter-enterprise quality improvement were constructed by the characteristic analysis on this model. In addition, the architecture of inter-enterprise machining quality control based on fractal was established by means of Web service. Finally, a case study for application was presented. The result showed that the proposed method was available, and could provide guidance for quality control and support for product reliability in inter-enterprise machining processes.

  15. GENIUS: web server to predict local gene networks and key genes for biological functions.

    PubMed

    Puelma, Tomas; Araus, Viviana; Canales, Javier; Vidal, Elena A; Cabello, Juan M; Soto, Alvaro; Gutiérrez, Rodrigo A

    2017-03-01

    GENIUS is a user-friendly web server that uses a novel machine learning algorithm to infer functional gene networks focused on specific genes and experimental conditions that are relevant to biological functions of interest. These functions may have different levels of complexity, from specific biological processes to complex traits that involve several interacting processes. GENIUS also enriches the network with new genes related to the biological function of interest, with accuracies comparable to highly discriminative Support Vector Machine methods. GENIUS currently supports eight model organisms and is freely available for public use at http://networks.bio.puc.cl/genius . genius.psbl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  16. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.

    1984-01-01

    The thermal models used for analyzing dendritic web growth and calculating the thermal stress were reexamined to establish the validity limits imposed by the assumptions of the models. Also, the effects of thermal conduction through the gas phase were evaluated and found to be small. New growth designs, both static and dynamic, were generated using the modeling results. Residual stress effects in dendritic web were examined. In the laboratory, new techniques for the control of temperature distributions in three dimensions were developed. A new maximum undeformed web width of 5.8 cm was achieved. A 58% increase in growth velocity at 150 micrometers web thickness was achieved with dynamic hardware. The area throughput goals for transient growth of 30 and 35 sq cm/min were exceeded.

  17. Blast protection of infrastructure using advanced composites

    NASA Astrophysics Data System (ADS)

    Brodsky, Evan

    This research was a systematic investigation detailing the energy absorption mechanisms of an E-glass web core composite sandwich panel subjected to an impulse loading applied orthogonal to the facesheet. Key roles of the fiberglass and polyisocyanurate foam material were identified, characterized, and analyzed. A quasi-static test fixture was used to compressively load a unit cell web core specimen machined from the sandwich panel. The web and foam both exhibited non-linear stress-strain responses during axial compressive loading. Through several analyses, the composite web situated in the web core had failed in axial compression. Optimization studies were performed on the sandwich panel unit cell in order to maximize the energy absorption capabilities of the web core. Ultimately, a sandwich panel was designed to optimize the energy dissipation subjected to through-the-thickness compressive loading.

  18. Controlling Thermal Gradients During Silicon Web Growth

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Mchugh, J. P.; Skutch, M. E.; Piotrowski, P. A.

    1983-01-01

    Strategically placed slot helps to control critical thermal gradients in crucible for silicon web growth. Slot thermally isolates feed region of crucible from growth region; region where pellets are added stays hot. Heat absorbed by pellets during melting otherwise causes thermal imbalance that upsets growth conditions.

  19. Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Beaumont, B.; Duerr, R. E.; Hua, H.

    2009-12-01

    The past decade has seen a burgeoning of remote sensing and Earth science data providers, as evidenced in the growth of the Earth Science Information Partner (ESIP) federation. At the same time, the need to combine diverse data sets to enable understanding of the Earth as a system has also grown. While the expansion of data providers is in general a boon to such studies, the diversity presents a challenge to finding useful data for a given study. Locating all the data files with aerosol information for a particular volcanic eruption, for example, may involve learning and using several different search tools to execute the requisite space-time queries. To address this issue, the ESIP federation is developing a federated space-time query framework, based on the OpenSearch convention (www.opensearch.org), with Geo and Time extensions. In this framework, data providers publish OpenSearch Description Documents that describe in a machine-readable form how to execute queries against the provider. The novelty of OpenSearch is that the space-time query interface becomes both machine callable and easy enough to integrate into the web browser's search box. This flexibility, together with a simple REST (HTTP-get) interface, should allow a variety of data providers to participate in the federated search framework, from large institutional data centers to individual scientists. The simple interface enables trivial querying of multiple data sources and participation in recursive-like federated searches--all using the same common OpenSearch interface. This simplicity also makes the construction of clients easy, as does existing OpenSearch client libraries in a variety of languages. Moreover, a number of clients and aggregation services already exist and OpenSearch is already supported by a number of web browsers such as Firefox and Internet Explorer.
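
    An OpenSearch client works by filling the provider's machine-readable URL template with concrete values. A minimal Python sketch with a hypothetical template that uses the Geo and Time extension parameter names:

        # Illustrative template; real ones come from a provider's OpenSearch
        # Description Document.
        TEMPLATE = ("https://example.org/opensearch?"
                    "q={searchTerms}&bbox={geo:box}&start={time:start}&end={time:end}")

        def fill_template(template, values):
            """Substitute OpenSearch template parameters like {geo:box}."""
            url = template
            for key, value in values.items():
                url = url.replace("{%s}" % key, str(value))
            return url

        print(fill_template(TEMPLATE, {
            "searchTerms": "aerosol",
            "geo:box": "-180,-90,180,90",   # west,south,east,north
            "time:start": "2009-04-01",
            "time:end": "2009-04-30",
        }))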

  20. Thirteen Year Loblolly Pine Growth Following Machine Application of Cut-Stump Treament Herbicides For Hardwood Stump-Sprout Control

    Treesearch

    Clyde G. Vidrine; John C. Adams

    2002-01-01

    Thirteen year growth results of 1-0 out-planted loblolly pine seedlings on nonintensively prepared up-land mixed pine-hardwood sites receiving machine applied cut-stump treatment (CST) herbicides onto hardwood stumps at the time of harvesting is presented. Plantation pine growth shows significantly higher growth for pine in the CST treated plots compared to non-CST...

  1. Pellet Feed for Dendritic-Web Growth

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Skutch, M. E.; Mchugh, J. P.

    1983-01-01

    Melt replenishment system sustains continuous growth of silicon dendritic web for several days. Substantially increases size of batch, limited mainly by level of impurities and life of crucible. Silicon pellets automatically added to crucible sustain crystal growth for days.

  2. A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; West, P.; Rozell, E.; Zheng, J.

    2010-12-01

    There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects, data, and other content. However, each system varies in its capabilities and often content is stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL (SPARQL Protocol and RDF Query Language) queries into HTML fragments, allowing it to be embedded in any existing website. The framework utilizes a two-tier Extensible Stylesheet Language Transformation (XSLT) that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely, the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities, while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of queries and the stylesheets used to transform them, followed by a brief walkthrough that follows the data from storage to human- and machine-accessible web page. We conclude with a discussion on content caching and steps toward performing queries across multiple domains.
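
    A minimal Python/lxml sketch of the core transformation step: an XSLT stylesheet turns an XML query result into HTML carrying an RDFa typeof attribute. The stylesheet and data are toy examples, not the framework's actual two-tier stylesheets:

        from lxml import etree  # assumes lxml is installed

        # Toy stylesheet: render each <person> result as an HTML list item
        # annotated with an RDFa typeof attribute.
        STYLESHEET = etree.XML(b"""
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/results">
            <ul><xsl:apply-templates select="person"/></ul>
          </xsl:template>
          <xsl:template match="person">
            <li typeof="foaf:Person"><xsl:value-of select="name"/></li>
          </xsl:template>
        </xsl:stylesheet>""")

        DATA = etree.XML(b"<results><person><name>Ada</name></person></results>")

        transform = etree.XSLT(STYLESHEET)
        print(etree.tostring(transform(DATA), pretty_print=True).decode())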

  3. The Environmental Assessment and Management (TEAM) Guide: Iowa Supplement

    DTIC Science & Technology

    2010-02-01

    Addresses air emission sources including, but not limited to, paper shredding, copying, photographic activities, and blueprinting machines (not including incinerators), as well as laundry dryers; exemptions from these standards may apply (Part 63, Subpart IIII). Also covers emission standards for hazardous air pollutants for paper and other web coating, which apply to facilities engaged in the coating of paper, plastic film, metallic foil, and other web surfaces.

  4. Model Driven Development of Web Services and Dynamic Web Services Composition

    DTIC Science & Technology

    2005-01-01

    Discusses Feature-Oriented Domain Analysis (FODA, Section 2.4) and the need for automating feature-oriented approaches, together with Aspect-Oriented Generative Domain Modeling (AOGDM, Section 2.5); these represent two approaches to domain modeling for model-driven development of web services and dynamic web services composition.

  5. Silicon Web Process Development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Hopkins, R. H.; Mchugh, J. P.; Hill, F. E.; Heimlich, M. E.; Driggers, J. M.

    1978-01-01

    Progress in the development of techniques to grow silicon web at a 25 sq cm/min output rate is reported. The feasibility of web growth with simultaneous melt replenishment is discussed. Other factors covered include: (1) tests of aftertrimmers to improve web width; (2) evaluation of growth lid designs to raise speed and output rate; (3) tests of melt replenishment hardware; and (4) investigation of directed gas flow systems to control unwanted oxide deposition in the system and to improve convective cooling of the web. Compatibility with sufficient solar cell performance is emphasized.

  6. Development of processes for the production of low cost silicon dendritic web for solar cells

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Skutch, M. E.; Driggers, J. M.; Hill, F. E.

    1980-01-01

    High area output rates and continuous, automated growth are two key technical requirements for the growth of low-cost silicon ribbons for solar cells. By means of computer-aided furnace design, silicon dendritic web output rates as high as 27 sq cm/min have been achieved, a value in excess of that projected to meet a $0.50 per peak watt solar array manufacturing cost. The feasibility of simultaneous web growth while the melt is replenished with pelletized silicon has also been demonstrated. This step is an important precursor to the development of an automated growth system. Solar cells made on the replenished material were just as efficient as devices fabricated on typical webs grown without replenishment. Moreover, web cells made on a less-refined, pelletized polycrystalline silicon synthesized by the Battelle process yielded efficiencies up to 13% (AM1).

  7. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  8. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.

    1983-01-01

    The use of modeling in the development of low-stress configurations for wide web growth is presented. Parametric sensitivity studies identified design features which can be used for dynamic trimming of the furnace element. Temperature measurements of experimental growth behavior led to modifications in the growth system to improve lateral temperature distributions.

  9. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has imposed itself recently as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for applications servers residing on virtual machines.

  10. Neonatal blood cultures: effect of delayed entry into the blood culture machine and bacterial concentration on the time to positive growth in a simulated model.

    PubMed

    Jardine, Luke Anthony; Sturgess, Barbara Ruth; Inglis, Garry Donald Trevor; Davies, Mark William

    2009-04-01

    To determine whether: the time from blood culture inoculation to positive growth (total time to positive) and the time from blood culture machine entry to positive growth (machine time to positive) are altered by delayed entry into the automated blood culture machine, and whether the total time to positive differs by the concentration of organisms inoculated into blood culture bottles. Staphylococcus epidermidis, Escherichia coli and group B beta-haemolytic streptococci were chosen as clinically significant representative organisms. Two concentrations (≥10 colony-forming units per millilitre and <1 colony-forming unit per millilitre) were inoculated into PEDS BacT/Alert blood culture bottles and randomly allocated to one of three delayed automated blood culture machine entry times (30 min/8.5 h/15.5 h). For all organisms at all concentrations, except Staphylococcus epidermidis, the machine time to positive was significantly decreased by delayed entry. For all organisms at all concentrations, the mean total time to positive significantly increased with increasing delayed entry into the blood culture machine. Higher concentrations of group B beta-haemolytic streptococci and Escherichia coli grew significantly faster than lower concentrations. Bacterial growth in inoculated bottles stored at room temperature continues, although at a slower rate than in those blood culture bottles immediately entered into the machine. If a blood culture specimen has been stored at room temperature for more than 15.5 h, the currently allowed safety margin of 36 h (before declaring a result negative) may be insufficient.

  11. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control.

    PubMed

    Adeleke, Jude Adekunle; Moodley, Deshendran; Rens, Gavin; Adewumi, Aderemi Oluyinka

    2017-04-09

    Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick responses to situations in order to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half-hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web.
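
    As a rough illustration of the sliding-window approach described above, the following Python sketch trains a scikit-learn Multilayer Perceptron to flag short-term PM2.5 exceedances from a window of recent readings. The series is synthetic, and the window size, horizon, and threshold are illustrative assumptions rather than the paper's settings.

```python
# Sliding-window MLP sketch: classify whether PM2.5 will exceed a threshold
# over a short horizon, using a window of recent readings as features.
# The series is synthetic; window, horizon, and threshold are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
pm25 = 60 + 30 * np.sin(np.arange(2000) / 50) + rng.normal(0, 5, 2000)

WINDOW, HORIZON, THRESHOLD = 6, 6, 75  # e.g., 5-min samples, ~30-min horizon
X = np.array([pm25[i:i + WINDOW] for i in range(len(pm25) - WINDOW - HORIZON)])
y = (pm25[WINDOW + HORIZON:] > THRESHOLD).astype(int)  # future pollution flag

split = int(0.8 * len(X))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X[:split], y[:split])
print("held-out accuracy:", clf.score(X[split:], y[split:]))
```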

  12. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control

    PubMed Central

    Adeleke, Jude Adekunle; Moodley, Deshendran; Rens, Gavin; Adewumi, Aderemi Oluyinka

    2017-01-01

    Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick responses to situations in order to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half-hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web. PMID:28397776

  13. Populating the Semantic Web by Macro-reading Internet Text

    NASA Astrophysics Data System (ADS)

    Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard

    A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.

  14. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    PubMed

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): the extreme learning machine (ELM), the radial basis function network (RBFN) and the genetic-algorithm-optimized back propagation network (GABP). The MLA based method is validated using testing data for different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (the K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and that the ELM based algorithm shows overall the best agreement with the experimental data out of the three MLAs, owing to its global optimization and extrapolation ability.
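
    Since the record names the extreme learning machine as the best performer, a minimal generic ELM regressor is sketched below: a random hidden layer followed by a least-squares solve for the output weights. The training data follow a synthetic Paris-type law purely for illustration; this is not the authors' implementation or their datasets.

```python
# Minimal extreme learning machine (ELM) regression: a random hidden layer
# followed by a least-squares solve for the output weights. Trained on a
# synthetic Paris-type law, da/dN = C * dK^m / (1 - R), for illustration only.
import numpy as np

rng = np.random.default_rng(1)
dK = rng.uniform(5, 50, 300)                  # stress intensity factor range
R = rng.uniform(0.1, 0.7, 300)                # stress ratio
dadN = 1e-11 * dK**3 / (1 - R)                # synthetic crack growth rate

X = np.column_stack([np.log(dK), R])
y = np.log(dadN)

HIDDEN = 40
W = rng.normal(size=(X.shape[1], HIDDEN))     # random, untrained input weights
b = rng.normal(size=HIDDEN)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights

pred = np.tanh(X @ W + b) @ beta
print("RMSE in log(da/dN):", np.sqrt(np.mean((pred - y) ** 2)))
```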

  15. The backend design of an environmental monitoring system upon real-time prediction of groundwater level fluctuation under the hillslope.

    PubMed

    Lin, Hsueh-Chun; Hong, Yao-Ming; Kan, Yao-Chiang

    2012-01-01

    The groundwater level represents a critical factor in evaluating hillside landslides. A monitoring system built on a real-time prediction platform with online analytical functions is important for forecasting the groundwater level from instantaneously monitored data when heavy precipitation raises the groundwater level under the hillslope and causes instability. This study designs the backend of an environmental monitoring system with efficient machine-learning algorithms and a knowledge bank for predicting groundwater level fluctuation. A Web-based platform built on a model-view-controller (MVC) architecture is established with Web services and an engineering data warehouse to support online analytical processing and to feed risk assessment parameters back for real-time prediction. The proposed system incorporates models of hydrological computation, machine learning, Web services, and online prediction to satisfy a variety of risk assessment requirements and approaches to hazard prevention. The rainfall data monitored in the potential landslide areas at Lu-Shan, Nantou and Li-Shan, Taichung, in Taiwan, are applied to examine the system design.
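
    The record describes wrapping trained models behind Web services for online prediction. The Flask sketch below illustrates that pattern only; the endpoint name, the toy linear model, and the three-reading rainfall window are all hypothetical stand-ins for the paper's platform and algorithms.

```python
# Conceptual sketch of the online-prediction web service layer: a trained
# model exposed behind an HTTP endpoint. Flask, the /predict route, and
# the toy linear model over three rainfall readings are all hypothetical.
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
coef = np.array([0.8, 0.15, 0.05])  # toy weights over recent rainfall readings

@app.route("/predict", methods=["POST"])
def predict():
    rain = np.array(request.get_json()["rainfall_mm"], dtype=float)
    level = float(coef @ rain[-3:])  # hypothetical groundwater-level estimate
    return jsonify({"predicted_level_m": level})

if __name__ == "__main__":
    app.run(port=5000)
```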

  16. Silicon web process development. [for low cost solar cells

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Hopkins, R. H.; Seidensticker, R. G.; Mchugh, J. P.; Hill, F. E.; Heimlich, M. E.; Driggers, J. M.

    1979-01-01

    Silicon dendritic web, a single crystal ribbon shaped during growth by crystallographic forces and surface tension (rather than dies), is a highly promising base material for efficient low cost solar cells. The form of the product, smooth flexible strips 100 to 200 microns thick, conserves expensive silicon and facilitates automation of crystal growth and the subsequent manufacture of solar cells. These characteristics, coupled with the highest demonstrated ribbon solar cell efficiency (15.5%), make silicon web a leading candidate to achieve, or better, the 1986 Low Cost Solar Array (LSA) Project cost objective of 50 cents per peak watt of photovoltaic output power. The main objectives of the Web Program, technology development to significantly increase web output rate and to show the feasibility of simultaneous melt replenishment and growth, have largely been accomplished. Recently, web output rates of 23.6 sq cm/min, nearly three times the 8 sq cm/min maximum rate of a year ago, were achieved. Webs 4 cm wide or greater were grown on a number of occasions.

  17. Development of Advanced Czochralski Growth Process to produce low cost 150 KG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check-out were completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Several exhaust gas analysis system alternatives were evaluated and an integrated system approved and ordered. A contract presentation was made at the Project Integration Meeting at JPL, including cost projections using contract projected throughput and machine parameters. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input. Work continued on melt level, melt temperature, and diameter sensor development.

  18. Web GIS in practice IV: publishing your health maps and connecting to remote WMS sources using the Open Source UMN MapServer and DM Solutions MapLab

    PubMed Central

    Boulos, Maged N Kamel; Honda, Kiyoshi

    2006-01-01

    Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described. PMID:16420699
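
    For readers unfamiliar with WMS, the request pattern the tutorial builds on can be sketched as follows. The endpoint, layer name, and bounding box below are placeholders; the query parameters are the standard OGC WMS 1.1.1 GetMap set.

```python
# Fetch a map image from a WMS server such as a UMN MapServer instance.
# The endpoint, layer name, and bounding box are placeholders; the query
# parameters are the standard OGC WMS 1.1.1 GetMap set.
import requests

WMS_URL = "http://example.org/cgi-bin/mapserv"  # hypothetical endpoint
params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "health_districts",    # hypothetical layer
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "95.0,-11.0,141.0,6.0",  # minx,miny,maxx,maxy
    "WIDTH": 800, "HEIGHT": 400,
    "FORMAT": "image/png",
}
resp = requests.get(WMS_URL, params=params, timeout=30)
with open("map.png", "wb") as f:
    f.write(resp.content)
```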

  19. GeneMachine: gene prediction and sequence annotation.

    PubMed

    Makalowska, I; Ryan, J F; Baxevanis, A D

    2001-09-01

    A number of free-standing programs have been developed to help researchers find potential coding regions and deduce gene structure for long stretches of what is essentially 'anonymous DNA'. As these programs apply inherently different criteria to the question of what is and is not a coding region, multiple algorithms should be used in the course of positional cloning and positional candidate projects to ensure that all potential coding regions within a previously identified critical region are found. We have developed a gene identification tool called GeneMachine which allows users to query multiple exon and gene prediction programs in an automated fashion. BLAST searches are also performed to see whether a previously characterized coding region corresponds to a region in the query sequence. A suite of Perl programs and modules is used to run MZEF, GENSCAN, GRAIL 2, FGENES, RepeatMasker, Sputnik, and BLAST. The results of these runs are then parsed and written into ASN.1 format. Output files can be opened using NCBI Sequin, in essence using Sequin as both a workbench and a graphical viewer. The main feature of GeneMachine is that the process is fully automated; the user is only required to launch GeneMachine and then open the resulting file with Sequin. Annotations can then be made to these results prior to submission to GenBank, thereby increasing the intrinsic value of these data. GeneMachine is freely available for download at http://genome.nhgri.nih.gov/genemachine. A public Web interface to the GeneMachine server for academic and not-for-profit users is available at http://genemachine.nhgri.nih.gov. The Web supplement to this paper may be found at http://genome.nhgri.nih.gov/genemachine/supplement/.
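
    GeneMachine itself is a suite of Perl programs, but the orchestration pattern it embodies, running several predictors over one sequence and collecting their outputs for a single merged report, can be sketched in Python. The command lines below are placeholders that would need adapting to a local installation of each tool.

```python
# The orchestration pattern behind GeneMachine, sketched in Python: run
# several prediction tools over one FASTA file and collect raw outputs.
# The command lines are placeholders for locally installed tools.
import subprocess

SEQ = "query.fa"
TOOLS = {  # hypothetical invocations, one per predictor
    "genscan": ["genscan", "HumanIso.smat", SEQ],
    "repeatmasker": ["RepeatMasker", SEQ],
    "blast": ["blastn", "-query", SEQ, "-db", "nt"],
}

results = {}
for name, cmd in TOOLS.items():
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
        results[name] = out.stdout
    except (FileNotFoundError, subprocess.TimeoutExpired):
        results[name] = None  # tool missing or too slow; skip it

for name, output in results.items():
    print(name, "->", "no output" if output is None else f"{len(output)} bytes")
```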

  20. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  1. DisGeNET-RDF: harnessing the innovative power of the Semantic Web to explore the genetic basis of diseases.

    PubMed

    Queralt-Rosinach, Núria; Piñero, Janet; Bravo, Àlex; Sanz, Ferran; Furlong, Laura I

    2016-07-15

    DisGeNET-RDF makes knowledge on the genetic basis of human diseases available in the Semantic Web. Gene-disease associations (GDAs) and their provenance metadata are published as human-readable and machine-processable web resources. The information on GDAs included in DisGeNET-RDF is interlinked to other biomedical databases to support the development of bioinformatics approaches for translational research through evidence-based exploitation of rich and fully interconnected linked open data. http://rdf.disgenet.org/ support@disgenet.org. © The Author 2016. Published by Oxford University Press.
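
    Since the resource is published as machine-processable RDF, it can in principle be queried with SPARQL. The Python sketch below uses the SPARQLWrapper library with an assumed endpoint path and a generic probe query; consult http://rdf.disgenet.org/ for the actual endpoint and the DisGeNET vocabulary terms to use in real queries.

```python
# Probe the DisGeNET-RDF SPARQL endpoint from Python. The endpoint path is
# an assumption, and the query is a generic probe; real queries would use
# DisGeNET's gene-disease association vocabulary.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://rdf.disgenet.org/sparql/")  # assumed endpoint
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```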

  2. Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Thirty-five (35) furnace runs were carried out during this quarter, of which 25 produced a total of 120 web crystals. The two main thermal models for the dendritic growth process were completed and are being used to assist the design of the thermal geometry of the web growth apparatus. The first model, a finite element representation of the susceptor and crucible, was refined to give greater precision and resolution in the critical central region of the melt. The second thermal model, which describes the dissipation of the latent heat to generate thickness-velocity data, was completed. Dendritic web samples were fabricated into solar cells using a standard configuration and a standard process for a N(+) -P-P(+) configuration. The detailed engineering design was completed for a new dendritic web growth facility of greater width capability than previous facilities.

  3. Crystal Growth Technology

    NASA Astrophysics Data System (ADS)

    Scheel, Hans J.; Fukuda, Tsuguo

    2004-06-01

    This volume deals with the technologies of crystal fabrication, of crystal machining, and of epilayer production and is the first book on industrial and scientific aspects of crystal and layer production. The major industrial crystals are treated: Si, GaAs, GaP, InP, CdTe, sapphire, oxide and halide scintillator crystals, crystals for optical, piezoelectric and microwave applications and more. Contains 29 contributions from leading crystal technologists covering the following topics:

      • General aspects of crystal growth technology
      • Silicon
      • Compound semiconductors
      • Oxides and halides
      • Crystal machining
      • Epitaxy and layer deposition

    Scientific and technological problems of production and machining of industrial crystals are discussed by top experts, most of them from the major growth industries and crystal growth centers. In addition, it will be useful for the users of crystals; for teachers and graduate students in materials sciences, in electronic and other functional materials, chemical and metallurgical engineering, micro- and optoelectronics including nanotechnology, mechanical engineering and precision machining, and microtechnology; and in solid-state sciences.

    • Improved Radiative Control of Ribbon Growth

      NASA Technical Reports Server (NTRS)

      Mchugh, J. P.; Seidensticker, R. G.; Skutch, M. E.

      1984-01-01

      Shield modifications enhance growth rate while reducing silicon oxide formation. Control of dendritic-web crystal growth requires precise control of the web temperature profile. This is achieved by using a series of thermal radiation shields to control the thermal-radiation field in the region where the melt solidifies onto the crystal ribbon being pulled from the melt.

    • 40 CFR 63.471 - Facility-wide standards.

      Code of Federal Regulations, 2010 CFR

      2010-07-01

      ... manufacture of narrow tubing, and continuous web cleaning machines, located at a major source that are subject... engineering calculations included in the compliance report. (4) Each owner or operator of an affected facility...

    • 40 CFR 63.471 - Facility-wide standards.

      Code of Federal Regulations, 2011 CFR

      2011-07-01

      ... manufacture of narrow tubing, and continuous web cleaning machines, located at a major source that are subject... engineering calculations included in the compliance report. (4) Each owner or operator of an affected facility...

    • Natural Language Processing.

      ERIC Educational Resources Information Center

      Chowdhury, Gobinda G.

      2003-01-01

      Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…

    • Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.

      PubMed

      Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R

      2015-01-05

      Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure activity relationship and quantitative structure activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface, or upload his or her own SD files which contain structures and activity information, to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. © 2014 Wiley Periodicals, Inc.
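
      As a minimal sketch of the categorical modeling the module offers, the snippet below trains one of the algorithms named above (Random Forest) on a stand-in binary fingerprint matrix; in practice the rows would come from descriptors computed on uploaded SD files or PubChem BioAssay downloads.

```python
# Categorical (Q)SAR sketch with one of the algorithms named above (Random
# Forest). The fingerprint matrix and labels are random stand-ins; real
# rows would be descriptors computed from SD files or PubChem BioAssay data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 128))      # stand-in binary fingerprints
y = (X[:, :8].sum(axis=1) > 4).astype(int)   # synthetic active/inactive label

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```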

    • Knowledge-Based Object Detection in Laser Scanning Point Clouds

      NASA Astrophysics Data System (ADS)

      Boochs, F.; Karmacharya, A.; Marbs, A.

      2012-07-01

      Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as on the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings the human expert's knowledge about the scene, the objects inside it and their representation by the data, and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework, which has provided a strong base for applications based on knowledge management. In the article we present and describe the knowledge technologies used for our approach, such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge of the scene and algorithmic processing.

    • An Ontology of Quality Initiatives and a Model for Decentralized, Collaborative Quality Management on the (Semantic) World Wide Web

      PubMed Central

      2001-01-01

      This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture, i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. One first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look, and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be. PMID:11772549

    • An ontology of quality initiatives and a model for decentralized, collaborative quality management on the (semantic) World-Wide-Web.

      PubMed

      Eysenbach, G

      2001-01-01

      This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture, i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. One first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look, and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be.

    • Electrochemical removal of material from metallic work

      DOEpatents

      Csakvary, Tibor; Fromson, Robert E.

      1980-05-13

      Deburring, polishing, surface forming and the like are carried out by electrochemical machining with conformable electrode means including an electrically conducting screen and an insulating web. The surface of the work to be processed is covered by a deformable electrically insulating web or cloth which is perforated and conforms with the work. The web is covered by a deformable perforated electrically conducting screen electrode which also conforms with, and is insulated from, the work by the insulating web. An electrolyte is conducted through the electrode and insulating web and along the work through a perforated elastic member which engages the electrode under pressure, pressing the electrode and web against the work. High current under low voltage is conducted between the electrode and work through the insulator, removing material from the work. Under the pressure of the elastic member, the electrode and insulator continue to conform with the work and the spacing between the electrode and work is maintained constant.

    • Advanced dendritic web growth development and development of single-crystal silicon dendritic ribbon and high-efficiency solar cell program

      NASA Technical Reports Server (NTRS)

      Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.

      1986-01-01

      Efforts to demonstrate that the dendritic web technology is ready for commercial use by the end of 1986 continue. The commercial readiness goal involves improvements to crystal growth furnace throughput to demonstrate an area growth rate of greater than 15 sq cm/min while simultaneously growing 10 meters or more of ribbon under conditions of continuous melt replenishment. Continuous means that the silicon melt is replenished at the same rate that it is consumed by ribbon growth, so that the melt level remains constant. Efforts continue on the computer thermal modeling required to define high-speed, low-stress, continuous growth configurations; on the study of convective effects in the molten silicon and growth furnace cover gas; on furnace component modifications; on web quality assessments; and on experimental growth activities.

    • Web-based newborn screening system for metabolic diseases: machine learning versus clinicians.

      PubMed

      Chen, Wei-Hsin; Hsieh, Sheau-Ling; Hsu, Kai-Ping; Chen, Han-Ping; Su, Xing-Yu; Tseng, Yi-Ju; Chien, Yin-Hsiu; Hwu, Wuh-Liang; Lai, Feipei

      2013-05-23

      A hospital information system (HIS) that integrates screening data and interpretation of the data is routinely requested by hospitals and parents. However, the accuracy of disease classification may be low because of the disease characteristics and the analytes used for classification. The objective of this study is to describe a system that enhanced the neonatal screening system of the Newborn Screening Center at the National Taiwan University Hospital. The system was designed and deployed according to a service-oriented architecture (SOA) framework under the Web services .NET environment. The system consists of sample collection, testing, diagnosis, evaluation, treatment, and follow-up services among collaborating hospitals. To improve the accuracy of newborn screening, machine learning and optimal feature selection mechanisms were investigated for screening newborns for inborn errors of metabolism. The framework of the Newborn Screening Hospital Information System (NSHIS) used the embedded Health Level Seven (HL7) standards for data exchanges among heterogeneous platforms integrated by Web services in the C# language. In this study, machine learning classification was used to predict phenylketonuria (PKU), hypermethioninemia, and 3-methylcrotonyl-CoA-carboxylase (3-MCC) deficiency. The classification methods used 347,312 newborn dried blood samples collected at the Center between 2006 and 2011. Of these, 220 newborns had values over the diagnostic cutoffs (positive cases) and 1557 had values that were over the screening cutoffs but did not meet the diagnostic cutoffs (suspected cases). The original 35 analytes and the manifested features were ranked based on F score; combinations of the top 20 ranked features were then selected as input features to support vector machine (SVM) classifiers to obtain optimal feature sets. These feature sets were tested using 5-fold cross-validation and optimal models were generated. The datasets collected in 2011 were used as prediction cases. The feature selection strategies were implemented and the optimal markers for PKU, hypermethioninemia, and 3-MCC deficiency were obtained. The results of the machine learning approach were compared with the cutoff scheme. The number of false positive cases was reduced from 21 to 2 for PKU, from 30 to 10 for hypermethioninemia, and from 209 to 46 for 3-MCC deficiency. This SOA Web service-based newborn screening system can accelerate screening procedures effectively and efficiently. An SVM learning methodology for PKU, hypermethioninemia, and 3-MCC deficiency metabolic disease classification, including optimal feature selection strategies, is presented. By adopting the results of this study, the number of suspected cases could be reduced dramatically.
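
      A minimal analogue of the described pipeline, F-score feature ranking followed by an SVM evaluated with 5-fold cross-validation, can be expressed with scikit-learn as below. The data are synthetic, and sklearn's ANOVA f_classif merely stands in for the paper's F-score ranking.

```python
# Minimal analogue of the pipeline described above: rank features by an
# F score, keep the top 20, and evaluate an SVM with 5-fold cross-validation.
# Synthetic data; sklearn's f_classif stands in for the paper's F-score rank.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 35))              # 35 analytes per newborn
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 1.5).astype(int)

pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), SVC())
print("5-fold accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```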

    • Web-Based Newborn Screening System for Metabolic Diseases: Machine Learning Versus Clinicians

      PubMed Central

      Chen, Wei-Hsin; Hsu, Kai-Ping; Chen, Han-Ping; Su, Xing-Yu; Tseng, Yi-Ju; Chien, Yin-Hsiu; Hwu, Wuh-Liang; Lai, Feipei

      2013-01-01

      Background A hospital information system (HIS) that integrates screening data and interpretation of the data is routinely requested by hospitals and parents. However, the accuracy of disease classification may be low because of the disease characteristics and the analytes used for classification. Objective The objective of this study is to describe a system that enhanced the neonatal screening system of the Newborn Screening Center at the National Taiwan University Hospital. The system was designed and deployed according to a service-oriented architecture (SOA) framework under the Web services .NET environment. The system consists of sample collection, testing, diagnosis, evaluation, treatment, and follow-up services among collaborating hospitals. To improve the accuracy of newborn screening, machine learning and optimal feature selection mechanisms were investigated for screening newborns for inborn errors of metabolism. Methods The framework of the Newborn Screening Hospital Information System (NSHIS) used the embedded Health Level Seven (HL7) standards for data exchanges among heterogeneous platforms integrated by Web services in the C# language. In this study, machine learning classification was used to predict phenylketonuria (PKU), hypermethioninemia, and 3-methylcrotonyl-CoA-carboxylase (3-MCC) deficiency. The classification methods used 347,312 newborn dried blood samples collected at the Center between 2006 and 2011. Of these, 220 newborns had values over the diagnostic cutoffs (positive cases) and 1557 had values that were over the screening cutoffs but did not meet the diagnostic cutoffs (suspected cases). The original 35 analytes and the manifested features were ranked based on F score; combinations of the top 20 ranked features were then selected as input features to support vector machine (SVM) classifiers to obtain optimal feature sets. These feature sets were tested using 5-fold cross-validation and optimal models were generated. The datasets collected in 2011 were used as prediction cases. Results The feature selection strategies were implemented and the optimal markers for PKU, hypermethioninemia, and 3-MCC deficiency were obtained. The results of the machine learning approach were compared with the cutoff scheme. The number of false positive cases was reduced from 21 to 2 for PKU, from 30 to 10 for hypermethioninemia, and from 209 to 46 for 3-MCC deficiency. Conclusions This SOA Web service–based newborn screening system can accelerate screening procedures effectively and efficiently. An SVM learning methodology for PKU, hypermethioninemia, and 3-MCC deficiency metabolic disease classification, including optimal feature selection strategies, is presented. By adopting the results of this study, the number of suspected cases could be reduced dramatically. PMID:23702487

    • Computer modeling of dendritic web growth processes and characterization of the material

      NASA Technical Reports Server (NTRS)

      Seidensticker, R. G.; Kothmann, R. E.; Mchugh, J. P.; Duncan, C. S.; Hopkins, R. H.; Blais, P. D.; Davis, J. R.; Rohatgi, A.

      1978-01-01

      High area throughput rate will be required for the economical production of silicon dendritic web for solar cells. Web width depends largely on the temperature distribution on the melt surface while growth speed is controlled by the dissipation of the latent heat of fusion. Thermal models were developed to investigate each of these aspects, and were used to engineer the design of laboratory equipment capable of producing crystals over 4 cm wide; growth speeds up to 10 cm/min were achieved. The web crystals were characterized by resistivity, lifetime and etch pit density data as well as by detailed solar cell I-V data. Solar cells ranged in efficiency from about 10 to 14.5% (AM-1) depending on growth conditions. Cells with lower efficiency displayed lowered bulk lifetime believed to be due to surface contamination.

    • Towards Web 3.0: taxonomies and ontologies for medical education -- a systematic review.

      PubMed

      Blaum, Wolf E; Jarczweski, Anne; Balzer, Felix; Stötzner, Philip; Ahlers, Olaf

      2013-01-01

      Both for curricular development and mapping, and for orientation within the mounting supply of learning resources in medical education, the Semantic Web ("Web 3.0") offers a low-threshold, effective tool that enables identification of content-related items across system boundaries. Replacing the currently required manual linking with automatically generated links based on content and semantics requires the use of a suitably structured vocabulary for a machine-readable description of object content. The aim of this study is to compile the existing taxonomies and ontologies used for the annotation of medical content and learning resources, to compare them using selected criteria, and to verify their suitability in the context described above. Based on a systematic literature search, existing taxonomies and ontologies for the description of medical learning resources were identified. Through web searches and/or direct contact with the respective editors, each of the structured vocabularies thus identified was examined with regard to topic, structure, language, scope, maintenance, and technology. In addition, suitability for use in the Semantic Web was verified. Among 20 identified publications, 14 structured vocabularies were identified, which differed rather strongly in language, scope, currency, and maintenance. None of the identified vocabularies fulfilled the necessary criteria for content description of medical curricula and learning resources in the German-speaking world. While moving towards Web 3.0, a significant problem lies in the selection and use of an appropriate German vocabulary for the machine-readable description of object content. Possible solutions include development, translation, and/or combination of existing vocabularies, possibly including partial translations of English vocabularies.

    • Content-based image retrieval with ontological ranking

      NASA Astrophysics Data System (ADS)

      Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

      2010-02-01

      Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." This is because, compared with text consisting of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less limited structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and limited background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts so that machines can understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved based not only on text cues but on their actual contents as well; second, the grouping is different from pure visual similarity clustering. More specifically, the inferred concepts of each image in a group are examined in the context of a huge concept ontology to determine their true relations with what people have in mind when doing an image search.

    • Web-based resources for mass-spectrometry-based metabolomics: a user's guide.

      PubMed

      Tohge, Takayuki; Fernie, Alisdair R

      2009-03-01

      In recent years, a plethora of web-based tools aimed at supporting mass-spectrometry-based metabolite profiling and metabolomics applications have appeared. Given the huge hurdles presented by the chemical diversity and dynamic range of the metabolites present in the plant kingdom, profiling the levels of a broad range of metabolites is highly challenging. Given the scale and costs involved in defining the plant metabolome, it is imperative that data are effectively shared between laboratories pursuing this goal. However, ensuring accurate comparison of samples run on the same machine within the same laboratory, let alone cross-machine and cross-laboratory comparisons, requires both careful experimentation and data interpretation. In this review, we present an overview of currently available software that aids either in peak identification or in the related field of peak alignment as well as those with utility in defining structural information of compounds and metabolic pathways.

    • Content-Based Discovery for Web Map Service using Support Vector Machine and User Relevance Feedback

      PubMed Central

      Cheng, Xiaoqiang; Qi, Kunlun; Zheng, Jie; You, Lan; Wu, Huayi

      2016-01-01

      Many discovery methods for geographic information services have been proposed. There are approaches for finding and matching geographic information services, methods for constructing geographic information service classification schemes, and automatic geographic information discovery. Overall, the efficiency of geographic information discovery keeps improving. There are, however, still two problems in Web Map Service (WMS) discovery that must be solved. Mismatches between the graphic contents of a WMS and the semantic descriptions in the metadata make discovery difficult for human users. End-users and computers comprehend WMSs differently, creating semantic gaps in human-computer interactions. To address these problems, we propose an improved query process for WMSs based on the graphic contents of WMS layers, combining Support Vector Machine (SVM) and user relevance feedback. Our experiments demonstrate that the proposed method can improve the accuracy and efficiency of WMS discovery. PMID:27861505
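
      One round of the SVM-plus-relevance-feedback query process can be sketched as follows: fit an SVM on the layers a user has judged, then rerank the remaining candidates by decision score. The feature vectors below are random stand-ins for visual descriptors of rendered WMS layers.

```python
# One relevance-feedback round for content-based WMS discovery: fit an SVM
# on features of layers the user judged, then rerank remaining candidates
# by decision score. Features are random stand-ins for visual descriptors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 16))   # descriptors of 50 candidate layers
labeled = [0, 1, 2, 3, 4, 5]           # indices the user has judged so far
labels = [1, 1, 1, 0, 0, 0]            # 1 = relevant, 0 = not relevant

svm = SVC(kernel="rbf").fit(features[labeled], labels)
unlabeled = [i for i in range(50) if i not in labeled]
scores = svm.decision_function(features[unlabeled])
ranking = [unlabeled[i] for i in np.argsort(-scores)]
print("next layers to show the user:", ranking[:5])
```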

  1. Design and development of linked data from the National Map

    USGS Publications Warehouse

    Usery, E. Lynn; Varanka, Dalia E.

    2012-01-01

    The development of linked data on the World-Wide Web provides the opportunity for the U.S. Geological Survey (USGS) to supply its extensive volumes of geospatial data, information, and knowledge in a machine interpretable form and reach users and applications that heretofore have been unavailable. To pilot a process to take advantage of this opportunity, the USGS is developing an ontology for The National Map and converting selected data from nine research test areas to a Semantic Web format to support machine processing and linked data access. In a case study, the USGS has developed initial methods for legacy vector and raster formatted geometry, attributes, and spatial relationships to be accessed in a linked data environment maintaining the capability to generate graphic or image output from semantic queries. The description of an initial USGS approach to developing ontology, linked data, and initial query capability from The National Map databases is presented.

  2. Content-Based Discovery for Web Map Service using Support Vector Machine and User Relevance Feedback.

    PubMed

    Hu, Kai; Gui, Zhipeng; Cheng, Xiaoqiang; Qi, Kunlun; Zheng, Jie; You, Lan; Wu, Huayi

    2016-01-01

    Many discovery methods for geographic information services have been proposed. There are approaches for finding and matching geographic information services, methods for constructing geographic information service classification schemes, and automatic geographic information discovery. Overall, the efficiency of geographic information discovery keeps improving. There are, however, still two problems in Web Map Service (WMS) discovery that must be solved. Mismatches between the graphic contents of a WMS and the semantic descriptions in the metadata make discovery difficult for human users. End-users and computers comprehend WMSs differently, creating semantic gaps in human-computer interactions. To address these problems, we propose an improved query process for WMSs based on the graphic contents of WMS layers, combining Support Vector Machine (SVM) and user relevance feedback. Our experiments demonstrate that the proposed method can improve the accuracy and efficiency of WMS discovery.

  3. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1982-01-01

    The thermal stress model was used to generate the design of a low stress lid and shield configuration, which was fabricated and tested experimentally. In preliminary tests, the New Experimental Web Growth Facility performed as designed, producing web on the first run. These experiments suggested desirable design modifications in the melt level sensing system to improve further its performance, and these are being implemented.

  4. Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hill, F. E.; Skutch, M. E.; Driggers, J. M.; Hopkins, R. H.

    1980-01-01

    A barrier crucible design which consistently maintains melt stability over long periods of time was successfully tested and used in long growth runs. The pellet feeder for melt replenishment was operated continuously for growth runs of up to 17 hours. The liquid level sensor, comprising a laser/sensor system, was operated, performed well, and meets the requirements for maintaining liquid level height during growth and melt replenishment. An automated feedback loop connecting the feed mechanism and the liquid level sensing system was designed, constructed, and operated successfully for 3.5 hours, demonstrating the feasibility of semi-automated dendritic web growth. The sensitivity of sheet cost to variations in capital equipment cost and dendrite recycling was calculated, and it was shown that these factors have relatively little impact on sheet cost. Dendrites from web which had gone all the way through the solar cell fabrication process, when melted and grown into web, produce crystals which show no degradation in cell efficiency. Material quality remains high, and cells made from web grown at the start, during, and at the end of a run from a replenished melt show comparable efficiencies.

  5. Development of advanced Czochralski growth process to produce low-cost 150 kG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check-out were completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Exhaust gas analysis system alternatives were evaluated and an integrated system approved and ordered. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input.

  6. Mind, Machine, and Creativity: An Artist's Perspective.

    PubMed

    Sundararajan, Louise

    2014-06-01

    Harold Cohen is a renowned painter who has developed a computer program, AARON, to create art. While AARON has been hailed as one of the most creative AI programs, Cohen consistently rejects the claims of machine creativity. Questioning the possibility for AI to model human creativity, Cohen suggests in so many words that the human mind takes a different route to creativity, a route that privileges the relational, rather than the computational, dimension of cognition. This unique perspective on the tangled web of mind, machine, and creativity is explored by an application of three relational models of the mind to an analysis of Cohen's talks and writings, which are available on his website: www.aaronshome.com.

  7. Mind, Machine, and Creativity: An Artist's Perspective

    PubMed Central

    Sundararajan, Louise

    2014-01-01

    Harold Cohen is a renowned painter who has developed a computer program, AARON, to create art. While AARON has been hailed as one of the most creative AI programs, Cohen consistently rejects the claims of machine creativity. Questioning the possibility for AI to model human creativity, Cohen suggests in so many words that the human mind takes a different route to creativity, a route that privileges the relational, rather than the computational, dimension of cognition. This unique perspective on the tangled web of mind, machine, and creativity is explored by an application of three relational models of the mind to an analysis of Cohen's talks and writings, which are available on his website: www.aaronshome.com. PMID:25541564

  8. Actionable, long-term stable and semantic web compatible identifiers for access to biological collection objects

    PubMed Central

    Hyam, Roger; Hagedorn, Gregor; Chagnoux, Simon; Röpert, Dominik; Casino, Ana; Droege, Gabi; Glöckler, Falko; Gödderz, Karsten; Groom, Quentin; Hoffmann, Jana; Holleman, Ayco; Kempa, Matúš; Koivula, Hanna; Marhold, Karol; Nicolson, Nicky; Smith, Vincent S.; Triebel, Dagmar

    2017-01-01

    With biodiversity research activities being increasingly shifted to the web, the need for a system of persistent and stable identifiers for physical collection objects becomes increasingly pressing. The Consortium of European Taxonomic Facilities agreed on a common system of HTTP-URI-based stable identifiers, which is now being rolled out to its member organizations. The system follows Linked Open Data principles and implements redirection mechanisms to human-readable and machine-readable representations of specimens, facilitating seamless integration into the growing semantic web. The implementation of stable identifiers across collection organizations is supported with open source provider software scripts, best practice documentation and recommendations for RDF metadata elements, facilitating harmonized access to collection information in web portals. Database URL: http://cetaf.org/cetaf-stable-identifiers PMID:28365724
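
    The mechanics of such identifiers, HTTP content negotiation plus redirection to either a human-readable or a machine-readable representation, can be exercised with a few lines of Python; the specimen URI below is a placeholder.

```python
# Dereference an HTTP-URI stable identifier in both modes described above,
# via content negotiation. The specimen URI is a placeholder; the pattern
# (Accept header plus redirection) is the point.
import requests

uri = "http://example.org/specimen/ABC123"  # hypothetical stable identifier

html = requests.get(uri, headers={"Accept": "text/html"},
                    allow_redirects=True, timeout=30)
rdf = requests.get(uri, headers={"Accept": "application/rdf+xml"},
                   allow_redirects=True, timeout=30)

print("human-readable landing page:", html.url)
print("machine-readable RDF record:", rdf.url)
```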

  9. Anti-rewet felt for use in a papermaking machine

    DOEpatents

    Beck, David A.

    2003-09-09

    An anti-rewet fabric is used for carrying a fiber web through an air press. The anti-rewet fabric includes at least one air distribution fabric layer, one air distribution fabric layer being configured for contacting the fiber web, and a perforated film layer, the perforated film layer being made of a polyester film. The perforated film layer has a first film side and a second film side, the first film side being one of laminated and attached to the one air distribution fabric layer.

  10. Apparatus for growing a dendritic web

    DOEpatents

    Duncan, Charles S.; Piotrowski, Paul A.; Skutch, Maria E.; McHugh, James P.

    1983-06-21

    A melt system including a susceptor-crucible assembly having improved gradient control when melt replenishment is used during dendritic web growth. The improvement lies in the formation of a thermal barrier in the base of the susceptor, in the form of a vertical slot in the region of the susceptor underlying the crucible at the location of a compartmental separator dividing the crucible into a growth compartment and a melt replenishment compartment. The result achieved is a step change in temperature gradient in the melt, thereby providing a more uniform temperature in the growth compartment from which the dendritic web is drawn.

  11. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. MACHINE COOLANT WASTE REDUCTION BY OPTIMIZING COOLANT LIFE

    EPA Science Inventory

    Machine shops use coolants to improve the life and function of machine tools. These coolants become contaminated with oils with use, and this contamination can lead to growth of anaerobic bacteria and shortened coolant life. This project investigated methods to extend coolant life ...

  13. Method and system of measuring ultrasonic signals in the plane of a moving web

    DOEpatents

    Hall, Maclin S.; Jackson, Theodore G.; Wink, Wilmer A.; Knerr, Christopher

    1996-01-01

    An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like. In addition to velocity measurements of ultrasonic signals in the plane of the web in the machine direction, MD, and a cross direction, CD, generally perpendicular to the direction of the traveling web, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web is measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide relatively constant contact force between the transducers and the web, the transducers are mounted in sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web.
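
    The physics behind such a measurement is compact enough to sketch: in-plane velocity follows from transducer spacing and time of flight, and specific stiffness (elastic stiffness divided by density) is, for longitudinal plate waves, approximately the square of that velocity. The numbers below are illustrative, not taken from the patent.

```python
# Back-of-envelope relations behind such measurements: in-plane velocity
# from transducer spacing and time of flight, and specific stiffness
# (stiffness/density), which for longitudinal plate waves is approximately
# the velocity squared. Numbers are illustrative, not from the patent.
def velocity(distance_m, time_s):
    return distance_m / time_s

def specific_stiffness(velocity_ms):
    return velocity_ms ** 2  # (m/s)^2, i.e. stiffness per unit density

v_md = velocity(0.10, 3.4e-5)  # e.g., 100 mm gap, 34 us flight time (MD)
v_cd = velocity(0.10, 5.0e-5)  # typically slower across the web (CD)
print(f"MD {v_md:.0f} m/s, CD {v_cd:.0f} m/s")
print(f"MD/CD stiffness ratio: {specific_stiffness(v_md) / specific_stiffness(v_cd):.2f}")
```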

  14. Method and system of measuring ultrasonic signals in the plane of a moving web

    DOEpatents

    Hall, M.S.; Jackson, T.G.; Wink, W.A.; Knerr, C.

    1996-02-27

    An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like is disclosed. In addition to velocity measurements of ultrasonic signals in the plane of the web in the machine direction, MD, and a cross direction, CD, generally perpendicular to the direction of the traveling web, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web is measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide relatively constant contact force between the transducers and the web, the transducers are mounted in sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web. 37 figs.

  15. Technology and Web-Based Support

    ERIC Educational Resources Information Center

    Smith, Carol

    2008-01-01

    Many types of technology support caregiving: (1) Assistive devices include medicine dispensers, feeding and bathing machines, clothing with polypropylene fibers that stimulate muscles, intelligent ambulatory walkers for those with both vision and mobility impairment, medication reminders, and safety alarms; (2) Telecare devices ranging from…

  16. Machine intelligence and robotics: Report of the NASA study group

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Opportunities for the application of machine intelligence and robotics in NASA missions and systems were identified. The benefits of successful adoption of machine intelligence and robotics techniques were estimated and forecasts were prepared to show their growth potential. Program options for research, advanced development, and implementation of machine intelligence and robot technology for use in program planning are presented.

  17. Confinement of Screw Dislocations to Predetermined Lateral Positions in (0001) 4H-SiC Epilayers Using Homoepitaxial Web Growth

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Spry, Andrew J.; Trunek, Andrew J.; Powell, J. Anthony; Beheim, Glenn M.

    2002-01-01

    This paper reports initial demonstration of a cantilevered homoepitaxial growth process that places screw dislocations at predetermined lateral positions in on-axis 4H-SiC mesa epilayers. Thin cantilevers were grown extending toward the interior of hollow pre-growth mesa shapes etched into an on-axis 4H-SiC wafer, eventually completely coalescing to form roofed cavities. Each completely coalesced cavity exhibited either: 1) a screw dislocation growth spiral located exactly where final cantilever coalescence occurred, or 2) no growth spiral. The fact that growth spirals are not observed at any other position except the central coalescence point suggests that substrate screw dislocations, initially surrounded by the hollow portion of the pre-growth mesa shape, are relocated to the final coalescence point of the webbed epilayer roof. Molten potassium hydroxide etch studies revealed that properly grown webbed cantilevers exhibited no etch pits, confirming the superior crystal quality of the cantilevers.

  18. Disruption of the lower food web in Lake Ontario: Did it affect alewife growth or condition?

    USGS Publications Warehouse

    O'Gorman, R.; Prindle, S.E.; Lantry, J.R.; Lantry, B.F.

    2008-01-01

    From the early 1980s to the late 1990s, a succession of non-native invertebrates colonized Lake Ontario and the suite of consequences caused by their colonization became known as "food web disruption". For example, the native burrowing amphipod Diporeia spp., a key link in the profundal food web, declined to near absence, exotic predaceous cladocerans with long spines proliferated, altering the zooplankton community, and depth distributions of fishes shifted. These changes had the potential to affect growth and condition of the planktivorous alewife Alosa pseudoharengus, the most abundant fish in the lake. To determine if food web disruption affected alewife, we used change-point analysis to examine alewife growth and adult alewife condition during 1976-2006 and analysis of variance to determine if values between change points differed significantly. There were no change points in growth during the first year of life. Of three change points in growth during the second year of life, one coincided with the shift in springtime distribution of alewife to deeper water, but it was not associated with a significant change in growth. After the second year of life, no change points in growth were evident, although growth in the third year of life spiked in those years when Bythotrephes, the largest of the exotic cladocerans, was abundant, suggesting that it was a profitable prey item for age-2 fish. We detected two change points in condition of adult alewife in fall, but the first occurred in 1981, well before disruption began. A second change point occurred in 2003, well after disruption began. After the springtime distribution of alewife shifted deeper during 1992-1994, growth in the first two years of life became more variable, and growth in years of life two and older became correlated (P < 0.05). In conclusion, food web disruption had no negative effect on growth and condition of alewife in Lake Ontario, although it appears to have resulted in growth in the first two years of life becoming more variable, growth in years of life two and older becoming correlated (P < 0.05), and growth spurts in year of life three. Copyright © 2008 AEHMS.

  19. Finding Atmospheric Composition (AC) Metadata

    NASA Technical Reports Server (NTRS)

    Strub, Richard F.; Falke, Stefan; Fialkowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where such data might be aggregated. This poster reports our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, given the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. (4) Searching for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). (6) Most, if not all, Earth science atmospheric composition data providers store a reference to their data at GCMD.

  20. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    NASA Astrophysics Data System (ADS)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where such data might be aggregated. This poster reports our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, given the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. (4) Searching for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). (6) Most, if not all, Earth science atmospheric composition data providers store a reference to their data at GCMD.

  1. The impact of machine learning techniques in the study of bipolar disorder: A systematic review.

    PubMed

    Librenza-Garcia, Diego; Kotzian, Bruno Jaskulski; Yang, Jessica; Mwangi, Benson; Cao, Bo; Pereira Lima, Luiza Nunes; Bermudez, Mariane Bagatin; Boeira, Manuela Vianna; Kapczinski, Flávio; Passos, Ives Cavalcante

    2017-09-01

    Machine learning techniques provide new methods to predict diagnosis and clinical outcomes at an individual level. We aim to review the existing literature on the use of machine learning techniques in the assessment of subjects with bipolar disorder. We systematically searched PubMed, Embase and Web of Science for articles published in any language up to January 2017. We found 757 abstracts and included 51 studies in our review. Most of the included studies used multiple levels of biological data to distinguish the diagnosis of bipolar disorder from other psychiatric disorders or healthy controls. We also found studies that assessed the prediction of clinical outcomes and studies using unsupervised machine learning to build more consistent clinical phenotypes of bipolar disorder. We concluded that given the clinical heterogeneity of samples of patients with BD, machine learning techniques may provide clinicians and researchers with important insights in fields such as diagnosis, personalized treatment and prognosis orientation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Machining of Two-Dimensional Sinusoidal Defects on Ignition-Type Capsules to Study Hydrodynamic Instability at the National Ignition Facility

    DOE PAGES

    Giraldez, E. M.; Hoppe Jr., M. L.; Hoover, D. E.; ...

    2016-07-07

    Hydrodynamic instability growth and its effects on capsule implosion performance are being studied at the National Ignition Facility (NIF). Experimental results have shown that low-mode instabilities are the primary culprit for yield degradation. Ignition type capsules with machined 2D sinusoidal defects were used to measure low-mode hydrodynamic instability growth in the acceleration phase of the capsule implosion. The capsules were imploded using ignition-relevant laser pulses and the ablation-front modulation growth was measured using x-ray radiography. The experimentally measured growth was in good agreement with simulations.

  3. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that displays the registered applications and clients. Applications registered with Bioinformatics open web services can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
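
    As a rough illustration of the submit-and-poll workflow such a front-end service enables, the sketch below sends a job over HTTP and waits for its result. The endpoint URL, JSON fields, and tool name are hypothetical stand-ins; the actual BOWS interface is exposed through SOAP web services and generated Java clients.

```python
# Minimal sketch of a client talking to a BOWS-style front-end service.
# The base URL, routes, and JSON fields below are hypothetical.
import time
import requests

BASE = "https://example.org/bows"  # hypothetical front-end endpoint

def run_tool(tool, params):
    # Submit a new job for a registered HPC application.
    job = requests.post(f"{BASE}/jobs", json={"tool": tool, "params": params}).json()
    job_id = job["id"]
    # Poll until the back-end reports the job finished, then return the status.
    while True:
        status = requests.get(f"{BASE}/jobs/{job_id}").json()
        if status["state"] in ("finished", "failed"):
            return status
        time.sleep(5)

result = run_tool("blast", {"query": ">seq1\nMKT..."})
print(result["state"])
```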

  4. Design and implementation of adaptive PI control schemes for web tension control in roll-to-roll (R2R) manufacturing.

    PubMed

    Raul, Pramod R; Pagilla, Prabhakar R

    2015-05-01

    In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, and lamination. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance under changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching of the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that they are simple for practicing engineers to design, easy to implement in real time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
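
    The sketch below illustrates the general flavor of such a scheme: a discrete-time PI tension loop whose gains are adapted with a gradient (MIT-rule-style) update driven by the mismatch between measured tension and a reference model. The gains, adaptation rate, and update law here are illustrative assumptions, not the controllers designed in the paper.

```python
# Minimal sketch of a discrete-time adaptive PI tension controller.
# The adaptation law is a generic MIT-rule-style gradient update; the
# paper's actual model-reference design is not reproduced here.
class AdaptivePI:
    def __init__(self, kp=0.5, ki=0.1, gamma=1e-3, dt=0.01):
        self.kp, self.ki = kp, ki   # PI gains, adapted online (assumed values)
        self.gamma = gamma          # adaptation rate (assumed)
        self.dt = dt
        self.integ = 0.0            # integral of the tension error

    def update(self, t_ref, t_meas, t_model):
        e = t_ref - t_meas          # tension tracking error
        self.integ += e * self.dt
        # Adapt gains to drive measured tension toward the reference-model
        # tension t_model (gradient descent on the squared model error).
        e_m = t_meas - t_model
        self.kp -= self.gamma * e_m * e
        self.ki -= self.gamma * e_m * self.integ
        return self.kp * e + self.ki * self.integ  # control effort

ctrl = AdaptivePI()
u = ctrl.update(t_ref=100.0, t_meas=96.5, t_model=98.0)
print(round(u, 3))
```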

  5. Drying of fiber webs

    DOEpatents

    Warren, David W.

    1997-01-01

    A process and an apparatus for high-intensity drying of fiber webs or sheets, such as newsprint, printing and writing papers, packaging paper, and paperboard or linerboard, as they are formed on a paper machine. The invention uses direct contact between the wet fiber web or sheet and various molten heat transfer fluids, such as liquefied eutectic metal alloys, to impart heat at high rates over prolonged durations, in order to achieve ambient boiling of moisture contained within the web. The molten fluid contact process causes steam vapor to emanate from the web surface, without dilution by ambient air; and it is differentiated from the evaporative drying techniques of the prior industrial art, which depend on the uses of steam-heated cylinders to supply heat to the paper web surface, and ambient air to carry away moisture, which is evaporated from the web surface. Contact between the wet fiber web and the molten fluid can be accomplished either by submersing the web within a molten bath or by coating the surface of the web with the molten media. Because of the high interfacial surface tension between the molten media and the cellulose fiber comprising the paper web, the molten media does not appreciably stick to the paper after it is dried. Steam generated from the paper web is collected and condensed without dilution by ambient air to allow heat recovery at significantly higher temperature levels than attainable in evaporative dryers.

  6. Norovirus Illness: Key Facts

    MedlinePlus

    ... should: handle soiled items carefully without agitating them; wear rubber or disposable gloves while handling soiled items and wash your hands after; and wash the items with detergent at the maximum available cycle length, then machine dry them. Visit CDC's Norovirus Web site at ...

  7. Text categorization models for identifying unproven cancer treatments on the web.

    PubMed

    Aphinyanaphongs, Yin; Aliferis, Constantin

    2007-01-01

    The nature of the internet as a non-peer-reviewed (and largely unregulated) publication medium has allowed widespread promotion of inaccurate and unproven medical claims at an unprecedented scale. Patients with conditions that are not currently fully treatable are particularly susceptible to unproven and dangerous promises about miracle treatments. In extreme cases, fatal adverse outcomes have been documented. Most commonly, the cost is financial, psychological, and delayed application of imperfect but proven scientific modalities. To help protect patients, who may be desperately ill and thus prone to exploitation, we explored the use of machine learning techniques to identify web pages that make unproven claims. This feasibility study shows that the resulting models can identify web pages that make unproven claims in a fully automatic manner, and substantially better than previous web tools and state-of-the-art search engine technology.
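
    A minimal sketch of this kind of text-categorization model is shown below: a bag-of-words classifier trained to flag pages that make unproven claims. The four-document corpus and the choice of logistic regression are placeholders; the study's actual corpus, features, and learning algorithms are not reproduced here.

```python
# Minimal sketch of a text classifier for "unproven claim" web pages.
# Training data and model choice are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pages = [
    "miracle cure shrinks tumors overnight with no side effects",
    "secret remedy doctors don't want you to know cures cancer",
    "randomized controlled trial of adjuvant chemotherapy outcomes",
    "guidelines for evidence-based management of breast cancer",
]
labels = [1, 1, 0, 0]  # 1 = unproven claim, 0 = legitimate content

# Bag-of-words (with bigrams) features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(pages, labels)
print(model.predict(["this herbal supplement cures all cancers"]))
```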

  8. Ribbon Growth of Single Crystal GaAs for Solar Cell Application.

    DTIC Science & Technology

    1981-11-01

    [Abstract continuation fragment] Growth techniques, dendrite seeds, and melt chemistry were optimized during the course of the program; however… [The remainder of the record is figure-caption fragments: faceted web; crystal grown from a melt doped with 1.0 atomic% Ge; Ge-doped crystals grown at low undercooling containing flatter textured-web sections; textured-web sections the widest achieved at small undercooling; radiation exchange between the melt surface…]

  9. Semi-supervised morphosyntactic classification of Old Icelandic.

    PubMed

    Urban, Kryztof; Tangherlini, Timothy R; Vijūnas, Aurelijus; Broadwell, Peter M

    2014-01-01

    We present IceMorph, a semi-supervised morphosyntactic analyzer of Old Icelandic. In addition to machine-read corpora and dictionaries, it applies a small set of declension prototypes to map corpus words to dictionary entries. A web-based GUI allows expert users to modify and augment data through an online process. A machine learning module incorporates prototype data, edit-distance metrics, and expert feedback to continuously update part-of-speech and morphosyntactic classification. An advantage of the analyzer is its ability to achieve competitive classification accuracy with minimum training data.

  10. Lysine acetylation sites prediction using an ensemble of support vector machine classifiers.

    PubMed

    Xu, Yan; Wang, Xiao-Bo; Ding, Jun; Wu, Ling-Yun; Deng, Nai-Yang

    2010-05-07

    Lysine acetylation is an essentially reversible and highly regulated post-translational modification which regulates diverse protein properties. Experimental identification of acetylation sites is laborious and expensive; hence, there is significant interest in the development of computational methods for reliable prediction of acetylation sites from amino acid sequences. In this paper we use an ensemble of support vector machine classifiers to perform this work. The experimentally determined acetylated lysine sites are extracted from the Swiss-Prot database and the scientific literature. Experimental results show that an ensemble of support vector machine classifiers outperforms a single support vector machine classifier and other computational methods such as PAIL and LysAcet on the problem of predicting acetylated lysine sites. The resulting method has been implemented in EnsemblePail, a web server for lysine acetylation site prediction available at http://www.aporc.org/EnsemblePail/. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
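
    The sketch below shows one common way to build such an ensemble: several SVMs trained on bootstrap resamples that vote on each candidate site. The random features and labels are placeholders for encoded sequence windows around lysines; EnsemblePail's actual feature encoding and combination rule are not reproduced here.

```python
# Minimal sketch of an ensemble of SVM classifiers via bagging.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))    # placeholder encodings of sequence windows
y = rng.integers(0, 2, size=200)  # 1 = acetylated lysine site, 0 = non-site

# Eleven SVMs trained on bootstrap resamples; predictions are majority votes.
ensemble = BaggingClassifier(SVC(kernel="rbf"), n_estimators=11)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```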

  11. Nonlinear programming for classification problems in machine learning

    NASA Astrophysics Data System (ADS)

    Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio

    2016-10-01

    We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become more and more relevant due to a wide range of practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, customer profiling, etc. Classification deals with the separation of sets by means of appropriate separation surfaces, which are generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some of these proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.

  12. The IHMC CmapTools software in research and education: a multi-level use case in Space Meteorology

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro

    2010-05-01

    The IHMC (Institute for Human and Machine Cognition, Florida University System, USA) CmapTools software is a powerful multi-platform tool for knowledge modelling in graphical form based on concept maps. In this work we present its application for the high-level development of a set of multi-level concept maps in the framework of Space Meteorology to act as the kernel of a space meteorology domain ontology. This is an example of a research use case, as a domain ontology coded in machine-readable form via e.g. OWL (Web Ontology Language) is suitable to be an active layer of any knowledge management system embedded in a Virtual Observatory (VO). Apart from being manageable at machine level, concept maps developed via CmapTools are intrinsically human-readable and can embed hyperlinks and objects of many kinds. Therefore they are suitable to be published on the web: the coded knowledge can be exploited for educational purposes by students and the public, as the level of information can be naturally organized among linked concept maps in progressively increasing complexity levels. Hence CmapTools and its advanced version COE (Concept-map Ontology Editor) represent effective and user-friendly software tools for high-level knowledge representation in research and education.

  13. SVM-Prot 2016: A Web-Server for Machine Learning Prediction of Protein Functional Families from Sequence Irrespective of Similarity.

    PubMed

    Li, Ying Hong; Xu, Jing Yu; Tao, Lin; Li, Xiao Feng; Li, Shuang; Zeng, Xian; Chen, Shang Ying; Zhang, Peng; Qin, Chu; Zhang, Cheng; Chen, Zhe; Zhu, Feng; Chen, Yu Zong

    2016-01-01

    Knowledge of protein function is important for biological, medical and therapeutic studies, but the function of many proteins is still unknown. There is a need for improved functional prediction methods. Our SVM-Prot web-server employed a machine learning method for predicting protein functional families from protein sequences irrespective of similarity, which complemented similarity-based and other methods in predicting diverse classes of proteins, including distantly-related proteins and homologous proteins of different functions. Since its publication in 2003, we made major improvements to SVM-Prot with (1) expanded coverage from 54 to 192 functional families, (2) more diverse protein descriptors for protein representation, (3) improved predictive performance due to the use of more enriched training datasets and a greater variety of protein descriptors, (4) a newly integrated BLAST analysis option for assessing proteins in the SVM-Prot predicted functional families that are similar in sequence to a query protein, and (5) a newly added batch submission option for supporting the classification of multiple proteins. Moreover, 2 more machine learning approaches, K nearest neighbor and probabilistic neural networks, were added to facilitate collective assessment of protein functions by multiple methods. SVM-Prot can be accessed at http://bidd2.nus.edu.sg/cgi-bin/svmprot/svmprot.cgi.

  14. Growth and structure of the World Wide Web: Towards realistic modeling

    NASA Astrophysics Data System (ADS)

    Tadić, Bosiljka

    2002-08-01

    We simulate evolution of the World Wide Web from the dynamic rules incorporating growth, bias attachment, and rewiring. We show that the emergent double-hierarchical structure with distinct distributions of out- and in-links is comparable with the observed empirical data when the control parameter (average graph flexibility β) is kept in the range β=3-4. We then explore the Web graph by simulating (a) Web crawling to determine size and depth of connected components, and (b) a random walker that discovers the structure of connected subgraphs with dominant attractor and promoter nodes. A random walker that adapts its move strategy to mimic local node linking preferences is shown to have a short access time to "important" nodes on the Web graph.
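
    A minimal sketch of a growth model in this spirit appears below: each new node either links uniformly at random (mimicking graph flexibility) or preferentially to nodes that already attract many in-links. The update rule and the role of the parameter beta are simplified assumptions for illustration, not the paper's exact dynamics, and rewiring is omitted.

```python
# Minimal sketch of a growing directed Web-graph with biased attachment.
# Parameter semantics are illustrative, not the paper's exact model.
import random

def grow_web(n_nodes, beta=3.5, seed=42):
    """Grow a directed graph by biased attachment with occasional uniform links."""
    random.seed(seed)
    in_weight = {0: 1}  # attachment weight per node (in-links plus smoothing)
    edges = []
    for new in range(1, n_nodes):
        if random.random() < 1.0 / beta:
            # "Flexible" move: link to a uniformly random existing node.
            target = random.choice(list(in_weight))
        else:
            # Biased attachment: prefer nodes with many existing in-links.
            nodes = list(in_weight)
            target = random.choices(nodes, weights=[in_weight[v] for v in nodes])[0]
        edges.append((new, target))
        in_weight[target] += 1
        in_weight[new] = 1
    return edges

edges = grow_web(1000)
print("edges grown:", len(edges))
```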

  15. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Giving cloud users an effective way to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end-user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on an open-source distributed file system; in it, massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker containers, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in IPython Notebook pages through the web browser to process data, and the scripts are submitted to an IPython kernel for execution. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the best use of host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for I/O, CPU, memory, etc., which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.

  16. Automatic energy expenditure measurement for health science.

    PubMed

    Catal, Cagatay; Akbulut, Akhan

    2018-04-01

    It is crucial to predict human energy expenditure accurately in any sports activity or health science application in order to investigate the impact of the activity. However, measurement of real energy expenditure is not a trivial task and involves complex steps. The objective of this work is to improve the performance of existing estimation models of energy expenditure by using machine learning algorithms and data from several sensors, and to provide this estimation service in a cloud-based platform. In this study, we used input data such as breathing rate and heart rate from three sensors. Inputs are received from a web form and sent to a web service which applies a regression model on the Azure cloud platform. During the experiments, we assessed several machine learning models based on regression methods. Our experimental results showed that our novel model, which applies Boosted Decision Tree Regression in conjunction with a median aggregation technique, provides the best result among five other regression algorithms. This cloud-based energy expenditure system, which uses a web service, showed that cloud computing technology is a great opportunity for developing estimation systems, and the new model applying Boosted Decision Tree Regression with median aggregation provides remarkable results. Copyright © 2018 Elsevier B.V. All rights reserved.
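
    The sketch below mirrors the reported approach in outline: a boosted decision-tree regressor over sensor features, with a median aggregation over per-window predictions. The synthetic features and target are placeholders; the study's Azure deployment and exact feature set are not reproduced here.

```python
# Minimal sketch of boosted decision-tree regression with median aggregation.
# Data are synthetic placeholders for the sensor inputs described.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# columns: breathing rate, heart rate, accelerometer magnitude (assumed features)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 1] + X[:, 0] + rng.normal(scale=0.1, size=500)  # synthetic target

model = GradientBoostingRegressor().fit(X, y)

def predict_energy(windows):
    # Median-aggregate per-window predictions into a single estimate,
    # mirroring the median aggregation technique described.
    return float(np.median(model.predict(windows)))

print(predict_energy(X[:10]))
```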

  17. GREAT: a web portal for Genome Regulatory Architecture Tools

    PubMed Central

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-01-01

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout, defined as the respective positioning of co-functional genes, and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in interactive web graphs and are available for download either as individual plots, self-contained interactive pages or as machine-readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. PMID:27151196

  18. Constructive Ontology Engineering

    ERIC Educational Resources Information Center

    Sousan, William L.

    2010-01-01

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain. In addition, their structure and content may vary depending on the application and domain. Several methods described in…

  19. POOL server: machine learning application for functional site prediction in proteins.

    PubMed

    Somarowthu, Srinivas; Ondrechen, Mary Jo

    2012-08-01

    We present an automated web server for partial order optimum likelihood (POOL), a machine learning application that combines computed electrostatic and geometric information for high-performance prediction of catalytic residues from 3D structures. Input features consist of THEMATICS electrostatics data and pocket information from ConCavity. THEMATICS measures deviation from typical, sigmoidal titration behavior to identify functionally important residues and ConCavity identifies binding pockets by analyzing the surface geometry of protein structures. Both THEMATICS and ConCavity (structure only) do not require the query protein to have any sequence or structure similarity to other proteins. Hence, POOL is applicable to proteins with novel folds and engineered proteins. As an additional option for cases where sequence homologues are available, users can include evolutionary information from INTREPID for enhanced accuracy in site prediction. The web site is free and open to all users with no login requirements at http://www.pool.neu.edu. m.ondrechen@neu.edu Supplementary data are available at Bioinformatics online.

  20. Distributing flight dynamics products via the World Wide Web

    NASA Technical Reports Server (NTRS)

    Woodard, Mark; Matusow, David

    1996-01-01

    The NASA Flight Dynamics Products Center (FDPC), which makes available selected operations products via the World Wide Web, is reported on. The FDPC can be accessed from any host machine connected to the Internet. It is a multi-mission service which provides Internet users with unrestricted access to the following standard products: antenna contact predictions; ground tracks; orbit ephemerides; mean and osculating orbital elements; Earth sensor sun and moon interference predictions; space flight tracking data network summaries; and Shuttle transport system predictions. Several scientific databases are available through the service.

  1. Analysis of Factors Limiting Bacterial Growth in PDMS Mother Machine Devices.

    PubMed

    Yang, Da; Jennings, Anna D; Borrego, Evalynn; Retterer, Scott T; Männik, Jaan

    2018-01-01

    The microfluidic mother machine platform has attracted much interest for its potential in studies of bacterial physiology, cellular organization, and cell mechanics. Despite numerous experiments and development of dedicated analysis software, differences in bacterial growth and morphology in narrow mother machine channels compared to typical liquid media conditions have not been systematically characterized. Here we determine changes in E. coli growth rates and cell dimensions in different sized dead-end microfluidic channels using high resolution optical microscopy. We find that E. coli adapt to the confined channel environment by becoming narrower and longer compared to the same strain grown in liquid culture. Cell dimensions decrease as the channel length increases and width decreases. These changes are accompanied by increases in doubling times in agreement with the universal growth law. In channels 100 μm and longer, cell doublings can completely stop as a result of frictional forces that oppose cell elongation. Before complete cessation of elongation, mechanical stresses lead to substantial deformation of cells and changes in their morphology. Our work shows that mechanical forces rather than nutrient limitation are the main growth limiting factor for bacterial growth in long and narrow channels.

  2. Analysis of Factors Limiting Bacterial Growth in PDMS Mother Machine Devices

    DOE PAGES

    Yang, Da; Jennings, Anna D.; Borrego, Evalynn; ...

    2018-05-01

    The microfluidic mother machine platform has attracted much interest for its potential in studies of bacterial physiology, cellular organization, and cell mechanics. Despite numerous experiments and development of dedicated analysis software, differences in bacterial growth and morphology in narrow mother machine channels compared to typical liquid media conditions have not been systematically characterized. Here we determine changes in E. coli growth rates and cell dimensions in different sized dead-end microfluidic channels using high resolution optical microscopy. We find that E. coli adapt to the confined channel environment by becoming narrower and longer compared to the same strain grown in liquid culture. Cell dimensions decrease as the channel length increases and width decreases. These changes are accompanied by increases in doubling times in agreement with the universal growth law. In channels 100 μm and longer, cell doublings can completely stop as a result of frictional forces that oppose cell elongation. Before complete cessation of elongation, mechanical stresses lead to substantial deformation of cells and changes in their morphology. Lastly, our work shows that mechanical forces rather than nutrient limitation are the main growth limiting factor for bacterial growth in long and narrow channels.

  3. Analysis of Factors Limiting Bacterial Growth in PDMS Mother Machine Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Da; Jennings, Anna D.; Borrego, Evalynn

    The microfluidic mother machine platform has attracted much interest for its potential in studies of bacterial physiology, cellular organization, and cell mechanics. Despite numerous experiments and development of dedicated analysis software, differences in bacterial growth and morphology in narrow mother machine channels compared to typical liquid media conditions have not been systematically characterized. Here we determine changes in E. coli growth rates and cell dimensions in different sized dead-end microfluidic channels using high resolution optical microscopy. We find that E. coli adapt to the confined channel environment by becoming narrower and longer compared to the same strain grown in liquid culture. Cell dimensions decrease as the channel length increases and width decreases. These changes are accompanied by increases in doubling times in agreement with the universal growth law. In channels 100 μm and longer, cell doublings can completely stop as a result of frictional forces that oppose cell elongation. Before complete cessation of elongation, mechanical stresses lead to substantial deformation of cells and changes in their morphology. Lastly, our work shows that mechanical forces rather than nutrient limitation are the main growth limiting factor for bacterial growth in long and narrow channels.

  4. Web-based counseling for problem gambling: exploring motivations and recommendations.

    PubMed

    Rodda, Simone; Lubman, Dan I; Dowling, Nicki A; Bough, Anna; Jackson, Alun C

    2013-05-24

    For highly stigmatized disorders, such as problem gambling, Web-based counseling has the potential to address common barriers to treatment, including issues of shame and stigma. Despite the exponential growth in the uptake of immediate synchronous Web-based counseling (ie, provided without appointment), little is known about why people choose this service over other modes of treatment. The aim of the current study was to determine motivations for choosing and recommending Web-based counseling over telephone or face-to-face services. The study involved 233 Australian participants who had completed an online counseling session for problem gambling on the Gambling Help Online website between November 2010 and February 2012. Participants were all classified as problem gamblers, with a greater proportion of males (57.4%) and 60.4% younger than 40 years of age. Participants completed open-ended questions about their reasons for choosing online counseling over other modes (ie, face-to-face and telephone), as well as reasons for recommending the service to others. A content analysis revealed 4 themes related to confidentiality/anonymity (reported by 27.0%), convenience/accessibility (50.9%), service system access (34.2%), and a preference for the therapeutic medium (26.6%). Few participants reported helpful professional support as a reason for accessing counseling online, but 43.2% of participants stated that this was a reason for recommending the service. Those older than 40 years were more likely than younger people in the sample to use Web-based counseling as an entry point into the service system (P=.045), whereas those engaged in nonstrategic gambling (eg, machine gambling) were more likely to access online counseling as an entry into the service system than those engaged in strategic gambling (ie, cards, sports; P=.01). Participants older than 40 years were more likely to recommend the service because of its potential for confidentiality and anonymity (P=.04), whereas those younger than 40 years were more likely to recommend the service due to it being helpful (P=.02). This study provides important information about why online counseling for gambling is attractive to people with problem gambling, thereby informing the development of targeted online programs, campaigns, and promotional material.

  5. The Machines Are Coming: Future Directions in Instructional Communication Research. Forum: The Future of Instructional Communication

    ERIC Educational Resources Information Center

    Edwards, Autumn; Edwards, Chad

    2017-01-01

    Educational encounters of the future (and increasingly, of the present) will involve a complex collaboration of human and machine intelligences and agents, partnering to enhance learning and growth. Increasingly, "students and instructors are not only talking 'through' machines, but also [talking] 'to them', and 'within them'" (Edwards…

  6. A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees

    NASA Astrophysics Data System (ADS)

    Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.

    2018-04-01

    The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership to void/wall or filament/cluster with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature and then applying support vector machine (SVM) to different feature sets. To this end, we have discovered that the shape and depth of the merger tree, formation time, and density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement in the original classification algorithm by performing LU decomposition of the distance matrix computed by the feature vectors and then using the output of the decomposition as input vectors for SVM.
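
    The following sketch captures the classification step as outlined in the abstract: pairwise distances between feature vectors are LU-decomposed, and the decomposition output is fed to an SVM. The synthetic features and labels stand in for the halo properties and the void/wall versus filament/cluster classes; note that this formulation is transductive, since every point must appear in the distance matrix.

```python
# Minimal sketch: LU decomposition of the pairwise-distance matrix as the
# SVM input representation. Features and labels are synthetic placeholders.
import numpy as np
from scipy.linalg import lu
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))      # halo features (e.g., tree depth, formation time)
y = rng.integers(0, 2, size=300)   # 0 = void/wall, 1 = filament/cluster

D = cdist(X, X)                    # pairwise-distance matrix of the feature vectors
_, L, _ = lu(D)                    # LU decomposition; use rows of L as input vectors

clf = SVC(kernel="rbf").fit(L, y)
print("training accuracy:", clf.score(L, y))
```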

  7. A cloud platform for remote diagnosis of breast cancer in mammography by fusion of machine and human intelligence

    NASA Astrophysics Data System (ADS)

    Jiang, Guodong; Fan, Ming; Li, Lihua

    2016-03-01

    Mammography is the gold standard for breast cancer screening, reducing mortality by about 30%. The application of a computer-aided detection (CAD) system to assist a single radiologist is important for further improving mammographic sensitivity for breast cancer detection. In this study, a design and prototype realization of a remote diagnosis system for mammography based on a cloud platform is proposed. The system builds on several technologies: medical image information management, cloud infrastructure, and a human-machine diagnosis model. Specifically, on one hand, the web platform for remote diagnosis was established with J2EE web technology, and the back end was realized through the Hadoop open-source framework. On the other hand, the storage system was built with the Hadoop distributed file system (HDFS), which enables users to easily develop and run applications on massive data and exploits the advantages of cloud computing: high efficiency, scalability and low cost. In addition, the CAD system was realized through the MapReduce framework. The diagnosis module of this system implements algorithms for the fusion of machine and human intelligence; specifically, we combined diagnoses from doctors' experience and traditional CAD using a man-machine intelligent fusion model based on Alpha-Integration and a multi-agent algorithm. Finally, applications of this system at different levels of the platform are also discussed. This diagnosis system should be of great importance for balancing health resources, lowering medical expenses and improving the accuracy of diagnosis in basic medical institutes.
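
    For the fusion step, a minimal sketch of alpha-integration of two probability estimates (one from the radiologist, one from CAD) is given below. The weights and alpha value are illustrative assumptions; the paper's actual fusion model, including its multi-agent component, is not reproduced.

```python
# Minimal sketch of alpha-integration (Amari's alpha-mean) for fusing
# probability estimates. Weights and alpha are illustrative choices.
import numpy as np

def alpha_integrate(probs, weights, alpha=0.0):
    probs = np.asarray(probs, float)
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    if alpha == 1.0:
        # Limit case alpha = 1: weighted geometric mean.
        return float(np.exp(np.sum(weights * np.log(probs))))
    # alpha-representation f(p) = p**((1 - alpha)/2), averaged, then inverted.
    f = probs ** ((1.0 - alpha) / 2.0)
    return float(np.sum(weights * f) ** (2.0 / (1.0 - alpha)))

# e.g., fuse a radiologist's 0.7 malignancy estimate with a CAD score of 0.4:
print(alpha_integrate([0.7, 0.4], weights=[0.6, 0.4], alpha=0.0))
```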

  8. New design environment for defect detection in web inspection systems

    NASA Astrophysics Data System (ADS)

    Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.

    1997-09-01

    One of the aims of industrial machine vision is to develop computer and electronic systems intended to replace human vision in the process of quality control of industrial production. In this paper we discuss a new design environment developed for real-time defect detection using a reconfigurable FPGA and DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The system is targeted at web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.

  9. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1982-01-01

    The computer code for calculating web temperature distribution was expanded to provide a graphics output in addition to numerical and punch card output. The new code was used to examine various modifications of the J419 configuration and, on the basis of the results, a new growth geometry was designed. Additionally, several mathematically defined temperature profiles were evaluated for the effects of the free boundary (growth front) on the thermal stress generation. Experimental growth runs were made with modified J419 configurations to complement the modeling work. A modified J435 configuration was evaluated.

  10. Paper and Other Web Coating National Emission Standards for Hazardous Air Pollutants (NESHAP): Applicability Determination Memo

    EPA Pesticide Factsheets

    This November 2003 memo indicates that size presses or size press alternatives (SP/SPA), and on-machine coaters that apply sizing or water-based clays as a component of the papermaking system, are not subject to the requirements of Subpart JJJJ.

  11. Scale-Independent Relational Query Processing

    ERIC Educational Resources Information Center

    Armbrust, Michael Paul

    2013-01-01

    An increasingly common pattern is for newly-released web applications to succumb to a "Success Disaster". In this scenario, overloaded database machines and resultant high response times destroy a previously good user experience, just as a site is becoming popular. Unfortunately, the data independence provided by a traditional relational…

  12. Classification of Encrypted Web Traffic Using Machine Learning Algorithms

    DTIC Science & Technology

    2013-06-01

    DPI devices to block certain websites; Yu, Cong, Chen, and Lei [52] suggest hashing the domains of pornographic and illegal websites so ISPs can… Zhenming Lei. "Blocking pornographic, illegal websites by internet host domain using FPGA and Bloom Filter". Network Infrastructure and Digital Content

  13. Auto-Relevancy Baseline: A Hybrid System Without Human Feedback

    DTIC Science & Technology

    2010-11-01

    classical Bayes algorithm upon the pseudo-hybridization of SemanticA and Latent Semantic IndexingBC systems should smooth out historically high yet… black box emulated a machine learning topic expert. Similar to some Web methods, the initial topics within the legal document were expanded upon

  14. No Computer Left Behind

    ERIC Educational Resources Information Center

    Cohen, Daniel J.; Rosenzweig, Roy

    2006-01-01

    The combination of the Web and the cell phone forecasts the end of the inexpensive technologies of multiple-choice tests and grading machines. These technological developments are likely to bring the multiple-choice test to the verge of obsolescence, mounting a substantial challenge to the presentation of history and other disciplines.

  15. Webmail: an Automated Web Publishing System

    NASA Astrophysics Data System (ADS)

    Bell, David

    A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.
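
    The sketch below gives the flavor of one such recipe in Python (the production system used procmail and Perl). It assumes a plain-text, non-multipart message and a hypothetical output directory; in the real system, index updates, newsgroup posting, and email notification would be chained onto the same pipeline.

```python
# Minimal sketch of a Webmail-style recipe: convert an incoming email
# report to an HTML page and publish it to a web directory.
import email

def wrap_as_html(body, title):
    # Wrap a plain-text report in a minimal HTML page.
    return f"<html><head><title>{title}</title></head><body><pre>{body}</pre></body></html>"

def handle_message(raw_bytes, outdir="."):
    # Assumes a plain-text, non-multipart message; outdir is hypothetical.
    msg = email.message_from_bytes(raw_bytes)
    subject = msg.get("Subject", "untitled")
    body = msg.get_payload(decode=True).decode(errors="replace")
    path = f"{outdir}/{subject.replace(' ', '_')}.html"
    with open(path, "w") as f:
        f.write(wrap_as_html(body, subject))
    return path  # a real recipe would also update indices and notify readers

print(handle_message(b"Subject: Nightly Report\n\nAll instruments nominal.\n"))
```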

  16. In-plane ultrasonic velocity measurement of longitudinal and shear waves in the machine direction with transducers in rotating wheels

    DOEpatents

    Hall, M.S.; Jackson, T.G.; Knerr, C.

    1998-02-17

    An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like. In addition to velocity measurements of ultrasonic signals in the plane of the web in the MD and CD, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web is measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide relatively constant contact force between the transducers and the webs, the transducers are mounted in sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web. 37 figs.

  17. In-plane ultrasonic velocity measurement of longitudinal and shear waves in the machine direction with transducers in rotating wheels

    DOEpatents

    Hall, Maclin S.; Jackson, Theodore G.; Knerr, Christopher

    1998-02-17

    An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like. In addition to velocity measurements of ultrasonic signals in the plane of the web in the MD and CD, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web is measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide relatively constant contact force between the transducers and the webs, the transducers are mounted in sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web.

  18. VisualUrText: A Text Analytics Tool for Unstructured Textual Data

    NASA Astrophysics Data System (ADS)

    Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.

    2018-05-01

    The growing amount of unstructured text on the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future data growth will be in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, which are non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and visualizing the cleaned text in multiple forms such as a Document Term Matrix (DTM), frequency graph, network analysis graph, word cloud and dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends in document analyses.
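
    As a minimal illustration of two of the tool's outputs, the sketch below builds a Document-Term Matrix and the term frequencies behind a frequency graph. The two-document corpus is a placeholder; VisualUrText's own pipeline and visualizations are not reproduced here.

```python
# Minimal sketch of a Document-Term Matrix (DTM) and term frequencies.
# The corpus is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "unstructured text grows quickly on the web",
    "text mining finds patterns in unstructured text",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)  # rows = documents, columns = terms

# Aggregate counts per term, as would feed a frequency graph or word cloud.
freqs = zip(vec.get_feature_names_out(), dtm.sum(axis=0).A1)
for term, count in sorted(freqs, key=lambda t: -t[1]):
    print(term, count)
```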

  19. Drying of fiber webs

    DOEpatents

    Warren, D.W.

    1997-04-15

    A process and an apparatus are disclosed for high-intensity drying of fiber webs or sheets, such as newsprint, printing and writing papers, packaging paper, and paperboard or linerboard, as they are formed on a paper machine. The invention uses direct contact between the wet fiber web or sheet and various molten heat transfer fluids, such as liquefied eutectic metal alloys, to impart heat at high rates over prolonged durations, in order to achieve ambient boiling of moisture contained within the web. The molten fluid contact process causes steam vapor to emanate from the web surface, without dilution by ambient air; and it is differentiated from the evaporative drying techniques of the prior industrial art, which depend on the uses of steam-heated cylinders to supply heat to the paper web surface, and ambient air to carry away moisture, which is evaporated from the web surface. Contact between the wet fiber web and the molten fluid can be accomplished either by submersing the web within a molten bath or by coating the surface of the web with the molten media. Because of the high interfacial surface tension between the molten media and the cellulose fiber comprising the paper web, the molten media does not appreciably stick to the paper after it is dried. Steam generated from the paper web is collected and condensed without dilution by ambient air to allow heat recovery at significantly higher temperature levels than attainable in evaporative dryers. 6 figs.

  20. Impact of corpus domain for sentiment classification: An evaluation study using supervised machine learning techniques

    NASA Astrophysics Data System (ADS)

    Karsi, Redouane; Zaim, Mounia; El Alami, Jamila

    2017-07-01

    Thanks to the development of the internet, a large community now has the possibility to communicate and express its opinions and preferences through multiple media such as blogs, forums, social networks and e-commerce sites. It is increasingly clear that opinions published on the web are a very valuable source for decision-making, so a rapidly growing field of research called "sentiment analysis" has emerged to address the problem of automatically determining the polarity (positive, negative, neutral, …) of textual opinions. People expressing themselves in a particular domain often use specific domain language expressions; thus, building a classifier which performs well across different domains is a challenging problem. The purpose of this paper is to evaluate the impact of domain on sentiment classification when using machine learning techniques. In our study, three popular machine learning techniques, Support Vector Machines (SVM), Naive Bayes and K nearest neighbors (KNN), were applied to datasets collected from different domains. Experimental results show that Support Vector Machines outperform the other classifiers in all domains, achieving at least 74.75% accuracy with a standard deviation of 4.08.
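
    The sketch below shows the shape of such a cross-domain comparison: the same three classifier families trained and scored per domain. The tiny example corpora are placeholders for the study's domain datasets, and the scores here are training accuracies on toy data, not the paper's results.

```python
# Minimal sketch of a per-domain comparison of SVM, Naive Bayes, and KNN.
# Corpora are toy placeholders for the domain datasets used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

domains = {
    "books": (["great plot and pacing", "boring and slow",
               "loved every chapter", "a waste of money"], [1, 0, 1, 0]),
    "hotels": (["clean rooms and friendly staff", "rude staff, dirty bathroom",
                "a lovely stay", "never again"], [1, 0, 1, 0]),
}
classifiers = {"SVM": LinearSVC(), "NB": MultinomialNB(),
               "KNN": KNeighborsClassifier(n_neighbors=1)}

for name, clf in classifiers.items():
    for domain, (texts, labels) in domains.items():
        model = make_pipeline(TfidfVectorizer(), clf).fit(texts, labels)
        print(f"{name} on {domain}: training accuracy {model.score(texts, labels):.2f}")
```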

  1. The status of silicon ribbon growth technology for high-efficiency silicon solar cells

    NASA Technical Reports Server (NTRS)

    Ciszek, T. F.

    1985-01-01

    More than a dozen methods have been applied to the growth of silicon ribbons, beginning as early as 1963. The ribbon geometry has been particularly intriguing for photovoltaic applications, because it might provide large area, damage free, nearly continuous substrates without the material loss or cost of ingot wafering. In general, the efficiency of silicon ribbon solar cells has been lower than that of ingot cells. The status of some ribbon growth techniques that have achieved laboratory efficiencies greater than 13.5% are reviewed, i.e., edge-defined, film-fed growth (EFG), edge-supported pulling (ESP), ribbon against a drop (RAD), and dendritic web growth (web).

  2. Management of business economic growth as function of resource rents

    NASA Astrophysics Data System (ADS)

    Prljić, Stefan; Nikitović, Zorana; Stojanović, Aleksandra Golubović; Cogoljević, Dušan; Pešić, Gordana; Alizamir, Meysam

    2018-02-01

    Economic profit can be influenced by economic rents, and natural resource rents of different kinds have differing impacts on economic growth and profit. The main focus of this study was to evaluate economic growth as a function of natural resource rents. For this purpose a machine learning approach, an artificial neural network, was used. The natural resource rents considered were coal rents, forest rents, mineral rents, natural gas rents and oil rents. Based on the results, it is concluded that the machine learning approach can be used as a tool for evaluating economic growth as a function of natural resource rents. More advanced approaches should be incorporated to further improve forecasting accuracy.
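
    A minimal sketch of this kind of model, under the assumption of synthetic data: a small feed-forward network regresses a growth indicator on the five rent variables named above. The study's actual data, architecture and hyperparameters are not given in the abstract, so everything below is illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Columns: coal, forest, mineral, natural gas, oil rents (synthetic).
        X = rng.uniform(0, 10, size=(200, 5))
        # Synthetic "growth" target: mildly nonlinear in the rents, plus noise.
        y = 2.0 + 0.3 * X[:, 4] - 0.01 * X[:, 0] ** 2 + rng.normal(0, 0.2, 200)

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
        )
        model.fit(X[:150], y[:150])
        print("held-out R^2:", model.score(X[150:], y[150:]))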

  3. Supporting Empathy in Online Learning with Artificial Expressions

    ERIC Educational Resources Information Center

    Lyons, Michael J.; Kluender, Daniel; Tetsutani, Nobuji

    2005-01-01

    Motivated by a consideration of the machine-mediated nature of human interaction in web-based tutoring, we propose the construction of artificial expressions, displays which reflect users' felt bodily experience, to support the development of greater empathy in remote interaction. To demonstrate the concept of artificial expressions we have…

  4. Group Cohesion in Experiential Growth Groups

    ERIC Educational Resources Information Center

    Steen, Sam; Vasserman-Stokes, Elaina; Vannatta, Rachel

    2014-01-01

    This article explores the effect of web-based journaling on changes in group cohesion within experiential growth groups. Master's students were divided into 2 groups. Both used a web-based platform to journal after each session; however, only 1 of the groups was able to read each other's journals. Quantitative data collected before and…

  5. Dealing with extreme data diversity: extraction and fusion from the growing types of document formats

    NASA Astrophysics Data System (ADS)

    David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro

    2015-05-01

    The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery using a feature set based on the geometry and appearance of images of documents achieves a 60% greater F1-score than a baseline random classifier.
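
    The general approach can be sketched as follows, under stated assumptions: a supervised classifier labels page regions as title, heading or body from simple geometric and appearance features. The feature set, synthetic data and classifier choice below are hypothetical, not the paper's.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)

        # Per-region features: [y_position, height, width, font_size, is_bold].
        def synth(label, n):
            if label == 0:    # titles: near the top, large bold text
                cols = [rng.uniform(0.0, 0.1, n), rng.uniform(0.03, 0.06, n),
                        rng.uniform(0.4, 0.9, n), rng.uniform(18, 28, n), np.ones(n)]
            elif label == 1:  # headings: anywhere, medium bold text
                cols = [rng.uniform(0.1, 0.9, n), rng.uniform(0.02, 0.04, n),
                        rng.uniform(0.2, 0.6, n), rng.uniform(12, 16, n), np.ones(n)]
            else:             # body: tall, wide blocks of small text
                cols = [rng.uniform(0.1, 0.95, n), rng.uniform(0.1, 0.4, n),
                        rng.uniform(0.7, 0.95, n), rng.uniform(9, 11, n), np.zeros(n)]
            return np.column_stack(cols), np.full(n, label)

        parts = [synth(label, 200) for label in (0, 1, 2)]
        X = np.vstack([p[0] for p in parts])
        y = np.concatenate([p[1] for p in parts])
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
        print("macro F1:", f1_score(yte, clf.predict(Xte), average="macro"))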

  6. Lid for improved dendritic web growth

    DOEpatents

    Duncan, Charles S.; Kochka, Edgar L.; Piotrowski, Paul A.; Seidensticker, Raymond G.

    1992-03-24

    A lid for a susceptor in which a crystalline material is melted by induction heating to form a pool or melt of molten material, from which a dendritic web of essentially a single crystal of the material is pulled through an elongated slot in the lid. The lid has a pair of generally round openings adjacent the ends of the slot, and a groove extends between each opening and the end of the slot. The grooves extend from the outboard surface of the lid to adjacent the inboard surface, providing a strip contiguous with the inboard surface of the lid that produces generally uniform radiational heat loss across the width of the dendritic web adjacent the inboard surface, reducing thermal stresses in the web and facilitating the growth of wider webs at a greater withdrawal rate.

  7. Effect of Moisture Content of Paper Material on Laser Cutting

    NASA Astrophysics Data System (ADS)

    Stepanov, Alexander; Saukkonen, Esa; Piili, Heidi; Salminen, Antti

    Laser technology has been used in industrial processes for several decades. The most advanced development and implementation took place in laser welding and cutting of metals in the automotive and shipbuilding industries. However, there is high potential to apply laser processing to other materials in various industrial fields. One of these potential fields is the paper industry, where there is demand for a high-quality, fast and reliable cutting technology. Difficulties in the industrial application of laser cutting in the paper industry are associated with a lack of basic information and limited awareness of the technology and its application possibilities. Nowadays the possibilities of using laser cutting for paper materials have widened, and the high automation level of the equipment has made this technology more interesting for manufacturing processes. A promising area of application for laser cutting on paper making machines is longitudinal cutting of the paper web (edge trimming). There are a few locations on a paper making machine where edge trimming is usually done: the wet press section, the calender or the rewinder. The paper web has a different moisture content at each of these points. The objective of this study was to investigate the effect of the moisture content of paper material on laser cutting parameters. The effect of moisture content on cellulose fibers, laser absorption and the energy needed for cutting is described as well. Laser cutting tests were carried out using a CO2 laser.

  8. The Frictionless Data Package: Data Containerization for Automated Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Shepherd, A.; Fils, D.; Kinkade, D.; Saito, M. A.

    2017-12-01

    As cross-disciplinary geoscience research increasingly relies on machines to discover and access data, one of the critical questions facing data repositories is how data and supporting materials should be packaged for consumption. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. In attempts to shorten the time to science and to access data resources across many disciplines, the expectation that machines will mediate the process of discovery and access is challenging data repository infrastructure. The challenge is to deliver data and information in forms that enable machines to make better decisions, by enabling them to understand the data and metadata of many data types. Additionally, once machines have recommended a data resource as relevant to an investigator's needs, the data resource should be easy to integrate into that investigator's toolkit for analysis and visualization. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) supports NSF-funded OCE and PLR investigators with their projects' data management needs. These needs involve a number of varying data types, some of which require multiple files with differing formats. Presently, BCO-DMO has described these data types, and the important relationships between each type's data files, through human-readable documentation on web pages. Machines directly accessing data files from BCO-DMO could overlook this documentation and misinterpret the data. Instead, BCO-DMO is exploring the idea of data containerization: packaging data and related information for easier transport, interpretation, and use. In our survey of the data containerization landscape, the Frictionlessdata Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and how the Frictionlessdata Data Package addresses a number of real-world use cases for data discovery, access, analysis and visualization.
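
    For concreteness, the sketch below writes a minimal Data Package descriptor (datapackage.json) for a hypothetical multi-file oceanographic dataset using only the Python standard library; the file names and fields are invented, and the full specification lives at http://frictionlessdata.io/.

        import json

        descriptor = {
            "name": "example-cruise-ctd",
            "title": "Example CTD profiles (illustrative only)",
            "resources": [
                {
                    "name": "ctd-profiles",
                    "path": "data/ctd_profiles.csv",
                    "format": "csv",
                    "schema": {
                        "fields": [
                            {"name": "station", "type": "string"},
                            {"name": "depth_m", "type": "number"},
                            {"name": "temperature_c", "type": "number"},
                            {"name": "salinity_psu", "type": "number"},
                        ]
                    },
                }
            ],
        }

        # A machine consuming the package reads this descriptor first and
        # learns the layout and types of every file before touching the data.
        with open("datapackage.json", "w") as fh:
            json.dump(descriptor, fh, indent=2)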

  9. Technical progress in silicon sheet growth under DOE/JPL FSA program, 1975-1986

    NASA Technical Reports Server (NTRS)

    Kalejs, J. P.

    1986-01-01

    The technical progress made in the Silicon Sheet Growth Program during its 11 years was reviewed. At present, in 1986, only two of the original nine techniques have survived to the start-up, pilot-plant stage in industry. These two techniques are the edge-defined, film-fed growth (EFG) technique, which produces closed-shape polygons, and the WEB dendritic technique, which produces single ribbons. Both the status and future concerns of the EFG and WEB techniques were discussed.

  10. A web ontology for brain trauma patient computer-assisted rehabilitation.

    PubMed

    Zikos, Dimitrios; Galatas, George; Metsis, Vangelis; Makedon, Fillia

    2013-01-01

    In this paper we describe CABROnto, a web ontology for the semantic representation of computer-assisted brain trauma rehabilitation. This is a novel and emerging domain, since it employs robotic devices, adaptation software and machine learning to facilitate interactive and adaptive rehabilitation care. We used Protégé 4.2 and the Protégé-OWL schema editor. The primary goal of this ontology is to enable the reuse of the domain knowledge. CABROnto has nine main classes, more than 50 subclasses, and existential and cardinality restrictions. The ontology can be found online at BioPortal.

  11. A web-based procedure for liver segmentation in CT images

    NASA Astrophysics Data System (ADS)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images has been acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and expediently. Several clinical datasets are used to evaluate the accessibility and the accuracy. The procedure appears to be a promising approach for extracting liver volumes of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.

  12. Comparison of two different types of heat and moisture exchangers in ventilated patients.

    PubMed

    Ahmed, Syed Moied; Mahajan, Jyotsna; Nadeem, Abu

    2009-09-01

    To compare the efficacy of two different types of Heat and Moisture Exchangers (HME filters) in reducing transmission of infection from the patient to the ventilator and vice versa, and also their cost effectiveness. Randomized, controlled, double blind, prospective study. 60 patients admitted to the ICU from May 1, 2007 to July 31, 2007, of either sex, age ranging between 20 and 60 years and requiring mechanical ventilation, were screened for the study. Following intubation of the patients, the HME device was attached to the breathing circuit randomly by the chit-in-a-box method. The patients were divided into two groups according to the HME filters attached. Both groups were comparable with respect to age and sex ratio. In Type A HME filters, 80% showed growth on the patient end within 24 h, and in 27% of filters the culture was positive on both the patient and machine ends. The organisms detected were Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa, and they correlated with the endotracheal aspirate culture. After 48 h, 87% of filters developed organisms on the patient end, whereas 64% were culture positive on both the patient and machine ends. In Type B HME filters, 70% showed growth on the patient end after 24 h. The organisms detected were S. aureus, E. coli, P. aeruginosa and Acinetobacter. Thirty percent of filters were culture negative on both the patient and machine ends. No growth was found on the machine end of any filter after 24 h. After 48 h, 73% of the filters had microbial growth on the patient end, whereas only 3% had growth (S. aureus) on the machine end only. Seven percent had growth on both the patient and machine ends. The microorganisms detected on the HME filters correlated with the endotracheal aspirate cultures. The Type B HME filter (study group) was significantly better at reducing contamination of the ventilator from the patient than Type A (control group), which was routinely used in our ICU. The Type B filter was found to be effective for at least 48 h. This study can also be applied to patients coming to the emergency department (ED) and requiring emergency surgery and postoperative ventilation, and to trauma patients (flail chest, head injury, etc.) requiring ventilatory support, to prevent them from acquiring ventilator-associated pneumonia (VAP).

  13. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  14. A collaborative framework for Distributed Privacy-Preserving Support Vector Machine learning.

    PubMed

    Que, Jialan; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2012-01-01

    A Support Vector Machine (SVM) is a popular tool for decision support. The traditional way to build an SVM model is to estimate parameters based on a centralized repository of data. However, in the field of biomedicine, patient data are sometimes stored in local repositories or institutions where they were collected, and may not be easily shared due to privacy concerns. This creates a substantial barrier for researchers to effectively learn from the distributed data using machine learning tools like SVMs. To overcome this difficulty and promote efficient information exchange without sharing sensitive raw data, we developed a Distributed Privacy Preserving Support Vector Machine (DPP-SVM). The DPP-SVM enables privacy-preserving collaborative learning, in which a trusted server integrates "privacy-insensitive" intermediary results. The globally learned model is guaranteed to be exactly the same as learned from combined data. We also provide a free web-service (http://privacy.ucsd.edu:8080/ppsvm/) for multiple participants to collaborate and complete the SVM-learning task in an efficient and privacy-preserving manner.
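
    As a rough sketch of the general pattern, and explicitly not the published DPP-SVM protocol (whose intermediary results and exactness guarantees differ), the code below trains a shared linear SVM by having each site send only aggregate hinge-loss subgradients computed on its local data to a central server.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_site(n):
            """Synthetic stand-in for one institution's private data."""
            X = rng.normal(size=(n, 2))
            y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
            return X, y

        sites = [make_site(100) for _ in range(3)]
        w, lam, lr = np.zeros(2), 0.01, 0.1

        def local_subgradient(X, y, w):
            # Each site shares only this aggregate, never raw records.
            margins = y * (X @ w)
            viol = margins < 1
            return -(y[viol, None] * X[viol]).sum(axis=0), len(y)

        for _ in range(200):  # the central server's update loop
            grads, counts = zip(*(local_subgradient(X, y, w) for X, y in sites))
            w -= lr * (lam * w + sum(grads) / sum(counts))

        for i, (X, y) in enumerate(sites):
            print(f"site {i} accuracy:", (np.sign(X @ w) == y).mean())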

  15. Expert system for web based collaborative CAE

    NASA Astrophysics Data System (ADS)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system was illustrated. In this system, the experts' experiences, theories, typical examples and other related knowledge, which are used in the pre-processing stage of FEA, were categorized into analysis-process knowledge and object knowledge. The integrated knowledge model, based on object-oriented and rule-based methods, was then described, and the integrated reasoning process, based on CBR (case-based reasoning) and rule-based reasoning, was presented. Finally, the analysis process of this expert system in a web-based CAE application was illustrated, and the analysis of a machine tool's column was given as an example to demonstrate the validity of the system.

  16. SEM (Symmetry Equivalent Molecules): a web-based GUI to generate and visualize the macromolecules

    PubMed Central

    Hussain, A. S. Z.; Kumar, Ch. Kiran; Rajesh, C. K.; Sheik, S. S.; Sekar, K.

    2003-01-01

    SEM, Symmetry Equivalent Molecules, is a web-based graphical user interface to generate and visualize the symmetry equivalent molecules (proteins and nucleic acids). In addition, the program allows the users to save the three-dimensional atomic coordinates of the symmetry equivalent molecules in the local machine. The widely recognized graphics program RasMol has been deployed to visualize the reference (input atomic coordinates) and the symmetry equivalent molecules. This program is written using CGI/Perl scripts and has been interfaced with all the three-dimensional structures (solved using X-ray crystallography) available in the Protein Data Bank. The program, SEM, can be accessed over the World Wide Web interface at http://dicsoft2.physics.iisc.ernet.in/sem/ or http://144.16.71.11/sem/. PMID:12824326
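
    The core operation behind generating symmetry mates is applying each crystallographic symmetry operator, a rotation matrix plus a translation in fractional coordinates, to the deposited atomic coordinates. A minimal sketch with one example operator follows; the coordinates are hypothetical, and this is not the SEM server's code.

        import numpy as np

        # Operator for the 2-fold screw axis in P2(1): (-x, y + 1/2, -z).
        R = np.array([[-1.0, 0.0, 0.0],
                      [ 0.0, 1.0, 0.0],
                      [ 0.0, 0.0, -1.0]])
        t = np.array([0.0, 0.5, 0.0])

        # Hypothetical fractional coordinates of a few atoms.
        atoms = np.array([[0.12, 0.30, 0.45],
                          [0.15, 0.33, 0.41]])

        mates = atoms @ R.T + t   # apply the operator to every atom at once
        mates %= 1.0              # wrap back into the unit cell
        print(mates)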

  17. Spider-web inspired multi-resolution graphene tactile sensor.

    PubMed

    Liu, Lu; Huang, Yu; Li, Fengyu; Ma, Ying; Li, Wenbo; Su, Meng; Qian, Xin; Ren, Wanjie; Tang, Kanglai; Song, Yanlin

    2018-05-08

    Multi-dimensional accurate response and smooth signal transmission are critical challenges in the advancement of multi-resolution recognition and complex environment analysis. Inspired by the structure-activity relationship between the discrepant microstructures of the spiral and radial threads in a spider web, we designed and printed graphene with porous and densely-packed microstructures and integrated it into a multi-resolution graphene tactile sensor. The three-dimensional (3D) porous graphene structure delivers multi-dimensional deformation responses. The laminar densely-packed graphene structure contributes excellent conductivity with flexible stability. The spider-web inspired printed pattern supports orientational and locational motion tracking. This multi-structure construction from a single graphene material can integrate discrepant electronic properties with remarkable flexibility, which will attract enormous attention for electronic skin, wearable devices and human-machine interactions.

  18. GREAT: a web portal for Genome Regulatory Architecture Tools.

    PubMed

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-07-08

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout -defined as the respective positioning of co-functional genes- and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
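
    One simple way to detect regular spacing of co-functional genes, in the spirit of the layout regularities GREAT analyzes, is to inspect the Fourier spectrum of a gene-position indicator vector; the dominant non-zero frequency reveals the period. The sketch below does that on synthetic positions; GREAT's actual detection method is more elaborate.

        import numpy as np

        genome_length = 10_000
        rng = np.random.default_rng(0)
        # Synthetic gene positions: roughly every 500 units, with jitter.
        positions = (np.arange(0, genome_length, 500)
                     + rng.integers(-20, 20, size=genome_length // 500))

        indicator = np.zeros(genome_length)
        indicator[np.clip(positions, 0, genome_length - 1)] = 1.0

        spectrum = np.abs(np.fft.rfft(indicator - indicator.mean()))
        freqs = np.fft.rfftfreq(genome_length)
        peak = freqs[1:][spectrum[1:].argmax()]  # skip the zero-frequency bin
        print("dominant period ~", 1 / peak, "positions")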

  19. Information, knowledge and the future of machines.

    PubMed

    MacFarlane, Alistair G J

    2003-08-15

    This wide-ranging survey considers the future of machines in terms of information, complexity and the growth of knowledge shared amongst agents. Mechanical and human agents are compared and contrasted, and it is argued that, for the foreseeable future, their roles will be complementary. The future development of machines is examined in terms of unions of human and machine agency evolving as part of economic activity. Limits to, and threats posed by, the continuing evolution of such a society of agency are considered.

  20. Making species checklists understandable to machines - a shift from relational databases to ontologies.

    PubMed

    Laurenne, Nina; Tuominen, Jouni; Saarenmaa, Hannu; Hyvönen, Eero

    2014-01-01

    The scientific names of plants and animals play a major role in Life Sciences as information is indexed, integrated, and searched using scientific names. The main problem with names is their ambiguous nature, because more than one name may point to the same taxon and multiple taxa may share the same name. In addition, scientific names change over time, which makes them open to various interpretations. Applying machine-understandable semantics to these names enables efficient processing of biological content in information systems. The first step is to use unique persistent identifiers instead of name strings when referring to taxa. The most commonly used identifiers are Life Science Identifiers (LSID), which are traditionally used in relational databases, and more recently HTTP URIs, which are applied on the Semantic Web by Linked Data applications. We introduce two models for expressing taxonomic information in the form of species checklists. First, we show how species checklists are presented in a relational database system using LSIDs. Then, in order to gain a more detailed representation of taxonomic information, we introduce meta-ontology TaxMeOn to model the same content as Semantic Web ontologies where taxa are identified using HTTP URIs. We also explore how changes in scientific names can be managed over time. The use of HTTP URIs is preferable for presenting the taxonomic information of species checklists. An HTTP URI identifies a taxon and operates as a web address from which additional information about the taxon can be located, unlike LSID. This enables the integration of biological data from different sources on the web using Linked Data principles and prevents the formation of information silos. The Linked Data approach allows a user to assemble information and evaluate the complexity of taxonomical data based on conflicting views of taxonomic classifications. Using HTTP URIs and Semantic Web technologies also facilitate the representation of the semantics of biological data, and in this way, the creation of more "intelligent" biological applications and services.
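
    Assuming the rdflib library, the shift the authors describe, identifying a taxon by an HTTP URI and attaching names to it as machine-readable statements, can be sketched as follows; the URIs and vocabulary terms are hypothetical placeholders rather than TaxMeOn's actual model.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, RDFS

        TAX = Namespace("http://example.org/taxon/")
        g = Graph()
        g.bind("tax", TAX)

        # One HTTP URI identifies the taxon; names are attached to it, so a
        # later name change adds a statement instead of breaking the identity.
        taxon = URIRef("http://example.org/taxon/12345")
        g.add((taxon, RDF.type, TAX.Species))
        g.add((taxon, RDFS.label, Literal("Parus major")))
        g.add((taxon, TAX.synonym, Literal("Parus major Linnaeus, 1758")))

        print(g.serialize(format="turtle"))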

  1. A Web Tool for Generating High Quality Machine-readable Biological Pathways.

    PubMed

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Grant, Jason; Wu, Anthony; Wishart, David S

    2017-02-08

    PathWhiz is a web server built to facilitate the creation of colorful, interactive, visually pleasing pathway diagrams that are rich in biological information. The pathways generated by this online application are machine-readable and fully compatible with essentially all web-browsers and computer operating systems. It uses a specially developed, web-enabled pathway drawing interface that permits the selection and placement of different combinations of pre-drawn biological or biochemical entities to depict reactions, interactions, transport processes and binding events. This palette of entities consists of chemical compounds, proteins, nucleic acids, cellular membranes, subcellular structures, tissues, and organs. All of the visual elements in it can be interactively adjusted and customized. Furthermore, because this tool is a web server, all pathways and pathway elements are publicly accessible. This kind of pathway "crowd sourcing" means that PathWhiz already contains a large and rapidly growing collection of previously drawn pathways and pathway elements. Here we describe a protocol for the quick and easy creation of new pathways and the alteration of existing pathways. To further facilitate pathway editing and creation, the tool contains replication and propagation functions. The replication function allows existing pathways to be used as templates to create or edit new pathways. The propagation function allows one to take an existing pathway and automatically propagate it across different species. Pathways created with this tool can be "re-styled" into different formats (KEGG-like or text-book like), colored with different backgrounds, exported to BioPAX, SBGN-ML, SBML, or PWML data exchange formats, and downloaded as PNG or SVG images. The pathways can easily be incorporated into online databases, integrated into presentations, posters or publications, or used exclusively for online visualization and exploration. This protocol has been successfully applied to generate over 2,000 pathway diagrams, which are now found in many online databases including HMDB, DrugBank, SMPDB, and ECMDB.

  2. Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences

    PubMed Central

    Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi

    2006-01-01

    Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384

  3. 78 FR 1206 - Notice of Availability of Government-Owned Inventions; Available for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    .... Patent No. 8,238,924: Real-Time Optimization of Allocation of Resources//U.S. Patent No. 7,685,207: Adaptive Web-Based Asset Control System. ADDRESSES: Requests for copies of the patents cited should be...: Patent application 12/650,413: Finite State Machine Architecture for Software Development (a system for...

  4. Nondestructive web thickness measurement of micro-drills with an integrated laser inspection system

    NASA Astrophysics Data System (ADS)

    Chuang, Shui-Fa; Chen, Yen-Chung; Chang, Wen-Tung; Lin, Ching-Chih; Tarng, Yeong-Shin

    2010-09-01

    Nowadays, the electronics and semiconductor industries use numerous micro-drills to machine micro-holes in printed circuit boards. The measurement of the web thickness of micro-drills, a key parameter of micro-drill geometry influencing drill rigidity and chip-removal ability, is quite important for quality control. Traditionally, an inefficient, destructive measuring method has been adopted by inspectors. To improve the quality and efficiency of web thickness measurement, a nondestructive measuring method is required. In this paper, based on the laser micro-gauge (LMG) and laser confocal displacement meter (LCDM) techniques, a nondestructive measuring principle for the web thickness of micro-drills is introduced. An integrated laser inspection system, mainly consisting of an LMG, an LCDM and a two-axis-driven micro-drill fixture device, was developed. Experiments to inspect the web thickness of micro-drill samples with a nominal diameter of 0.25 mm were conducted to test the feasibility of the developed laser inspection system. The experimental results showed that the web thickness measurement could achieve an estimated repeatability of ± 1.6 μm and a worst-case repeatability of ± 7.5 μm. The developed laser inspection system, combined with the nondestructive measuring principle, was able to undertake web thickness measurement tasks for certain micro-drills.

  5. Using Amazon Web Services (AWS) to enable real-time, remote sensing of biophysical and anthropogenic conditions in green infrastructure systems in Philadelphia, an ultra-urban application of the Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.

    2017-12-01

    Urban stormwater utilities are increasingly using decentralized "green" stormwater infrastructure (GSI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and design vary by GSI facility, monitoring of GSI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires intense data transmission rates. The Internet of Things (IoT) can be used to collect, store, and publish GSI monitoring data more cost-effectively. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time series data, as well as photos and other observations logged by citizen scientists through a public engagement mobile app, via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams.
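
    A minimal sketch of what a real-time QAQC flag can look like (not the project's actual algorithm): each incoming reading is checked against a plausible range and against a spike threshold relative to the previous trusted value; the thresholds here are hypothetical.

        def qaqc_flag(value, previous, low=-5.0, high=60.0, max_step=10.0):
            """Return 'ok', 'out_of_range', or 'spike' for one sensor reading."""
            if not low <= value <= high:
                return "out_of_range"
            if previous is not None and abs(value - previous) > max_step:
                return "spike"
            return "ok"

        stream = [21.5, 22.0, 35.0, 22.3, -40.0, 22.1]
        previous = None
        for value in stream:
            flag = qaqc_flag(value, previous)
            print(value, flag)
            if flag == "ok":
                previous = value  # only trusted readings anchor the spike test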

  6. Generation of open biomedical datasets through ontology-driven transformation and integration processes.

    PubMed

    Carmen Legaz-García, María Del; Miñarro-Giménez, José Antonio; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2016-06-03

    Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on the mappings between the entities of the data schema and the ontological infrastructure that provides the meaning of the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, thus allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.

  7. Dynamics of Investor Attention on the Social Web

    ERIC Educational Resources Information Center

    Li, Xian

    2013-01-01

    The World Wide Web has been revolutionizing how investors produce and consume information while participating in financial markets. Both the amount of information and the speed it flows around have achieved unprecedented magnitudes. The preeminent change is the growth of investor communities on the social web, which give rise to multidimensional…

  8. When the New Application Smell Is Gone: Traditional Intranet Best Practices and Existing Web 2.0 Intranet Infrastructures

    ERIC Educational Resources Information Center

    Yoose, Becky

    2010-01-01

    With the growth of Web 2.0 library intranets in recent years, many libraries are leaving behind legacy, first-generation intranets. As Web 2.0 intranets multiply and mature, how will traditional intranet best practices--especially in the areas of planning, implementation, and evaluation--translate into an existing Web 2.0 intranet infrastructure?…

  9. An Android based location service using GSMCellID and GPS to obtain a graphical guide to the nearest cash machine

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jurma; Edlich, Stefan

    2009-02-01

    There is a broad range of potentially useful mobile location-based applications, and one crucial point is to make them available to the public at large. This case illuminates the ability of Android, the operating system for mobile devices, to fulfill this demand in mashup fashion, using special geocoding web services and one integrated web service for retrieving data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open-source Android OS is assumed to spread widely. 2. "Everyone" also means that the handset need not be an expensive GPS device. This is realized by re-using the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-accessible CellID databases. Some of these databases are still undocumented and not yet published. Furthermore, the Google Maps API for Mobile (GMM) and the open-source counterpart OpenCellID are used. Localizing the user's current position by looking up the closest cell to which the handset is currently connected (COO) is not as precise as GPS, but appears to be sufficient for many applications. The GPS user is the most pleased one, since for this user the system is fully automated. In contrast, users who do not own a GPS-capable handset should refine their location with one click on the map inside the determined circular region. Users are then shown, and guided along, a path to the nearest cash machine via an overlay on the Google Maps API. Additionally, the GPS user can keep track of him- or herself through a frequently updated view based on constantly requested precise GPS positions.
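
    The final step, choosing the nearest cash machine once a position estimate exists (from GPS or from a CellID lookup), reduces to a nearest-neighbour search over great-circle distances. The sketch below shows only that step, with invented coordinates; it is not the application's code.

        from math import asin, cos, radians, sin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two WGS84 points, in kilometres."""
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = (sin(dlat / 2) ** 2
                 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
            return 2 * 6371.0 * asin(sqrt(a))

        # Hypothetical cash machine locations (name -> latitude, longitude).
        cash_machines = {
            "Station North": (52.531, 13.385),
            "Market Square": (52.520, 13.405),
            "Old Town": (52.516, 13.378),
        }

        user = (52.5200, 13.4050)  # estimated position from GPS or COO
        name, dist = min(
            ((n, haversine_km(*user, *pos)) for n, pos in cash_machines.items()),
            key=lambda item: item[1],
        )
        print(f"Nearest cash machine: {name} ({dist:.2f} km)")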

  10. Employing machine learning for reliable miRNA target identification in plants

    PubMed Central

    2011-01-01

    Background: miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While a lot of tools have been developed for animal miRNA target systems, plant miRNA target identification has witnessed limited development. Most existing tools have been centered around exact complementarity matching, and very few consider other factors like multiple target sites and the role of flanking regions. Results: In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites, to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). p-TAREF was rigorously compared with other prediction tools for plants and was found to perform better in several respects. Further, p-TAREF was run over the experimentally validated miRNA targets of species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting broad usability of p-TAREF across plant species. Using p-TAREF, target identification was carried out for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found to be an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version; this also allows it to run in concurrent mode even on a simple desktop computer. The web-server version additionally provides a facility to gather experimental support for the predictions made, through on-the-spot expression data analysis. Conclusions: A machine learning multivariate feature tool has been implemented, in parallel and locally installable forms, for plant miRNA target identification. Its performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and broad usability for transcriptome-wide plant miRNA target identification. PMID:22206472

  11. Employing machine learning for reliable miRNA target identification in plants.

    PubMed

    Jha, Ashwani; Shankar, Ravi

    2011-12-29

    miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While a lot of tools have been developed for animal miRNA target systems, plant miRNA target identification has witnessed limited development. Most existing tools have been centered around exact complementarity matching, and very few consider other factors like multiple target sites and the role of flanking regions. In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites, to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). p-TAREF was rigorously compared with other prediction tools for plants and was found to perform better in several respects. Further, p-TAREF was run over the experimentally validated miRNA targets of species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting broad usability of p-TAREF across plant species. Using p-TAREF, target identification was carried out for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found to be an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version; this also allows it to run in concurrent mode even on a simple desktop computer. The web-server version additionally provides a facility to gather experimental support for the predictions made, through on-the-spot expression data analysis. A machine learning multivariate feature tool has been implemented, in parallel and locally installable forms, for plant miRNA target identification. Its performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and broad usability for transcriptome-wide plant miRNA target identification.
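
    The core feature idea, dinucleotide densities computed separately for the regions flanking a candidate target site, can be sketched with scikit-learn's SVR as below; the sequences, scores and window sizes are synthetic stand-ins rather than p-TAREF's actual features or data.

        from itertools import product

        import numpy as np
        from sklearn.svm import SVR

        DINUCS = ["".join(p) for p in product("ACGU", repeat=2)]

        def dinuc_density(seq):
            """Frequency of each of the 16 dinucleotides in one flank."""
            total = max(len(seq) - 1, 1)
            return [seq.count(d) / total for d in DINUCS]

        def site_features(upstream, downstream):
            # Position-specific: each flank gets its own density vector.
            return dinuc_density(upstream) + dinuc_density(downstream)

        rng = np.random.default_rng(0)

        def random_seq(n):
            return "".join(rng.choice(list("ACGU"), size=n))

        X = [site_features(random_seq(20), random_seq(20)) for _ in range(100)]
        y = rng.uniform(0, 1, size=100)  # synthetic target-confidence scores

        model = SVR(kernel="rbf").fit(X, y)
        print(model.predict([site_features(random_seq(20), random_seq(20))]))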

  12. Semantic knowledge for histopathological image analysis: from ontologies to processing portals and deep learning

    NASA Astrophysics Data System (ADS)

    Kergosien, Yannick L.; Racoceanu, Daniel

    2017-11-01

    This article presents our vision of the next generation of challenges in computational/digital pathology. The key role of the domain ontology, developed in a sustainable manner (i.e. using reference checklists and protocols as living semantic repositories), opens the way to effective and sustainable traceability and relevance feedback concerning the use of existing machine learning algorithms, such as the convolutional neural networks that have proven very performant in the latest digital pathology challenges. Being able to work in an accessible web-service environment, with strictly controlled issues regarding intellectual property (image and data processing/analysis algorithms) and medical data/image confidentiality, is essential for the future. Among the web services involved in the proposed approach, living yellow pages for the area of computational pathology seem very important for reaching operational awareness, validation, and feasibility. This represents a very promising route to the next generation of tools, able to bring more guidance to computer scientists and more confidence to pathologists, towards effective and efficient daily use. Consistent feedback and insights are also likely to emerge in the near future from these sophisticated machine learning tools back to the pathologists, strengthening the interaction between the different actors of a sustainable biomedical ecosystem (patients, clinicians, biologists, engineers, scientists, etc.). Besides going digital/computational, with virtual slide technology demanding new workflows, pathology must prepare for another coming revolution: semantic web technologies now enable the knowledge of experts to be stored in databases, shared through the Internet, and accessed by machines. Traceability, disambiguation of reports, quality monitoring, and interoperability between health centers are some of the associated benefits that pathologists have been seeking. However, major changes are also to be expected in the relation of human diagnosis to machine-based procedures. Improving on a former imaging platform, which used a local knowledge base and a reasoning engine to combine image processing modules into higher-level tasks, we propose a framework where the different actors of the histopathology imaging world can cooperate using web services, exchanging knowledge as well as imaging services, and where the results of such collaborations on diagnostic tasks can be evaluated in international challenges such as those recently organized for mitosis detection, nuclear atypia, or tissue architecture in the context of cancer grading. This framework is likely to offer effective context guidance and traceability to deep learning approaches, with a promising perspective offered by the multi-task learning (MTL) paradigm, distinguished by its applicability to several different learning algorithms, its non-reliance on specialized architectures, and the promising results demonstrated, in particular, on the problem of weak supervision, an issue found when direct links from pathology terms in reports to corresponding regions within images are missing.

  13. Growth and characterization of III-V epitaxial films

    NASA Astrophysics Data System (ADS)

    Tripathi, A.; Adamski, J.

    1991-11-01

    Investigations were conducted on the growth of epitaxial layers, using an organometallic chemical vapor deposition technique, of selected III-V materials which are potentially useful for photonics and microwave devices. RL/ERX's MOCVD machine was leak-checked for safety. The whole gas-handling plumbing system was leak-checked, and the problems found were reported to the manufacturer, CVD Equipment Corporation of Deer Park, NY. CVD Equipment Corporation is making an effort to correct these problems and to supply the part according to our redesign specifications. One of the main emphases during this contract period was understanding the operating procedure and writing an operating manual for this MOCVD machine. To study the dynamic fluid flow in the vertical reactor of this MOCVD machine, an experimental apparatus was designed, tested, and put together. This study gave very important information on the turbulent gas flow patterns in this vertical reactor; turbulent flow affects the epitaxial growth adversely. The study will also help in redesigning the vertical reactor so that turbulent gas flow can be eliminated.

  14. Teaching Materials to Enhance the Visual Expression of Web Pages for Students Not in Art or Design Majors

    ERIC Educational Resources Information Center

    Ariga, T.; Watanabe, T.

    2008-01-01

    The explosive growth of the Internet has made the knowledge and skills for creating Web pages into general subjects that all students should learn. It is now common to teach the technical side of the production of Web pages and many teaching materials have been developed. However teaching the aesthetic side of Web page design has been neglected,…

  15. Scrutinizing the Cybersell: Teen-Targeted Web Sites as Texts

    ERIC Educational Resources Information Center

    Crovitz, Darren

    2007-01-01

    Darren Crovitz explains that the explosive growth of Web-based content and communication in recent years compels us to teach students how to examine the "rhetorical nature and ethical dimensions of the online world." He demonstrates successful approaches to accomplish this goal through his analysis of the selling techniques of two Web sites…

  16. Perspectives for Electronic Books in the World Wide Web Age.

    ERIC Educational Resources Information Center

    Bry, Francois; Kraus, Michael

    2002-01-01

    Discusses the rapid growth of the World Wide Web and the lack of use of electronic books and suggests that specialized contents and device independence can make Web-based books compete with print. Topics include enhancing the hypertext model of XML; client-side adaptation, including browsers and navigation; and semantic modeling. (Author/LRW)

  17. Design of Web-based Management Information System for Academic Degree & Graduate Education

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Zhang, Mingsheng

    For every organization, the management information system is not only a computer-based human-machine system that can support and help the administrative supervisor, but also an open technological system facing society. Besides gathering, transmitting and saving information, it should supply interaction functions facing the organization and its environment. The authors start from the ideas of contingency theory and design a web-based management information system for academic degree and graduate education, based on an analysis of the workflow of the domestic academic degree and graduate education system. The application of the system is also briefly introduced in this paper.

  18. LAVA web-based remote simulation: enhancements for education and technology innovation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.

    2001-09-01

    The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore the usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year LAVA provided industry and other academic communities with 1,300 sessions and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.

  19. Development and implementation of a web-based system to study children with malnutrition.

    PubMed

    Syed-Mohamad, Sharifah-Mastura

    2009-01-01

    To develop and implement a collective web-based system to monitor child growth in order to study children with malnutrition. The system was developed using prototyping system development methodology. The implementation was carried out using open-source technologies that include Apache Web Server, PHP scripting, and MySQL database management system. There were four datasets collected by the system: demographic data, measurement data, parent data, and food program data. The system was designed to be used by two groups of users, the clinics and the researchers. The Growth Monitor System was successfully developed and used for the study, "Geoinformation System (GIS) and Remote Sensing in Mapping of Children with Malnutrition." Data collection was implemented in public clinics from two districts in the state of Kelantan, Malaysia. The development of an integrated web-based system, Growth Monitor, for the study of children with malnutrition has been achieved. This system can be expanded to new partners who are involved in the study of children with malnutrition in other parts of Malaysia as well as other countries.

  20. Platelet-activating factor receptor (PAF-R)-dependent pathways control tumour growth and tumour response to chemotherapy

    PubMed Central

    2010-01-01

    Background: Phagocytosis of apoptotic cells by macrophages induces a suppressor phenotype. Previous data from our group suggested that this occurs via Platelet-activating factor receptor (PAF-R)-mediated pathways. In the present study, we investigated the impact of apoptotic cell inoculation or induction by a chemotherapeutic agent (dacarbazine, DTIC) on tumour growth, microenvironmental parameters and survival, and the effect of treatment with a PAF-R antagonist (WEB2170). These studies were performed in murine tumours: Ehrlich Ascites Tumour (EAT) and B16F10 melanoma. Methods: Tumour growth was assessed by direct counting of EAT cells in the ascites or by measuring the volume of the solid tumour. Parameters of the tumour microenvironment, such as the frequency of cells expressing cyclo-oxygenase-2 (COX-2), caspase-3 and galectin-3, and microvascular density, were determined by immunohistochemistry. Levels of vascular endothelial growth factor (VEGF) and prostaglandin E2 (PGE2) were determined by ELISA, and levels of nitric oxide (NO) by the Griess reaction. PAF-R expression was analysed by immunohistochemistry and flow cytometry. Results: Inoculation of apoptotic cells before EAT implantation stimulated tumour growth. This effect was reversed by in vivo pre-treatment with WEB2170. This treatment also reduced tumour growth and modified the microenvironment by reducing PGE2, VEGF and NO production. In B16F10 melanoma, WEB2170 alone or in association with DTIC significantly reduced tumour volume. Survival of the tumour-bearing mice was not affected by WEB2170 treatment but was significantly improved by the combination of DTIC with WEB2170. Tumour microenvironment elements were among the targets of the combination therapy, since the relative frequency of COX-2 and galectin-3 positive cells and the microvascular density within the tumour mass were significantly reduced by treatment with WEB2170 or DTIC alone or in combination. Antibodies to PAF-R stained the cells from inside the tumour, but not the tumour cells grown in vitro. At the tissue level, a few cells (probably macrophages) stained positively with antibodies to PAF-R. Conclusions: We suggest that PAF-R-dependent pathways are activated during experimental tumour growth, modifying the microenvironment and the phenotype of the tumour macrophages in such a way as to favour tumour growth. Combination therapy with a PAF-R antagonist and a chemotherapeutic drug may represent a new and promising strategy for the treatment of some tumours. PMID:20465821

  1. Mining Twitter Data to Improve Detection of Schizophrenia

    PubMed Central

    McManus, Kimberly; Mallory, Emily K.; Goldfeder, Rachel L.; Haynes, Winston A.; Tatum, Jonathan D.

    2015-01-01

    Individuals who suffer from schizophrenia comprise 1 percent of the United States population and are four times more likely to die of suicide than the general US population. Identification of at-risk individuals with schizophrenia is challenging when they do not seek treatment. Microblogging platforms allow users to share their thoughts and emotions with the world in short snippets of text. In this work, we leveraged the large corpus of Twitter posts and machine-learning methodologies to detect individuals with schizophrenia. Using features from tweets such as emoticon use, posting time of day, and dictionary terms, we trained, built, and validated several machine learning models. Our support vector machine model achieved the best performance, with 92% precision and 71% recall on the held-out test set. Additionally, we built a web application that dynamically displays summary statistics between cohorts. This enables outreach to undiagnosed individuals, improved physician diagnoses, and destigmatization of schizophrenia. PMID:26306253
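
    A sketch of the kind of per-tweet feature extraction the abstract describes (emoticon use, posting hour, dictionary terms); the lexicon and exact feature set here are invented for illustration and are not the study's.

        from datetime import datetime

        EMOTICONS = (":)", ":(", ":D", ";)", ":/")
        LEXICON = {"alone", "voices", "afraid", "sleep"}  # hypothetical terms

        def tweet_features(text, posted_at):
            """Map one tweet to a small numeric feature dictionary."""
            tokens = text.lower().split()
            return {
                "n_emoticons": sum(text.count(e) for e in EMOTICONS),
                "hour": posted_at.hour,
                "night_post": int(posted_at.hour < 6),
                "lexicon_hits": sum(t.strip(".,!?") in LEXICON for t in tokens),
                "n_tokens": len(tokens),
            }

        print(tweet_features("Can't sleep again, feel so alone :(",
                             datetime(2015, 3, 2, 3, 41)))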

  2. Lifelong personal health data and application software via virtual machines in the cloud.

    PubMed

    Van Gorp, Pieter; Comuzzi, Marco

    2014-01-01

    Personal Health Records (PHRs) should remain the lifelong property of patients, who should be able to show them conveniently and securely to selected caregivers and institutions. In this paper, we present MyPHRMachines, a cloud-based PHR system taking a radically new architectural solution to health record portability. In MyPHRMachines, health-related data and the application software to view and/or analyze it are separately deployed in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any need for conversion. Patients can share their remote virtual machine session with selected caregivers, who will need only a Web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine.

  3. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    NASA Astrophysics Data System (ADS)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-02-01

    For a drug, safety is always the most important issue, encompassing a variety of toxicities and adverse drug effects that should be evaluated in the preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in the prediction of chemical toxicity for drug design, namely machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article is on the recent progress of predictive models built for various toxicities. Available databases and web servers are also provided. Though the methods and models are very helpful for drug design, there are still challenges and limitations to be addressed for drug safety assessment in the future.
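
    Structural alerts are, at bottom, substructure matches. Assuming the open-source RDKit toolkit, a check against one example alert (an aromatic nitro group, a commonly cited mutagenicity alert) can be sketched as follows; the SMARTS pattern is illustrative and not a validated alert set.

        from rdkit import Chem

        # Example alert: aromatic nitro group (illustrative, not validated).
        ALERT = Chem.MolFromSmarts("c[N+](=O)[O-]")

        for smiles in ["c1ccccc1[N+](=O)[O-]", "CCO"]:  # nitrobenzene, ethanol
            mol = Chem.MolFromSmiles(smiles)
            hit = mol.HasSubstructMatch(ALERT)
            print(smiles, "->", "ALERT" if hit else "clean")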

  4. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    PubMed Central

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-01-01

    During drug development, safety is always the most important issue, encompassing a variety of toxicities and adverse drug effects that should be evaluated in the preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in the prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article is on the recent progress of predictive models built for various toxicities. Available databases and web servers are also provided. Though the methods and models are very helpful for drug design, some challenges and limitations remain to be addressed for drug safety assessment in the future. PMID:29515993

  5. Production and Consumption of University Linked Data

    ERIC Educational Resources Information Center

    Zablith, Fouad; Fernandez, Miriam; Rowe, Matthew

    2015-01-01

    Linked Data increases the value of an organisation's data over the web by introducing explicit and machine processable links at the data level. We have adopted this new stream of data representation to produce and expose existing data within The Open University (OU) as Linked Data. We present in this paper our approach for producing the data,…
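
    As an aside for readers unfamiliar with Linked Data, the sketch below shows what "explicit and machine processable links at the data level" can look like in practice, using the rdflib library; the URIs and vocabulary choices are illustrative assumptions, not the OU's actual data model.

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, FOAF

    OU = Namespace("http://data.example.ac.uk/course/")   # invented namespace
    g = Graph()
    course = OU["M269"]
    g.add((course, RDF.type, URIRef("http://purl.org/vocab/aiiso/schema#Module")))
    g.add((course, FOAF.name, Literal("Algorithms, Data Structures, Computability")))
    # the explicit link below is what lets external agents join this data to other sets
    g.add((course, FOAF.homepage, URIRef("http://www.open.ac.uk/courses/m269")))
    print(g.serialize(format="turtle"))
    ```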

  6. 40 CFR 63.463 - Batch vapor and in-line cleaning machine standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... be turned off and the solvent vapor layer allowed to collapse before the primary condenser is turned... separate both the continuous web part feed reel and take-up reel from the room atmosphere if the doors are... from the room atmosphere if the doors are checked according to the requirements of paragraph (e)(2)(iii...

  7. 40 CFR 63.463 - Batch vapor and in-line cleaning machine standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... be turned off and the solvent vapor layer allowed to collapse before the primary condenser is turned... separate both the continuous web part feed reel and take-up reel from the room atmosphere if the doors are... from the room atmosphere if the doors are checked according to the requirements of paragraph (e)(2)(iii...

  8. Managing Quality, Identity and Adversaries in Public Discourse with Machine Learning

    ERIC Educational Resources Information Center

    Brennan, Michael

    2012-01-01

    Automation can mitigate issues when scaling and managing quality and identity in public discourse on the web. Discourse needs to be curated and filtered. Anonymous speech has to be supported while handling adversaries. Reliance on human curators or analysts does not scale and content can be missed. These scaling and management issues include the…

  9. Using Linguistic Knowledge in Statistical Machine Translation

    DTIC Science & Technology

    2010-09-01

    [Front-matter snippet (lists of tables and figures): Arabic-to-English MT results for Arabic morphological segmentation, measured on newswire and web test data; Recombination Results, percentage of sentences with mis-combined words; scores for syntactic reordering of the Spoken Language Domain; normalized likelihood of the test set alignments without decision trees, and then ...]

  10. Transport Traffic Analysis for Abusive Infrastructure Characterization

    DTIC Science & Technology

    2012-09-01

    ... a 3-month sample of spam directed toward the Hotmail web-mail service; their false positive rate was between 0.0011 and 0.0014 [11]. Unlike autoRE, our ... they used 240 machines to analyze a 220 GB Hotmail log in 1.5 hours. In another experiment, on 2 months of Hotmail logs (450 GB), BotGraph was able to ...

  11. Mind Maps: Hot New Tools Proposed for Cyberspace Librarians.

    ERIC Educational Resources Information Center

    Humphreys, Nancy K.

    1999-01-01

    Describes how online searchers can use a software tool based on back-of-the-book indexes to assist in dealing with search engine databases compiled by spiders that crawl across the entire Internet or through large Web sites. Discusses human versus machine knowledge, conversion of indexes to mind maps or mini-thesauri, middleware, eXtensible Markup…

  12. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  13. New Virtual Field Trips. Revised Edition.

    ERIC Educational Resources Information Center

    Cooper, Gail; Cooper, Garry

    This book is an annotated guidebook, arranged by subject matter, of World Wide Web sites for K-12 students. The following chapters are included: (1) Virtual Time Machine (i.e., sites that cover topics in world history); (2) Tour the World (i.e., sites that include information about countries); (3) Outer Space; (4) The Great Outdoors; (5) Aquatic…

  14. Examining Long-Term Global Climate Change on the Web.

    ERIC Educational Resources Information Center

    Huntoon, Jacqueline E.; Ridky, Robert K.

    2002-01-01

    Describes a web-based, inquiry-oriented activity that enables students to examine long-term global climate change. Supports instruction in other topics such as population growth. (Contains 34 references.) (DDR)

  15. Web-Based Counseling for Problem Gambling: Exploring Motivations and Recommendations

    PubMed Central

    Lubman, Dan I; Dowling, Nicki A; Bough, Anna; Jackson, Alun C

    2013-01-01

    Background For highly stigmatized disorders, such as problem gambling, Web-based counseling has the potential to address common barriers to treatment, including issues of shame and stigma. Despite the exponential growth in the uptake of immediate synchronous Web-based counseling (ie, provided without appointment), little is known about why people choose this service over other modes of treatment. Objective The aim of the current study was to determine motivations for choosing and recommending Web-based counseling over telephone or face-to-face services. Methods The study involved 233 Australian participants who had completed an online counseling session for problem gambling on the Gambling Help Online website between November 2010 and February 2012. Participants were all classified as problem gamblers, with a greater proportion of males (57.4%) and 60.4% younger than 40 years of age. Participants completed open-ended questions about their reasons for choosing online counseling over other modes (ie, face-to-face and telephone), as well as reasons for recommending the service to others. Results A content analysis revealed 4 themes related to confidentiality/anonymity (reported by 27.0%), convenience/accessibility (50.9%), service system access (34.2%), and a preference for the therapeutic medium (26.6%). Few participants reported helpful professional support as a reason for accessing counseling online, but 43.2% of participants stated that this was a reason for recommending the service. Those older than 40 years were more likely than younger people in the sample to use Web-based counseling as an entry point into the service system (P=.045), whereas those engaged in nonstrategic gambling (eg, machine gambling) were more likely to access online counseling as an entry into the service system than those engaged in strategic gambling (ie, cards, sports; P=.01). Participants older than 40 years were more likely to recommend the service because of its potential for confidentiality and anonymity (P=.04), whereas those younger than 40 years were more likely to recommend the service due to it being helpful (P=.02). Conclusions This study provides important information about why online counseling for gambling is attractive to people with problem gambling, thereby informing the development of targeted online programs, campaigns, and promotional material. PMID:23709155

  16. Gravity related features of plant growth behavior studied with rotating machines

    NASA Technical Reports Server (NTRS)

    Brown, A. H.

    1996-01-01

    Research in plant physiology consists mostly of studies on plant growth because almost everything a plant does is done by growing. Most aspects of plant growth are strongly influenced by the earth's gravity vector. Research on those phenomena address scientific questions specifically about how plants use gravity to guide their growth processes.

  17. Legacy effects of drought on plant growth and the soil food web.

    PubMed

    de Vries, Franciska Trijntje; Liiri, Mira E; Bjørnlund, Lisa; Setälä, Heikki M; Christensen, Søren; Bardgett, Richard D

    2012-11-01

    Soils deliver important ecosystem services, such as nutrient provision for plants and the storage of carbon (C) and nitrogen (N), which are greatly impacted by drought. Both plants and soil biota affect soil C and N availability, which might in turn affect their response to drought, offering the potential to feed back on each other's performance. In a greenhouse experiment, we compared legacy effects of repeated drought on plant growth and the soil food web in two contrasting land-use systems: extensively managed grassland, rich in C and with a fungal-based food web, and intensively managed wheat lower in C and with a bacterial-based food web. Moreover, we assessed the effect of plant presence on the recovery of the soil food web after drought. Drought legacy effects increased plant growth in both systems, and a plant strongly reduced N leaching. Fungi, bacteria, and their predators were more resilient after drought in the grassland soil than in the wheat soil. The presence of a plant strongly affected the composition of the soil food web, and alleviated the effects of drought for most trophic groups, regardless of the system. This effect was stronger for the bottom trophic levels, whose resilience was positively correlated to soil available C. Our results show that plant belowground inputs have the potential to affect the recovery of belowground communities after drought, with implications for the functions they perform, such as C and N cycling.

  18. Query-Structure Based Web Page Indexing

    DTIC Science & Technology

    2012-11-01

    ... the massive amount of data present on the web. In our third participation in the web track at TREC 2012, we explore the idea of building an ... the ad-hoc and diversity task. [1 INTRODUCTION] The rapid growth and massive quantities of data on the Internet have increased the importance and ... complexity of information retrieval systems. The amount and the diversity of the web data introduce shortcomings in the way search engines rank their ...

  19. MLViS: A Web Tool for Machine Learning-Based Virtual Screening in Early-Phase of Drug Discovery and Development

    PubMed Central

    Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer

    2015-01-01

    Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like from nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool which can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, the performance of twenty-three different machine learning algorithms was first compared on ten different measures; the ten best-performing algorithms were then selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/. PMID:25928885
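
    A rough sketch of the compare-then-cluster idea described above, assuming scikit-learn models and scipy hierarchical clustering; the four toy models, three metrics, and synthetic data stand in for the paper's twenty-three algorithms and ten measures.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_validate
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from scipy.cluster.hierarchy import linkage, fcluster

    X, y = make_classification(n_samples=300, n_features=20, random_state=1)
    models = {"logreg": LogisticRegression(max_iter=1000),
              "tree": DecisionTreeClassifier(),
              "rf": RandomForestClassifier(),
              "svm": SVC()}
    metrics = ("accuracy", "f1", "roc_auc")

    # one performance profile (mean score per metric) per algorithm
    profiles = []
    for name, model in models.items():
        cv = cross_validate(estimator=model, X=X, y=y, scoring=metrics, cv=10)
        profiles.append([cv[f"test_{m}"].mean() for m in metrics])

    # hierarchically cluster algorithms with similar performance profiles
    Z = linkage(np.array(profiles), method="average")
    print(dict(zip(models, fcluster(Z, t=2, criterion="maxclust"))))
    ```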

  20. A Collaborative Framework for Distributed Privacy-Preserving Support Vector Machine Learning

    PubMed Central

    Que, Jialan; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2012-01-01

    A Support Vector Machine (SVM) is a popular tool for decision support. The traditional way to build an SVM model is to estimate parameters based on a centralized repository of data. However, in the field of biomedicine, patient data are sometimes stored in local repositories or institutions where they were collected, and may not be easily shared due to privacy concerns. This creates a substantial barrier for researchers to effectively learn from the distributed data using machine learning tools like SVMs. To overcome this difficulty and promote efficient information exchange without sharing sensitive raw data, we developed a Distributed Privacy Preserving Support Vector Machine (DPP-SVM). The DPP-SVM enables privacy-preserving collaborative learning, in which a trusted server integrates “privacy-insensitive” intermediary results. The globally learned model is guaranteed to be exactly the same as learned from combined data. We also provide a free web-service (http://privacy.ucsd.edu:8080/ppsvm/) for multiple participants to collaborate and complete the SVM-learning task in an efficient and privacy-preserving manner. PMID:23304414
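
    The DPP-SVM formulation itself is not reproduced here. The toy sketch below illustrates the general pattern it relies on, namely sharing aggregate, "privacy-insensitive" intermediary results instead of raw patient rows, using a linear ridge classifier whose global solution is exactly recoverable from per-site sums X_i^T X_i and X_i^T y_i; this is a stand-in for, not an implementation of, the paper's SVM protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def site_summary(X, y):
        # each institution shares only these aggregates, never raw records
        return X.T @ X, X.T @ y

    # three hypothetical institutions with local (random stand-in) data
    sites = [(rng.normal(size=(50, 5)), rng.choice([-1.0, 1.0], size=50))
             for _ in range(3)]

    summaries = [site_summary(X, y) for X, y in sites]
    G = sum(g for g, _ in summaries)        # sum of X_i^T X_i
    b = sum(v for _, v in summaries)        # sum of X_i^T y_i
    lam = 1.0                               # ridge penalty
    w_collab = np.linalg.solve(G + lam * np.eye(5), b)

    # the collaboratively learned model is identical to training on pooled raw data
    X_all = np.vstack([X for X, _ in sites])
    y_all = np.concatenate([y for _, y in sites])
    w_pool = np.linalg.solve(X_all.T @ X_all + lam * np.eye(5), X_all.T @ y_all)
    assert np.allclose(w_collab, w_pool)
    ```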

  1. MLACP: machine-learning-based prediction of anticancer peptides

    PubMed Central

    Manavalan, Balachandran; Basith, Shaherin; Shin, Tae Hwan; Choi, Sun; Kim, Myeong Ok; Lee, Gwang

    2017-01-01

    Cancer is the second leading cause of death globally, and use of therapeutic peptides to target and kill cancer cells has received considerable attention in recent years. Identification of anticancer peptides (ACPs) through wet-lab experimentation is expensive and often time consuming; therefore, development of an efficient computational method is essential to identify potential ACP candidates prior to in vitro experimentation. In this study, we developed support vector machine- and random forest-based machine-learning methods for the prediction of ACPs using the features calculated from the amino acid sequence, including amino acid composition, dipeptide composition, atomic composition, and physicochemical properties. We trained our methods using the Tyagi-B dataset and determined the machine parameters by 10-fold cross-validation. Furthermore, we evaluated the performance of our methods on two benchmarking datasets, with our results showing that the random forest-based method outperformed the existing methods with an average accuracy and Matthews correlation coefficient value of 88.7% and 0.78, respectively. To assist the scientific community, we also developed a publicly accessible web server at www.thegleelab.org/MLACP.html. PMID:29100375
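
    A condensed sketch of the feature/classifier setup the abstract outlines (amino acid composition fed to a random forest), with toy peptides and invented labels; the full feature set, the Tyagi-B dataset, and the tuning procedure are omitted.

    ```python
    from collections import Counter
    from sklearn.ensemble import RandomForestClassifier

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def aa_composition(seq):
        # fraction of each of the 20 standard amino acids in the peptide
        counts = Counter(seq)
        return [counts[a] / len(seq) for a in AMINO_ACIDS]

    # toy peptides with made-up labels (1 = anticancer), repeated for a fit-able set
    peptides = [("FLPLIGRVLSGIL", 1), ("KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK", 1),
                ("AAAAKAAKLAAKA", 0), ("GIGKFLHSAKKFGKAFVGEIMNS", 0)] * 10
    X = [aa_composition(p) for p, _ in peptides]
    y = [lbl for _, lbl in peptides]
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.predict([aa_composition("GLFDIVKKVVGALGSL")]))
    ```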

  2. Case Studies in Describing Scientific Research Efforts as Linked Data

    NASA Astrophysics Data System (ADS)

    Gandara, A.; Villanueva-Rosales, N.; Gates, A.

    2013-12-01

    The Web is growing with numerous scientific resources, prompting increased efforts in information management to consider integration and exchange of scientific resources. Scientists have many options to share scientific resources on the Web; however, existing options provide limited support to scientists in annotating and relating research resources resulting from a scientific research effort. Moreover, there is no systematic approach to documenting scientific research and sharing it on the Web. This research proposes the Collect-Annotate-Refine-Publish (CARP) Methodology as an approach for guiding documentation of scientific research on the Semantic Web as scientific collections. Scientific collections are structured descriptions about scientific research that make scientific results accessible based on context. In addition, scientific collections enhance the Linked Data data space and can be queried by machines. Three case studies were conducted on research efforts at the Cyber-ShARE Research Center of Excellence in order to assess the effectiveness of the methodology to create scientific collections. The case studies exposed the challenges and benefits of leveraging the Semantic Web and Linked Data data space to facilitate access, integration and processing of Web-accessible scientific resources and research documentation. As such, we present the case study findings and lessons learned in documenting scientific research using CARP.

  3. Accelerating Cancer Systems Biology Research through Semantic Web Technology

    PubMed Central

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.

    2012-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758

  4. Accelerating cancer systems biology research through Semantic Web technology.

    PubMed

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. Copyright © 2012 Wiley Periodicals, Inc.

  5. Plastic deformation of silicon dendritic web ribbons during the growth

    NASA Technical Reports Server (NTRS)

    Cheng, L. J.; Dumas, K. A.; Su, B. M.; Leipold, M. H.

    1984-01-01

    The distribution of slip dislocations in silicon dendritic web ribbons due to plastic deformation during the cooling phase of the growth was studied. The results show the existence of two distinguishable stress regions across the ribbon formed during the plastic deformation stage, namely, shear stress at the ribbon edges and tensile stress at the middle. In addition, slip dislocations caused by shear stress near the edges appear to originate at the twin plane.

  6. An Extensible, Modular Architecture Coupling HydroShare and Tethys Platform to Deploy Water Science Web Apps

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Ames, D. P.; Jones, N.; Tarboton, D. G.; Li, Z.; Qiao, X.; Crawley, S.

    2016-12-01

    As water resources data continue to move to the web in the form of well-defined, open-access, machine-readable web services provided by government, academic, and private institutions, there is increased opportunity to move additional parts of the water science workflow to the web (e.g. analysis, modeling, decision support, and collaboration). Creating such web-based functionality can be extremely time-consuming and resource-intensive and can lead the erstwhile water scientist down a veritable cyberinfrastructure rabbit hole, through an unintended tunnel of transformation to become a Cyber-Wonderland software engineer. We posit that such transformations were never the intention of the research programs that fund earth science cyberinfrastructure, nor is it in the best interest of water researchers to spend exorbitant effort developing and deploying such technologies. This presentation will introduce a relatively simple and ready-to-use water science web app environment, funded by the National Science Foundation, that couples the new HydroShare data publishing system with the Tethys Platform web app development toolkit. The coupled system has already been shown to greatly lower the barrier to deployment of web-based visualization and analysis tools for the CUAHSI Water Data Center and for the National Weather Service's National Water Model. The design and implementation of the developed web app architecture will be presented together with key examples of existing apps created using this system. In each of the cases presented, water resources students with basic programming skills were able to develop and deploy highly functional web apps in a relatively short period of time (days to weeks), allowing the focus to remain on water science rather than on cyberinfrastructure. This presentation is accompanied by an open invitation for new collaborations that use the HydroShare-Tethys web app environment.

  7. WWW Motivation Mining: Finding Treasures for Teaching Evaluation Skills, Grades 1-6. Professional Growth Series.

    ERIC Educational Resources Information Center

    Arnone, Marilyn P.; Small, Ruth V.

    Designed for elementary or middle school teachers and library media specialists, this book provides educators with practical, easy-to-use ways of applying motivation assessment techniques when selecting World Wide Web sites for inclusion in their lessons and offers concrete examples of how to use Web evaluation with young learners. WebMAC…

  8. Project MERLOT: Bringing Peer Review to Web-Based Educational Resources

    ERIC Educational Resources Information Center

    Cafolla, Ralph

    2006-01-01

    The unprecedented growth of the World Wide Web has resulted in a profusion of educational resources. The challenge for faculty is finding these resources and integrating them into their instruction. Even after the resource is found, the instructor must assess the effectiveness of the resource. As the number of educational web sites mount into the…

  9. Implementing a Self-Regulated "WebQuest" Learning System for Chinese Elementary Schools

    ERIC Educational Resources Information Center

    Hsiao, Hsien-Sheng; Tsai, Chung-Chieh; Lin, Chien-Yu; Lin, Chih-Cheng

    2012-01-01

    The rapid growth of Internet has resulted in the rise of WebQuest learning recently. Teachers encourage students to participate in the searching for knowledge on different topics. When using WebQuest, students' self-regulation is often the key to successful learning. Therefore, this study establishes a self-regulated learning system to assist…

  10. Information Privacy in the Marketspace: Implications for the Commercial Uses of Anonymity on the Web.

    ERIC Educational Resources Information Center

    Hoffman, Donna L.; Novak, Thomas P.; Peralta, Marcos A.

    1999-01-01

    Suggests that the primary barrier to successful commercial development of the Web is lack of consumer trust in the medium. Examines how customer concerns are affecting growth and development of consumeroriented commercial activity on the Web and investigates the implications of these concerns for potential industry response. Suggests that radical…

  11. A Feature Selection Method Based on Fisher's Discriminant Ratio for Text Sentiment Classification

    NASA Astrophysics Data System (ADS)

    Wang, Suge; Li, Deyu; Wei, Yingjie; Li, Hongxia

    With the rapid growth of e-commerce, product reviews on the Web have become an important information source for customers' decision making when they intend to buy a product. As the reviews are often too many for customers to go through, how to automatically classify them into different sentiment orientation categories (i.e. positive/negative) has become a research problem. In this paper, an effective feature selection method based on Fisher's discriminant ratio is proposed for product review text sentiment classification. To validate the proposed method, we compared it with methods based on information gain and mutual information, with a support vector machine adopted as the classifier. Six subexperiments were conducted by combining the different feature selection methods with two kinds of candidate feature sets. On 1006 car-review documents, the experimental results indicate that Fisher's discriminant ratio based on word-frequency estimation achieves the best performance, with an F value of 83.3%, when the candidate features are the words appearing in both positive and negative texts.
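
    Fisher's discriminant ratio scores each feature by how far apart the two class means are relative to the within-class variances, F(w) = (mu_pos - mu_neg)^2 / (sigma_pos^2 + sigma_neg^2). A minimal sketch under that standard definition (the paper's exact estimation details are not shown here):

    ```python
    import numpy as np

    def fisher_ratio(X, y):
        # per-feature ratio of squared mean gap to summed class variances
        Xp, Xn = X[y == 1], X[y == 0]
        num = (Xp.mean(axis=0) - Xn.mean(axis=0)) ** 2
        den = Xp.var(axis=0) + Xn.var(axis=0) + 1e-12   # avoid divide-by-zero
        return num / den

    def select_top_k(X, y, k):
        return np.argsort(fisher_ratio(X, y))[::-1][:k]

    # toy word-frequency matrix: 6 documents x 4 candidate features
    X = np.array([[3, 0, 1, 2], [4, 1, 0, 2], [5, 0, 1, 3],
                  [0, 3, 1, 2], [1, 4, 0, 2], [0, 5, 1, 3]], dtype=float)
    y = np.array([1, 1, 1, 0, 0, 0])
    print(select_top_k(X, y, k=2))   # features 0 and 1 separate the classes
    ```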

  12. Associations between the perceived presence of vending machines and food and beverage logos in schools and adolescents' diet and weight status.

    PubMed

    Minaker, Leia M; Storey, Kate E; Raine, Kim D; Spence, John C; Forbes, Laura E; Plotnikoff, Ronald C; McCargar, Linda J

    2011-08-01

    The increasing prevalence of obesity among youth has elicited calls for schools to become more active in promoting healthy weight. The present study examined associations between various aspects of school food environments (specifically the availability of snack- and beverage-vending machines and the presence of snack and beverage logos) and students' weight status, as well as potential influences of indices of diet and food behaviours. Design: A cross-sectional, self-administered, web-based survey; a series of multinomial logistic regressions with generalized estimating equations (GEE) were constructed to examine associations between school environment variables (i.e. the reported presence of beverage- and snack-vending machines and logos) and self-reported weight- and diet-related behaviours. Setting: Secondary schools in Alberta, Canada. Subjects: A total of 4936 students from grades 7 to 10. Results: The presence of beverage-vending machines in schools was associated with the weight status of students. The presence of snack-vending machines and logos was associated with students' frequency of consuming vended goods, and with the frequency of salty snack consumption. Conclusions: The reported presence of snack- and beverage-vending machines and logos in schools is related to some indices of weight status, diet and meal behaviours but not to others. The present study supported the general hypothesis that the presence of vending machines in schools may affect students' weight through increased consumption of vended goods, but notes that the frequency of 'junk' food consumption does not seem to be related to the presence of vending machines, perhaps reflecting the ubiquity of these foods in the daily lives of students.

  13. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    PubMed Central

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
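
    The paper's model parameters and observation design are specific to its sensors and rating scales, but the decoding step it builds on is the standard Viterbi algorithm; below is a generic NumPy decoder, with hypothetical state names only in the usage example.

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely hidden-state path for an observation sequence.
        pi: initial state probs (S,), A: transitions (S,S), B: emissions (S,O)."""
        S, T = len(pi), len(obs)
        delta = np.zeros((T, S))            # best log-prob ending in state s at t
        psi = np.zeros((T, S), dtype=int)   # argmax back-pointers
        with np.errstate(divide="ignore"):
            logpi, logA, logB = np.log(pi), np.log(A), np.log(B)
        delta[0] = logpi + logB[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + logA    # (from-state, to-state)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + logB[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    # toy usage: states 0=stable, 1=agitated, 2=emergency (names assumed, not the paper's)
    pi = np.array([0.7, 0.2, 0.1])
    A = np.array([[0.8, 0.15, 0.05], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
    B = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3], [0.05, 0.25, 0.7]])
    print(viterbi([0, 1, 2, 2], pi, A, B))
    ```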

  14. Automatic Generation of Data Types for Classification of Deep Web Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective metadata-based approach for discovering Deep Web sources whose data exhibit regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources; it requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning the data types of a class of Web sources. The Brute-Force Learner is able to generate data types that achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of the two solutions.

  15. The New Web-Based Hera Data Processing System at the HEASARC

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2011-01-01

    The HEASARC at NASA/GSFC has provided an on-line astronomical data processing system called Hera for several years. Hera provides a complete data processing environment, including installed software packages, local data storage, and the CPU resources needed to process the user's data. The original design of Hera, however, has two requirements that have limited its usefulness for some users: 1) the user must download and install a small helper program on their own computer before using Hera, and 2) Hera requires that several computer ports/sockets be allowed to communicate through any local firewalls on the user's machine. Both of these restrictions can be problematic for some users, so we are now migrating Hera into a purely Web-based environment which requires only a standard Web browser. The first release of Web Hera is now publicly available at http://heasarc.gsfc.nasa.gov/webheara/. It currently provides a standard graphical interface for running hundreds of different data processing programs that are available in the HEASARC's ftools software package. Over the next year we plan to add more features to Web Hera, including an interactive command-line interface and more display and plotting capabilities.

  16. Development of STEP-NC Adaptor for Advanced Web Manufacturing System

    NASA Astrophysics Data System (ADS)

    Ajay Konapala, Mr.; Koona, Ramji, Dr.

    2017-08-01

    Information systems play a key role in the modern era of Information Technology. Rapid developments in IT and global competition call for many changes in the basic CAD/CAM/CAPP/CNC manufacturing chain of operations. 'STEP-NC', an enhancement to STEP for operating CNC machines, creates new opportunities for collaborative, concurrent, and adaptive work across the manufacturing chain of operations. Schemas and data models defined by ISO 14649 in liaison with the ISO 10303 standards make the STEP-NC file rich with feature-based information, rather than the mere point-to-point information of the G/M-code format. But one needs a suitable information system to understand and modify these files. Various STEP-NC information systems are reviewed to understand the suitability of STEP-NC for web manufacturing. The present work deals with the development of an adaptor that imports a STEP-NC file, organizes its information, allows modifications to entity values, and finally generates a new STEP-NC file for export. The system is designed to work on the web, both to gain the additional benefits of web deployment and to be part of a proposed 'Web based STEP-NC manufacturing platform' that is under development and outlined as future scope.
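
    A hypothetical minimal reader for an ISO 10303-21 ("Part 21") exchange file, the container format STEP-NC data travels in, suggesting how an adaptor might index entity instances for editing and re-export; a real adaptor needs a full EXPRESS-schema-aware parser, and the entities below are invented.

    ```python
    import re

    # one DATA-section instance per match: "#id = TYPE(args);"
    # (non-greedy args; does not handle ");" inside string literals)
    ENTITY = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*?)\)\s*;", re.DOTALL)

    def index_entities(text):
        """Map entity id -> (type, raw argument string)."""
        return {int(i): (typ, args.strip())
                for i, typ, args in ENTITY.findall(text)}

    sample = """
    DATA;
    #10=MACHINING_WORKINGSTEP('rough pocket',#20,#30,$);
    #20=PLANE('security plane',#40);
    ENDSEC;
    """
    entities = index_entities(sample)
    etype, args = entities[10]
    print(10, etype, args)
    # editing a value here and re-serializing would produce the exported file
    ```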

  17. Improving Performance in Constructing specific Web Directory using Focused Crawler: An Experiment on Botany Domain

    NASA Astrophysics Data System (ADS)

    Khalilian, Madjid; Boroujeni, Farsad Zamani; Mustapha, Norwati

    Nowadays the growth of the web makes it difficult to search and browse for useful information, especially in specific domains. However, some portions of the web remain largely underdeveloped, as shown by a lack of high-quality content. An example is the botany-specific web directory, where the lack of well-structured web directories has limited users' ability to browse for required information. In this research we propose an improved framework for constructing a specific web directory. In this framework we use an anchor directory as a foundation for the primary web directory, which is then completed with information gathered by an automatic component and filtered by experts. We conduct an experiment to evaluate effectiveness, efficiency and satisfaction.
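
    A sketch of one plausible design for such an automatic gathering component, a focused crawler driven by a keyword-overlap relevance score; the topic terms, threshold, and scoring below are assumptions for illustration, not the paper's algorithm.

    ```python
    import heapq
    import re
    from urllib.parse import urljoin
    from urllib.request import urlopen

    TOPIC = {"botany", "plant", "flora", "taxonomy"}   # assumed topic vocabulary

    def relevance(html):
        words = set(re.findall(r"[a-z]+", html.lower()))
        return len(words & TOPIC) / len(TOPIC)

    def crawl(seeds, max_pages=20):
        frontier = [(-1.0, url) for url in seeds]      # max-heap via negated score
        seen, kept = set(seeds), []
        while frontier and len(kept) < max_pages:
            _, url = heapq.heappop(frontier)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except OSError:
                continue
            r = relevance(html)
            if r > 0.25:                               # expert-tunable threshold
                kept.append((url, r))
            for link in re.findall(r'href="([^"]+)"', html):
                nxt = urljoin(url, link)
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (-r, nxt))  # parent score as priority
        return kept
    ```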

  18. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we formulate a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting the performance requirements of different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
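
    The paper's hybrid queueing model is not specified in the abstract; as a simpler stand-in, the sketch below sizes a single tier with the classical M/M/m (Erlang C) formulas, choosing the smallest number of VMs whose mean response time meets an SLA target.

    ```python
    from math import factorial

    def erlang_c(m, a):
        """Probability an arriving request must queue in an M/M/m system
        with offered load a = lambda/mu (requires m > a)."""
        top = (a ** m / factorial(m)) * (m / (m - a))
        bottom = sum(a ** k / factorial(k) for k in range(m)) + top
        return top / bottom

    def vms_for_tier(arrival_rate, service_rate, sla_seconds):
        """Smallest number of identical VMs whose mean response time
        (service time + expected queueing delay) meets the SLA target."""
        a = arrival_rate / service_rate
        m = int(a) + 1                       # stability requires m > a
        while True:
            wait = erlang_c(m, a) / (m * service_rate - arrival_rate)
            if 1.0 / service_rate + wait <= sla_seconds:
                return m
            m += 1

    # e.g. 120 req/s arriving, each VM serves 10 req/s, SLA of 0.25 s mean response
    print(vms_for_tier(120.0, 10.0, 0.25))
    ```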

  19. Combination of heterogeneous criteria for the automatic detection of ethical principles on health web sites.

    PubMed

    Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia

    2007-10-11

    The detection of ethical principles on web sites aims at selecting information helpful to the reader and is an important concern in medical informatics. Indeed, with the ever-increasing volume of online health information, coupled with its uneven reliability and quality, the public should be made aware of the quality of the information available online. In order to address this issue, we propose methods for the automatic detection of statements related to ethical principles such as those of the HONcode. For the detection of these statements, we combine two kinds of heterogeneous information, content-based categorizations and URL-based categorizations, through the application of machine learning algorithms. Our objective is to assess the quality of URL-based categorization for web pages where content-based categorization has proven to be not precise enough. The results obtained indicate that improvement was achieved for only some of the principles.
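
    One plausible way to combine the two heterogeneous views in a single classifier is a scikit-learn FeatureUnion over content word features and URL character n-grams, sketched below; the example pages, labels, and feature choices are assumptions, not the authors' exact setup.

    ```python
    from sklearn.pipeline import Pipeline, FeatureUnion
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.preprocessing import FunctionTransformer
    from sklearn.svm import LinearSVC

    pages = [{"url": "http://clinic.example/privacy-policy",
              "content": "We keep your personal medical data confidential."},
             {"url": "http://blog.example/my-day",
              "content": "Today I went hiking and saw a deer."}]
    labels = [1, 0]   # 1 = states the HONcode-style privacy principle

    get_content = FunctionTransformer(lambda ps: [p["content"] for p in ps])
    get_url = FunctionTransformer(lambda ps: [p["url"] for p in ps])

    model = Pipeline([
        ("views", FeatureUnion([
            ("content", Pipeline([("pick", get_content),
                                  ("tfidf", TfidfVectorizer())])),
            ("url", Pipeline([("pick", get_url),
                              ("chars", TfidfVectorizer(analyzer="char_wb",
                                                        ngram_range=(3, 5)))])),
        ])),
        ("clf", LinearSVC()),
    ])
    model.fit(pages, labels)
    print(model.predict([{"url": "http://hospital.example/confidentiality",
                          "content": "Patient information is kept private."}]))
    ```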

  20. ODISEES: A New Paradigm in Data Access

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Little, M. M.; Kusterer, J.

    2013-12-01

    As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.

  1. Your Baby's Growth: 5 Months

    MedlinePlus


  2. Your Baby's Growth: 3 Months

    MedlinePlus


  3. Claims-Based Authentication for a Web-Based Enterprise

    DTIC Science & Technology

    2013-07-01

    ... authority must use known and registered (or in specific cases defined) certificate revocation and currency-checking software. B. Translation of ... Machines and services are issued software certificates that contain the public key, with the private key generated and remaining in hardware ... publicly available) information. A hardware token that contains the certificate is preferred to software-only certificates. For enterprise users ...

  4. Programming for physicians: A free online course.

    PubMed

    Kubben, Pieter L

    2016-01-01

    This article is an introduction for clinical readers into programming and computational thinking using the programming language Python. Exercises can be done completely online without any need for installation of software. Participants will be taught the fundamentals of programming, which are necessarily independent of the sort of application (stand-alone, web, mobile, engineering, and statistical/machine learning) that is to be developed afterward.

  5. Peer Assessment of Webpage Design: Behavioral Sequential Analysis Based on Eye-Tracking Evidence

    ERIC Educational Resources Information Center

    Hsu, Ting-Chia; Chang, Shao-Chen; Liu, Nan-Cen

    2018-01-01

    This study employed an eye-tracking machine to record the process of peer assessment. Each web page was divided into several regions of interest (ROIs) based on the frame design and content. A total of 49 undergraduate students with a visual learning style participated in the experiment. This study investigated the peer assessment attitudes of the…

  6. Notice and Credits Page - NOAA's National Weather Service

    Science.gov Websites

    [Credits-page snippet listing viewer and accessibility software] Visolve: a software application (free for personal use) that transforms colors of the computer display; Mac OS X 10.2 or later (purchase). eyePilot: a 30-day free trial is available from the eyePilot web site, http://www.colorhelper.com/. Java: the Java Virtual Machine is a free download from java.com. Adobe Reader.

  7. Library Resources for the Blind and Physically Handicapped: A Directory with FY 1998 Statistics on Readership, Circulation, Budget, Staff, and Collections.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.

    This directory lists National Library Service for the Blind and Physically Handicapped libraries and machine-lending agencies alphabetically by state. Each entry includes address, phone and fax numbers, e-mail address, World Wide Web site, area served, librarian name, hours, book collection, special collections, assistive devices, special…

  8. Fighting Through a Logistics Cyber Attack

    DTIC Science & Technology

    2015-06-19

    [Flattened table snippet, apparently dates of introduction of weapon technologies: Chariot; 800-1350, Gunpowder; 1915, Machine Gun, Tanks, Aircraft; 1935, Radar; 1945, Nuclear Weapons; 1960, Satellites; 1989, GPS; 2009, Cyber Weapon.] ... primarily remained in the scientific and academic communities for the next 22 years (Griffiths, 2002). The Internet as we recognize it today ... Griffiths (2002) defines the Web as an abstract information space containing hyperlinked documents and other resources, identified by their Uniform ...

  9. Enacting the Semantic Web: Ontological Orderings, Negotiated Standards, and Human-Machine Translations

    ERIC Educational Resources Information Center

    McCarthy, Matthew T.

    2017-01-01

    Artificial intelligence (AI) that is based upon semantic search has become one of the dominant means for accessing information in recent years. This is particularly the case in mobile contexts, as search-based AI are embedded in each of the major mobile operating systems. The implications are such that information is becoming less a matter of…

  10. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model; they provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to let users, from physicians to administrative assistants, manage and even create context-aware medical workflows and execute them on the fly.

  11. VAT: a computational framework to functionally annotate variants in personal genomes within a cloud-computing environment.

    PubMed

    Habegger, Lukas; Balasubramanian, Suganthi; Chen, David Z; Khurana, Ekta; Sboner, Andrea; Harmanci, Arif; Rozowsky, Joel; Clarke, Declan; Snyder, Michael; Gerstein, Mark

    2012-09-01

    The functional annotation of variants obtained through sequencing projects is generally assumed to be a simple intersection of genomic coordinates with genomic features. However, complexities arise for several reasons, including the differential effects of a variant on alternatively spliced transcripts, as well as the difficulty in assessing the impact of small insertions/deletions and large structural variants. Taking these factors into consideration, we developed the Variant Annotation Tool (VAT) to functionally annotate variants from multiple personal genomes at the transcript level as well as obtain summary statistics across genes and individuals. VAT also allows visualization of the effects of different variants, integrates allele frequencies and genotype data from the underlying individuals and facilitates comparative analysis between different groups of individuals. VAT can either be run through a command-line interface or as a web application. Finally, in order to enable on-demand access and to minimize unnecessary transfers of large data files, VAT can be run as a virtual machine in a cloud-computing environment. VAT is implemented in C and PHP. The VAT web service, Amazon Machine Image, source code and detailed documentation are available at vat.gersteinlab.org.

  12. Machine Aided Indexing and the NASA Thesaurus

    NASA Technical Reports Server (NTRS)

    vonOfenheim, Bill

    2007-01-01

    Machine Aided Indexing (MAI) is a Web-based application program for aiding the indexing of literature in the NASA Scientific and Technical Information (STI) Database. MAI was designed to be a convenient, fully interactive tool for determining the subject matter of documents and identifying keywords. The heart of MAI is a natural-language processor that accepts, as input, any user-supplied text, including abstracts, full documents, and Web pages. Within seconds, the text is analyzed and a ranked list of terms is generated. The 17,800 terms of the NASA Thesaurus serve as the foundation of the knowledge base used by MAI. The NASA Thesaurus defines a standard vocabulary, the use of which enables MAI to assist in ensuring that STI documents are uniformly and consistently accessible. Of particular interest to traditional users of the NASA Thesaurus, MAI incorporates a fully searchable thesaurus display module that affords word-search and hierarchy-navigation capabilities that make it much easier and less time-consuming to look up terms and browse, relative to lookup and browsing in older print and Portable Document Format (PDF) digital versions of the Thesaurus. In addition, because MAI is centrally hosted, the Thesaurus data are always current.

  13. An Introduction to Web Accessibility, Web Standards, and Web Standards Makers

    ERIC Educational Resources Information Center

    McHale, Nina

    2011-01-01

    Librarians and libraries have long been committed to providing equitable access to information. In the past decade and a half, the growth of the Internet and the rapid increase in the number of online library resources and tools have added a new dimension to this core duty of the profession: ensuring accessibility of online resources to users with…

  14. OpenSearch technology for geospatial resources discovery

    NASA Astrophysics Data System (ADS)

    Papeschi, Fabrizio; Enrico, Boldrini; Mazzetti, Paolo

    2010-05-01

    In 2005, the term Web 2.0 was coined by Tim O'Reilly to describe a quickly growing set of Web-based applications that share a common philosophy of "mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing". Around this same period, OpenSearch, a new Web 2.0 technology, was developed. More properly, OpenSearch is a collection of technologies that allow publishing of search results in a format suitable for syndication and aggregation; it is a way for websites and search engines to publish search results in a standard and accessible format. Due to its strong impact on the way the Web is perceived by users, and also due to its relevance for businesses, Web 2.0 has attracted the attention of both mass media and the scientific community. This explosive growth in popularity of Web 2.0 technologies like OpenSearch, together with practical applications of Service Oriented Architecture (SOA), resulted in an increased interest in the similarities, convergence, and potential synergy of these two concepts. SOA can be considered the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. Service consumers may then retrieve these services, and compose and use them according to their current needs. The great degree of similarity between SOA and Web 2.0 may be leading to a convergence of the two paradigms, but they also expose divergent elements, such as Web 2.0's support for human interaction as opposed to the typically machine-to-machine interaction of SOA. Following these considerations, the Geospatial Information (GI) domain is also taking its first steps towards a new approach to data publishing and discovery, in particular taking advantage of the OpenSearch technology. A specific GI niche is represented by the OGC Catalog Service for the Web (CSW), part of the OGC Web Services (OWS) specifications suite, which provides a set of services for discovery, access, and processing of geospatial resources in a SOA framework. GI-cat is a distributed CSW framework implementation developed by the ESSI Lab of the Italian National Research Council (CNR-IMAA) and the University of Florence. It provides brokering and mediation functionalities towards heterogeneous resources and inventories, exposing several standard interfaces for query distribution. This work focuses on a new GI-cat interface which allows the catalog to be queried according to the OpenSearch syntax specification, thus filling the gap between the SOA architectural design of the CSW and the Web 2.0. At the moment there is no OGC standard specification on this topic, but an official change request has been proposed in order to enable OGC catalogues to support OpenSearch queries. In this change request, an OpenSearch extension is proposed that provides a standard mechanism to query a resource based on temporal and geographic extents. Two new catalog operations are also proposed in order to publish a suitable OpenSearch interface. This extended interface is implemented by the modular GI-cat architecture by adding a new profiling module called the "OpenSearch profiler". Since GI-cat also acts as a clearinghouse catalog, another component called the "OpenSearch accessor" is added in order to access OpenSearch-compliant services. An important role in the GI-cat extension is played by the adopted mapping strategy: two different kinds of mapping are required, query mapping and response-element mapping.
Query mapping fits the simple OpenSearch query syntax to the complex CSW query expressed in the OGC Filter syntax. The GI-cat internal data model is based on the ISO 19115 profile, which is more complex than the simple XML syndication formats, such as RSS 2.0 and Atom 1.0, suggested by OpenSearch. Once response elements are available, they must be translated from the GI-cat internal data model to the above-mentioned syndication formats in order to be presented; the mapping process is bidirectional. When GI-cat is used to access OpenSearch-compliant services, the CSW query must be mapped to the OpenSearch query, and the response elements must be translated according to the GI-cat internal data model. As a result of these extensions, GI-cat provides a user-friendly facade to the complex CSW interface, enabling it to be queried, for example, from a browser toolbar.
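
    To make the query-mapping direction concrete, the sketch below fills a flat OpenSearch-style URL template (with geographic and temporal parameters of the kind proposed in the Geo and Time extensions) from a structured query; the endpoint, template, and parameter names are invented for illustration.

    ```python
    from urllib.parse import urlencode

    TEMPLATE = "http://example.org/gi-cat/opensearch?{params}"   # hypothetical endpoint

    def opensearch_url(terms, bbox=None, start=None, end=None):
        # maps a structured (CSW-like) query onto the flat OpenSearch
        # key=value syntax: free-text terms, bounding box, time window
        params = {"q": terms}
        if bbox:
            params["box"] = ",".join(str(c) for c in bbox)   # west,south,east,north
        if start:
            params["ts"] = start
        if end:
            params["te"] = end
        return TEMPLATE.format(params=urlencode(params))

    print(opensearch_url("sea surface temperature",
                         bbox=(-10.0, 35.0, 20.0, 60.0),
                         start="2009-01-01", end="2009-12-31"))
    ```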

  15. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resources to be synchronized and burst between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base virtual machine images or customized virtual machines, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish a deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.

  16. Evidence of bottom-up limitations in nearshore marine systems based on otolith proxies of fish growth

    USGS Publications Warehouse

    von Biela, Vanessa R.; Kruse, Gordon H.; Mueter, Franz J.; Black, Bryan A.; Douglas, David C.; Helser, Thomas E.; Zimmerman, Christian E.

    2015-01-01

    Fish otolith growth increments were used as indices of annual production at nine nearshore sites within the Alaska Coastal Current (downwelling region) and California Current (upwelling region) systems (~36–60°N). Black rockfish (Sebastes melanops) and kelp greenling (Hexagrammos decagrammus) were identified as useful indicators in pelagic and benthic nearshore food webs, respectively. To examine the support for bottom-up limitations, common oceanographic indices of production [sea surface temperature (SST), upwelling, and chlorophyll-a concentration] during summer (April–September) were compared to spatial and temporal differences in fish growth using linear mixed models. The relationship between pelagic black rockfish growth and SST was positive in the cooler Alaska Coastal Current and negative in the warmer California Current. These contrasting growth responses to SST among current systems are consistent with the optimal stability window hypothesis in which pelagic production is maximized at intermediate levels of water column stability. Increased growth rates of black rockfish were associated with higher chlorophyll concentrations in the California Current only, but black rockfish growth was unrelated to the upwelling index in either current system. Benthic kelp greenling growth rates were positively associated with warmer temperatures and relaxation of downwelling (upwelling index near zero) in the Alaska Coastal Current, while none of the oceanographic indices were related to their growth in the California Current. Overall, our results are consistent with bottom-up forcing of nearshore marine ecosystems—light and nutrients constrain primary production in pelagic food webs, and temperature constrains benthic food webs.
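    The mixed-model comparison has roughly the following shape in Python's statsmodels; the file and column names (growth, sst, site) are hypothetical stand-ins for the authors' dataset, with a random intercept per site:

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("otolith_increments.csv")   # hypothetical file
        # Fixed effect of summer SST on annual growth; random intercept per site.
        model = smf.mixedlm("growth ~ sst", df, groups=df["site"])
        result = model.fit()
        print(result.summary())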

  17. RAPID and HTML5's potential

    NASA Technical Reports Server (NTRS)

    Torosyan, David

    2012-01-01

    Just as important as the engineering that goes into building a robot is the method of interaction, or how human users will use the machine. As part of the Human-System Interactions group (Conductor) at JPL, I explored using a web interface to interact with ATHLETE, a prototype lunar rover. I investigated the usefulness of HTML5 and JavaScript as a telemetry viewer as well as the feasibility of having a rover communicate with a web server. To test my ideas I built a mobile-compatible website, designed primarily for an Android tablet. The website took input from ATHLETE engineers, and upon its completion I conducted a user test to assess its effectiveness.

  18. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    PubMed

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an intranet and transformed via the eXtensible Stylesheet Language (XSL) so they can be visualized in a uniform way on commercial browsers. The core server operation software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
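    The XML-to-HTML step the paper implements server-side in PHP corresponds to a standard XSL transformation; a minimal Python/lxml sketch with hypothetical file names:

        from lxml import etree

        # Load the XSL stylesheet and a patient referral document (hypothetical
        # file names; BIRD's actual schema is not given in the abstract).
        transform = etree.XSLT(etree.parse("referral.xsl"))
        doc = etree.parse("referral.xml")
        html = transform(doc)            # server-side XML -> HTML transformation
        print(etree.tostring(html, pretty_print=True).decode())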

  19. User-driven Cloud Implementation of environmental models and data for all

    NASA Astrophysics Data System (ADS)

    Gurney, R. J.; Percy, B. J.; Elkhatib, Y.; Blair, G. S.

    2014-12-01

    Environmental data and models come from disparate sources over a variety of geographical and temporal scales with different resolutions and data standards, often including terabytes of data and model simulations. Unfortunately, these data and models tend to remain solely within the custody of the private and public organisations which create the data, and the scientists who build models and generate results. Although many models and datasets are theoretically available to others, the lack of ease of access tends to keep them out of reach of many. We have developed an intuitive web-based tool that utilises environmental models and datasets located in a cloud to produce results that are appropriate to the user. Storyboards showing the interfaces and visualisations have been created for each of several exemplars. A library of virtual machine images has been prepared to serve these exemplars, each image tailored to run computer models appropriate to the end user. Two approaches have been used: first, RESTful web services conforming to the Open Geospatial Consortium (OGC) Web Processing Service (WPS) interface standard, implemented with the Python-based PyWPS; second, a MySQL database interrogated using PHP code. In all cases, the web client sends the server an HTTP GET request to execute the process with a number of parameter values and, once execution terminates, an XML or JSON response is sent back and parsed at the client side to extract the results. All web services are stateless, i.e. application state is not maintained by the server, reducing its operational overheads and simplifying infrastructure management tasks such as load balancing and failure recovery. A hybrid cloud solution has been used with models and data sited on both private and public clouds. The storyboards have been transformed into intuitive web interfaces at the client side using HTML, CSS and JavaScript, utilising plug-ins such as jQuery and Flot (for graphics), and the Google Maps APIs. We have demonstrated that a cloud infrastructure can be used to assemble a virtual research environment that, coupled with a user-driven development approach, is able to cater to the needs of a wide range of user groups, from domain experts to concerned members of the general public.
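    The first approach, a stateless HTTP GET against a WPS Execute endpoint with the response parsed client-side, looks roughly as follows; the server URL and process identifier are hypothetical:

        import requests
        import xml.etree.ElementTree as ET

        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "runoff_model",          # hypothetical process name
            "datainputs": "catchment=eden;year=2012",
        }
        resp = requests.get("https://example.org/wps", params=params, timeout=60)
        resp.raise_for_status()
        root = ET.fromstring(resp.text)   # XML response parsed at the client side
        print(root.tag)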

  20. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat plate solar array (FSA) project, was examined. Base-case data for the technical and cost parameters of the technical- and commercial-readiness phases of the FSA project are projected. The process add-on price is analyzed, using the base-case data for cost parameters such as equipment, space, direct labor, materials and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness and cell efficiency are also discussed.
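    The report's computer program is not reproduced here, but a one-at-a-time sensitivity sweep of the kind described has this general shape; the price function and base-case numbers below are placeholders, not the report's cost model:

        # Perturb each cost/production parameter around its base-case value and
        # record the change in the add-on price (one-at-a-time sensitivity).
        def add_on_price(p):
            # Hypothetical stand-in: cost per unit over effective throughput.
            return (p["equipment"] + p["labor"] + p["materials"]) / (
                p["growth_rate"] * p["run_length"])

        base = {"equipment": 10.0, "labor": 8.0, "materials": 6.0,
                "growth_rate": 25.0, "run_length": 100.0}

        for name in base:
            for factor in (0.8, 1.2):            # +/-20% perturbation
                p = dict(base, **{name: base[name] * factor})
                delta = add_on_price(p) - add_on_price(base)
                print(f"{name} x{factor}: price change {delta:+.5f}")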

  1. Silicon ribbon study program. [dendritic crystals for use in solar cells

    NASA Technical Reports Server (NTRS)

    Seidensticker, R. G.; Duncan, C. S.

    1975-01-01

    The feasibility of growing wide, thin silicon dendritic web for solar cell fabrication is studied, and conceptual designs are developed for the required apparatus. An analysis of the mechanisms of dendritic web growth indicated that there were no apparent fundamental limitations to the process. The analysis yielded quantitative guidelines for the thermal conditions required for this mode of crystal growth. Crucible designs were then investigated: the usual quartz crucible configurations and configurations in which silicon itself is used for the crucible. The quartz crucible design is feasible and is incorporated into a conceptual design for a laboratory-scale crystal growth facility capable of semi-automated quasi-continuous operation.

  2. On Propagating Interpersonal Trust in Social Networks

    NASA Astrophysics Data System (ADS)

    Ziegler, Cai-Nicolas

    The age of information glut has fostered the proliferation of data and documents on the Web, created by man and machine alike. Hence, there is an enormous wealth of minable knowledge that is yet to be extracted, in particular, on the Semantic Web. However, besides understanding information stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contribution to Semantic Web trust management through this work is twofold. First, we introduce a classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for Semantic Web scenarios. Hereby, we devise an advocacy for local group trust metrics, guiding us to the second part, which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion. Moreover, we provide extensions for the Appleseed nucleus that make our trust metric handle distrust statements.
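    A toy rendering of the spreading-activation idea behind Appleseed: energy injected at a source node flows along weighted trust edges, each node retaining a share as its trust score. This is a deliberately simplified sketch, omitting the published algorithm's normalization, backward propagation and convergence handling:

        def spread_trust(graph, source, energy=200.0, d=0.85, rounds=50):
            """graph: {node: {neighbor: weight in [0,1]}} -> {node: trust score}."""
            trust = {}
            incoming = {source: energy}
            for _ in range(rounds):
                nxt = {}
                for node, e in incoming.items():
                    trust[node] = trust.get(node, 0.0) + (1 - d) * e  # retained share
                    out = graph.get(node, {})
                    total = sum(out.values())
                    if total == 0:
                        continue
                    for nbr, w in out.items():   # pass the rest along trust edges
                        nxt[nbr] = nxt.get(nbr, 0.0) + d * e * (w / total)
                incoming = nxt
            return trust

        web = {"alice": {"bob": 0.9, "carol": 0.4}, "bob": {"carol": 1.0}, "carol": {}}
        print(spread_trust(web, "alice"))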

  3. Entrez Neuron RDFa: a pragmatic semantic web application for data integration in neuroscience research.

    PubMed

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2009-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.

  4. The Fabric of the Universe: Exploring the Cosmic Web in 3D Prints and Woven Textiles

    NASA Astrophysics Data System (ADS)

    Diemer, Benedikt; Facio, Isaac

    2017-05-01

    We introduce The Fabric of the Universe, an art and science collaboration focused on exploring the cosmic web of dark matter with unconventional techniques and materials. We discuss two of our projects in detail. First, we describe a pipeline for translating three-dimensional (3D) density structures from N-body simulations into solid surfaces suitable for 3D printing, and present prints of a cosmological volume and of the infall region around a massive cluster halo. In these models, we discover wall-like features that are invisible in two-dimensional projections. Going beyond the sheer visualization of simulation data, we undertake an exploration of the cosmic web as a three-dimensional woven textile. To this end, we develop experimental 3D weaving techniques to create sphere-like and filamentary shapes and radically simplify a region of the cosmic web into a set of filaments and halos. We translate the resulting tree structure into a series of commands that can be executed by a digital weaving machine, and present a large-scale textile installation.
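    The simulation-to-solid step can be illustrated with a common marching-cubes-to-STL recipe, which is not necessarily the authors' exact pipeline; the density file and iso-level are hypothetical:

        import numpy as np
        from skimage import measure
        from stl import mesh  # pip install numpy-stl

        density = np.load("density_cube.npy")    # hypothetical N-body density grid
        # Extract an iso-density surface as a triangle mesh.
        verts, faces, _, _ = measure.marching_cubes(density, level=5.0)

        solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
        for i, f in enumerate(faces):
            solid.vectors[i] = verts[f]          # one triangle per face
        solid.save("cosmic_web.stl")             # ready for a 3D-print slicer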

  5. Comparing the Ecological Stoichiometry in Green and Brown Food Webs - A Review and Meta-analysis of Freshwater Food Webs.

    PubMed

    Evans-White, Michelle A; Halvorson, Halvor M

    2017-01-01

    The framework of ecological stoichiometry was developed primarily within the context of "green" autotroph-based food webs. While stoichiometric principles also apply in "brown" detritus-based systems, these systems have been historically understudied and differ from green ones in several important aspects including carbon (C) quality and the nutrient [nitrogen (N) and phosphorus (P)] contents of food resources for consumers. In this paper, we review work over the last decade that has advanced the application of ecological stoichiometry from green to brown food webs, focusing on freshwater ecosystems. We first review three focal areas where green and brown food webs differ: (1) bottom-up controls by light and nutrient availability, (2) stoichiometric constraints on consumer growth and nutritional regulation, and (3) patterns in consumer-driven nutrient dynamics. Our review highlights the need for further study of how light and nutrient availability affect autotroph-heterotroph interactions on detritus and the subsequent effects on consumer feeding and growth. To complement this conceptual review, we formally quantified differences in stoichiometric principles between green and brown food webs using a meta-analysis across feeding studies of freshwater benthic invertebrates. From 257 datasets collated across 46 publications and several unpublished studies, we compared effect sizes (Pearson's r) of resource N:C and P:C on growth, consumption, excretion, and egestion between herbivorous and detritivorous consumers. The meta-analysis revealed that both herbivore and detritivore growth are limited by resource N:C and P:C contents, but effect sizes only among detritivores were significantly above zero. Consumption effect sizes were negative among herbivores but positive for detritivores in the case of both N:C and P:C, indicating distinct compensatory feeding responses across resource stoichiometry gradients. Herbivore P excretion rates responded significantly positively to resource P:C, whereas detritivore N and P excretion did not respond; detritivore N and P egestion responded positively to resource N:C and P:C, respectively. Our meta-analysis highlights resource N and P contents as broadly limiting in brown and green benthic food webs, but indicates contrasting mechanisms of limitation owing to differing consumer regulation. We suggest that green and brown food webs share fundamental stoichiometric principles, while identifying specific differences toward applying ecological stoichiometry across ecosystems.

  6. Comparing the Ecological Stoichiometry in Green and Brown Food Webs – A Review and Meta-analysis of Freshwater Food Webs

    PubMed Central

    Evans-White, Michelle A.; Halvorson, Halvor M.

    2017-01-01

    The framework of ecological stoichiometry was developed primarily within the context of “green” autotroph-based food webs. While stoichiometric principles also apply in “brown” detritus-based systems, these systems have been historically understudied and differ from green ones in several important aspects including carbon (C) quality and the nutrient [nitrogen (N) and phosphorus (P)] contents of food resources for consumers. In this paper, we review work over the last decade that has advanced the application of ecological stoichiometry from green to brown food webs, focusing on freshwater ecosystems. We first review three focal areas where green and brown food webs differ: (1) bottom–up controls by light and nutrient availability, (2) stoichiometric constraints on consumer growth and nutritional regulation, and (3) patterns in consumer-driven nutrient dynamics. Our review highlights the need for further study of how light and nutrient availability affect autotroph–heterotroph interactions on detritus and the subsequent effects on consumer feeding and growth. To complement this conceptual review, we formally quantified differences in stoichiometric principles between green and brown food webs using a meta-analysis across feeding studies of freshwater benthic invertebrates. From 257 datasets collated across 46 publications and several unpublished studies, we compared effect sizes (Pearson’s r) of resource N:C and P:C on growth, consumption, excretion, and egestion between herbivorous and detritivorous consumers. The meta-analysis revealed that both herbivore and detritivore growth are limited by resource N:C and P:C contents, but effect sizes only among detritivores were significantly above zero. Consumption effect sizes were negative among herbivores but positive for detritivores in the case of both N:C and P:C, indicating distinct compensatory feeding responses across resource stoichiometry gradients. Herbivore P excretion rates responded significantly positively to resource P:C, whereas detritivore N and P excretion did not respond; detritivore N and P egestion responded positively to resource N:C and P:C, respectively. Our meta-analysis highlights resource N and P contents as broadly limiting in brown and green benthic food webs, but indicates contrasting mechanisms of limitation owing to differing consumer regulation. We suggest that green and brown food webs share fundamental stoichiometric principles, while identifying specific differences toward applying ecological stoichiometry across ecosystems. PMID:28706509

  7. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    PubMed

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interactions among their secondary structures or domains. However, due to the limitations of experimental self-interaction detection, one major challenge in the study of SIP prediction is how to exploit computational approaches for SIP detection based on the evolutionary information contained in protein sequences. In this work, we present a novel computational approach named WELM-LAG, which combines a weighted extreme learning machine (WELM) classifier with the Local Average Group (LAG) method to predict SIPs from protein sequence. The major improvement of our method lies in an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), followed by a reliable and robust WELM classifier to carry out classification. In addition, a principal component analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on the human and yeast datasets. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/.
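    A compact sketch of a weighted extreme learning machine classifier of the kind used here: a random hidden layer with a class-weighted ridge readout, so the minority class is not swamped. The toy data stands in for the PSSM-derived LAG features; the paper's feature extraction and PCA steps are omitted:

        import numpy as np

        def welm_train(X, y, n_hidden=200, C=1.0, seed=0):
            """Weighted ELM: random hidden layer + class-weighted ridge readout."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)                     # random feature expansion
            # Weight samples inversely to their class frequency.
            w = np.where(y == 1, 0.5 / (y == 1).sum(), 0.5 / (y == 0).sum())
            SH = H * w[:, None]
            beta = np.linalg.solve(H.T @ SH + np.eye(n_hidden) / C, H.T @ (w * y))
            return W, b, beta

        def welm_predict(X, W, b, beta):
            return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

        # Toy imbalanced data standing in for real sequence-derived features.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0.0, 1.0, (90, 8)), rng.normal(1.5, 1.0, (10, 8))])
        y = np.array([0] * 90 + [1] * 10)
        W, b, beta = welm_train(X, y)
        print("training accuracy:", (welm_predict(X, W, b, beta) == y).mean())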

  8. Exploring the Function Space of Deep-Learning Machines

    NASA Astrophysics Data System (ADS)

    Li, Bo; Saad, David

    2018-06-01

    The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.

  9. 78 FR 36749 - Determination Under the African Growth and Opportunity Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ... Benin. Articles must be ornamented in characteristic Benin or regional folk style. An article may not... synthetic fibers. Hand-woven on manually operated looms then hand or machine stitched. There ...-woven in manually operated looms then machine stitched together to form a wider substrate. This is a...

  10. Cell-cycle research with synchronous cultures: an evaluation

    NASA Technical Reports Server (NTRS)

    Helmstetter, C. E.; Thornton, M.; Grover, N. B.

    2001-01-01

    The baby-machine system, which produces new-born Escherichia coli cells from cultures immobilized on a membrane, was developed many years ago in an attempt to attain optimal synchrony with minimal disturbance of steady-state growth. In the present article, we put forward a model to describe the behaviour of cells produced by this method, and provide quantitative evaluation of the parameters involved, at each of four different growth rates. Considering the high level of selection achievable with this technique and the natural dispersion in interdivision times, we believe that the output of the baby machine is probably close to optimal in terms of both quality and persistence of synchrony. We show that considerable information on events in the cell cycle can be obtained from populations with age distributions very much broader than those achieved with the baby machine and differing only modestly from steady state. The data presented here, together with the long and fruitful history of findings employing the baby-machine technique, suggest that minimisation of stress on cells is the single most important factor for successful cell-cycle analysis.

  11. Analysis of rolling contact spall life in 440 C steel bearing rims

    NASA Technical Reports Server (NTRS)

    Bastias, P. C.; Bhargava, V.; Bower, A. P.; Du, J.; Gupta, V.; Hahn, G. T.; Kulkarni, S. M.; Kumar, A. M.; Leng, X.; Rubin, C. A.

    1991-01-01

    The results of a two-year study of the mechanisms of spall failure in the HPOTP bearings are described. The objective was to build a foundation for detailed analyses of contact life in terms of cyclic plasticity, contact mechanics, spall nucleation, and spall growth. Since laboratory rolling contact testing is carried out in the 3-ball/rod contact fatigue testing machine, the analysis of the contacts and contact lives produced in this machine received attention. The experimentally observed growth lives are compared with predictions derived from fracture mechanics calculations.

  12. A resource-oriented architecture for a Geospatial Web

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    In this presentation we discuss some architectural issues in the design of an architecture for a Geospatial Web, that is, an information system for sharing geospatial resources according to the Web paradigm. The success of the Web in building a multi-purpose information space has raised questions about the possibility of adopting the same approach for systems dedicated to the sharing of more specific resources, such as geospatial information, that is, information characterized by spatial/temporal reference. To this aim, an investigation into the nature of the Web and the validity of its paradigm for geospatial resources is required. The Web was born in the early 1990s to provide "a shared information space through which people and machines could communicate" [Berners-Lee 1996]. It was originally built around a small set of specifications (e.g. URI, HTTP, HTML, etc.); however, in the last two decades several other technologies and specifications have been introduced to extend its capabilities. Most of them (e.g. the SOAP family) actually aimed to transform the Web into a generic distributed computing infrastructure. While these efforts were definitely successful in enabling the adoption of service-oriented approaches for machine-to-machine interactions supporting complex business processes (e.g. for e-Government and e-Business applications), they do not fit the original concept of the Web. In the year 2000, R. T. Fielding, one of the designers of the original Web specifications, proposed a new architectural style for distributed systems, called REST (Representational State Transfer), aiming to capture the fundamental characteristics of the Web as it was originally conceived [Fielding 2000]. In this view, the nature of the Web lies not so much in the technologies as in the way they are used. Keeping the Web architecture conformant to the REST style would then assure the scalability, extensibility and low entry barrier of the original Web. On the contrary, systems using the same Web technologies and specifications according to a different architectural style, despite their usefulness, should not be considered part of the Web. If the REST style captures the significant Web characteristics, then, in order to build a Geospatial Web, its architecture must satisfy all the REST constraints. One of them is of particular importance: the adoption of a uniform interface. It prescribes that all geospatial resources be accessed through the same interface; moreover, according to the REST style, this interface must satisfy four further constraints: (a) identification of resources; (b) manipulation of resources through representations; (c) self-descriptive messages; and (d) hypermedia as the engine of application state. In the Web, the uniform interface provides basic operations which are meaningful for generic resources. They typically implement the CRUD pattern (Create-Retrieve-Update-Delete), which has proved flexible and powerful in several general-purpose contexts (e.g. filesystem management, SQL for database management systems, etc.). Restricting the scope to a subset of resources, it would be possible to identify other generic actions which are meaningful for all of them. For geospatial resources, for example, subsetting, resampling, interpolation and coordinate-reference-system transformation are candidate functionalities for a uniform interface.
However, an investigation is needed to clarify the semantics of those actions for different resources and, consequently, whether they can really ascend to the role of generic interface operations. Concerning point (a), identification of resources, it is required that every resource addressable in the Geospatial Web have its own identifier (e.g. a URI). This makes it possible to cite and re-use resources simply by providing the URI. OPeNDAP and KVP encodings of the OGC data access service specifications might provide a basis for this. Concerning point (b), manipulation of resources through representations, the Geospatial Web poses several issues. While the Web mainly handles semi-structured information, in the Geospatial Web the information is typically structured, with several possible data models (e.g. point series, gridded coverages, trajectories, etc.) and encodings. A possibility would be to simplify the interchange formats, choosing to support a subset of data models and formats. This is what the Web designers actually did in choosing to define a common format for hypermedia (HTML), although the underlying protocol is generic. Concerning point (c), self-descriptive messages, the exchanged messages should describe themselves and their content. This would not be a major issue, considering the effort put in recent years into geospatial metadata models and specifications. Point (d), hypermedia as the engine of application state, is actually where the Geospatial Web would differ most from existing geospatial information-sharing systems. Existing systems typically adopt a service-oriented architecture, where applications are built as a single service or as a workflow of services. In the Geospatial Web, by contrast, applications should be built by following the path between interconnected resources, with the links between resources made explicit as hyperlinks. The adoption of Semantic Web solutions would make it possible to define not only the existence of a link between two resources but also the nature of the link. The implementation of a Geospatial Web would produce an information system with the same characteristics as the Web, sharing its strengths and weaknesses. The main advantages would be the following: the user would interact with the Geospatial Web according to the well-known Web navigation paradigm, lowering the barrier to geospatial applications for non-specialists (cf. the success of Google Maps and other Web mapping applications); and successful Web and Web 2.0 applications (search engines, feeds, social networks) could be integrated or replicated in the Geospatial Web. The main drawbacks would be the following: the uniform interface simplifies the overall system architecture (e.g. no service registry or service descriptors are required) but moves the complexity to the data representation, and since the interface must stay generic, it is necessarily simple, so complex interactions may require several transfers; moreover, in the geospatial domain some of the most valuable resources are processes (e.g. environmental models), and how they can be modeled as resources accessed through the common interface is an open issue. Taking into account advantages and drawbacks, it seems that a Geospatial Web would be useful, but limited to specific use cases rather than covering all possible applications.
The Geospatial Web architecture could be partly based on existing specifications, while other aspects need investigation. References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future", IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Fielding 2000] R. T. Fielding, "Architectural styles and the design of network-based software architectures", PhD dissertation, Dept. of Information and Computer Science, University of California, Irvine, 2000.
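    A minimal sketch of the uniform-interface idea: every geospatial resource is addressed by a URI, retrieved with plain HTTP GET, generic operations such as spatial subsetting become query parameters, and responses carry hypermedia links. The Flask app and resource names are hypothetical:

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        COVERAGES = {"sst-2010": {"title": "Sea surface temperature, 2010"}}

        @app.route("/coverages/<cid>", methods=["GET"])
        def get_coverage(cid):
            cov = COVERAGES.get(cid)
            if cov is None:
                return jsonify(error="not found"), 404
            bbox = request.args.get("bbox")   # generic subsetting operation
            return jsonify(
                id=cid,
                title=cov["title"],
                subset=bbox,
                links=[{"rel": "self", "href": f"/coverages/{cid}"}],  # hypermedia
            )

        if __name__ == "__main__":
            app.run()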

  13. Education and Technology in the 21st Century Experiences of Adult Online Learners Using Web 2.0

    ERIC Educational Resources Information Center

    Bryant, Wanda L.

    2014-01-01

    The emergence of a knowledge-based and technology-driven economy has prompted adults to seek additional knowledge and skills that will enable them to participate effectively in society. The rapid growth and popularity of the internet tools such as Web 2.0 tools have revolutionized adult learning. Through the rich support of Web 2.0 tools, adult…

  14. The Evolution of WebCT in a Baccalaureate Nursing Program: An Alice in Wonderland Reflection

    ERIC Educational Resources Information Center

    Donato, Emily; Hudyma, Shirlene; Carter, Lorraine; Schroeder, Catherine

    2010-01-01

    The use of WebCT in the Laurentian University Bachelor of Science in Nursing program began in 2001 when faculty were eager to explore different modes of delivery for fourth-year courses. Since then, the use of WebCT within the baccalaureate program has increased substantively. This paper outlines the developmental growth of the use of this…

  15. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
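    Iteration as a first-class construct can be pictured with the textbook example of transitive closure over an edge relation, evaluated semi-naively so that only newly derived tuples feed each round; this is a plain-Python illustration, not MyriaL syntax:

        def transitive_closure(edges):
            """edges: set of (a, b) tuples -> all reachable (a, c) pairs."""
            reach = set(edges)
            delta = set(edges)
            while delta:                      # iterate until no new tuples appear
                new = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
                delta = new - reach           # semi-naive: keep only novel tuples
                reach |= delta
            return reach

        print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))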

  16. Growth Life of Surface Cracks in the Rail Web

    DOT National Transportation Integrated Search

    1989-01-01

    The results of a theoretical study of the propagation behavior of surface cracks in the web of railroad rails are presented. Two fracture mechanics models are presented: (1) a conventional LEFM model of an elliptical surface crack of constant aspect ...

  17. Testing simple deceptive honeypot tools

    NASA Astrophysics Data System (ADS)

    Yahyaoui, Aymen; Rowe, Neil C.

    2015-05-01

    Deception can be a useful defensive technique against cyber-attacks; it has the advantage of unexpectedness to attackers and offers a variety of tactics. Honeypots are a good tool for deception. They act as decoy computers to confuse attackers and exhaust their time and resources. This work tested the effectiveness of two free honeypot tools in real networks by varying their location and virtualization, and the effects of adding more deception to them. We tested a Web honeypot tool, Glastopf, and an SSH honeypot tool, Kippo. We deployed the Web honeypot in both a residential network and our organization's network, and as both real and virtual machines; the organization honeypot attracted more attackers starting in the third week. Results also showed that the virtual honeypots received attacks from more unique IP addresses. They also showed that adding deception to the Web honeypot, in the form of additional linked Web pages and interactive features, generated more interest by attackers. For comparison, we examined the log files of a legitimate Web site, www.cmand.org. The traffic distributions for the Web honeypot and the legitimate Web site showed similarities (with much malicious traffic from Brazil), but the SSH honeypot was different (with much malicious traffic from China). Contrary to previous experiments where traffic to static honeypots decreased quickly, our honeypots received increasing traffic over a period of three months. It appears that both honeypot tools are useful for providing intelligence about cyber-attack methods, and that additional deception is helpful.

  18. Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    PubMed Central

    Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643

  19. Sharing human-generated observations by integrating HMI and the Semantic Sensor Web.

    PubMed

    Sigüenza, Alvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current "Internet of Things" concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.

  20. Making large amounts of meteorological plots easily accessible to users

    NASA Astrophysics Data System (ADS)

    Lamy-Thepaut, Sylvie; Siemen, Stephan; Sahin, Cihan; Raoult, Baudouin

    2015-04-01

    The European Centre for Medium-Range Weather Forecasts (ECMWF) is an international organisation providing its member organisations with forecasts in the medium time range of 3 to 15 days, and some longer-range forecasts for up to a year ahead, with varying degrees of detail. As part of its mission, ECMWF generates an increasing number of forecast data products for its users. To support the work of forecasters and researchers and to let them make best use of ECMWF forecasts, the Centre also provides tools and interfaces to visualise its products. This allows users to explore forecasts without having to transfer large amounts of raw data. This is especially true for products based on ECMWF's 50-member ensemble forecast, where specific processing and visualisation are applied to extract information. Every day, thousands of raw data fields are pushed to ECMWF's interactive web charts application, ecCharts, and thousands of products are processed and pushed to ECMWF's institutional web site. ecCharts provides a highly interactive application for displaying and manipulating recent numerical forecasts, serving forecasters in national weather services and ECMWF's commercial customers. With ecCharts, forecasters are able to explore ECMWF's medium-range forecasts in far greater detail than has previously been possible on the web, as soon as the forecast becomes available. All ecCharts products are also available through a machine-to-machine web map service based on the OGC Web Map Service (WMS) standard. The ECMWF institutional web site provides access to a large number of graphical products. It was entirely redesigned last year; it now shares the same infrastructure as ecCharts and benefits from some ecCharts functionality, for example the dashboard. The dashboard, initially developed for ecCharts, allows users to organise their own collection of products depending on their workflow, and is being developed further. In its first implementation, it presents the user's products in a single interface with fast access to the original product and the possibility of synchronised animations between them. Its functionality is being extended to give users the freedom to collect not only ecCharts 2D maps and graphs, but also other ECMWF web products such as monthly and seasonal products, scores, and observation monitoring. The dashboard will play a key role in helping users interpret the large amount of information that ECMWF provides. This talk will present examples of how the new user interface can organise complex meteorological maps and graphs and will show the new possibilities users have gained by using the web as a medium.
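    The machine-to-machine path mentioned above follows the standard OGC WMS GetMap pattern, roughly as below; the endpoint URL and layer name are placeholders, not ECMWF's actual service addresses:

        import requests

        params = {
            "service": "WMS", "version": "1.3.0", "request": "GetMap",
            "layers": "2m_temperature",          # hypothetical layer name
            "styles": "", "crs": "EPSG:4326",
            "bbox": "30,-30,75,50", "width": "800", "height": "500",
            "format": "image/png",
        }
        resp = requests.get("https://example.int/wms", params=params, timeout=60)
        resp.raise_for_status()
        with open("forecast.png", "wb") as f:
            f.write(resp.content)                # rendered chart, ready to display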

  1. Research of Manufacture Time Management System Based on PLM

    NASA Astrophysics Data System (ADS)

    Jing, Ni; Juan, Zhu; Liangwei, Zhong

    This system targets the machine shops of manufacturing enterprises. It analyzes their business needs and builds a plant management information system for manufacture-time information covering the manufacturing process. Combining Web technology with an Excel VBA-based development method, it constructs a hybrid, PLM-based framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.

  2. SH2 Ligand Prediction-Guidance for In-Silico Screening.

    PubMed

    Li, Shawn S C; Li, Lei

    2017-01-01

    Systematic identification of binding partners for SH2 domains is important for understanding the biological function of the corresponding SH2 domain-containing proteins. Here, we describe two different web-accessible computer programs, SMALI and DomPep, for predicting binding ligands for SH2 domains. The former was developed using a scoring-matrix method and the latter is based on a support vector machine model.

  3. Combat Ration Network for Technology Implementation (CORANET II) Knurled Seal Heat Bar

    DTIC Science & Technology

    2010-08-01

    bench top comparison of ultrasonic sealing technology that included the participation of five ultrasonic sealing equipment manufacturers. Project...packaging journals • On-line web search yielded no useful research results • Contact with machine manufacturers produced anecdotal evidence of improved...seal characteristics without documentation or research results • One manufacturer suggested rounded seal bars or seal rubbers for improved sealing

  4. Self-Directed Learning with Web-Based Sites: How Well Do Students' Perceptions and Thinking Match with Their Teachers?

    ERIC Educational Resources Information Center

    Ng, Wan

    2008-01-01

    With research consistently showing that students can be motivated to learn with ICT, this case study sought to investigate Year 7 students' learning about simple machines in an ICT-enhanced environment where they could self-direct their own learning with minimal intervention from the teacher. The study is focused on how well do students and…

  5. Programming for physicians: A free online course

    PubMed Central

    Kubben, Pieter L.

    2016-01-01

    This article is an introduction for clinical readers into programming and computational thinking using the programming language Python. Exercises can be done completely online without any need for installation of software. Participants will be taught the fundamentals of programming, which are necessarily independent of the sort of application (stand-alone, web, mobile, engineering, and statistical/machine learning) that is to be developed afterward. PMID:27127694

  6. Tailoring Earth Observation To Ranchers For Improved Land Management And Profitability: The VegMachine Online Project

    NASA Astrophysics Data System (ADS)

    Scarth, P.; Trevithick, B.; Beutel, T.

    2016-12-01

    VegMachine Online is a freely available browser application that allows ranchers across Australia to view and interact with satellite-derived ground cover state and change maps of their properties and to extract this information in graphical form using interactive tools. It supports the delivery and communication of a massive earth observation data set in an accessible, producer-friendly way. Around 250,000 Landsat TM, ETM+ and OLI images were acquired across Australia, converted to terrain-corrected surface reflectance, and masked for cloud, cloud shadow, terrain shadow and water. More than 2,500 field sites across the Australian rangelands were used to derive endmembers for a constrained unmixing approach that estimates the per-pixel proportions of bare ground, green and non-green vegetation in all images. A seasonal medoid compositing method was used to produce national fractional cover virtual mosaics for each three-month period since 1988. The time series of the green fraction is used to estimate the persistent green due to tree and shrub canopies, and this estimate is used to correct fractional cover to ground cover for our mixed tree-grass rangeland systems. Finally, deciles are produced for key metrics every season to track each pixel's standing relative to the entire time series. These data are delivered through time-series-enabled web mapping services and customised web processing services that allow the full time series over any spatial extent to be interrogated in seconds via a RESTful interface. These services feed a front-end browser application that provides product visualisation for any date in the time series, tools to draw or import polygon boundaries, plots of time-series ground cover comparisons, views of the effect of historical rainfall, and tools to run the Revised Universal Soil Loss Equation interactively to assess the effect of proposed changes in cover retention. VegMachine Online is already being used by ranchers monitoring paddock condition, by organisations supporting land management initiatives in Great Barrier Reef catchments, and by students developing tools to understand land condition and degradation, and the underlying data and APIs support several other land condition mapping tools.
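    The constrained-unmixing step can be sketched as a non-negative least-squares problem with a sum-to-one constraint enforced by row augmentation; the endmember reflectances below are invented for illustration:

        import numpy as np
        from scipy.optimize import nnls

        endmembers = np.array([                 # columns: bare, green, non-green
            [0.30, 0.05, 0.20],                 # hypothetical band reflectances
            [0.35, 0.08, 0.25],
            [0.40, 0.40, 0.30],
            [0.45, 0.20, 0.35],
        ])
        pixel = np.array([0.33, 0.37, 0.38, 0.36])   # observed reflectance

        weight = 10.0                           # strength of sum-to-one constraint
        A = np.vstack([endmembers, weight * np.ones((1, 3))])
        b = np.append(pixel, weight)
        fractions, _ = nnls(A, b)               # non-negative least squares
        print(dict(zip(["bare", "green", "non_green"], fractions.round(3))))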

  7. The Society of Brains: How Alan Turing and Marvin Minsky Were Both Right

    NASA Astrophysics Data System (ADS)

    Struzik, Zbigniew R.

    2015-04-01

    In his well-known prediction, Alan Turing stated that computer intelligence would surpass human intelligence by the year 2000. Although the Turing Test, as it became known, was devised to be played by one human against one computer, this is not a fair setup. Every human is part of a social network, and a fairer comparison would be a contest between one human at the console and a network of computers behind the console. Around the year 2000, the number of web pages on the WWW overtook the number of neurons in the human brain. But these websites would be of little use without the ability to search for knowledge. By the year 2000 Google Inc. had become the search engine of choice, and the WWW became an intelligent entity. This was not without good reason: the basis for the search engine was the analysis of the 'network of knowledge'. The PageRank algorithm, linking information on the web according to the hierarchy of ‘link popularity’, continues to provide the basis for all of Google's web search tools. Although PageRank was developed by Larry Page and Sergey Brin in 1996 as part of a research project about a new kind of search engine, it is in essence the key to representing and using static knowledge in an emergent intelligent system. Here I argue that Alan Turing was right, as hybrid human-computer internet machines have already surpassed our individual intelligence - this was done around the year 2000 by the Internet - the socially-minded, human-computer hybrid Homo computabilis-socialis. Ironically, the Internet's intelligence also emerged to a large extent from ‘exploiting’ humans - the key to the emergence of machine intelligence has been discussed by Marvin Minsky in his work on the foundations of intelligence through interacting agents' knowledge. As a consequence, a decade and a half into the 21st century, we appear to be much better equipped to tackle the problem of the social origins of humanity - in particular thanks to the power of the intelligent partner-in-the-quest machine; however, we should not wait too long...
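    The PageRank idea discussed above reduces to a power iteration on the link graph: each page's score is the damped sum of the scores of the pages linking to it, shared over their out-links. A minimal sketch:

        import numpy as np

        def pagerank(links, d=0.85, iters=100):
            """links: {page: [pages it links to]} -> {page: rank}."""
            pages = list(links)
            idx = {p: i for i, p in enumerate(pages)}
            n = len(pages)
            r = np.full(n, 1.0 / n)
            for _ in range(iters):
                nxt = np.full(n, (1 - d) / n)        # teleportation term
                for p, outs in links.items():
                    share = r[idx[p]] / (len(outs) if outs else n)
                    targets = outs if outs else pages  # dangling page: spread evenly
                    for q in targets:
                        nxt[idx[q]] += d * share
                r = nxt
            return dict(zip(pages, r))

        print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))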

  8. Provenance through Time

    NASA Astrophysics Data System (ADS)

    Chandler, C. L.; Groman, R. C.; Shepherd, A.; Allison, M. D.; Kinkade, D.; Rauch, S.; Wiebe, P. H.; Glover, D. M.

    2014-12-01

    The ability to reproduce scientific results is a cornerstone of the scientific method, and access to the data upon which the results are based is essential to reproducibility. Access to the data alone is not enough, though, and research communities have recognized the importance of metadata (data documentation) to enable discovery and data access, and to facilitate interpretation and accurate reuse. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) was first funded in late 2006 by the National Science Foundation (NSF) Division of Ocean Sciences (OCE) Biology and Chemistry Sections to help ensure that data generated during NSF OCE-funded research would be preserved and available for future use. The BCO-DMO was formed by combining the formerly independent data management offices of two marine research programs: the United States Joint Global Ocean Flux Study (US JGOFS) and the US GLOBal Ocean ECosystems Dynamics (US GLOBEC) program. Since the years when the US JGOFS and US GLOBEC programs were active (the 1990s), there have been significant changes in all aspects of the research data life cycle, and the staff at BCO-DMO have modified the way in which we manage data contributed to the office. The supporting documentation that describes each dataset was originally displayed as a human-readable text file retrievable via a Web browser. BCO-DMO still offers that form because our primary audience is marine researchers using Web browser clients; however, we are seeing increased demand to support machine clients. Metadata records from the BCO-DMO data system are now extracted and published in a variety of formats. The system supports ISO 19115, FGDC, GCMD DIF, the schema.org Dataset extension, formal publication with a DOI, and RDF with semantic markup including PROV-O, FOAF and more. In the 1990s, data documentation helped researchers locate data of interest and understand the provenance sufficiently to determine fitness for purpose. Today, providing data documentation in a machine-interpretable form enables researchers to make more effective use of machine clients to discover and access data. This presentation will describe the challenges associated with, and benefits realized from, layering modern Semantic Web technologies on top of a legacy data system. http://bco-dmo.org/
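    A machine-interpretable provenance record of the kind described, with PROV-O attribution and FOAF identities, can be sketched with rdflib; the URIs and names are invented placeholders, not real BCO-DMO records:

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import FOAF, PROV, RDF

        g = Graph()
        dataset = URIRef("https://example.org/dataset/742")   # hypothetical IDs
        author = URIRef("https://example.org/person/17")

        g.add((dataset, RDF.type, PROV.Entity))
        g.add((author, RDF.type, FOAF.Person))
        g.add((author, FOAF.name, Literal("Jane Scientist")))
        g.add((dataset, PROV.wasAttributedTo, author))        # provenance link

        print(g.serialize(format="turtle"))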

  9. GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography

    NASA Technical Reports Server (NTRS)

    Roark, J. H.; Masuoka, C. M.; Frey, H. V.

    2004-01-01

    GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded on the web at http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been successfully used for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. Minimum system memory requirement is 32 MB, however loading large data sets may require larger amounts of RAM to function adequately.

  10. Counterfeit Electronics Detection Using Image Processing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Asadizanjani, Navid; Tehranipoor, Mark; Forte, Domenic

    2017-01-01

    Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (ICs) infiltrate the global market. There is an ongoing effort in experimental and national labs inside the United States to detect and prevent such counterfeits as quickly as possible. However, a piece is still missing: automatically detecting counterfeit ICs and properly keeping records of those detected. Here, we introduce a web application that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.

  11. The Internet: Past, Present, and Future.

    ERIC Educational Resources Information Center

    Galbreath, Jeremy, Ed.

    1997-01-01

    Examines the "reality behind the hype" surrounding the Internet. Discusses its early development; growth and present state; and key applications, including e-mail, voice/video telephony, integrated messaging, electronic commerce, the World Wide Web, and Web commerce, Intranet, Extranet; education and training; security; ownership; and…

  12. Sustainable Materials Management (SMM) Web Academy Webinar: Pay-As-You Throw: Growth & Opportunity for Sustainable Materials Management

    EPA Pesticide Factsheets

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  13. Linear positioning laser calibration setup of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Sui, Xiulin; Yang, Congjing

    2002-10-01

    The linear positioning laser calibration setup for CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct, and machine tool geometry can be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the stroke limits of the axis are found and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme and the laser head alignment is refined using rotation and elevation adjustments. Finally, the machine is moved back to the start position and the final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis; these factors determine the time required for a thorough compensation of the linear positioning accuracy. The laser calibrator system monitors the material temperature and the air density, taking into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers and vertical machining centers.

  14. Automatic chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1981-01-01

    Report reviews chemical vapor deposition (CVD) for processing integrated circuits and describes fully automatic machine for CVD. CVD proceeds at relatively low temperature, allows wide choice of film compositions (including graded or abruptly changing compositions), and deposits uniform films of controllable thickness at fairly high growth rate. Report gives overview of hardware, reactants, and temperature ranges used with CVD machine.

  15. Elevating Virtual Machine Introspection for Fine-Grained Process Monitoring: Techniques and Applications

    ERIC Educational Resources Information Center

    Srinivasan, Deepa

    2013-01-01

    Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside VMs to the outside, the out-of-VM solutions securely isolate the anti-malware…

  16. Evaluating predictive models for solar energy growth in the US states and identifying the key drivers

    NASA Astrophysics Data System (ADS)

    Chakraborty, Joheen; Banerji, Sugata

    2018-03-01

    Driven by a desire to control climate change and reduce the dependence on fossil fuels, governments around the world are increasing the adoption of renewable energy sources. However, among the US states, we observe a wide disparity in renewable penetration. In this study, we have identified and cleaned over a dozen datasets representing solar energy penetration in each US state, and the potentially relevant socioeconomic and other factors that may be driving the growth in solar. We have applied a number of predictive modeling approaches - including machine learning and regression - to these datasets over a 17-year period and evaluated the relative performance of the models. Our goals were: (1) identify the most important factors that are driving the growth in solar, (2) choose the most effective predictive modeling technique for solar growth, and (3) develop a model for predicting next year's solar growth using this year's data. We obtained very promising results with random forests (about 90% efficacy) and varying degrees of success with support vector machines and regression techniques (linear, polynomial, ridge). We also identified states with solar growth slower than expected and representing a potential for stronger growth in the future.
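
    As a sketch of the modeling approach described above, the snippet below fits a random forest to a hypothetical state-level panel and reads off feature importances as candidate growth drivers. The file name and column names are assumptions for illustration, not the study's actual dataset.

      # Hedged sketch: random-forest prediction of next year's solar growth
      # from this year's state-level features. All names are hypothetical.
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split

      df = pd.read_csv("state_solar_panel.csv")  # hypothetical panel dataset
      features = ["median_income", "electricity_price", "sunshine_hours",
                  "policy_incentive_index", "solar_capacity_mw"]
      X, y = df[features], df["solar_growth_next_year"]  # target = shifted growth

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

      print("held-out R^2:", r2_score(y_te, model.predict(X_te)))
      for imp, name in sorted(zip(model.feature_importances_, features), reverse=True):
          print(f"{name}: {imp:.3f}")  # larger importance => stronger candidate driver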

  17. SNPversity: a web-based tool for visualizing diversity

    PubMed Central

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

    Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and served through a web browser, making it accessible worldwide. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represents data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real time through a web browser in grids that are color-coded according to a given SNP's allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community's database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are available on GitHub and are free to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387
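
    To illustrate the kind of HDF5 back-end the abstract describes, the sketch below writes and slices a small genotype matrix with h5py. The dataset names and allele encoding are assumptions, not SNPversity's actual schema.

      # Hedged sketch of an HDF5 SNP store: variants as rows, accessions as
      # columns. Dataset names and the 0/1/2 encoding are hypothetical.
      import h5py
      import numpy as np

      with h5py.File("snps.h5", "w") as f:
          rng = np.random.default_rng(0)
          f.create_dataset("genotypes", data=rng.integers(0, 3, size=(1000, 50)),
                           compression="gzip")
          f.create_dataset("positions", data=np.arange(1000) * 100)

      with h5py.File("snps.h5", "r") as f:
          # slice a genomic window without loading the whole matrix into memory
          idx = np.searchsorted(f["positions"][:], [20000, 30000])
          window = f["genotypes"][idx[0]:idx[1], :]
          print(window.shape)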

  18. Recent advancements on the development of web-based applications for the implementation of seismic analysis and surveillance systems

    NASA Astrophysics Data System (ADS)

    Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.

    2014-12-01

    Recently, a novel set of modules has been included in the Open Source Earthworm seismic data processing system to support the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. Such applications can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide both the relevant data and the data processing applications themselves to client machines running arbitrary operating systems. Web applications with secure online access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, the use of HTML email has brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line-driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in JavaScript, CSS and HTML, as well as faster and more efficient web browsers, including on mobile devices. It is foreseeable that in the near future web applications will be as powerful and efficient as native applications. Hence, the work described here is a first step towards bringing the Open Source Earthworm seismic data processing system to this new paradigm.
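
    As a client-side illustration of the QuakeML web service described above, the sketch below fetches an event feed over HTTP and walks the XML in a namespace-agnostic way. The endpoint URL is hypothetical, not an actual Moleserv address.

      # Hedged sketch: fetch QuakeML from a Moleserv-style endpoint and print
      # basic origin/magnitude values. The URL is hypothetical.
      import urllib.request
      import xml.etree.ElementTree as ET

      url = "http://ew-server.example.org:8080/quakeml/events"

      with urllib.request.urlopen(url) as resp:
          root = ET.fromstring(resp.read())

      def local(tag):
          # strip the XML namespace so the sketch is not pinned to one QuakeML version
          return tag.rsplit("}", 1)[-1]

      for field in root.iter():
          if local(field.tag) in ("time", "latitude", "longitude", "mag"):
              for child in field:
                  if local(child.tag) == "value":  # QuakeML wraps scalars in <value>
                      print(local(field.tag), child.text)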

  19. The energetics of fish growth and how it constrains food-web trophic structure.

    PubMed

    Barneche, Diego R; Allen, Andrew P

    2018-06-01

    The allocation of metabolic energy to growth fundamentally influences all levels of biological organisation. Here we use a first-principles theoretical model to characterise the energetics of fish growth at distinct ontogenetic stages and in distinct thermal regimes. Empirically, we show that the mass scaling of growth rates follows that of metabolic rate, and is somewhat steeper at earlier ontogenetic stages. We also demonstrate that the cost of growth, E_m, varies substantially among fishes, and that it may increase with temperature, trophic level and level of activity. Theoretically, we show that E_m is a primary determinant of the efficiency of energy transfer across trophic levels, and that energy is transferred more efficiently between trophic levels if the prey are young and sedentary. Overall, our study demonstrates the importance of characterising the energetics of individual growth in order to understand constraints on the structure of food webs and ecosystems. © 2018 John Wiley & Sons Ltd/CNRS.

  20. Restricted regions of enhanced growth of Antarctic krill in the circumpolar Southern Ocean.

    PubMed

    Murphy, Eugene J; Thorpe, Sally E; Tarling, Geraint A; Watkins, Jonathan L; Fielding, Sophie; Underwood, Philip

    2017-07-31

    Food webs in high-latitude oceans are dominated by relatively few species. Future ocean and sea-ice changes affecting the distribution of such species will impact the structure and functioning of whole ecosystems. Antarctic krill (Euphausia superba) is a key species in Southern Ocean food webs, but there is little understanding of the factors influencing its success throughout much of the ocean. The capacity of a habitat to maintain growth will be crucial and here we use an empirical relationship of growth rate to assess seasonal spatial variability. Over much of the ocean, potential for growth is limited, with three restricted oceanic regions where seasonal conditions permit high growth rates, and only a few areas around the Scotia Sea and Antarctic Peninsula suitable for growth of the largest krill (>60 mm). Our study demonstrates that projections of impacts of future change need to account for spatial and seasonal variability of key ecological processes within ocean ecosystems.

  1. Quadrilateral Micro-Hole Array Machining on Invar Thin Film: Wet Etching and Electrochemical Fusion Machining

    PubMed Central

    Choi, Woong-Kirl; Kim, Seong-Hyun; Choi, Seung-Geon; Lee, Eun-Sang

    2018-01-01

    Ultra-precision products which contain a micro-hole array have recently shown remarkable demand growth in many fields, especially in the semiconductor and display industries. Photoresist etching and electrochemical machining are widely known as precision methods for machining micro-holes with no residual stress and lower surface roughness on the fabricated products. The Invar shadow masks used for organic light-emitting diodes (OLEDs) contain numerous micro-holes and are currently machined by a photoresist etching method. However, this method has several problems, such as uncontrollable hole machining accuracy, non-etched areas, and overcutting. To solve these problems, a machining method that combines photoresist etching and electrochemical machining can be applied. In this study, negative photoresist with a quadrilateral hole array pattern was dry coated onto 30-µm-thick Invar thin film, and then exposure and development were carried out. After that, photoresist single-side wet etching and a fusion method of wet etching-electrochemical machining were used to machine micro-holes on the Invar. The hole machining geometry, surface quality, and overcutting characteristics of the methods were studied. Wet etching and electrochemical fusion machining can improve the accuracy and surface quality. The overcutting phenomenon can also be controlled by the fusion machining. Experimental results show that the proposed method is promising for the fabrication of Invar film shadow masks. PMID:29351235

  2. Analysis of user activities on popular medical forums

    NASA Astrophysics Data System (ADS)

    Kamalov, M. V.; Dobrynin, V. Y.; Balykina, Y. E.; Martynov, R. S.

    2017-10-01

    The paper is devoted to a detailed investigation of users' behavior and level of expertise on online medical forums. Two popular forums were analyzed in terms of the presence of experts who answer health-related questions and participate in discussions. This study provides insight into the quality of medical information that one can get from web resources, and also illustrates the relationship between approved medical experts and popular authors of the considered forums. During the experiments, several machine learning and natural language processing methods were evaluated against the available web content to gain a further understanding of the structure and distribution of medical information available online today. As a result of this study, the hypothesis of a correlation between approved medical experts and popular authors was rejected.

  3. Towards a semantic web of paleoclimatology

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Eshleman, J. A.

    2012-12-01

    The paleoclimate record is information-rich, yet significant technical barriers currently exist before it can be used to automatically answer scientific questions. Here we make the case for a universal format to structure paleoclimate data. A simple example demonstrates the scientific utility of such a self-contained way of organizing coral data and meta-data in the Matlab language. This example is generalized to a universal ontology that may form the backbone of an open-source, open-access and crowd-sourced paleoclimate database. Its key attributes are: 1. Parsability: the format is self-contained (hence machine-readable), and would therefore enable a semantic web of paleoclimate information. 2. Universality: the format is platform-independent (readable on all computers and operating systems) and language-independent (readable in major programming languages). 3. Extensibility: the format requires a minimum set of fields to appropriately define a paleoclimate record, but allows the database to grow organically as more records are added, or - equally important - as more metadata are added to existing records. 4. Citability: the format enables the automatic citation of peer-reviewed articles as well as data citations whenever a data record is being used for analysis, making due recognition of scientific work an automatic part and foundational principle of paleoclimate data analysis. 5. Ergonomy: the format will be easy to use, update and manage. This structure is designed to enable semantic searches, and is expected to help accelerate discovery in all workflows where paleoclimate data are being used. Practical steps towards the implementation of such a system at the community level are then discussed. [Figure caption: Preliminary ontology describing relationships between the data and meta-data fields of the Nurhati et al. [2011] climate record. Several fields are viewed as instances of larger classes (ProxyClass, Site, Reference), which would allow computers to perform operations on all records within a specific class (e.g., if the measurement type is δ18O, if the proxy class is 'Tree Ring Width', or if the resolution is less than 3 months, etc.). All records in such a database would be bound to each other by similar links, allowing machines to automatically process any form of query involving existing information. Such a design would also allow growth, by adding records and/or additional information about each record.]
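
    A minimal sketch of what a "parsable, universal, extensible" record could look like, expressed here as JSON. The field names are illustrative assumptions loosely following the Nurhati et al. [2011] coral example, not the ontology's actual vocabulary.

      # Hedged sketch of a self-contained paleoclimate record: data and
      # metadata travel together and stay machine-readable. Field names
      # are hypothetical.
      import json

      record = {
          "proxyClass": "Coral d18O",
          "site": {"name": "Palmyra Atoll", "lat": 5.88, "lon": -162.08},
          "resolutionMonths": 1,
          "data": {"year": [1998.000, 1998.083], "d18O": [-5.21, -5.30]},
          "reference": {"citation": "Nurhati et al. (2011)", "doi": "..."},
      }
      print(json.dumps(record, indent=2))  # parsable on any platform or language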

  4. Cloud Forensics Issues

    DTIC Science & Technology

    2014-07-01

    voluminous threat environment. Today we regularly construct seamless encrypted communications between machines through SSL or other TLS. These do not...return to the web application and the user. As a prerequisite to end-to-end communication an SSL, or other suitable TLS, is set up between each of the...a TLS connection is established between the requestor and the service provider, within which a WS-Security package will be sent to the service

  5. Integrating Webtop Components with Thin-Client Web Applications using WDK Tickets

    NASA Technical Reports Server (NTRS)

    Duley, Jason

    2004-01-01

    Contents include the following: Issues surrounding encryption/decryption of password strings when deploying on different machines and platforms. Security concerns when exposing docbases to internet users. Docbase session management in Java Servlets. Customization of Webtop components. WDK Tickets as a silent login alternative. Encoding Tickets and Ticket syntax. Invoking Webtop components via an Action URL. Issues with accessing Webtop components on Mac OS X through SSL.

  6. SeqTU: A web server for identification of bacterial transcription units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xin; Chou, Wen-Chi; Ma, Qin

    A transcription unit (TU) consists of K ≥ 1 consecutive genes on the same strand of a bacterial genome that are transcribed into a single mRNA molecule under certain conditions. Their identification is an essential step in elucidation of transcriptional regulatory networks. We have recently developed a machine-learning method to accurately identify TUs from RNA-seq data, based on two features of the assembled RNA reads: the continuity and stability of RNA-seq coverage across a genomic region. While good performance was achieved by the method on Escherichia coli and Clostridium thermocellum, substantial work is needed to make the program generally applicable to all bacteria, knowing that the program requires organism-specific information. A web server, named SeqTU, was developed to automatically identify TUs with given RNA-seq data of any bacterium using a machine-learning approach. The server consists of a number of utility tools, in addition to TU identification, such as data preparation, data quality check and RNA-read mapping. SeqTU provides a user-friendly interface and automated prediction of TUs from given RNA-seq data. Furthermore, the predicted TUs are displayed intuitively using HTML format along with a graphic visualization of the prediction.
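
    The sketch below illustrates the two coverage features named above, continuity and stability, computed over a candidate region. The exact formulas are illustrative assumptions, not the published SeqTU definitions.

      # Hedged sketch of TU-style features from per-base RNA-seq depth:
      # continuity = fraction of covered positions; stability = evenness
      # of coverage. Both formulas are illustrative assumptions.
      import numpy as np

      def tu_features(depth):
          cov = np.asarray(depth, dtype=float)
          continuity = float(np.mean(cov > 0))
          stability = float(1.0 - np.std(cov) / (np.mean(cov) + 1e-9))
          return continuity, stability

      # a gap in coverage lowers continuity; ragged depth lowers stability
      print(tu_features([12, 14, 13, 0, 11, 12, 15]))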

  7. Time-related patient data retrieval for the case studies from the pharmacogenomics research network

    PubMed Central

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G.

    2012-01-01

    There are many question-based data elements in the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Semantically representing these elements so that they are machine-processable is a challenging problem for the following reasons: (1) the designers of these studies usually do not have knowledge of computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human language; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using Semantic Web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the Web Ontology Language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed so that users can customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality. PMID:23076712

  8. Time-related patient data retrieval for the case studies from the pharmacogenomics research network.

    PubMed

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G

    2012-11-01

    There are many question-based data elements in the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Semantically representing these elements so that they are machine-processable is a challenging problem for the following reasons: (1) the designers of these studies usually do not have knowledge of computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human language; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using Semantic Web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the Web Ontology Language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed so that users can customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality.
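
    As an illustration of the kind of query such a builder could emit, the sketch below runs a temporal SPARQL query with rdflib. The namespace IRI and property names are placeholders, not CNTRO's published terms.

      # Hedged sketch: a temporal SPARQL query over clinical-event triples.
      # The namespace and property names are hypothetical placeholders.
      import rdflib

      g = rdflib.Graph()
      g.parse("patient_events.ttl", format="turtle")  # hypothetical export

      query = """
      PREFIX cntro: <http://example.org/cntro#>
      SELECT ?event ?time WHERE {
          ?event cntro:hasEventTime ?t .
          ?t     cntro:hasNormalizedTime ?time .
      }
      ORDER BY ?time
      """
      for event, time in g.query(query):
          print(event, time)  # events in chronological order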

  9. SCENERY: a web application for (causal) network reconstruction from cytometry data

    PubMed Central

    Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper

    2017-01-01

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. PMID:28525568

  10. VAT: a computational framework to functionally annotate variants in personal genomes within a cloud-computing environment

    PubMed Central

    Habegger, Lukas; Balasubramanian, Suganthi; Chen, David Z.; Khurana, Ekta; Sboner, Andrea; Harmanci, Arif; Rozowsky, Joel; Clarke, Declan; Snyder, Michael; Gerstein, Mark

    2012-01-01

    Summary: The functional annotation of variants obtained through sequencing projects is generally assumed to be a simple intersection of genomic coordinates with genomic features. However, complexities arise for several reasons, including the differential effects of a variant on alternatively spliced transcripts, as well as the difficulty in assessing the impact of small insertions/deletions and large structural variants. Taking these factors into consideration, we developed the Variant Annotation Tool (VAT) to functionally annotate variants from multiple personal genomes at the transcript level as well as obtain summary statistics across genes and individuals. VAT also allows visualization of the effects of different variants, integrates allele frequencies and genotype data from the underlying individuals and facilitates comparative analysis between different groups of individuals. VAT can either be run through a command-line interface or as a web application. Finally, in order to enable on-demand access and to minimize unnecessary transfers of large data files, VAT can be run as a virtual machine in a cloud-computing environment. Availability and Implementation: VAT is implemented in C and PHP. The VAT web service, Amazon Machine Image, source code and detailed documentation are available at vat.gersteinlab.org. Contact: lukas.habegger@yale.edu or mark.gerstein@yale.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22743228

  11. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    PubMed

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    Encoding molecular information into molecular descriptors is the first step of the in silico chemoinformatics methods used in drug design. Machine learning methods provide a powerful way to find prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or the physical-chemical properties of an atom or group of atoms, to the molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, the prediction of their activity is a complicated task, and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions, and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools, and drug SMILES formulas for two servers. The molecular graph descriptor-based machine learning models could be useful tools for the in silico screening of new peptides/proteins as future drug targets for specific treatments.

  12. SeqTU: A web server for identification of bacterial transcription units

    DOE PAGES

    Chen, Xin; Chou, Wen-Chi; Ma, Qin; ...

    2017-03-07

    A transcription unit (TU) consists of K ≥ 1 consecutive genes on the same strand of a bacterial genome that are transcribed into a single mRNA molecule under certain conditions. Their identification is an essential step in elucidation of transcriptional regulatory networks. We have recently developed a machine-learning method to accurately identify TUs from RNA-seq data, based on two features of the assembled RNA reads: the continuity and stability of RNA-seq coverage across a genomic region. While good performance was achieved by the method on Escherichia coli and Clostridium thermocellum, substantial work is needed to make the program generally applicable to all bacteria, knowing that the program requires organism-specific information. A web server, named SeqTU, was developed to automatically identify TUs with given RNA-seq data of any bacterium using a machine-learning approach. The server consists of a number of utility tools, in addition to TU identification, such as data preparation, data quality check and RNA-read mapping. SeqTU provides a user-friendly interface and automated prediction of TUs from given RNA-seq data. Furthermore, the predicted TUs are displayed intuitively using HTML format along with a graphic visualization of the prediction.

  13. Semantic Document Model to Enhance Data and Knowledge Interoperability

    NASA Astrophysics Data System (ADS)

    Nešić, Saša

    To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the vision of the Semantic Web. To achieve this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents: uniquely identified and semantically annotated composite resources that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.

  14. Analysis of plastic deformation in silicon web crystals

    NASA Technical Reports Server (NTRS)

    Spitznagel, J. A.; Seidensticker, R. G.; Lien, S. Y.; Mchugh, J. P.; Hopkins, R. H.

    1987-01-01

    Numerical calculation of {111}-plane, <110>-direction slip activity in silicon web crystals generated by thermal stresses is in good agreement with etch pit patterns and X-ray topographic data. The data suggest that stress redistribution effects are small and that a model, similar to that proposed by Penning (1958) and Jordan (1981) but modified to account for dislocation annihilation and egress, can be used to describe plastic flow effects during silicon web growth.

  15. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356

  16. Fabrication and Test of Large Area Spider-Web Bolometers for CMB Measurements

    NASA Astrophysics Data System (ADS)

    Biasotti, M.; Ceriale, V.; Corsini, D.; De Gerone, M.; Gatti, F.; Orlando, A.; Pizzigoni, G.

    2016-08-01

    Detecting the primordial 'B-mode' polarization of the cosmic microwave background is one of the major challenges of modern observational cosmology. Microwave telescopes need sensitive cryogenic bolometers with an overall equivalent noise temperature in the nK range. In this paper, we present the development status of a large-area (about 1 cm²) spider-web bolometer, which implies additional fabrication challenges. The spider-web is a suspended Si3N4 membrane, 1 µm thick and 8 mm in diameter, with a mesh size of 250 µm. The thermally sensitive element is a superconducting transition edge sensor (TES) at the center of the bolometer. The first prototype is a Ti-Au TES with a transition temperature tuned around 350 mK; new devices will use a Mo-Au bilayer tuned to have a transition temperature of 500 mK. We present the fabrication process, which uses micro-machining techniques starting from a silicon wafer covered with SiO2-Si3N4 CVD films (0.3 and 1 µm thick, respectively), and preliminary tests.

  17. RSAT 2015: Regulatory Sequence Analysis Tools

    PubMed Central

    Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A.; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M.; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2015-01-01

    RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. PMID:25904632

  18. Contribution of nematodes to the structure and function of the soil food web.

    PubMed

    Ferris, Howard

    2010-03-01

    As carbon and energy flow through the soil food web they are depleted by the metabolic and production functions of organisms. To be sustained, a "long" food web, with a large biomass at higher trophic levels, must receive a high rate of rhizodeposition or detrital subsidy, or be top-populated by organisms of slow growth and long life cycle. Disturbed soil food webs tend to be bottom heavy and recalcitrant to restoration due to the slow growth of upper predator populations, physical and chemical constraints of the soil matrix, biological imbalances, and the relatively low mobility and invasion potential of soil organisms. The functional roles of nematodes, determined by their metabolic and behavioral activities, may be categorized as ecosystem services, disservices or effect-neutral. Among the disservices attributable to nematodes are overgrazing, which diminishes services of prey organisms, and plant-damaging herbivory, which reduces carbon fixation and availability to other organisms in the food web. Unfortunately, management to ameliorate potential disservices of certain nematodes results in unintended but long-lasting diminution of the services of others. Beneficial roles of nematodes may be enhanced by environmental stewardship that fosters greater biodiversity and, consequently, complementarity and continuity of their services.

  19. PMAnalyzer: a new web interface for bacterial growth curve analysis.

    PubMed

    Cuevas, Daniel A; Edwards, Robert A

    2017-06-15

    Bacterial growth curves are essential representations for characterizing bacterial metabolism within a variety of media compositions. Using high-throughput spectrophotometers capable of processing tens of 96-well plates, quantitative phenotypic information can be easily integrated into the current data structures that describe a bacterial organism. The PMAnalyzer pipeline performs a growth curve analysis to parameterize the unique features occurring within microtiter wells containing specific growth media sources. We have expanded the pipeline capabilities and provide a user-friendly, online implementation of this automated pipeline. PMAnalyzer version 2.0 provides fast automatic growth curve parameter analysis, growth identification, high-resolution figures of sample-replicate growth curves, and several statistical analyses. PMAnalyzer v2.0 can be found at https://edwards.sdsu.edu/pmanalyzer/ . Source code for the pipeline can be found on GitHub at https://github.com/dacuevas/PMAnalyzer . Source code for the online implementation can be found on GitHub at https://github.com/dacuevas/PMAnalyzerWeb . dcuevas08@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
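
    As a sketch of the parameterization step such a pipeline performs, the snippet below fits a Zwietering-style logistic model to simulated optical-density readings; PMAnalyzer's actual model and parameter set may differ.

      # Hedged sketch: fit a logistic growth model (Zwietering form) to OD
      # readings to recover max density A, max growth rate mu, and lag lam.
      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, A, mu, lam):
          return A / (1.0 + np.exp(4.0 * mu / A * (lam - t) + 2.0))

      rng = np.random.default_rng(0)
      t = np.arange(0, 24, 0.5)  # hours
      od = logistic(t, 1.2, 0.25, 4.0) + rng.normal(0, 0.01, t.size)  # simulated well

      (A, mu, lam), _ = curve_fit(logistic, t, od, p0=[1.0, 0.2, 2.0])
      print(f"A={A:.2f} OD, mu={mu:.2f} OD/h, lag={lam:.1f} h")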

  20. Zooniverse - Web scale citizen science with people and machines. (Invited)

    NASA Astrophysics Data System (ADS)

    Smith, A.; Lynn, S.; Lintott, C.; Simpson, R.

    2013-12-01

    The Zooniverse (zooniverse.org) began in 2007 with the launch of Galaxy Zoo, a project in which more than 175,000 people provided shape analyses of more than 1 million galaxy images sourced from the Sloan Digital Sky Survey. These galaxy 'classifications', some 60 million in total, have since been used to produce more than 50 peer-reviewed publications, based not only on the original research goals of the project but also on serendipitous discoveries made by the volunteer community. Based upon the success of Galaxy Zoo, the team has gone on to develop more than 25 web-based citizen science projects, all with a strong research focus, in a range of subjects from astronomy to zoology where human-based analysis still exceeds that of machine intelligence. Over the past 6 years Zooniverse projects have collected more than 300 million data analyses from over 1 million volunteers, providing fantastically rich datasets not only for the individuals working to produce research from their projects but also for the machine learning and computer vision research communities. The Zooniverse platform has always been developed to be the 'simplest thing that works', implementing only the most rudimentary algorithms for functionality such as task allocation and user-performance metrics - simplifications necessary to scale the Zooniverse so that the core team of developers and data scientists can remain small and the cost of running the computing infrastructure relatively modest. To date these simplifications have been appropriate for the data volumes and analysis tasks being addressed. This situation however is changing: next-generation telescopes such as the Large Synoptic Survey Telescope (LSST) will produce data volumes dwarfing those previously analyzed. If citizen science is to have a part to play in analyzing these next-generation datasets, then the Zooniverse will need to evolve into a smarter system capable, for example, of modeling the abilities of users and the complexities of the data being classified in real time. In this session I will outline the current architecture of the Zooniverse platform and introduce new functionality being developed to enable the development of true 'social machines'. Our platform is evolving into a system capable of integrating human and machine intelligence in a live environment, and thus capable of addressing some of the biggest challenges in big-data science.
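
    The sketch below illustrates one of the "smarter" aggregation steps mentioned above: weighting volunteer classifications by a per-user ability estimate. The weights and data are invented for illustration and do not reflect the Zooniverse's actual algorithms.

      # Hedged sketch: weighted-vote aggregation of volunteer labels, with
      # per-user weights assumed to come from gold-standard agreement.
      from collections import defaultdict

      votes = [("img1", "alice", "spiral"), ("img1", "bob", "elliptical"),
               ("img1", "carol", "spiral"), ("img2", "bob", "spiral")]
      user_weight = {"alice": 0.9, "bob": 0.4, "carol": 0.7}  # hypothetical

      tallies = defaultdict(lambda: defaultdict(float))
      for subject, user, label in votes:
          tallies[subject][label] += user_weight.get(user, 0.5)  # 0.5 = unknown user

      for subject, counts in tallies.items():
          print(subject, max(counts, key=counts.get))  # consensus label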

  1. Global-scale carbon and energy flows through the marine planktonic food web: An analysis with a coupled physical-biological model

    NASA Astrophysics Data System (ADS)

    Stock, Charles A.; Dunne, John P.; John, Jasmin G.

    2014-01-01

    Global-scale planktonic ecosystem models exhibit large differences in simulated net primary production (NPP) and assessment of planktonic food web fluxes beyond primary producers has been limited, diminishing confidence in carbon flux estimates from these models. In this study, a global ocean-ice-ecosystem model was assessed against a suite of observation-based planktonic food web flux estimates, many of which were not considered in previous modeling studies. The simulation successfully captured cross-biome differences and similarities in these fluxes after calibration of a limited number of highly uncertain yet influential parameters. The resulting comprehensive carbon budgets suggested that shortened food webs, elevated growth efficiencies, and tight consumer-resource coupling enable oceanic upwelling systems to support 45% of pelagic mesozooplankton production despite accounting for only 22% of ocean area and 34% of NPP. In seasonally stratified regions (42% of ocean area and 40% of NPP), weakened consumer-resource coupling tempers mesozooplankton production to 41% and enhances export below 100 m to 48% of the global total. In oligotrophic systems (36% of ocean area and 26% of NPP), the dominance of small phytoplankton and low consumer growth efficiencies supported only 14% of mesozooplankton production and 17% of export globally. Bacterial production, in contrast, was maintained in nearly constant proportion to primary production across biomes through the compensating effects of increased partitioning of NPP to the microbial food web in oligotrophic ecosystems and increased bacterial growth efficiencies in more productive areas. Cross-biome differences in mesozooplankton trophic level were muted relative to those invoked by previous work such that significant differences in consumer growth efficiencies and the strength of consumer-resource coupling were needed to explain sharp cross-biome differences in mesozooplankton production. Lastly, simultaneous consideration of multiple flux constraints supports a highly distributed view of respiration across the planktonic food web rather than one dominated by heterotrophic bacteria. The solution herein is unlikely unique in its ability to explain observed cross-biome energy flow patterns and notable misfits remain. Resolution of existing uncertainties in observed biome-scale productivity and increasingly mechanistic physical and biological model components should yield significant refinements to estimates herein.

  2. Intervality and coherence in complex networks

    NASA Astrophysics Data System (ADS)

    Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.

    2016-06-01

    Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
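
    As a concrete illustration of the property discussed, the sketch below measures how "interval" a small directed food web is under a given species ordering. The toy adjacency matrix is hypothetical.

      # Hedged sketch: fraction of predators whose prey occupy an unbroken
      # interval under a given species ordering. The web is a toy example.
      import numpy as np

      A = np.array([[0, 0, 0, 0],    # A[i, j] = 1 if predator i eats prey j
                    [1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 1, 1, 1]])

      def interval_fraction(adj, order):
          adj = adj[:, order]                    # reorder the prey axis
          ok = 0
          for row in adj:
              prey = np.flatnonzero(row)
              if prey.size == 0 or prey[-1] - prey[0] + 1 == prey.size:
                  ok += 1                        # prey form a contiguous interval
          return ok / len(adj)

      print(interval_fraction(A, order=[0, 1, 2, 3]))  # 1.0: perfectly interval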

  3. Entrez Neuron RDFa: a pragmatic Semantic Web application for data integration in neuroscience research

    PubMed Central

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2013-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present “Entrez Neuron”, a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the ‘HCLS knowledgebase’ developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup. PMID:19745321

  4. A Web-Based Information System for Field Data Management

    NASA Astrophysics Data System (ADS)

    Weng, Y. H.; Sun, F. S.

    2014-12-01

    A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts in the middle tier, and MySQL is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format, which is both human-readable and machine-readable, and thus ready for reuse.
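
    A sketch of the XML conversion step described above; the element names are placeholders rather than the system's actual schema.

      # Hedged sketch: serialize one field observation to XML so it is both
      # human- and machine-readable. Element names are hypothetical.
      import xml.etree.ElementTree as ET

      obs = {"site": "Outcrop-12", "lat": "41.02", "lon": "-81.52",
             "lithology": "sandstone", "strike": "045", "dip": "12NW"}

      root = ET.Element("fieldObservation")
      for key, value in obs.items():
          ET.SubElement(root, key).text = value

      ET.indent(root)  # pretty-print; requires Python 3.9+
      print(ET.tostring(root, encoding="unicode"))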

  5. BrainLiner: A Neuroinformatics Platform for Sharing Time-Aligned Brain-Behavior Data

    PubMed Central

    Takemiya, Makoto; Majima, Kei; Tsukamoto, Mitsuaki; Kamitani, Yukiyasu

    2016-01-01

    Data-driven neuroscience aims to find statistical relationships between brain activity and task behavior from large-scale datasets. To facilitate high-throughput data processing and modeling, we created BrainLiner as a web platform for sharing time-aligned, brain-behavior data. Using an HDF5-based data format, BrainLiner treats brain activity and data related to behavior with the same salience, aligning both behavioral and brain activity data on a common time axis. This facilitates learning the relationship between behavior and brain activity. Using a common data file format also simplifies data processing and analyses. Properties describing data are unambiguously defined using a schema, allowing machine-readable definition of data. The BrainLiner platform allows users to upload and download data, as well as to explore and search for data from the web platform. A WebGL-based data explorer can visualize highly detailed neurophysiological data from within the web browser, and a data-driven search feature allows users to search for similar time windows of data. This increases transparency, and allows for visual inspection of neural coding. BrainLiner thus provides an essential set of tools for data sharing and data-driven modeling. PMID:26858636

  6. RSAT 2018: regulatory sequence analysis tools 20th anniversary.

    PubMed

    Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane

    2018-05-02

    RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.

  7. Machine learning approach for automatic quality criteria detection of health web pages.

    PubMed

    Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia

    2007-01-01

    The number of medical websites is constantly growing [1]. Owing to the open nature of the Web, the reliability of information available on the Web is uneven. Internet users are overwhelmed by the quantity of information available on the Web. The situation is even more critical in the medical area, as the content proposed by health websites can have a direct impact on users' well-being. One way to control the reliability of health websites is to assess their quality and to make this assessment available to users. The HON Foundation has defined a set of eight ethical principles. HON's experts work to manually determine whether a given website complies with the required principles. As the number of medical websites is constantly growing, manual expertise becomes insufficient, and automatic systems should be used to help the medical experts. In this paper we present the design and evaluation of an automatic system conceived for the categorisation of medical and health documents according to the HONcode ethical principles. A first evaluation shows promising results: the system currently achieves 0.78 micro-precision and 0.73 F-measure, with a 0.06 error rate.
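
    To make the categorisation approach concrete, the sketch below trains a bag-of-words classifier for a single quality criterion. The toy examples and the choice of TF-IDF plus logistic regression are assumptions, not the system's actual features or learner.

      # Hedged sketch: text categorisation of health pages against one
      # quality principle. Training data here are invented toy examples.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      pages = ["Authored by Dr. A. Smith, MD, board-certified cardiologist",
               "Miracle cure! No doctors needed, order now",
               "Our editorial board reviews every article for accuracy",
               "Anonymous tips that physicians do not want you to know"]
      labels = [1, 0, 1, 0]  # 1 = complies with an authority-style principle

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      clf.fit(pages, labels)
      print(clf.predict(["Written by a licensed physician"]))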

  8. Identifying unproven cancer treatments on the health web: addressing accuracy, generalizability and scalability.

    PubMed

    Aphinyanaphongs, Yin; Fu, Lawrence D; Aliferis, Constantin F

    2013-01-01

    Building machine learning models that identify unproven cancer treatments on the Health Web is a promising approach for dealing with the dissemination of false and dangerous information to vulnerable health consumers. Aside from the obvious requirement of accuracy, two issues are of practical importance in deploying these models in real world applications. (a) Generalizability: The models must generalize to all treatments (not just the ones used in the training of the models). (b) Scalability: The models can be applied efficiently to billions of documents on the Health Web. First, we provide methods and related empirical data demonstrating strong accuracy and generalizability. Second, by combining the MapReduce distributed architecture and high dimensionality compression via Markov Boundary feature selection, we show how to scale the application of the models to WWW-scale corpora. The present work provides evidence that (a) a very small subset of unproven cancer treatments is sufficient to build a model to identify unproven treatments on the web; (b) unproven treatments use distinct language to market their claims and this language is learnable; (c) through distributed parallelization and state of the art feature selection, it is possible to prepare the corpora and build and apply models with large scalability.

  9. Templet Web: the use of volunteer computing approach in PaaS-style cloud

    NASA Astrophysics Data System (ADS)

    Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil

    2018-03-01

    This article presents the Templet Web cloud service. The service is designed for the automation of high-performance scientific computing. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives for achieving this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) automation of high-performance computing program development. The distinctive feature of the service is an approach mainly used in the field of volunteer computing, whereby a person who has access to a computer system delegates his or her access rights to a requesting user. We developed an access procedure, algorithms, and software for the utilization of free computational resources of an academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.

  10. Defrosting the digital library: bibliographic tools for the next generation web.

    PubMed

    Hull, Duncan; Pettifer, Steve R; Kell, Douglas B

    2008-10-01

    Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as "thought in cold storage," and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines.

  11. Defrosting the Digital Library: Bibliographic Tools for the Next Generation Web

    PubMed Central

    Hull, Duncan; Pettifer, Steve R.; Kell, Douglas B.

    2008-01-01

    Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as “thought in cold storage,” and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines. PMID:18974831

  12. Design and development of an IoT-based web application for an intelligent remote SCADA system

    NASA Astrophysics Data System (ADS)

    Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long

    2018-03-01

    This paper presents the design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) framework for establishing a remote electrical power monitoring and control system using responsive web design (RWD), and Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data are sent to the client over the TCP/IP protocol, which supports mobile devices with different screen sizes. Users can issue instructions immediately without being present to check conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function based on a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains analog input/output and basic digital input/output that can be applied to a motor driver and an inverter for integration with an IoT-based remote SCADA system, thus achieving efficient power management.
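
    A minimal sketch of the sensing-data path described above: a sensor-side client pushes a timestamped JSON reading to a collector over TCP/IP. The host, port, and message format are illustrative assumptions, not the published system's protocol.

    ```python
    import json
    import socket
    import time

    # Hypothetical address of the SCADA data collector; illustrative only.
    SERVER = ("192.168.1.100", 5020)

    def send_reading(sensor_id: str, value: float) -> None:
        """Send one timestamped sensor reading as a JSON line over TCP."""
        payload = json.dumps(
            {"sensor": sensor_id, "value": value, "ts": time.time()}
        ).encode() + b"\n"
        with socket.create_connection(SERVER, timeout=5) as sock:
            sock.sendall(payload)

    send_reading("power_meter_1", 230.4)
    ```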

  13. The Learning Space: Teachers Taking Charge.

    ERIC Educational Resources Information Center

    Steede-Terry, Karen

    2001-01-01

    Describes The Learning Space, a Seattle-based organization that supports classroom teachers by providing a means of communicating and collaborating with other teachers via the World Wide Web. Discusses the Web site, which includes classroom lessons, and considers the organization's growth as it expands to other states. (LRW)

  14. Semantic Metadata for Heterogeneous Spatial Planning Documents

    NASA Astrophysics Data System (ADS)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland are presented to evaluate the approach's efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  15. Presentation and response timing accuracy in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2015-06-01

    Web-based research is becoming ubiquitous in the behavioral sciences, facilitated by convenient, readily available participant pools and relatively straightforward ways of running experiments: most recently, through the development of the HTML5 standard. Although in most studies participants give untimed responses, there is a growing interest in being able to record response times online. Existing data on the accuracy and cross-machine variability of online timing measures are limited, and generally they have compared behavioral data gathered on the Web with similar data gathered in the lab. For this article, we took a more direct approach, examining two ways of running experiments online, Adobe Flash and HTML5 with CSS3 and JavaScript, across 19 different computer systems. We used specialist hardware to measure stimulus display durations and to generate precise response times to visual stimuli in order to assess measurement accuracy, examining effects of duration, browser, and system-to-system variability (such as across different Windows versions), as well as effects of processing power and graphics capability. We found that (a) Flash and JavaScript's presentation and response time measurement accuracy are similar; (b) within-system variability is generally small, even in low-powered machines under high load; (c) the variability of measured response times across systems is somewhat larger; and (d) browser type and system hardware appear to have relatively small effects on measured response times. Modeling of the effects of this technical variability suggests that for most within- and between-subjects experiments, Flash and JavaScript can both be used to accurately detect differences in response times across conditions. Concerns are, however, noted about using some correlational or longitudinal designs online.
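
    The closing claim about modeling technical variability can be illustrated with a toy simulation: add measurement noise of roughly the reported magnitude to simulated response times and check whether a between-condition difference remains detectable. All numbers below are illustrative, not the authors' parameters.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulate(effect_ms=30.0, noise_sd_ms=10.0, n=40, trials=50):
        """Simulate a between-subjects RT difference plus timing noise."""
        a_true = rng.normal(500.0, 80.0, size=(n, trials))              # condition A
        b_true = rng.normal(500.0 + effect_ms, 80.0, size=(n, trials))  # condition B
        a = (a_true + rng.normal(0, noise_sd_ms, a_true.shape)).mean(axis=1)
        b = (b_true + rng.normal(0, noise_sd_ms, b_true.shape)).mean(axis=1)
        return stats.ttest_ind(a, b).pvalue

    print(f"p = {simulate():.4f}")  # the effect survives the added noise
    ```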

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheu, R; Ghafar, R; Powers, A

    Purpose: Demonstrate the effectiveness of in-house software in ensuring EMR workflow efficiency and safety. Methods: A web-based dashboard system (WBDS) was developed to monitor clinical workflow in real time using web technology (WAMP) through ODBC (Open Database Connectivity). Within Mosaiq (Elekta Inc), operational workflow is driven and indicated by Quality Check Lists (QCLs), which are triggered by the automation software IQ Scripts (Elekta Inc); QCLs rely on user completion to propagate. The WBDS retrieves data directly from the Mosaiq SQL database and tracks clinical events in real time. For example, the necessity of an initial physics chart check can be determined by screening all patients on treatment who have received their first fraction but have not yet had their first chart check. Monitoring such "real" events with our in-house software creates a safety net, as its propagation does not rely on individual users' input. Results: The WBDS monitors the following: patient care workflow (initial consult to end of treatment), daily treatment consistency (scheduling, technique, charges), physics chart checks (initial, EOT, weekly), new starts, missing treatments (>3 warning/>5 fractions, action required), and machine overrides. The WBDS can be launched from any web browser, which gives the end user complete transparency and timely information. Since the creation of the dashboards, workflow interruptions due to accidental deletion or completion of QCLs have been eliminated. Additionally, all physics chart checks were completed on time. Prompt notifications of treatment record inconsistencies and machine overrides have decreased the time between occurrence and execution of corrective action. Conclusion: Our clinical workflow relies primarily on QCLs and IQ Scripts; however, this functionality is not a panacea for safety and efficiency. The WBDS creates a more thorough system of checks to provide a safer and nearly error-free working environment.
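
    A sketch of the kind of direct SQL screening the dashboard performs, here with pyodbc over an ODBC DSN. The table and column names are invented for illustration and are not the actual Mosaiq schema.

    ```python
    import pyodbc  # assumes an ODBC driver and a read-only DSN are configured

    # Hypothetical query: patients who have received a first fraction but have
    # no completed initial physics chart check. Schema names are illustrative.
    QUERY = """
    SELECT p.patient_id, p.name
    FROM patients p
    JOIN treatments t ON t.patient_id = p.patient_id
    LEFT JOIN chart_checks c
           ON c.patient_id = p.patient_id AND c.check_type = 'initial'
    WHERE t.fraction_number >= 1 AND c.patient_id IS NULL
    """

    conn = pyodbc.connect("DSN=mosaiq_readonly")
    for patient_id, name in conn.cursor().execute(QUERY):
        print(f"Initial chart check needed: {patient_id} {name}")
    conn.close()
    ```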

  17. Web-based education in anesthesiology: a critical overview.

    PubMed

    Doyle, D John

    2008-12-01

    The purpose of this review is to discuss the rise of web-based educational resources available to the anesthesiology community. Recent developments of particular importance include the growth of 'Web 2.0' resources, the development of the concepts of 'open access' and 'information philanthropy', and the expansion of web-based medical simulation software products. In addition, peer review of online educational resources has now come of age. The World Wide Web has made available a large variety of valuable medical information and education resources only dreamed of two decades ago. To a large extent, these developments represent a shift in the focus of medical education resources to emphasize free access to materials and to encourage collaborative development efforts.

  18. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
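
    In the spirit of QM's messaging model, a client might post a task and poll for its result over HTTP, as in this sketch. The base URL, routes, and JSON fields are assumptions for illustration, not the real QMachine API.

    ```python
    import time
    import requests

    BASE = "https://example.org/qm/api"  # hypothetical QM-style endpoint

    def post_task(code: str) -> str:
        """Post a task for volunteer browsers to execute; return its id."""
        resp = requests.post(f"{BASE}/tasks", json={"code": code}, timeout=30)
        resp.raise_for_status()
        return resp.json()["task_id"]

    def wait_for_result(task_id: str, poll_s: float = 2.0):
        """Poll until some volunteer machine has returned a result."""
        while True:
            body = requests.get(f"{BASE}/tasks/{task_id}", timeout=30).json()
            if body.get("status") == "done":
                return body["result"]
            time.sleep(poll_s)
    ```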

  19. Spider assemblages associated with different crop stages of irrigated rice agroecosystems from eastern Uruguay

    PubMed Central

    Ginella, Juaquín; Cadenazzi, Mónica; Castiglioni, Enrique A.; Martínez, Sebastián; Casales, Luis; Caraballo, María P.; Laborda, Álvaro; Simo, Miguel

    2018-01-01

    The rice crop and its associated ecosystems constitute a rich mosaic of habitats that preserves a rich biological diversity. Spiders are an abundant and successful group of natural predators considered efficient in the biocontrol of the major insect pests in agroecosystems. Spider diversity in different stages of rice crop growth in eastern Uruguay was analysed. The field study was conducted on six rice farms that use a rice-pasture rotation system, with pasture installed as a cover crop during the intercropping stage. Six rice crops distributed across three locations were sampled with pitfall traps and an entomological vacuum suction machine. Sixteen families, representing six guilds, were collected. Lycosidae, Linyphiidae, Anyphaenidae and Tetragnathidae were the most abundant families (26%, 25%, 20% and 12%, respectively) and comprised more than 80% of total abundance. Other hunters (29%), sheet web weavers (25%) and ground hunters (24%) were the most abundant guilds. Species composition along the different crop stages differed significantly according to the ANOSIM test. The results showed higher spider abundance and diversity across the crop and intercrop stages. This study represents the first contribution to the knowledge of spider diversity associated with the rice agroecosystem in the country. PMID:29755261

  20. Spider assemblages associated with different crop stages of irrigated rice agroecosystems from eastern Uruguay.

    PubMed

    Bao, Leticia; Ginella, Juaquín; Cadenazzi, Mónica; Castiglioni, Enrique A; Martínez, Sebastián; Casales, Luis; Caraballo, María P; Laborda, Álvaro; Simo, Miguel

    2018-01-01

    The rice crop and its associated ecosystems constitute a rich mosaic of habitats that preserves a rich biological diversity. Spiders are an abundant and successful group of natural predators considered efficient in the biocontrol of the major insect pests in agroecosystems. Spider diversity in different stages of rice crop growth in eastern Uruguay was analysed. The field study was conducted on six rice farms that use a rice-pasture rotation system, with pasture installed as a cover crop during the intercropping stage. Six rice crops distributed across three locations were sampled with pitfall traps and an entomological vacuum suction machine. Sixteen families, representing six guilds, were collected. Lycosidae, Linyphiidae, Anyphaenidae and Tetragnathidae were the most abundant families (26%, 25%, 20% and 12%, respectively) and comprised more than 80% of total abundance. Other hunters (29%), sheet web weavers (25%) and ground hunters (24%) were the most abundant guilds. Species composition along the different crop stages differed significantly according to the ANOSIM test. The results showed higher spider abundance and diversity across the crop and intercrop stages. This study represents the first contribution to the knowledge of spider diversity associated with the rice agroecosystem in the country.

  1. Clustering and Candidate Motif Detection in Exosomal miRNAs by Application of Machine Learning Algorithms.

    PubMed

    Gaur, Pallavi; Chaturvedi, Anoop

    2017-07-22

    Clustering patterns and motifs carry immense information about any biological data. This paper presents an application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes. Recent progress in exosome research, particularly regarding exosomal miRNAs, has given rise to a substantial body of bioinformatics work. Information on clustering patterns and candidate motifs in miRNAs of exosomal origin helps in analyzing both existing and newly discovered miRNAs within exosomes. Along with obtaining clustering patterns and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be executed efficiently on various programming languages/platforms. Data were clustered and candidate sequence motifs were detected successfully, and the results were compared and validated against available web tools such as BLASTN and the MEME suite. This information should yield deeper insight for analyses of newly discovered miRNAs in exosomes, which are considered circulating biomarkers. In addition, executing machine learning algorithms on various language platforms gives users the flexibility to run multiple iterations according to their requirements. The approach can be applied to other biological data-mining tasks as well.
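
    One plausible reading of the clustering step, sketched with scikit-learn: represent each miRNA sequence by its k-mer counts and cluster with k-means. The sequences and parameters below are toy illustrations, not the paper's exact method.

    ```python
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import CountVectorizer

    # Toy miRNA-like sequences; real exosomal miRNAs would be read from FASTA.
    seqs = ["UGAGGUAGUAGGUUGUAUAGUU", "UGAGGUAGUAGGUUGUGUGGUU",
            "CAUUGCACUUGUCUCGGUCUGA", "AAUUGCACGGUAUCCAUCUGUA"]

    # Represent each sequence by its 3-mer (trinucleotide) counts.
    vec = CountVectorizer(analyzer="char", ngram_range=(3, 3))
    X = vec.fit_transform(seqs)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(dict(zip(seqs, labels)))
    ```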

  2. Digital Earth Watch (DEW): How Mobile Apps Are Paving The Way Towards A Federated Web-Services Architecture For Citizen Science

    NASA Astrophysics Data System (ADS)

    Carrera, F.; Schloss, A. L.; Guerin, S.; Beaudry, J.; Pickle, J.

    2011-12-01

    Dozens of web-based initiatives allow citizens to provide information to programs that monitor the health of our environment. A concerned citizen can participate on-line as a weather "spotter", provide important phenological information to national databases, update bird counts in the area, or record the freezing of ponds, and much more. Many of these programs are developing mobile apps as companion tools to their web sites. Our group was involved in the development of one such companion app as an adjunct to the Picture Post project web site. Digital Earth Watch (DEW) and the Picture Post network support environmental monitoring through repeat digital photography and satellite imagery. A Picture Post is an eight-sided platform on a stand-alone post for taking a panoramic series of photographs. By taking pictures on a regular basis at Picture Post sites and by sharing these pictures on the program's web site (housed at the University of New Hampshire), citizen scientists are creating a photographic library of change-over-time in their local area and contributing to national monitoring programs. Our DEW Android application simplifies participation by allowing users to upload pictures instantly from their smartphones. The app also removes the constraint of the physical picture post by allowing users to create a virtual post anywhere in the world. Posts have been set up to monitor trails, forests, water, wetlands, gardens and landscapes. The app uses the phone's GPS to position the virtual post in its geographic location and guides the user through the orientations using the internal accelerometers and compass. To aid in the before-and-after comparison of images taken from the same orientation, the DEW app displays an "onionskin" of the prior image overlaid onto the camera viewfinder. With the transparent onionskin as a guide, the user can align the images more accurately, thus allowing differences between pictures to be detectable and measurable. The app interacts with the UNH server via APIs (Application Programming Interfaces) that were created to allow bi-directional machine-to-machine interaction between the mobile device and the web site. Thus, the principal functions that a user can perform on the web site, such as finding post sites on a map and viewing and adding picture sets, are available on the smartphone. The development of the APIs makes it possible not only to communicate with our own mobile app but, more importantly, it opens the door for other computer systems to directly interact with our server. Our ongoing discussions with the National Phenology Network and Project Budburst have highlighted the potential (and perhaps the need) for the creation of a distributed web-service architecture whereby each national program exposes its key functionalities not only to its own mobile phone apps, but also to other organizations, in a federated system of servers, all supporting citizen-based digital earth watch programs.

  3. Microelectromechanical Systems; A DoD Dual Use Technology Industrial Assessment.

    DTIC Science & Technology

    1995-12-01

    systems, • embedded sensors and actuators for condition-based maintenance of machines and vehicles, on-demand amplified structural strength in...will transmit temperature, pressure, and number-of- rotations information to a hand-held receiver used by the maintenance and service personnel. This...automobile industry being the major driver for most micro- machined sensors (pressure, acceleration and oxygen). In 1994 model year Projected Growth

  4. Crystal nucleation in metallic alloys using x-ray radiography and machine learning

    PubMed Central

    Arteta, Carlos; Lempitsky, Victor

    2018-01-01

    The crystallization of solidifying Al-Cu alloys over a wide range of conditions was studied in situ by synchrotron x-ray radiography, and the data were analyzed using a computer vision algorithm trained using machine learning. The effect of cooling rate and solute concentration on nucleation undercooling, crystal formation rate, and crystal growth rate was measured automatically for thousands of separate crystals, which was impossible to achieve manually. Nucleation undercooling distributions confirmed the efficiency of extrinsic grain refiners and gave support to the widely assumed free growth model of heterogeneous nucleation. We show that crystallization occurred in temporal and spatial bursts associated with a solute-suppressed nucleation zone. PMID:29662954

  5. ABrowse--a customizable next-generation genome browser framework.

    PubMed

    Kong, Lei; Wang, Jun; Zhao, Shuqi; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge

    2012-01-05

    With the rapid growth of genome sequencing projects, genome browsers are becoming indispensable, not only as visualization systems but also as interactive platforms that support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Based on next-generation web technologies, we have developed a general-purpose genome browser framework, ABrowse, which provides an interactive browsing experience, open data access and collaborative work support. By supporting Google-map-like smooth navigation, ABrowse offers end users a highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store and share comments, annotations and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers can specify SQL statements according to the database schema, and customized pages for the detailed display of annotation entries can easily be plugged in. For developers, new drawing strategies can be integrated into ABrowse for new types of annotation data. In addition, a standard web service is provided for remote data retrieval, giving an underlying machine-oriented programming interface for open data access. The ABrowse framework is valuable for end users, data providers and developers because it provides rich user functions and flexible customization approaches. The source code is published under the GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for the Arabidopsis thaliana genome has been built at http://arabidopsis.cbi.edu.cn/.

  6. Evaluation of the Effectiveness of Machine-based Situation Assessment - Preliminary Work

    DTIC Science & Technology

    2008-08-01

    consumers . 5. In nature, food webs can be described using networks. 6. An organisation is a network of people. 7. In the social domain there are...examine 6 DSTO-TN-0836 the linguistic affinity between concepts [16, 14, 17]. For example, if Concept A is drink and Concept B is beverage , they...of convergence behaviour they exhibit. A further extension to the methodology described above for comparing the situation assessment with the

  7. A Multi-scale Cognitive Approach to Intrusion Detection and Response

    DTIC Science & Technology

    2015-12-28

    the behavior of the traffic on the network, either by using mathematical formulas or by replaying packet streams. As a result, simulators depend...large scale. Summary of the most important results We obtained a powerful machine, which has 768 cores and 1.25 TB memory . RBG has been...time. Each client is configured with 1GB memory , 10 GB disk space, and one 100M Ethernet interface. The server nodes include web servers

  8. Webcam classification using simple features

    NASA Astrophysics Data System (ADS)

    Pramoun, Thitiporn; Choe, Jeehyun; Li, He; Chen, Qingshuang; Amornraksa, Thumrongrat; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Thousands of sensors are connected to the Internet, and many of these sensors are cameras. The "Internet of Things" will contain many "things" that are image sensors. This vast network of distributed cameras (i.e., webcams) will continue to grow exponentially. In this paper we examine simple methods to classify an image from a webcam as "indoor/outdoor" and as having "people/no people" based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as having people/no people we use HOG and texture features. The features are weighted based on their significance and combined, and a support vector machine is used for classification. Our system with feature weighting and feature combination yields 95.5% accuracy.
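
    A sketch of the described pipeline: weight per-type feature vectors, concatenate them, and train a support vector machine. The extractors, weights, and labels are placeholders, not the paper's trained values.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Placeholder extractors; real ones would compute color, edge, line,
    # and text descriptors from the webcam image.
    def extract_features(image) -> dict:
        return {"color": rng.random(8), "edge": rng.random(8),
                "line": rng.random(4), "text": rng.random(2)}

    # Illustrative per-feature weights (the paper weights by significance).
    WEIGHTS = {"color": 1.0, "edge": 0.8, "line": 0.5, "text": 0.3}

    def combine(feats: dict) -> np.ndarray:
        return np.concatenate([WEIGHTS[k] * feats[k] for k in sorted(feats)])

    X = np.stack([combine(extract_features(None)) for _ in range(20)])
    y = np.array([0, 1] * 10)  # toy labels: 0 = indoor, 1 = outdoor
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:3]))
    ```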

  9. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    NASA Astrophysics Data System (ADS)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving, and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web-based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable and machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format remain backward compatible with analysis software. XML technology is at the heart of communication over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web-based applications, educational outreach, and efficient collaboration between research groups.
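
    Because LANML is plain XML and needs no dedicated parser, a record can be read with the standard library alone, as below. The element names are hypothetical stand-ins, not the actual LANML schema.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical LANML-like record; real element names follow the LANML schema.
    doc = """<lanml>
      <measurement>
        <site lat="32.2" lon="-110.9"/>
        <sky_brightness unit="mag/arcsec2">21.3</sky_brightness>
        <timestamp>2013-05-01T04:30:00Z</timestamp>
      </measurement>
    </lanml>"""

    root = ET.fromstring(doc)
    for m in root.iter("measurement"):
        site = m.find("site").attrib
        value = float(m.find("sky_brightness").text)
        print(site["lat"], site["lon"], value)
    ```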

  10. The Dynamic Tensile Behavior of Railway Wheel Steel at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Han, Liangliang; Zhao, Longmao; Zhang, Ying

    2016-11-01

    Dynamic tensile tests on D1 railway wheel steel at high strain rates were conducted using a split Hopkinson tensile bar (SHTB) apparatus and compared with quasi-static tests. Three different types of specimens, machined from three different positions of a railway wheel (the rim, web, and hub), were prepared and examined. The rim specimens were found to have a higher yield stress and ultimate tensile strength than the web and hub specimens under both quasi-static and dynamic loading, and the railway wheel steel was shown to be strain rate dependent in dynamic tension. The dynamic tensile fracture surfaces of all the wheel steel specimens exhibit a cup-and-cone morphology on the macroscopic scale, with quasi-ductile fracture features on the microscopic scale.

  11. Impacts of changing food webs in Lake Ontario: Implications of dietary fatty acids on growth of alewives

    USGS Publications Warehouse

    Snyder, R.J.; Demarche, C.J.; Honeyfield, D.C.

    2011-01-01

    Declines in the abundance and condition of Great Lakes Alewives have been reported periodically during the last two decades, and the reasons for these declines remain unclear. To better understand how food web changes may influence Alewife growth and Wisconsin growth model predictions, we fed Alewives isocaloric diets high in omega-6 fatty acids (corn oil) or high in omega-3 fatty acids (fish oil). Alewives were fed the experimental diets at either 1% (“low ration”) or 3% (“high ration”) of their wet body weight per day. After six weeks, Alewives maintained on the high ration diets were significantly larger than those fed the low ration diets. Moreover, Alewives given the high ration fish oil diet were significantly larger than those maintained on the high ration corn oil diet after six weeks of growth. Body lipid, energy density and total body energy of Alewives on the high ration diets were significantly higher than those fed the low ration diets, and total body energy was significantly higher in Alewives given the high ration fish oil diet compared to those on the high ration corn oil diet. The current Wisconsin bioenergetics model underestimated growth and overestimated food consumption by Alewives in our study. Alewife thiaminase activity was similar among treatment groups. Overall, our results suggest that future food web changes in Lake Ontario, particularly if they involve decreases in the abundance of lipid rich prey items such as Mysis, may reduce Alewife growth rates and total body energy due to reductions in the availability of dietary omega-3 fatty acids.

  12. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences.

    PubMed

    An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin

    2016-05-18

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements come from representing protein sequences with the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise with Principal Component Analysis (PCA), and classifying with a Relevance Vector Machine (RVM). We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous work. In addition, we obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets, C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli, for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method clearly outperforms the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely available web server called RVMAB-PPI in Hypertext Preprocessor (PHP) for predicting PPIs. The web server, including source code and the datasets, is available at http://219.219.62.123:8888/ppi_ab/.
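
    scikit-learn ships no relevance vector machine, so the sketch below substitutes a kernel SVM purely to illustrate the PCA-plus-classifier pipeline with five-fold cross-validation; the random matrix stands in for AB features computed from PSSMs.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 400))   # stand-in for AB features from PSSMs
    y = rng.integers(0, 2, size=200)  # 1 = interacting pair, 0 = not

    # PCA for noise reduction, then a kernel classifier in place of the RVM.
    model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"five-fold accuracy: {scores.mean():.3f}")
    ```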

  13. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    PubMed Central

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Predicting protein–protein interactions (PPIs) is a challenging task and essential to constructing protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, they have unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), which contains the protein evolutionary information; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; and (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on the yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57 and 90.57%, respectively. The experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on an imbalanced yeast dataset, which is higher than that achieved on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can serve as an automatic decision support tool for future proteomics research. To facilitate extensive studies for future proteomics research, we developed a freely available web server called RVM-BiGP-PPIs in Hypertext Preprocessor (PHP) for predicting PPIs. The web server, including source code and the datasets, is available at http://219.219.62.123:8888/BiGP/. PMID:27452983

  14. An Instructional Strategy Framework for Online Learning Environments.

    ERIC Educational Resources Information Center

    Johnson, Scott D.; Aragon, Steven R.

    The rapid growth of Web-based instruction has raised many questions about the quality of online courses. It appears that many online courses are simply modeled after traditional forms of instruction instead of incorporating a design that takes advantage of the unique capabilities of Web-based learning environments. This paper describes a research…

  15. Cloud-Based Technologies: Faculty Development, Support, and Implementation

    ERIC Educational Resources Information Center

    Diaz, Veronica

    2011-01-01

    The number of instructional offerings in higher education that are online, blended, or web-enhanced, including courses and programs, continues to grow exponentially. Alongside the growth of e-learning, higher education has witnessed the explosion of cloud-based or Web 2.0 technologies, a term that refers to the vast array of socially oriented,…

  16. Leadership 2.0: Social Media in Advocacy

    ERIC Educational Resources Information Center

    Gonzales, Lisa; Vodicka, Devin; White, John

    2011-01-01

    Technology is always changing, always improving, and always pushing the envelope for how one works in education. In this increasingly connected age, people have seen rapid growth in social network tools such as Twitter and Facebook. These sites are representative of Web 2.0 resources where users contribute content. Other examples of Web 2.0 sites…

  17. Advancements in silicon web technology

    NASA Technical Reports Server (NTRS)

    Hopkins, R. H.; Easoz, J.; Mchugh, J. P.; Piotrowski, P.; Hundal, R.

    1987-01-01

    Low defect density silicon web crystals up to 7 cm wide are produced from systems whose thermal environments are designed for low stress conditions using computer techniques. During growth, the average silicon melt temperature, the lateral melt temperature distribution, and the melt level are each controlled by digital closed loop systems to maintain thermal steady state and to minimize the labor content of the process. Web solar cell efficiencies of 17.2 pct AM1 have been obtained in the laboratory while 15 pct efficiencies are common in pilot production.
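
    The digital closed-loop control mentioned here is, generically, discrete feedback control; a minimal PID sketch of that kind follows, with the gains and setpoint purely illustrative rather than the actual furnace controller.

    ```python
    class PID:
        """Minimal discrete PID controller for one loop (e.g., melt level)."""

        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Illustrative gains; a real melt-level loop would be tuned to the furnace.
    loop = PID(kp=2.0, ki=0.1, kd=0.05, setpoint=50.0)  # target level, mm
    print(loop.update(measurement=49.7, dt=1.0))        # corrective output
    ```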

  18. Uniform resolution of compact identifiers for biomedical data

    PubMed Central

    Wimalaratne, Sarala M.; Juty, Nick; Kunze, John; Janée, Greg; McMurry, Julie A.; Beard, Niall; Jimenez, Rafael; Grethe, Jeffrey S.; Hermjakob, Henning; Martone, Maryann E.; Clark, Tim

    2018-01-01

    Most biomedical data repositories issue locally unique accession numbers, but do not provide globally unique, machine-resolvable, persistent identifiers for their datasets, as required by publishers wishing to implement data citation in accordance with widely accepted principles. Local accessions may, however, be prefixed with a namespace identifier, providing global uniqueness. Such "compact identifiers" have been widely used in biomedical informatics to support global resource identification with local identifier assignment. We report here on our project to provide robust support for machine-resolvable, persistent compact identifiers in biomedical data citation, by harmonizing the Identifiers.org and N2T.net (Name-To-Thing) meta-resolvers and extending their capabilities. Identifiers.org services hosted at the European Molecular Biology Laboratory - European Bioinformatics Institute (EMBL-EBI), and N2T.net services hosted at the California Digital Library (CDL), can now resolve any given identifier from over 600 source databases to its original source on the Web, using a common registry of prefix-based redirection rules. We believe these services will be of significant help to publishers and others implementing persistent, machine-resolvable citation of research data. PMID:29737976
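
    A compact identifier of the form prefix:accession can be handed to either meta-resolver as a URL, which answers with an HTTP redirect to the source record, per the prefix-based redirection rules described above. The sketch assumes that redirect behavior; the accession is illustrative.

    ```python
    import requests

    def resolve(compact_id: str, resolver: str = "https://identifiers.org") -> str:
        """Ask a meta-resolver where a prefix:accession identifier points."""
        resp = requests.get(f"{resolver}/{compact_id}",
                            allow_redirects=False, timeout=30)
        # The meta-resolver is expected to reply with a redirect (Location header).
        return resp.headers.get("Location", "")

    print(resolve("pubmed:29737976"))
    print(resolve("pubmed:29737976", resolver="https://n2t.net"))
    ```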

  19. Design, fabrication and test of prototype furnace for continuous growth of wide silicon ribbon

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.

    1976-01-01

    A program having the overall objective of growing wide, thin silicon dendritic web crystals quasi-continuously from a semi-automated facility is discussed. The design considerations and fabrication of the facility as well as the test and operation phase are covered; detailed engineering drawings are included as an appendix. During the test and operation phase of the program, more than eighty growth runs and numerous thermal test runs were performed. At the conclusion of the program, 2.4 cm wide web was being grown at thicknesses of 100 to 300 micrometers. As expected, the thickness and growth rate are closely related. Solar cells made from this material were tested at NASA-Lewis and found to have conversion efficiencies comparable to devices fabricated from Czochralski material.

  20. Linking oceanic food webs to coastal production and growth rates of Pacific salmon ( Oncorhynchus spp.), using models on three scales

    NASA Astrophysics Data System (ADS)

    Aydin, Kerim Y.; McFarlane, Gordon A.; King, Jacquelynne R.; Megrey, Bernard A.; Myers, Katherine W.

    2005-03-01

    Three independent modeling methods—a nutrient-phytoplankton-zooplankton (NPZ) model (NEMURO), a food web model (Ecopath/Ecosim), and a bioenergetics model for pink salmon ( Oncorhynchus gorbuscha)—were linked to examine the relationship between seasonal zooplankton dynamics and annual food web productive potential for Pacific salmon feeding and growing in the Alaskan subarctic gyre ecosystem. The linked approach shows the importance of seasonal and ontogenetic prey switching for zooplanktivorous pink salmon, and illustrates the critical role played by lipid-rich forage species, especially the gonatid squid Berryteuthis anonychus, in connecting zooplankton to upper trophic level production in the subarctic North Pacific. The results highlight the need to uncover natural mechanisms responsible for accelerated late winter and early spring growth of salmon, especially with respect to climate change and zooplankton bloom timing. Our results indicate that the best match between modeled and observed high-seas pink salmon growth requires the inclusion of two factors into bioenergetics models: (1) decreasing energetic foraging costs for salmon as zooplankton are concentrated by the spring shallowing of pelagic mixed-layer depth and (2) the ontogenetic switch of salmon diets from zooplankton to squid. Finally, we varied the timing and input levels of coastal salmon production to examine effects of density-dependent coastal processes on ocean feeding; coastal processes that place relatively minor limitations on salmon growth may delay the seasonal timing of ontogenetic diet shifts and thus have a magnified effect on overall salmon growth rates.

  1. Differential Growth of and Nanoscale TiO2 Accumulation in Tetrahymena thermophila by Direct Feeding versus Trophic Transfer from Pseudomonas aeruginosa

    PubMed Central

    Mielke, Randall E.; Priester, John H.; Werlin, Rebecca A.; Gelb, Jeff; Horst, Allison M.; Orias, Eduardo

    2013-01-01

    Nanoscale titanium dioxide (TiO2) is increasingly used in consumer goods and is entering waste streams, thereby exposing and potentially affecting environmental microbes. Protozoans could either take up TiO2 directly from water and sediments or acquire TiO2 during bactivory (ingestion of bacteria) of TiO2-encrusted bacteria. Here, the route of exposure of the ciliated protozoan Tetrahymena thermophila to TiO2 was varied and the growth of, and uptake and accumulation of TiO2 by, T. thermophila were measured. While TiO2 did not affect T. thermophila swimming or cellular morphology, direct TiO2 exposure in rich growth medium resulted in a lower population yield. When TiO2 exposure was by bactivory of Pseudomonas aeruginosa, the T. thermophila population yield and growth rate were lower than those that occurred during the bactivory of non-TiO2-encrusted bacteria. Regardless of the feeding mode, T. thermophila cells internalized TiO2 into their food vacuoles. Biomagnification of TiO2 was not observed; this was attributed to the observation that TiO2 appeared to be unable to cross the food vacuole membrane and enter the cytoplasm. Nevertheless, our findings imply that TiO2 could be transferred into higher trophic levels within food webs and that the food web could be affected by the decreased growth rate and yield of organisms near the base of the web. PMID:23851096

  2. Automated Management of Exercise Intervention at the Point of Care: Application of a Web-Based Leg Training System

    PubMed Central

    2015-01-01

    Background Recent advances in information and communication technology have prompted development of Web-based health tools to promote physical activity, the key component of cardiac rehabilitation and chronic disease management. Mobile apps can facilitate behavioral changes and help in exercise monitoring, although actual training usually takes place away from the point of care in specialized gyms or outdoors. Daily participation in conventional physical activities is expensive, time consuming, and mostly relies on self-management abilities of patients who are typically aged, overweight, and unfit. Facilitation of sustained exercise training at the point of care might improve patient engagement in cardiac rehabilitation. Objective In this study we aimed to test the feasibility of execution and automatic monitoring of several exercise regimens on-site using a Web-enabled leg training system. Methods The MedExercise leg rehabilitation machine was equipped with wireless temperature sensors in order to monitor its usage by the rise of temperature in the resistance unit (Δt°). Personal electronic devices such as laptop computers were fitted with wireless gateways and relevant software was installed to monitor the usage of training machines. Cloud-based software allowed monitoring of participant training over the Internet. Seven healthy participants applied the system at various locations with training protocols typically used in cardiac rehabilitation. The heart rates were measured by fingertip pulse oximeters. Results Exercising in home chairs, in bed, and under an office desk was made feasible and resulted in an intensity-dependent increase of participants’ heart rates and Δt° in training machine temperatures. Participants self-controlled their activities on smart devices, while a supervisor monitored them over the Internet. Individual Δt° reached during 30 minutes of moderate-intensity continuous training averaged 7.8°C (SD 1.6). These Δt° were used as personalized daily doses of exercise with automatic email alerts sent upon achieving them. During 1-week training at home, automatic notifications were received on 4.4 days (SD 1.8). Although the high intensity interval training regimen was feasible on-site, it was difficult for self- and remote management. Opportunistic leg exercise under the desk, while working with a computer, and training in bed while viewing television were less intensive than dosed exercise bouts, but allowed prolonged leg mobilization of 73.7 minutes/day (SD 29.7). Conclusions This study demonstrated the feasibility of self-control exercise training on-site, which was accompanied by online monitoring, electronic recording, personalization of exercise doses, and automatic reporting of adherence. The results suggest that this technology and its applications are useful for the delivery of Web-based exercise rehabilitation and cardiac training programs at the point of care. PMID:28582243
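
    The Δt° dosing logic lends itself to a small sketch: compare the resistance-unit temperature rise against a personalized daily target and flag when the dose is reached. The threshold reuses the study's mean value purely for illustration, and the automatic email alert is simplified to a print.

    ```python
    DAILY_DOSE_C = 7.8  # illustrative personalized Δt° target (study mean)

    def dose_reached(start_temp_c: float, readings_c: list) -> bool:
        """Return True once the resistance-unit temperature rise meets the dose."""
        delta = max(readings_c) - start_temp_c
        if delta >= DAILY_DOSE_C:
            print(f"dose reached: delta-t = {delta:.1f} C")  # a real system emails here
            return True
        return False

    print(dose_reached(22.0, [24.5, 27.9, 30.1]))
    ```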

  3. High temperature (900-1300 C) mechanical behaviour of dendritic web grown silicon ribbons - Strain rate and temperature dependence of the yield stress

    NASA Technical Reports Server (NTRS)

    Mathews, V. K.; Gross, T. S.

    1987-01-01

    The mechanical behavior of dendritic web Si ribbons close to the melting point was studied experimentally. The goal of the study was to generate data for modeling the generation of stresses and dislocation structures during growth of dendritic web Si ribbons, thereby permitting modifications to the production process, i.e., to the temperature profile, to lower production costs for the photovoltaic ribbons. A laser was used to cut specimens along the growth direction of sample ribbons, which were then subjected to tensile tests at temperatures up to 1300 C in an Ar atmosphere. The tensile strengths of the samples increased when the temperature rose above 1200 C, a phenomenon attributed to the diffusion of oxygen atoms to potential dislocation sites, which effectively locked the dislocations.

  4. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  5. Participating in the Geospatial Web: Collaborative Mapping, Social Networks and Participatory GIS

    NASA Astrophysics Data System (ADS)

    Rouse, L. Jesse; Bergeron, Susan J.; Harris, Trevor M.

    In 2005, Google, Microsoft and Yahoo! released free Web mapping applications that opened up digital mapping to mainstream Internet users. Importantly, these companies also released free APIs for their platforms, allowing users to geo-locate and map their own data. These initiatives have spurred the growth of the Geospatial Web and represent spatially aware online communities and new ways of enabling communities to share information from the bottom up. This chapter explores how the emerging Geospatial Web can meet some of the fundamental needs of Participatory GIS projects to incorporate local knowledge into GIS, as well as promote public access and collaborative mapping.

  6. Deploying and sharing U-Compare workflows as web services.

    PubMed

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.

  7. Deploying and sharing U-Compare workflows as web services

    PubMed Central

    2013-01-01

    Background U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare’s components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. Results We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. Conclusions The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform. PMID:23419017

  8. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies.

  9. E-Government Goes Semantic Web: How Administrations Can Transform Their Information Processes

    NASA Astrophysics Data System (ADS)

    Klischewski, Ralf; Ukena, Stefan

    E-government applications and services are built mainly on access to, retrieval of, integration of, and delivery of relevant information to citizens, businesses, and administrative users. In order to perform such information processing automatically through the Semantic Web, machine-readable enhancements of web resources are needed, based on the understanding of the content and context of the information in focus. While these enhancements are far from trivial to produce, administrations in their role of information and service providers so far find little guidance on how to migrate their web resources and enable a new quality of information processing; even research is still seeking best practices. Therefore, the underlying research question of this chapter is: what are the appropriate approaches which guide administrations in transforming their information processes toward the Semantic Web? In search for answers, this chapter analyzes the challenges and possible solutions from the perspective of administrations: (a) the reconstruction of the information processing in the e-government in terms of how semantic technologies must be employed to support information provision and consumption through the Semantic Web; (b) the required contribution to the transformation is compared to the capabilities and expectations of administrations; and (c) available experience with the steps of transformation are reviewed and discussed as to what extent they can be expected to successfully drive the e-government to the Semantic Web. This research builds on studying the case of Schleswig-Holstein, Germany, where semantic technologies have been used within the frame of the Access-eGov project in order to semantically enhance electronic service interfaces with the aim of providing a new way of accessing and combining e-government services.

  10. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computation- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform independence; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so that each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. CMDA was used successfully in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School.
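
    As a rough sketch of the wrapping approach described above (the `run_diagnostic` executable and route are hypothetical placeholders, not CMDA's actual code), a Flask service can expose an existing command-line science code while keeping computation and big data server-side:

    ```python
    # A minimal sketch, assuming a command-line science code ./run_diagnostic
    # that writes a plot and prints its location; only the small result goes
    # back to the client.
    import subprocess
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/cmda/anomaly", methods=["POST"])
    def anomaly():
        cfg = request.get_json()
        result = subprocess.run(
            ["./run_diagnostic", cfg["variable"], cfg["start"], cfg["end"]],
            capture_output=True, text=True, check=True)
        return jsonify({"plot_url": result.stdout.strip()})

    if __name__ == "__main__":
        # In production a WSGI server such as Gunicorn would front this app,
        # e.g.:  gunicorn -w 4 wrapper:app
        app.run(port=8080)
    ```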

  11. The Sydney West Knowledge Portal: Evaluating the Growth of a Knowledge Portal to Support Translational Research.

    PubMed

    Janssen, Anna; Robinson, Tracy Elizabeth; Provan, Pamela; Shaw, Tim

    2016-06-29

    The Sydney West Translational Cancer Research Centre is an organization funded to build capacity for translational research in cancer. Translational research is essential for ensuring the integration of the best available evidence into practice and for improving patient outcomes. However, there is a low level of awareness regarding what it is and how to conduct it optimally. One solution to addressing this gap is the design and deployment of web-based knowledge portals to disseminate new knowledge and to engage with and connect dispersed networks of researchers. A knowledge portal is a web-based platform for increasing knowledge dissemination and management in a specialized area. The objective was to measure the design and growth of a web-based knowledge portal for increasing individual awareness of translational research and building organizational capacity for the delivery of translational research projects in cancer. An adaptive methodology was used to capture the design and growth of the portal. This involved stakeholder consultations to inform the initial design. Once the portal was live, site analytics were reviewed to evaluate member usage and to measure growth in membership. Knowledge portal membership grew consistently for the first 18 months after deployment, before leveling out. Analysis of site metrics revealed members were most likely to visit portal pages with community-generated content, particularly pages with a focus on translational research. These were closely followed by pages that disseminated educational material about translational research. Preliminary data from this study suggest that knowledge portals may be beneficial tools for translating new evidence and fostering an environment of communication and collaboration.

  12. The Sydney West Knowledge Portal: Evaluating the Growth of a Knowledge Portal to Support Translational Research

    PubMed Central

    2016-01-01

    Background The Sydney West Translational Cancer Research Centre is an organization funded to build capacity for translational research in cancer. Translational research is essential for ensuring the integration of the best available evidence into practice and for improving patient outcomes. However, there is a low level of awareness regarding what it is and how to conduct it optimally. One solution to addressing this gap is the design and deployment of web-based knowledge portals to disseminate new knowledge and to engage with and connect dispersed networks of researchers. A knowledge portal is a web-based platform for increasing knowledge dissemination and management in a specialized area. Objective To measure the design and growth of a web-based knowledge portal for increasing individual awareness of translational research and building organizational capacity for the delivery of translational research projects in cancer. Methods An adaptive methodology was used to capture the design and growth of a web-based knowledge portal in cancer. This involved stakeholder consultations to inform the initial design of the portal. Once the portal was live, site analytics were reviewed to evaluate member usage of the portal and to measure growth in membership. Results Knowledge portal membership grew consistently for the first 18 months after deployment, before leveling out. Analysis of site metrics revealed members were most likely to visit portal pages with community-generated content, particularly pages with a focus on translational research. These were closely followed by pages that disseminated educational material about translational research. Conclusions Preliminary data from this study suggest that knowledge portals may be beneficial tools for translating new evidence and fostering an environment of communication and collaboration. PMID:27357641

  13. A smarter way to search, share and utilize open-spatial online data for energy R&D - Custom machine learning and GIS tools in U.S. DOE's virtual data library & laboratory, EDX

    NASA Astrophysics Data System (ADS)

    Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.

    2017-12-01

    As spatial datasets become increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study, the computing team developed a custom machine-learning, big-data computing tool designed to parse the web and return priority datasets to appropriate servers in order to develop an open-source global oil and gas infrastructure database. The results of this spatial smart-search approach were validated against expert-driven, manual search results, which had required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites, and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical, and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions of EDX with these spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform can identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of "what if" scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.

  14. Using Wmatrix to Explore Discourse of Economic Growth

    ERIC Educational Resources Information Center

    Hu, Chunyu

    2015-01-01

    Growth is a concept of particular interest for economic discourse. This paper sets out to explore a small corpus on economic growth, which consists of articles from "The Economist". The corpus software used in this study is Wmatrix, a web-based automatic tagging tool able to assign semantic field (domain) tags and to permit…

  15. APOLLO: a quality assessment service for single and multiple protein models.

    PubMed

    Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin

    2011-06-15

    We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods, or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the quality scores generated by our machine learning and pair-wise methods have average per-target correlations of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities differ by 2.60 Å on average from the actual distances to the native structure. The server is available at http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.
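
    The pair-wise idea can be sketched as follows: each model in the pool is scored by its average similarity to all other models, so consensus-like models rank highest. The `similarity` callback below is a placeholder; APOLLO itself uses structural superposition scores between models.

    ```python
    # A minimal sketch of pair-wise consensus quality scoring.
    from itertools import combinations

    def pairwise_quality(models, similarity):
        """Average similarity of each model to every other model in the pool."""
        scores = {name: 0.0 for name in models}
        for a, b in combinations(models, 2):
            s = similarity(models[a], models[b])
            scores[a] += s
            scores[b] += s
        n = len(models)
        return {name: total / (n - 1) for name, total in scores.items()}

    # Toy usage with 1-D "structures" and a trivial similarity function:
    toy = {"m1": [1.0, 2.0], "m2": [1.1, 2.1], "m3": [5.0, 9.0]}
    sim = lambda x, y: 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(x, y)))
    print(pairwise_quality(toy, sim))  # m1 and m2 outscore the outlier m3
    ```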

  16. VizieR Online Data Catalog: SDSS-DR9 photometric redshifts (Brescia+, 2014)

    NASA Astrophysics Data System (ADS)

    Brescia, M.; Cavuoti, S.; Longo, G.; de Stefano, V.

    2014-07-01

    We present an application of a machine learning method to the estimation of photometric redshifts for the galaxies in the SDSS Data Release 9 (SDSS-DR9). Photometric redshifts for more than 143 million galaxies were produced. The MLPQNA (Multi Layer Perceptron with Quasi Newton Algorithm) model, provided within the framework of DAMEWARE (DAta Mining and Exploration Web Application REsource), is an interpolative method derived from machine learning models. The obtained redshifts have an overall uncertainty of σ=0.023 with a very small average bias of about 3×10⁻⁵ and a fraction of catastrophic outliers of about 5%. After removal of the catastrophic outliers, the uncertainty is about σ=0.017. The catalogue files report in their names the range of DEC degrees of the included objects. (60 data files).

  17. OTM Machine Acceptance: In the Arab Culture

    NASA Astrophysics Data System (ADS)

    Rashed, Abdullah; Santos, Henrique

    Neglecting the human factor is one of the main reasons for system failures and for technology rejection, even when important technologies are involved. Biometrics mostly have the characteristics needed for effortless acceptance, such as ease of use and usefulness, which are essential pillars of acceptance models such as TAM (the technology acceptance model); however, this assumption should be investigated. Many studies have been carried out to research technology acceptance in different cultures, especially western culture, but Arab culture lacks these types of studies, with few publications in this field. This paper introduces a new biometric interface for ATM machines. The interface depends on a promising biometric: odour. To assess the acceptance of this biometric, we distributed a questionnaire via a web site, called for participation in the Arab region, and found that most respondents would accept the use of odour.

  18. Gastrointestinal Spatiotemporal mRNA Expression of Ghrelin vs Growth Hormone Receptor and New Growth Yield Machine Learning Model Based on Perturbation Theory.

    PubMed

    Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe

    2016-07-27

    The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at the mRNA level throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that feeding system and age affected the expression of both Ghrelin and GHR, through different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (supplemental and grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with a coefficient of determination R² of 0.781 on the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful for optimizing feeding management strategies in ruminant production systems.
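
    A minimal sketch of this kind of regression set-up, with randomly generated placeholder data rather than the study's measurements, might look like the following:

    ```python
    # A sketch, assuming perturbation-style features (GIT segment, age,
    # feeding system, deviations of Ghrelin/GHR expression); data are random
    # placeholders, not the paper's measurements.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.integers(0, 8, n),                       # GIT segment (8 segments)
        rng.choice([0, 7, 14, 28, 42, 56, 70], n),   # age, days
        rng.integers(0, 2, n),                       # feeding system
        rng.normal(0, 1, n),                         # Ghrelin mRNA perturbation
        rng.normal(0, 1, n),                         # GHR mRNA perturbation
    ])
    y = 0.5 * X[:, 3] - 0.3 * X[:, 4] + 0.01 * X[:, 1] + rng.normal(0, 0.2, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("test R^2:", model.score(X_te, y_te))
    ```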

  19. Gastrointestinal Spatiotemporal mRNA Expression of Ghrelin vs Growth Hormone Receptor and New Growth Yield Machine Learning Model Based on Perturbation Theory

    PubMed Central

    Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R.; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe

    2016-01-01

    The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at the mRNA level throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that feeding system and age affected the expression of both Ghrelin and GHR, through different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (supplemental and grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with a coefficient of determination R² of 0.781 on the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful for optimizing feeding management strategies in ruminant production systems. PMID:27460882

  20. Providing Multi-Page Data Extraction Services with XWRAPComposer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ling; Zhang, Jianjun; Han, Wei

    2008-04-30

    Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed those of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

  1. GeNets: a unified web platform for network-based genomic analyses.

    PubMed

    Li, Taibo; Kim, April; Rosenbluh, Joseph; Horn, Heiko; Greenfeld, Liraz; An, David; Zimmer, Andrew; Liberzon, Arthur; Bistline, Jon; Natoli, Ted; Li, Yang; Tsherniak, Aviad; Narayan, Rajiv; Subramanian, Aravind; Liefeld, Ted; Wong, Bang; Thompson, Dawn; Calvo, Sarah; Carr, Steve; Boehm, Jesse; Jaffe, Jake; Mesirov, Jill; Hacohen, Nir; Regev, Aviv; Lage, Kasper

    2018-06-18

    Functional genomics networks are widely used to identify unexpected pathway relationships in large genomic datasets. However, it is challenging to compare the signal-to-noise ratios of different networks and to identify the optimal network with which to interpret a particular genetic dataset. We present GeNets, a platform in which users can train a machine-learning model (Quack) to carry out these comparisons and execute, store, and share analyses of genetic and RNA-sequencing datasets.

  2. SCENERY: a web application for (causal) network reconstruction from cytometry data.

    PubMed

    Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper; Tsamardinos, Ioannis

    2017-07-03

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, online environment. In SCENERY, users may upload their data and set up their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. It also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Creating Actionable Data from an Optical Depth Measurement Network using RDF

    NASA Astrophysics Data System (ADS)

    Freemantle, J. R.; O'Neill, N. T.; Lumb, L. I.; Abboud, I.; McArthur, B.

    2010-12-01

    The AEROCAN sunphotometry network has, for more than a decade, generated optical indicators of aerosol concentration and size on a regional and national scale. We believe this optical information can be rendered more “actionable” for the health care community by developing a technical and interpretative information-sharing geospatial strategy with that community. By actionable data we mean information presented in a manner that can be understood and then used in the decision-making process. The decision may be that of a technical professional, a policy maker or a machine. The information leading up to a decision may come from many sources; this makes it particularly important that data are well defined across knowledge fields, in our case atmospheric science and respiratory health science. As part of the AEROCAN operational quality assurance (QA) methodology, we have written automatic procedures to make some of the AEROCAN data more accessible or “actionable”. Tim Berners-Lee has advocated making datasets available on the web as “Linked Data”, with a proper structural description (metadata). We have been using RDF (Resource Description Framework) to enhance the utility of our sunphotometer data; the resulting self-describing representation is structured so that it is machine readable. This allows semantically based queries (e.g., via SPARQL) on a dataset that in the past was only viewable as passive Web tables.
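
    A minimal sketch of this pattern using the rdflib library follows; the vocabulary URI and property names are illustrative stand-ins, not AEROCAN's actual schema.

    ```python
    # Describe one aerosol optical depth (AOD) reading as self-describing
    # triples and query it with SPARQL, under an assumed example vocabulary.
    from rdflib import Graph, Literal, Namespace, RDF

    AERO = Namespace("http://example.org/aerocan#")
    g = Graph()
    obs = AERO["obs1"]
    g.add((obs, RDF.type, AERO.Observation))
    g.add((obs, AERO.station, Literal("Egbert")))
    g.add((obs, AERO.aod500nm, Literal(0.21)))

    # Machine-readable structure allows semantic queries instead of passive tables:
    q = """
    PREFIX aero: <http://example.org/aerocan#>
    SELECT ?station ?aod WHERE {
        ?o a aero:Observation ; aero:station ?station ; aero:aod500nm ?aod .
        FILTER (?aod > 0.2)
    }"""
    for row in g.query(q):
        print(row.station, row.aod)
    ```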

  4. Summary of ADTT Website Functionality and Features

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Duong, Trang; Liang, Lawrence; Gage, Peter; Lawrence, Scott (Technical Monitor)

    2001-01-01

    This report summarizes development of the ADTT web-based design environment by the ELORET team in 2000. The Advanced Design Technology Testbed had been in development for several years, with demonstration applications restricted to aerodynamic analyses of subsonic aircraft. The key changes achieved this year were improvements in Web-based accessibility, evaluation of collaborative visualization, remote invocation of geometry updates and performance analysis, and application to aerospace system analysis. Significant effort was also devoted to post-processing of data, chiefly through comparison of similar data for alternative vehicle concepts. Such comparison is an essential requirement for designers to make informed choices between alternatives. The next section of this report provides more discussion of the goals for ADTT development. Section 3 provides screen shots from a sample session in the ADTT environment, including Login and navigation to the project of interest, data inspection, analysis execution and output evaluation. The following section provides discussion of implementation details and recommendations for future development of the software and information technologies that provide the key functionality of the ADTT system. Section 5 discusses the integration architecture for the system, which links machines running different operating systems and provides unified access to data stored in distributed locations. Security is a significant issue for this system, especially for remote access to NAS machines, so Section 6 discusses several architectural considerations with respect to security. Additional details of some aspects of ADTT development are included in Appendices.

  5. Identifying well-formed biomedical phrases in MEDLINE® text.

    PubMed

    Kim, Won; Yeganova, Lana; Comeau, Donald C; Wilbur, W John

    2012-12-01

    In the modern world, people frequently interact with retrieval systems to satisfy their information needs. Humanly understandable, well-formed phrases represent a crucial interface between humans and the web, and the ability to index and search with such phrases is beneficial for human-web interactions. In this paper we consider the problem of identifying humanly understandable, well-formed, and high-quality biomedical phrases in MEDLINE documents. The main approaches used previously for detecting such phrases are syntactic, statistical, and a hybrid approach combining the two. In this paper we propose a supervised learning approach for identifying high-quality phrases. First we obtain a set of known well-formed, useful phrases from an existing source and label these phrases as positive. We then extract from MEDLINE a large set of multiword strings that do not contain stop words or punctuation. We believe this unlabeled set contains many well-formed phrases, and our goal is to identify these additional high-quality phrases. We examine various feature combinations and several machine learning strategies designed to solve this problem. A proper choice of machine learning methods and features identifies, in the large collection, strings that are likely to be high-quality phrases. We evaluate our approach through human judgments on multiword strings extracted from MEDLINE using our methods, and find that over 85% of the extracted phrase candidates are judged to be of high quality. Published by Elsevier Inc.
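
    A minimal sketch of this supervised set-up, with toy phrases and illustrative surface features rather than the paper's exact feature set, might look like:

    ```python
    # Label known well-formed phrases positive, contrast them with raw
    # multiword strings, and train a classifier on simple surface features.
    from sklearn.linear_model import LogisticRegression

    def features(phrase):
        tokens = phrase.split()
        return [
            len(tokens),                                       # phrase length
            sum(len(t) for t in tokens) / len(tokens),         # mean token length
            sum(t[0].isupper() for t in tokens),               # capitalized tokens
            sum(any(c.isdigit() for c in t) for t in tokens),  # numeric tokens
        ]

    positives = ["myocardial infarction", "gene expression profiling"]
    negatives = ["of the results in", "p 0 05 was considered"]
    X = [features(p) for p in positives + negatives]
    y = [1] * len(positives) + [0] * len(negatives)

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([features("protein kinase inhibitor")]))
    ```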

  6. Cryosphere Sensor Webs With The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Scharenbroich, L.; Doggett, T.; Kratz, T.; Castano, R.; Chien, S.; Davies, A. G.; Tran, D.; Mazzoni, D.

    2006-12-01

    Autonomous sensor-webs are being deployed as part of the Autonomous Sciencecraft Experiment [1], whereby observations using the Hyperion instrument [2] on-board Earth Observing-1 (EO-1) are triggered either by ground sensors or by near-real-time analysis of data from other space-based sensors. In the realm of cryosphere monitoring, one sensor-web has been set up pairing EO-1 with a sensor buoy [3] deployed in Sparkling Lake, one of several lakes in northern Wisconsin monitored by the University of Wisconsin's Trout Lake Station. A Support Vector Machine (SVM) classifier was trained on historical thermistor chain data with manually recorded ice-in and ice-out times, and used to trigger Hyperion observations of the Trout Lake area during the spring thaw and winter freeze of 2005. A second sensor-web is being developed using near-real-time sea ice data products, based on Department of Defense meteorological satellites, available from the National Snow and Ice Data Center (NSIDC) [4]. Once operational, this sensor web will trigger Hyperion observations of pre-defined targets in the Arctic and Antarctic where regional-resolution data show sea ice formation or break-up. [1] Chien et al. (2005), An autonomous earth-observing sensor-web, IEEE Intelligent Systems. [2] Pearlman et al. (2003), Hyperion, a space-based imaging spectrometer, IEEE Trans. Geosci. Rem. Sens., 41(6). [3] Kratz, T. et al. (in press), Toward a Global Lake Ecological Observatory Network, Proceedings of the Karelian Institute. [4] Cavalieri et al. (1999), Near real-time DMSP SSM/I daily polar gridded sea ice concentrations, National Snow and Ice Data Center, Digital Media.

  7. Visualization of Vgi Data Through the New NASA Web World Wind Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Kilsedar, C. E.; Zamboni, G.

    2016-06-01

    GeoWeb 2.0, laying the foundations of Volunteered Geographic Information (VGI) systems, has led to platforms where users can contribute to geographic knowledge that is open to access. Moreover, as a result of advancements in 3D visualization, virtual globes able to visualize geographic data even in browsers have emerged. However, the integration of VGI systems and virtual globes has not been fully realized. The study presented aims to visualize volunteered data in 3D, also considering ease of use for the general public, using Free and Open Source Software (FOSS). The new Application Programming Interface (API) of NASA, Web World Wind, written in JavaScript and based on the Web Graphics Library (WebGL), is cross-platform and cross-browser, so a virtual globe created with this API is accessible through any WebGL-supported browser on different operating systems and devices. As a result, it requires no installation or configuration on the client side, making the collected data more usable, unlike World Wind for Java, which requires installation and configuration of the Java Virtual Machine (JVM). Furthermore, the data collected through various VGI platforms might be in different formats, stored in a traditional relational database or in a NoSQL database. The project developed aims to visualize and query data collected through the Open Data Kit (ODK) platform and a cross-platform application, with data stored in a relational PostgreSQL database and a NoSQL CouchDB database, respectively.

  8. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605
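
    The messaging pattern can be sketched as a pair of HTTP calls; the endpoint paths below are hypothetical illustrations, not QM's actual API.

    ```python
    # A minimal sketch, assuming a message service at an example URL: a
    # submitter POSTs a task, and volunteer browsers/workers poll for tasks
    # and post results back.
    import requests

    BASE = "https://example.org/qm"  # placeholder for the message service

    def submit_task(code, payload):
        """Post a task; returns a task id to poll for the result."""
        r = requests.post(f"{BASE}/tasks", json={"code": code, "data": payload})
        r.raise_for_status()
        return r.json()["task_id"]

    def fetch_result(task_id):
        """Poll for a completed result (None while still pending)."""
        r = requests.get(f"{BASE}/results/{task_id}")
        return r.json().get("result") if r.ok else None

    if __name__ == "__main__":
        # The task body is a JavaScript snippet shipped as a string for a
        # volunteer browser to execute; it is not run by this Python client.
        task_id = submit_task("sum = data.reduce((a, b) => a + b)", [1, 2, 3])
        print(fetch_result(task_id))
    ```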

  9. Second Language Acquisition: Implications of Web 2.0 and Beyond

    ERIC Educational Resources Information Center

    Chang, Ching-Wen; Pearman, Cathy; Farha, Nicholas

    2012-01-01

    Language laboratories, developed in the 1970s under the influence of the Audiolingual Method, were superseded several decades later by computer-assisted language learning (CALL) work stations (Gündüz, 2005). The World Wide Web was developed shortly thereafter. From this introduction and the well-documented and staggering growth of the Internet and…

  10. Single Session Web-Based Counselling: A Thematic Analysis of Content from the Perspective of the Client

    ERIC Educational Resources Information Center

    Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.

    2015-01-01

    Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…

  11. The Role of Faculty in the Effectiveness of Fully Online Programs

    ERIC Educational Resources Information Center

    Al-Salman, Sami M.

    2013-01-01

    The enormous growth of online learning creates the need to develop a set of standards and guidelines for fully online programs. While many guidelines do exist, web-based programs still fall short in the recognition, adoption, or implementation of these standards. One consequence is the high attrition rates associated with web-based distance…

  12. The reliability of dental x-ray film in assessment of MP3 stages of the pubertal growth spurt.

    PubMed

    Abdel-Kader, H M

    1998-10-01

    The main objective of this clinical study is to provide a simple and practical method for assessing the pubertal growth spurt stages of a subject by recording MP3 stages with dental periapical radiographs and a standard dental x-ray machine.

  13. Salad Machine - A vegetable production unit for long duration space missions

    NASA Technical Reports Server (NTRS)

    Kliss, M.; Macelroy, R. D.

    1990-01-01

    A review of NASA CELSS development specific to vegetable cultivation during space missions is presented in terms of enhancing the quality of life for space crews. A cultivation unit is being developed to permit the production of 600 grams of edible salad vegetables per week, thereby allowing one salad per crew member three times weekly. Plant-growth requirements are set forth for the specific vegetables, and environmental subsystems are listed. Several preprototype systems are discussed, and one particular integrated-systems design concept is presented in detail with views of the proposed rack configuration. The Salad Machine is developed exclusively from CELSS-derived technology, and the major challenge is the mitigation of the effects of plant-growth requirements on other space-mission facility operations.

  14. Internet-based peer support for Ménière's disease: a summary of web-based data collection, impact evaluation, and user evaluation.

    PubMed

    Pyykkö, Ilmari; Manchaiah, Vinaya; Levo, Hilla; Kentala, Erna; Juhola, Martti

    2017-07-01

    This paper presents a summary of web-based data collection, impact evaluation, and user evaluations of an Internet-based peer support program for Ménière's disease (MD). The program is written in HTML. The data are stored in a MySQL database, and machine learning is used in the diagnosis of MD. The program works interactively with the user and assesses the participant's disorder profile along various dimensions (i.e., symptoms, impact, personal traits, and positive attitude). The inference engine uses a database to compare the impact with 50 referents, and provides regular feedback to the user. Data were analysed using descriptive statistics and regression analysis. The impact evaluation was based on 740 cases and the user evaluation on a sample of 75 cases of MD. The web-based system was useful for data collection and impact evaluation in people with MD. Among those with a recent onset of MD, 78% rated the program as useful or very useful, whereas only 55% of those with chronic MD did so. We suggest that web-based data collection and impact evaluation for peer support can be helpful in formulating the rehabilitation goals of building the self-confidence needed for coping and increasing social participation.

  15. Experiment Software and Projects on the Web with VISPA

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, B.; Fischer, R.; Geiser, E.; Glaser, C.; Müller, G.; Rieger, M.; Urban, M.; von Cube, R. F.; Welling, C.

    2017-10-01

    The Visual Physics Analysis (VISPA) project defines a toolbox for accessing software via the web. It is based on the latest web technologies and provides a powerful extension mechanism that makes it possible to interface a wide range of applications. Beyond basic applications such as a code editor, a file browser, or a terminal, it meets the demands of sophisticated experiment-specific use cases that focus on physics data analyses and typically require a high degree of interactivity. As an example, we developed a data inspector that is capable of browsing interactively through the event content of several data formats, e.g., MiniAOD, which is utilized by the CMS collaboration. The VISPA extension mechanism can also be used to embed external web-based applications that benefit from dynamic allocation of user-defined computing resources via SSH. For example, by wrapping the JSROOT project, ROOT files located on any remote machine can be inspected directly through a VISPA server instance. We introduced domains that combine groups of users and role-based permissions. Thereby, tailored projects are enabled, e.g., for teaching, where access to students' homework is restricted to a team of tutors, or for experiment-specific data that may only be accessible to members of the collaboration. We present the extension mechanism, including corresponding applications, and give an outlook on the new permission system.

  16. An accurate and efficient method to predict the electronic excitation energies of BODIPY fluorescent dyes.

    PubMed

    Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min

    2013-03-15

    Recently, the extreme learning machine neural network (ELMNN) was proposed as a valid computing method to predict nonlinear optical properties successfully (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, we first follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation to the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), freely accessible to the public at http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research. Copyright © 2012 Wiley Periodicals, Inc.
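
    For orientation, an extreme learning machine regressor of the kind this work builds on can be sketched in a few lines: random hidden-layer weights stay fixed, and only the output weights are solved by least squares. The data below are random placeholders, not BODIPY descriptors.

    ```python
    # A minimal ELM regression sketch with synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(90, 6))                             # 90 dyes x 6 descriptors
    y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=90)   # synthetic target

    n_hidden = 40
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases

    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights by least squares

    pred = np.tanh(X @ W + b) @ beta
    print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
    ```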

  17. Enhancing Enterprise 2.0 Ecosystems Using Semantic Web and Linked Data Technologies:The SemSLATES Approach

    NASA Astrophysics Data System (ADS)

    Passant, Alexandre; Laublet, Philippe; Breslin, John G.; Decker, Stefan

    During the past few years, various organisations have embraced the Enterprise 2.0 paradigm, providing their employees with new means to enhance collaboration and knowledge sharing in the workplace. However, while tools such as blogs and wikis, and principles like free tagging and content syndication, allow user-generated content to be created and shared more easily in the enterprise, in spite of some social issues, these new practices lead to various problems in terms of knowledge management. In this chapter, we provide an approach based on Semantic Web and Linked Data technologies for (1) integrating heterogeneous data from distinct Enterprise 2.0 applications, and (2) bridging the gap between raw text and machine-readable Linked Data. We discuss the theoretical background of our proposal as well as a practical case study in an enterprise, focusing on the various add-ons that have been provided to the original information system, and presenting how public Linked Open Data from the Web can be used to enhance existing Enterprise 2.0 ecosystems.

  18. RSAT 2015: Regulatory Sequence Analysis Tools.

    PubMed

    Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2015-07-01

    RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. A web-server of cell type discrimination system.

    PubMed

    Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, no user-friendly system is available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web server of a cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models for analyzing data, and present results to users. This framework is flexible and easy to extend to other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types, such as cancer cells.

  20. A Web-Server of Cell Type Discrimination System

    PubMed Central

    Zhong, Yan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, no user-friendly system is available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web server of a cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models for analyzing data, and present results to users. This framework is flexible and easy to extend to other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types, such as cancer cells. PMID:24578634

  1. Using binary classification to prioritize and curate articles for the Comparative Toxicogenomics Database.

    PubMed

    Vishnyakova, Dina; Pasche, Emilie; Ruch, Patrick

    2012-01-01

    We report on the original integration of an automatic text categorization pipeline, so-called ToxiCat (Toxicogenomic Categorizer), that we developed to perform biomedical document classification and prioritization in order to speed up the curation of the Comparative Toxicogenomics Database (CTD). The task can basically be described as a binary classification task, where a scoring function is used to rank a selected set of articles; components of a question-answering system are then used to extract CTD-specific annotations from the ranked list of articles. The ranking function is generated using a Support Vector Machine, which combines three main modules: an information retrieval engine for MEDLINE (EAGLi), a gene normalization service (NormaGene) developed for a previous BioCreative campaign, and a set of answering components and entity recognizers for diseases and chemicals. The main components of the pipeline are publicly available both as a web application and as web services. The specific integration performed for the BioCreative competition is available via a web user interface at http://pingu.unige.ch:8080/Toxicat.

  2. Cyber-physical geographical information service-enabled control of diverse in-situ sensors.

    PubMed

    Chen, Nengcheng; Xiao, Changjiang; Pu, Fangling; Wang, Xiaolei; Wang, Chao; Wang, Zhili; Gong, Jianya

    2015-01-23

    Realization of open online control of diverse in-situ sensors is a challenge. This paper proposes a Cyber-Physical Geographical Information Service-enabled method for controlling diverse in-situ sensors, based on location-based instant sensing, which provides closed-loop feedback. The method adopts the concepts and technologies of newly developed cyber-physical systems (CPSs) to combine control with sensing, communication, and computation; takes advantage of geographical information services, such as those provided by Tianditu, a basic geographic information service platform in China, and Sensor Web services, to establish geo-sensor applications; and builds well-designed human-machine interfaces (HMIs) to support online and open interactions between human beings and physical sensors through cyberspace. The method was tested with experiments carried out in two geographically distributed scientific experimental fields, the Baoxie Sensor Web Experimental Field in Wuhan city and the Yemaomian Landslide Monitoring Station in the Three Gorges, with three typical sensors chosen as representatives, using the prototype Geospatial Sensor Web Common Service Platform. The results show that the proposed method is an open, online, closed-loop means of control.

  3. A Web Server and Mobile App for Computing Hemolytic Potency of Peptides.

    PubMed

    Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C; Raghava, Gajendra P S

    2016-03-08

    Numerous therapeutic peptides do not enter clinical trials simply because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting and screening peptides with hemolytic potency. First, we generated a dataset, HemoPI-1, that contains 552 hemolytic peptides extracted from the Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). Sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., "FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL") are more abundant in hemolytic peptides. We therefore developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides with high and low hemolytic potential on different datasets, called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, a mobile app, and Java-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
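
    A minimal sketch of the feature idea follows: amino-acid composition plus indicator features for the enriched motifs listed above, fed to an SVM. The toy peptides and labels are placeholders, not HemoPI data.

    ```python
    # Composition + motif features for hemolytic/non-hemolytic discrimination.
    from sklearn.svm import SVC

    AA = "ACDEFGHIKLMNPQRSTVWY"
    MOTIFS = ["FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL"]

    def peptide_features(seq):
        comp = [seq.count(a) / len(seq) for a in AA]         # residue composition
        motifs = [1.0 if m in seq else 0.0 for m in MOTIFS]  # motif indicators
        return comp + motifs

    train_seqs = ["FLKKLWKWLL", "GSSGSSGSSG", "KWKLFKKIEK", "TTAPPAEAGS"]
    labels = [1, 0, 1, 0]  # 1 = hemolytic, 0 = non-hemolytic (toy labels)

    clf = SVC(kernel="rbf").fit([peptide_features(s) for s in train_seqs], labels)
    print(clf.predict([peptide_features("VLKKWKKLLK")]))
    ```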

  4. Towards a Web-Enabled Geovisualization and Analytics Platform for the Energy and Water Nexus

    NASA Astrophysics Data System (ADS)

    Sanyal, J.; Chandola, V.; Sorokine, A.; Allen, M.; Berres, A.; Pang, H.; Karthik, R.; Nugent, P.; McManamay, R.; Stewart, R.; Bhaduri, B. L.

    2017-12-01

    Interactive data analytics are playing an increasingly vital role in the generation of new, critical insights regarding the complex dynamics of the energy/water nexus (EWN) and its interactions with climate variability and change. Integration of impacts, adaptation, and vulnerability (IAV) science with emerging, and increasingly critical, data science capabilities offers promising potential to meet the needs of the EWN community. To enable the exploration of pertinent research questions, a web-based geospatial visualization platform is being built that integrates a data analysis toolbox with advanced data fusion and data visualization capabilities to create a knowledge discovery framework for the EWN. The system, when fully built out, will offer several geospatial analysis capabilities, including statistical visual analytics, clustering, principal component analysis, and dynamic time warping; support uncertainty visualization and the exploration of data provenance; and apply machine learning discoveries to render diverse types of geospatial data and facilitate interactive analysis. Key components of the system architecture include NASA's WebWorldWind, the Globus toolkit, and PostgreSQL, as well as other custom-built software modules.
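
    One of the capabilities named above, dynamic time warping, can be sketched as a short dynamic program that aligns two series despite shifts in timing:

    ```python
    # A minimal O(len(a) * len(b)) DTW distance, e.g. for comparing energy
    # demand and streamflow series whose peaks are offset in time.
    def dtw_distance(a, b):
        n, m = len(a), len(b)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

    print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0: same shape, shifted
    ```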

  5. Cyber-Physical Geographical Information Service-Enabled Control of Diverse In-Situ Sensors

    PubMed Central

    Chen, Nengcheng; Xiao, Changjiang; Pu, Fangling; Wang, Xiaolei; Wang, Chao; Wang, Zhili; Gong, Jianya

    2015-01-01

    Realization of open online control of diverse in-situ sensors is a challenge. This paper proposes a Cyber-Physical Geographical Information Service-enabled method for controlling diverse in-situ sensors, based on location-based instant sensing, which provides closed-loop feedback. The method adopts the concepts and technologies of newly developed cyber-physical systems (CPSs) to combine control with sensing, communication, and computation; takes advantage of geographical information services, such as those provided by Tianditu, a basic geographic information service platform in China, and Sensor Web services, to establish geo-sensor applications; and builds well-designed human-machine interfaces (HMIs) to support online and open interactions between human beings and physical sensors through cyberspace. The method was tested with experiments carried out in two geographically distributed scientific experimental fields, the Baoxie Sensor Web Experimental Field in Wuhan city and the Yemaomian Landslide Monitoring Station in the Three Gorges, with three typical sensors chosen as representatives, using the prototype Geospatial Sensor Web Common Service Platform. The results show that the proposed method is an open, online, closed-loop means of control. PMID:25625906

  6. Semantic similarity measure in biomedical domain leverage web search engine.

    PubMed

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measures in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
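
    One common page-count score, pointwise mutual information estimated from hit counts, can be sketched as follows; the counts and web-size constant are placeholders, and the paper's exact score definitions may differ.

    ```python
    # WebPMI-style similarity from page counts for P, Q, and "P AND Q".
    import math

    N = 10**10  # assumed number of indexed pages, a common modeling constant

    def web_pmi(count_p, count_q, count_pq, n=N):
        """Pointwise mutual information estimated from page counts."""
        if count_pq == 0:
            return 0.0
        return math.log2((count_pq / n) / ((count_p / n) * (count_q / n)))

    # Toy counts for terms P="myocardial infarction", Q="heart attack";
    # a real system would obtain these from a search engine API.
    print(web_pmi(count_p=2_500_000, count_q=18_000_000, count_pq=900_000))
    ```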

  7. The Gamma-Ray Burst ToolSHED is Open for Business

    NASA Astrophysics Data System (ADS)

    Giblin, Timothy W.; Hakkila, Jon; Haglin, David J.; Roiger, Richard J.

    2004-09-01

    The GRB ToolSHED, a Gamma-Ray Burst SHell for Expeditions in Data-Mining, is now online and available via a web browser to all in the scientific community. The ToolSHED is an online web utility that contains pre-processed burst attributes from the BATSE catalog and a suite of induction-based machine learning and statistical tools for classification and cluster analysis. Users create their own login accounts and study burst properties within user-defined multi-dimensional parameter spaces. Although new GRB attributes are periodically added to the database for user selection, the ToolSHED has a feature that allows users to upload their own burst attributes (e.g., spectral parameters) so that additional parameter spaces can be explored. A data visualization feature using GNUplot and web-based IDL has also been implemented to provide interactive plotting of user-selected session output. In an era in which GRB observations and attributes are becoming increasingly complex, a utility such as the GRB ToolSHED may play an important role in deciphering GRB classes and understanding intrinsic burst properties.

  8. Flexible querying of Web data to simulate bacterial growth in food.

    PubMed

    Buche, Patrice; Couvert, Olivier; Dibie-Barthélemy, Juliette; Hignette, Gaëlle; Mettler, Eric; Soler, Lydie

    2011-06-01

    A preliminary step in microbial risk assessment in foods is the gathering of experimental data. In the framework of the Sym'Previus project, we have designed a complete data integration system, open on the Web, which allows a local database to be complemented by data extracted from the Web and annotated using a domain ontology. We focus on Web data tables, as they generally contain a synthesis of data published in the documents. We propose in this paper a flexible querying system that uses the domain ontology to scan local and Web data simultaneously, in order to feed the predictive modeling tools available on the Sym'Previus platform. Special attention is paid to the way fuzzy annotations associated with Web data are taken into account in the querying process, which is an important and original contribution of the proposed system. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. PubMed and beyond: a survey of web tools for searching biomedical literature

    PubMed Central

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076

  10. Dynamic and static fatigue of a machinable glass ceramic

    NASA Technical Reports Server (NTRS)

    Magida, M. B.; Forrest, K. A.; Heslin, T. M.

    1984-01-01

    The dynamic and static fatigue behavior of a machinable glass ceramic was investigated to assess its susceptibility to stress corrosion-induced delayed failure. Fracture mechanics techniques were used to analyze the results so that lifetime predictions for components of this material could be made. The resistance to subcritical crack growth of this material was concluded to be only moderate and was found to be dependent on the size of its microstructure.

  11. Carbon Nanotube Growth Rate Regression using Support Vector Machines and Artificial Neural Networks

    DTIC Science & Technology

    2014-03-27

    intensity D peak. Reprinted with permission from [38]. The SVM classifier is trained using custom-written Java code leveraging the Sequential Minimal...Society Encog is a machine learning framework for Java, C++ and .Net applications that supports Bayesian Networks, Hidden Markov Models, SVMs and ANNs [13...SVM classifiers are trained using Weka libraries and leveraging custom-written Java code. The data set is created as an Attribute Relationship File

  12. Praedicere Possumus: An Italian web-based application for predictive microbiology to ensure food safety.

    PubMed

    Polese, Pierluigi; Torre, Manuela Del; Stecchini, Mara Lucia

    2018-03-31

    The use of predictive modelling tools, which mainly describe the response of microorganisms to a particular set of environmental conditions, may contribute to a better understanding of microbial behaviour in foods. In this paper, a tertiary model, in the form of a readily available and user-friendly web-based application, Praedicere Possumus (PP), is presented with research examples from our laboratories. Through the PP application, users have access to different modules, which apply a set of published models considered reliable for determining the compliance of a food product with EU safety criteria and for optimising processing through the identification of critical control points. The application pivots around a growth/no-growth boundary model, coupled with a growth model, and includes thermal and non-thermal inactivation models. Integrated functionalities, such as the fractional contribution of each inhibitory factor to growth probability (f) and the time evolution of the growth probability (P(t)), have also been included. The PP application is expected to assist the food industry and food safety authorities in their common commitment towards the improvement of food safety.

  13. BacDive--The Bacterial Diversity Metadatabase in 2016.

    PubMed

    Söhngen, Carola; Podstawka, Adam; Bunk, Boyke; Gleim, Dorothea; Vetcininova, Anna; Reimer, Lorenz Christian; Ebeling, Christian; Pendarovski, Cezar; Overmann, Jörg

    2016-01-04

    BacDive-the Bacterial Diversity Metadatabase (http://bacdive.dsmz.de) provides strain-linked information about bacterial and archaeal biodiversity. The range of data encompasses taxonomy, morphology, physiology, sampling and concomitant environmental conditions as well as molecular biology. The majority of data is manually annotated and curated. Currently (with release 9/2015), BacDive covers 53 978 strains. Newly implemented RESTful web services provide instant access to the content in machine-readable XML and JSON format. Besides an overall increase of data content, BacDive offers new data fields and features, e.g. the search for gene names, plasmids or 16S rRNA in the advanced search, as well as improved linkage of entries to external life science web resources. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
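    Since the record advertises RESTful access in machine-readable JSON, a client can be as small as the sketch below. The endpoint path and field name are guesses for illustration only; the live service defines its own URL layout and may require registration.

        # Minimal sketch: fetching one BacDive strain record as JSON over the
        # RESTful web services. The URL below is a hypothetical example path,
        # not a documented endpoint.
        import requests

        url = "https://bacdive.dsmz.de/api/bacdive/717"  # hypothetical endpoint
        response = requests.get(url, headers={"Accept": "application/json"})
        response.raise_for_status()                      # fail loudly on HTTP errors
        record = response.json()
        print(record.get("taxonomy"))                    # hypothetical field name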

  14. School and Library Media. Introduction; The Uniform Computer Information Transactions Act (UCITA): More Critical for Educators than Copyright Law?; Redefining Professional Growth: New Attitudes, New Tools--A Case Study; Diversity in School Library Media Center Resources; Image-Text Relationships in Web Pages; Aiming for Effective Student Learning in Web-Based Courses: Insights from Student Experiences.

    ERIC Educational Resources Information Center

    Fitzgerald, Mary Ann; Gregory, Vicki L.; Brock, Kathy; Bennett, Elizabeth; Chen, Shu-Hsien Lai; Marsh, Emily; Moore, Joi L.; Kim, Kyung-Sun; Esser, Linda R.

    2002-01-01

    Chapters in this section of "Educational Media and Technology Yearbook" examine important trends prominent in the landscape of the school library media profession in 2001. Themes include mandated educational reform; diversity in school library resources; communication through image-text juxtaposition in Web pages; and professional development and…

  15. Social network and addiction.

    PubMed

    La Barbera, Daniele; La Paglia, Filippo; Valsavoia, Rosaria

    2009-01-01

    In recent decades, the rapid development of innovative Internet-based communication technologies has created a new field of academic study. In particular, researchers' attention is focusing on new ways of forming relationships through the social web. Social network sites constitute a new form of web community, where people meet and share interests and activities. Due to the exponential growth of these sites, an increasing number of scholars are beginning to study the emergent phenomena in order to identify any psychopathological risk related to the use of the social web, such as addiction. This article examines the recent literature on this issue.

  16. Focused Crawling of the Deep Web Using Service Class Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

  17. Linking functional response and bioenergetics to estimate juvenile salmon growth in a reservoir food web.

    PubMed

    Haskell, Craig A; Beauchamp, David A; Bollens, Stephen M

    2017-01-01

    Juvenile salmon (Oncorhynchus spp.) use of reservoir food webs is understudied. We examined the feeding behavior of subyearling Chinook salmon (O. tshawytscha) and its relation to growth by estimating the functional response of juvenile salmon to changes in the density of Daphnia, an important component of reservoir food webs. We then estimated salmon growth across a broad range of water temperatures and daily rations of two primary prey, Daphnia and juvenile American shad (Alosa sapidissima), using a bioenergetics model. Laboratory feeding experiments yielded a Type-II functional response curve, C = 29.858P/(4.271 + P), indicating that salmon consumption (C) of Daphnia was not affected until Daphnia densities (P) were < 30 · L^-1. Past field studies documented Daphnia densities in lower Columbia River reservoirs of < 3 · L^-1 in July but as high as 40 · L^-1 in August. Bioenergetics modeling indicated that subyearlings could not achieve positive growth above 22°C regardless of prey type or consumption rate. When feeding on Daphnia, subyearlings could not achieve positive growth above 20°C (water temperatures they commonly encounter in the lower Columbia River during summer). At 16-18°C, subyearlings had to consume about 27,000 Daphnia · day^-1 to achieve positive growth. However, when feeding on juvenile American shad, subyearlings had to consume 20 shad · day^-1 at 16-18°C, or at least 25 shad · day^-1 at 20°C, to achieve positive growth. Using empirical consumption rates and water temperatures from summer 2013, subyearlings exhibited negative growth during July (-0.23 to -0.29 g · d^-1) and August (-0.05 to -0.07 g · d^-1). By switching prey from Daphnia to juvenile shad, which have a higher energy density, subyearlings can partially compensate for the effects of the higher water temperatures they experience in the lower Columbia River during summer. However, achieving positive growth as piscivores requires subyearlings to feed at higher consumption rates than they exhibited empirically. While our results indicate compromised growth in reservoir habitats, the long-term repercussions for salmon populations in the Columbia River Basin are unknown.
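    The fitted curve can be evaluated directly; a short worked example using only the numbers reported above:

        # Worked example of the reported Type-II functional response:
        # consumption C as a function of Daphnia density P (per liter).
        def consumption(P):
            return 29.858 * P / (4.271 + P)

        # Densities mentioned above: July (< 3/L), the ~30/L knee, August peak (40/L).
        for P in (3.0, 30.0, 40.0):
            print(f"P = {P:5.1f} Daphnia/L -> C = {consumption(P):5.2f}")
        # C saturates near 29.9 as P grows; below roughly 30/L consumption falls
        # off, matching the density dependence described in the abstract.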

  18. The influence of external subsidies on diet, growth and Hg concentrations of freshwater sport fish: implications for management and fish consumption advisories

    USGS Publications Warehouse

    Lepak, J.M.; Hooten, M.B.; Johnson, B.M.

    2012-01-01

    Mercury (Hg) contamination in sport fish is a global problem. In freshwater systems, food web structure, sport fish sex, size, diet and growth rates influence Hg bioaccumulation. Fish stocking is a common management practice worldwide that can introduce external energy and contaminants into freshwater systems. Thus, stocking can alter many of the factors that influence Hg concentrations in sport fish. Here we evaluated the influence of external subsidies, in the form of hatchery-raised rainbow trout Oncorhynchus mykiss on walleye Sander vitreus diet, growth and Hg concentrations in two freshwater systems. Stocking differentially influenced male and female walleye diets and growth, producing a counterintuitive size-contamination relationship. Modeling indicated that walleye growth rate and diet were important explanatory variables when predicting Hg concentrations. Thus, hatchery contributions to freshwater systems in the form of energy and contaminants can influence diet, growth and Hg concentrations in sport fish. Given the extensive scale of fish stocking, and the known health risks associated with Hg contamination, this represents a significant issue for managers monitoring and manipulating freshwater food web structures, and policy makers attempting to develop fish consumption advisories to protect human health in stocked systems.

  19. The influence of external subsidies on diet, growth and Hg concentrations of freshwater sport fish: implications for management and fish consumption advisories.

    PubMed

    Lepak, Jesse M; Hooten, Mevin B; Johnson, Brett M

    2012-10-01

    Mercury (Hg) contamination in sport fish is a global problem. In freshwater systems, food web structure, sport fish sex, size, diet and growth rates influence Hg bioaccumulation. Fish stocking is a common management practice worldwide that can introduce external energy and contaminants into freshwater systems. Thus, stocking can alter many of the factors that influence Hg concentrations in sport fish. Here we evaluated the influence of external subsidies, in the form of hatchery-raised rainbow trout Oncorhynchus mykiss on walleye Sander vitreus diet, growth and Hg concentrations in two freshwater systems. Stocking differentially influenced male and female walleye diets and growth, producing a counterintuitive size-contamination relationship. Modeling indicated that walleye growth rate and diet were important explanatory variables when predicting Hg concentrations. Thus, hatchery contributions to freshwater systems in the form of energy and contaminants can influence diet, growth and Hg concentrations in sport fish. Given the extensive scale of fish stocking, and the known health risks associated with Hg contamination, this represents a significant issue for managers monitoring and manipulating freshwater food web structures, and policy makers attempting to develop fish consumption advisories to protect human health in stocked systems.

  20. Growing and navigating the small world Web by local content

    PubMed Central

    Menczer, Filippo

    2002-01-01

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues. PMID:12381792
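    A toy simulation conveys the flavor of such a content-constrained growth process. This is a deliberately simplified stand-in, not Menczer's actual model: each new page attaches by degree, but only among existing pages that are "lexically close" to it, so only local knowledge is used.

        # Toy sketch (not the paper's model): degree-biased attachment restricted
        # to a content neighborhood, using only local knowledge of degree.
        import random

        random.seed(0)
        content = [random.random() for _ in range(5)]   # seed pages' "topics"
        degree = [1] * 5

        for _ in range(500):
            t = random.random()                          # new page's topic
            # Local neighborhood: existing pages with similar content.
            near = [i for i, u in enumerate(content) if abs(u - t) < 0.1]
            near = near or list(range(len(content)))     # fall back to any page
            target = random.choices(near, weights=[degree[i] for i in near])[0]
            degree[target] += 1                          # link to the chosen page
            content.append(t)
            degree.append(1)

        print(sorted(degree, reverse=True)[:10])         # heavy-tailed top degrees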

  1. Growing and navigating the small world Web by local content

    NASA Astrophysics Data System (ADS)

    Menczer, Filippo

    2002-10-01

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.

  2. Growing and navigating the small world Web by local content.

    PubMed

    Menczer, Filippo

    2002-10-29

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.

  3. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has led to the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, and thereby improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods.
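    For comparison, the baseline selectors the authors benchmark against are one-liners in scikit-learn. A minimal sketch with a toy corpus (the paper's ACO selector would take the place of SelectKBest; the documents and labels here are invented stand-ins for WebKB pages):

        # Minimal sketch of the baseline pipeline: chi-square feature selection
        # followed by a k-nearest-neighbor classifier.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        docs = [
            "course syllabus homework exam schedule",
            "faculty research publications grant lab",
            "student club meeting homework schedule",
            "professor teaching research office hours",
        ]
        labels = [0, 1, 0, 1]   # e.g. student page vs. faculty page

        pipe = make_pipeline(
            CountVectorizer(),                  # page text/tags as term counts
            SelectKBest(chi2, k=5),             # keep the 5 highest-scoring terms
            KNeighborsClassifier(n_neighbors=1),
        )
        pipe.fit(docs, labels)
        print(pipe.predict(["research grant publications"]))  # faculty-like page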

  4. WWW Motivation Mining: Finding Treasures for Teaching Evaluation Skills, Grades 7-12. Professional Growth Series.

    ERIC Educational Resources Information Center

    Small, Ruth V.; Arnone, Marilyn P.

    Intended for use by middle or high school teachers and library media specialists, this book describes a World Wide Web evaluation tool developed specifically for use by high school students and designed to provide hands-on experience in critically evaluating the strengths and weaknesses of Web sites. The book uses a workbook format and is…

  5. Choosing Web 2.0 Tools for Instruction: An Extension of Task-Technology Fit

    ERIC Educational Resources Information Center

    Gupta, Saurabh

    2014-01-01

    The growth of technology and the inclusion of "digital natives" as students in the education world have created a demand pull for the use of Web 2.0 technologies in education. Dominant among these tools have been wikis, blogs and discussion boards. Distance education experts view the use of these tools as differentiators when compared to…

  6. Marketing nets out. Spending--and expecting--more than ever, hospitals and systems take their message to the Web.

    PubMed

    Hudson, T

    1999-05-01

    Live on the Web, it's open-heart surgery--a showroom window on sweeping new marketing plans. Along with perennial promos like radio and TV ads, health systems have tapped the power of the Internet to hard-wire their organizations for growth. But marketing must be linked to operations as never before.

  7. Parasitic chytrids sustain zooplankton growth during inedible algal bloom

    PubMed Central

    Rasconi, Serena; Grami, Boutheina; Niquil, Nathalie; Jobard, Marlène; Sime-Ngando, Télesphore

    2014-01-01

    This study assesses the quantitative impact of parasitic chytrids on the planktonic food web of two contrasting freshwater lakes during different algal bloom situations. Carbon-based food web models were used to investigate the effects of chytrids during the spring diatom bloom in Lake Pavin (oligo-mesotrophic) and the autumn cyanobacteria bloom in Lake Aydat (eutrophic). Linear inverse modeling was employed to estimate undetermined flows in both lakes. The Monte Carlo Markov chain linear inverse modeling procedure provided estimates of the ranges of model-derived fluxes. Model results confirm recent theories on the impact of parasites on food web function through grazers and recyclers. During blooms of “inedible” algae (unexploited by planktonic herbivores), the epidemic growth of chytrids channeled 19–20% of the primary production in both lakes through the production of grazer exploitable zoospores. The parasitic throughput represented 50% and 57% of the zooplankton diet, respectively, in the oligo-mesotrophic and in the eutrophic lakes. Parasites also affected ecological network properties such as longer carbon path lengths and loop strength, and contributed to increase the stability of the aquatic food web, notably in the oligo-mesotrophic Lake Pavin. PMID:24904543

  8. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are very important for accurately determining these characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth easily from large-scale plant image data.
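    A minimal sketch of the superpixel-plus-Random-Forest idea follows. It uses SLIC superpixels and mean-color features on a synthetic image; the paper's actual features, labels, and training data are richer, so treat this only as an illustration of the technique.

        # Minimal sketch: SLIC superpixels whose mean colors feed a Random Forest
        # that labels each superpixel as plant or background. Synthetic image and
        # labels are illustrative, not the paper's pipeline.
        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        img = rng.random((64, 64, 3))
        img[16:48, 16:48, 1] += 0.5          # a bright-green "plant" patch
        img = np.clip(img, 0.0, 1.0)

        segments = slic(img, n_segments=100, compactness=10)
        ids = np.unique(segments)
        X = np.array([img[segments == s].mean(axis=0) for s in ids])  # mean RGB
        y = np.array([int(img[segments == s, 1].mean() > 0.6) for s in ids])

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.score(X, y))               # training accuracy on the toy image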

  9. Varying execution discipline to increase performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, P.L.; Maccabe, A.B.

    1993-12-22

    This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; then the heuristics use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.

  10. Impact of liquid fertilizers on plant growth, yield, fruit quality and fertigation management in an organic processing blackberry production system

    USDA-ARS?s Scientific Manuscript database

    The impact of organic fertilizer source on the growth, fruit quality, and yield of blackberry cultivars (‘Marion’ and ‘Black Diamond’) grown in machine-harvested, organic production systems for the processed market was evaluated from 2011-13. The planting was established in spring 2010 using approve...

  11. Cleaning a semipermeable membrane in a papermaking machine

    DOEpatents

    Beck, David A.

    2004-01-06

    A method of cleaning a semipermeable membrane, the semipermeable membrane being configured for carrying a fiber web, includes the steps of providing a cleaning fluid and applying the cleaning fluid on the semipermeable membrane. Further, an air press configured for carrying the semipermeable membrane therethrough is provided, and the air press has pressurized air therein. The semipermeable membrane is conveyed through the air press and is subjected to the pressurized air within the air press. The pressurized air thereby flushes the cleaning fluid through the semipermeable membrane.

  12. Automatic identification of artifacts in electrodermal activity data.

    PubMed

    Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind

    2015-01-01

    Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.

  13. KMgene: a unified R package for gene-based association analysis for complex traits.

    PubMed

    Yan, Qi; Fang, Zhou; Chen, Wei; Stegle, Oliver

    2018-02-09

    In this report, we introduce an R package, KMgene, for performing gene-based association tests for familial, multivariate or longitudinal traits using kernel machine (KM) regression under a generalized linear mixed model (GLMM) framework. Extensive simulations were performed to evaluate the validity of the approaches implemented in KMgene. Availability: http://cran.r-project.org/web/packages/KMgene. Contact: qi.yan@chp.edu or wei.chen@chp.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.

  14. Dynamic Data-Driven Prognostics and Condition Monitoring of On-board Electronics

    DTIC Science & Technology

    2012-08-27

    of functionality and accessibility; it is an open language unlike Java or Visual meaning that it is also free. It is also one of the most popular...and C# are able to run without the use of a virtual machine like Java. 4.2.1.5 Implementation For building of an OSA-CBM system, the primer...documentation [7] recommends the following steps: 1. Choose a middleware technology (DCOM, CORBA, Web Services, Java RMI, etc.). 2. Transform OSA-CBM UML

  15. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    these technologies. 4.1 Backend Technologies • Java 1.8 • mysql-connector-java-5.0.8.jar • Tomcat • VirtualBox • Kali MANET Virtual Machine 4.2...Frontend Technologies • LAMPP 4.3 Database • MySQL Server 5. Database The SEDAP database settings and structure are described in this section...contains all the backend Java functionality, including the web services, should be placed in the webapps directory inside the Tomcat installation

  16. Enhancing Network Communication in NPSNET-V Virtual Environments Using XML-Described Dynamic Behavior (DBP) Protocols

    DTIC Science & Technology

    2001-09-01

    testing is performed between two machines connected by either a 100 Mbps Ethernet connection or a 56K modem connection. This testing is performed...and defined as follows: • The available bandwidth is set at two different levels (Ethernet 100 Mbps and 56K modem). • The packet size is set... modem connection. These two connections represent the target 100 Mbps high end and 56 kbps low end of anticipated client connections in web-based

  17. Video control system for a drilling in furniture workpiece

    NASA Astrophysics Data System (ADS)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

    Over the last 5 years, Russian industry has been undergoing robotization, and scientific groups have received new tasks as a result. One of these new tasks is machine vision systems, which should solve the problem of automatic quality control. Systems of this type cost several thousand dollars each, a price that is out of reach for regional small businesses. In this article, we describe the principle and algorithm of an inexpensive video control system that uses web cameras and a notebook or desktop computer as its computing unit.

  18. ANALYTiC: An Active Learning System for Trajectory Classification.

    PubMed

    Soares Junior, Amilcar; Renso, Chiara; Matwin, Stan

    2017-01-01

    The increasing availability and use of positioning devices has resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process.

  19. The Mentoring Web -- Coming Together to Make a Difference

    ERIC Educational Resources Information Center

    Gordon, Evelyn; Lowrey, K. Alisa

    2017-01-01

    Developing effective novice teachers involves many components. Researchers have studied the impact of principals, induction programs, and mentors on the growth and development of novice teachers. Relationships with college/university faculty, students, parents, and support staff can also impact the growth of these novice professionals. The…

  20. LARVAL SALAMANDER GROWTH RESPONDS TO ENRICHMENT OF A NUTRIENT POOR HEADWATER STREAM

    EPA Science Inventory

    While many studies have measured effects of nutrient enrichment on higher trophic levels in grazing food webs, few such studies exist for detritus-based systems. We measured effects of nitrogen and phosphorus addition on growth of larval Eruycea wilderae in a heterotrophic head...

  1. Web 2.0 applications in medicine: trends and topics in the literature.

    PubMed

    Boudry, Christophe

    2015-04-01

    The World Wide Web has changed research habits, and these changes were further expanded when "Web 2.0" became popular in 2005. Bibliometrics is a helpful tool used for describing patterns of publication, for interpreting progression over time, and for mapping the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of the literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were represented within this biomedical literature and, if so, how. A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML) and were processed through custom-developed hypertext preprocessor (PHP) scripts, then imported into Microsoft Excel 2010 for data processing. A total of 1347 articles were included in this study. The number of articles related to Web 2.0 increased from 2002 to 2012 (the average annual growth rate was 106.3%, with a maximum of 333% in 2005). The United States was by far the predominant country for authors, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. The distribution of the number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. Using Bradford's law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids Research) produced more than 35 articles related to Web 2.0 over the period studied. A total of 274 words in the field of Web 2.0 were found after manual sorting of the 15,878 words appearing in the title and abstract fields of the articles. Word frequency analysis revealed "blog" as the most recurrent, followed by "wiki", "Web 2.0", "social media", "Facebook", "social networks", "blogger", "cloud computing", "Twitter", and "blogging". All categories of Web 2.0 applications were found, indicating the successful integration of Web 2.0 into the biomedical field. This study shows that the biomedical community is engaged in the use of Web 2.0 and confirms its high level of interest in these tools. Therefore, changes in the ways researchers use information seem to be far from over.
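    The growth metric quoted above is simple to reproduce; a worked example with invented yearly counts (the formula assumed here is the usual year-over-year percentage change, averaged across years):

        # Worked example of an average annual growth rate like the one reported
        # above (106.3%). The yearly article counts here are invented, not the
        # study's data.
        counts = {2002: 5, 2003: 9, 2004: 16, 2005: 69}
        years = sorted(counts)
        rates = [100.0 * (counts[y] / counts[p] - 1.0)
                 for p, y in zip(years, years[1:])]
        print(rates)                          # per-year growth, in percent
        print(sum(rates) / len(rates))        # average annual growth rate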

  2. Social Web mining and exploitation for serious applications: Technosocial Predictive Analytics and related technologies for public health, environmental and national security surveillance.

    PubMed

    Kamel Boulos, Maged N; Sanfilippo, Antonio P; Corley, Courtney D; Wheeler, Steve

    2010-10-01

    This paper explores Technosocial Predictive Analytics (TPA) and related methods for Web "data mining" where users' posts and queries are garnered from Social Web ("Web 2.0") tools such as blogs, micro-blogging and social networking sites to form coherent representations of real-time health events. The paper includes a brief introduction to commonly used Social Web tools such as mashups and aggregators, and maps their exponential growth as an open architecture of participation for the masses and an emerging way to gain insight about people's collective health status of whole populations. Several health related tool examples are described and demonstrated as practical means through which health professionals might create clear location specific pictures of epidemiological data such as flu outbreaks. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  3. VOTable JAVA Streaming Writer and Applications.

    NASA Astrophysics Data System (ADS)

    Kulkarni, P.; Kembhavi, A.; Kale, S.

    2004-07-01

    Virtual Observatory related tools use a new standard for data transfer called the VOTable format. This is a variant of the XML format that enables easy transfer of data over the web. We describe a streaming interface that can bridge the VOTable format, through a user-friendly graphical interface, with the FITS and ASCII formats, which are commonly used by astronomers. A streaming interface is important for efficient use of memory because of the large size of catalogues. The tools are developed in JAVA to provide a platform independent interface. We have also developed a stand-alone version that can be used to convert data stored in ASCII or FITS format on a local machine. The streaming writer is successfully being used in VOPlot (see Kale et al. 2004 for a description of VOPlot). We present the test results of converting huge FITS and ASCII data into the VOTable format on machines that have only limited memory.

  4. Quantum-chemical insights from deep tensor neural networks

    PubMed Central

    Schütt, Kristof T.; Arbabzadah, Farhad; Chmiela, Stefan; Müller, Klaus R.; Tkatchenko, Alexandre

    2017-01-01

    Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate (1 kcal mol^-1) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the model reveals a classification of aromatic rings with respect to their stability. Further applications of our model for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the potential of machine learning for revealing insights into complex quantum-chemical systems. PMID:28067221

  5. Quantum-chemical insights from deep tensor neural networks.

    PubMed

    Schütt, Kristof T; Arbabzadah, Farhad; Chmiela, Stefan; Müller, Klaus R; Tkatchenko, Alexandre

    2017-01-09

    Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate (1 kcal mol^-1) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the model reveals a classification of aromatic rings with respect to their stability. Further applications of our model for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the potential of machine learning for revealing insights into complex quantum-chemical systems.

  6. clearScience: Infrastructure for Communicating Data-Intensive Science.

    PubMed

    Bot, Brian M; Burdick, David; Kellen, Michael; Huang, Erich S

    2013-01-01

    Progress in biomedical research requires effective scientific communication to one's peers and to the public. Current research routinely encompasses large datasets and complex analytic processes, and the constraints of traditional journal formats limit useful transmission of these elements. We are constructing a framework through which authors can provide not only the narrative of what was done, but the primary and derivative data, the source code, the compute environment, and web-accessible virtual machines. This infrastructure allows authors to "hand their machine"--prepopulated with libraries, data, and code--to those interested in reviewing or building off of their work. This project, "clearScience," seeks to provide an integrated system that accommodates the ad hoc nature of discovery in the data-intensive sciences and seamless transitions from working to reporting. We demonstrate that rather than merely describing the science being reported, one can deliver the science itself.

  7. Machine Translation-Supported Cross-Language Information Retrieval for a Consumer Health Resource

    PubMed Central

    Rosemblat, Graciela; Gemoets, Darren; Browne, Allen C.; Tse, Tony

    2003-01-01

    The U.S. National Institutes of Health, through its National Library of Medicine, developed ClinicalTrials.gov to provide the public with easy access to information on clinical trials on a wide range of conditions or diseases. Only English language information retrieval is currently supported. Given the growing number of Spanish speakers in the U.S. and their increasing use of the Web, we anticipate a significant increase in Spanish-speaking users. This study compares the effectiveness of two common cross-language information retrieval methods using machine translation, query translation versus document translation, using a subset of genuine user queries from ClinicalTrials.gov. Preliminary results conducted with the ClinicalTrials.gov search engine show that in our environment, query translation is statistically significantly better than document translation. We discuss possible reasons for this result and we conclude with suggestions for future work. PMID:14728236

  8. Quantum-chemical insights from deep tensor neural networks

    NASA Astrophysics Data System (ADS)

    Schütt, Kristof T.; Arbabzadah, Farhad; Chmiela, Stefan; Müller, Klaus R.; Tkatchenko, Alexandre

    2017-01-01

    Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate (1 kcal mol^-1) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the model reveals a classification of aromatic rings with respect to their stability. Further applications of our model for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the potential of machine learning for revealing insights into complex quantum-chemical systems.

  9. e-Addictology: An Overview of New Technologies for Assessing and Intervening in Addictive Behaviors.

    PubMed

    Ferreri, Florian; Bourla, Alexis; Mouchabac, Stephane; Karila, Laurent

    2018-01-01

    New technologies can profoundly change the way we understand psychiatric pathologies and addictive disorders. New concepts are emerging with the development of more accurate means of collecting live data, computerized questionnaires, and the use of passive data. Digital phenotyping, a paradigmatic example, refers to the use of computerized measurement tools to capture the characteristics of different psychiatric disorders. Similarly, machine learning--a form of artificial intelligence--can improve the classification of patients based on patterns that clinicians have not always considered in the past. Remote or automated interventions (web-based or smartphone-based apps), as well as virtual reality and neurofeedback, are already available or under development. These recent changes have the potential to disrupt practices, as well as practitioners' beliefs, ethics and representations, and may even call into question their professional culture. However, the impact of new technologies on health professionals' practice in addictive disorder care has yet to be determined. In the present paper, we therefore present an overview of new technology in the field of addiction medicine. Using the keywords [e-health], [m-health], [computer], [mobile], [smartphone], [wearable], [digital], [machine learning], [ecological momentary assessment], [biofeedback] and [virtual reality], we searched the PubMed database for the most representative articles in the field of assessment and interventions in substance use disorders. We screened 595 abstracts and analyzed 92 articles, dividing them into seven categories: e-health programs and web-based interventions, machine learning, computerized adaptive testing, wearable devices and digital phenotyping, ecological momentary assessment, biofeedback, and virtual reality. This overview shows that new technologies can improve assessment and interventions in the field of addictive disorders. The precise role of connected devices, artificial intelligence and remote monitoring remains to be defined. If they are to be used effectively, these tools must be explained and adapted to the different profiles of physicians and patients. The involvement of patients, caregivers and other health professionals is essential to their design and assessment.

  10. Data Publication and Interoperability for Long Tail Researchers via the Open Data Repository's (ODR) Data Publisher.

    NASA Astrophysics Data System (ADS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.

    2016-12-01

    Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements as well as fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.

  11. Wikipedias: Collaborative web-based encyclopedias as complex networks

    NASA Astrophysics Data System (ADS)

    Zlatić, V.; Božičević, M.; Štefančić, H.; Domazet, M.

    2006-07-01

    Wikipedia is a popular web-based encyclopedia edited freely and collaboratively by its users. In this paper we present an analysis of Wikipedias in several languages as complex networks. The hyperlinks pointing from one Wikipedia article to another are treated as directed links while the articles represent the nodes of the network. We show that many network characteristics are common to different language versions of Wikipedia, such as their degree distributions, growth, topology, reciprocity, clustering, assortativity, path lengths, and triad significance profiles. These regularities, found in the ensemble of Wikipedias in different languages and of different sizes, point to the existence of a unique growth process. We also compare Wikipedias to other previously studied networks.
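    Several of the listed characteristics (degree distribution, reciprocity, clustering) reduce to a few library calls; a minimal sketch on a random stand-in graph, since loading an actual Wikipedia link dump is out of scope here:

        # Minimal sketch: network characteristics of a directed "hyperlink" graph
        # with networkx. A random graph stands in for a real Wikipedia link dump.
        from collections import Counter
        import networkx as nx

        G = nx.gnp_random_graph(200, 0.03, seed=1, directed=True)

        in_degrees = Counter(d for _, d in G.in_degree())
        print(sorted(in_degrees.items())[:5])            # head of in-degree distribution
        print(nx.reciprocity(G))                         # fraction of reciprocated links
        print(nx.average_clustering(G.to_undirected()))  # clustering coefficient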

  12. Wikipedias: collaborative web-based encyclopedias as complex networks.

    PubMed

    Zlatić, V; Bozicević, M; Stefancić, H; Domazet, M

    2006-07-01

    Wikipedia is a popular web-based encyclopedia edited freely and collaboratively by its users. In this paper we present an analysis of Wikipedias in several languages as complex networks. The hyperlinks pointing from one Wikipedia article to another are treated as directed links while the articles represent the nodes of the network. We show that many network characteristics are common to different language versions of Wikipedia, such as their degree distributions, growth, topology, reciprocity, clustering, assortativity, path lengths, and triad significance profiles. These regularities, found in the ensemble of Wikipedias in different languages and of different sizes, point to the existence of a unique growth process. We also compare Wikipedias to other previously studied networks.

  13. Polycyclic aromatic hydrocarbons alter the structure of oceanic and oligotrophic microbial food webs.

    PubMed

    Cerezo, Maria Isabel; Agusti, Susana

    2015-12-30

    One way organic pollutants reach remote oceanic regions is by atmospheric transport. During the Malaspina-2010 expedition, across the Atlantic, Indian, and Pacific Oceans, we analyzed the effects of polycyclic aromatic hydrocarbons (PAHs) on oceanic microbial food webs. We performed perturbation experiments, adding PAHs to classic dilution experiments. Phytoplankton growth rates were reduced more than fivefold, with Prochlorococcus spp. the most affected. In 62% of the experiments, grazing rates were reduced due to the presence of PAHs; in the remaining experiments, grazing usually increased, likely due to cascading effects. We identified pollutant-induced changes in the slope of the relation between growth rate and dilution fraction, moving from no grazing to a V shape, or to a negative slope, indicative of increased grazing through cascade effects and of alterations in the structure of grazer activity. Our perturbation experiments indicate that PAHs could influence the structure of oceanic food webs. Copyright © 2015 Elsevier Ltd. All rights reserved.
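    The dilution-experiment analysis behind those slopes is a linear regression. A minimal sketch under the classic assumptions (apparent growth rate declines linearly with the fraction of undiluted water; the intercept estimates growth, the negative slope estimates grazing); the numbers are illustrative, not the expedition's data:

        # Minimal sketch of a classic dilution-experiment fit: apparent growth
        # rate vs. dilution fraction. Intercept ~ phytoplankton growth rate mu;
        # negative slope ~ grazing mortality g. Illustrative data only.
        import numpy as np

        dilution = np.array([0.2, 0.4, 0.6, 0.8, 1.0])       # fraction whole seawater
        apparent = np.array([0.55, 0.48, 0.40, 0.33, 0.26])  # growth, per day

        slope, intercept = np.polyfit(dilution, apparent, 1)
        print(f"growth rate  mu = {intercept:.2f} per day")
        print(f"grazing rate g  = {-slope:.2f} per day")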

  14. Classification of HTTP Attacks: A Study on the ECML/PKDD 2007 Discovery Challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallagher, Brian; Eliassi-Rad, Tina

    2009-07-08

    As the world becomes more reliant on Web applications for commercial, financial, and medical transactions, cyber attacks on the World Wide Web are increasing in frequency and severity. Web applications provide an attractive alternative to traditional desktop applications due to their accessibility and ease of deployment. However, the accessibility of Web applications also makes them extremely vulnerable to attack. This inherent vulnerability is intensified by the distributed nature of Web applications and the complexity of configuring application servers. These factors have led to a proliferation of Web-based attacks, in which attackers surreptitiously inject code into HTTP requests, allowing them to execute arbitrary commands on remote systems and perform malicious activities such as reading, altering, or destroying sensitive data. One approach for dealing with HTTP-based attacks is to identify malicious code in incoming HTTP requests and eliminate bad requests before they are processed. Using machine learning techniques, we can build a classifier to automatically label requests as “Valid” or “Attack.” For this study, we develop a simple, but effective HTTP attack classifier, based on the vector space model used commonly for Information Retrieval. Our classifier not only separates attacks from valid requests, but can also identify specific attack types (e.g., “SQL Injection” or “Path Traversal”). We demonstrate the effectiveness of our approach through experiments on the ECML/PKDD 2007 Discovery Challenge data set. Specifically, we show that our approach achieves higher precision and recall than previous methods. In addition, our approach has a number of desirable characteristics, including robustness to missing contextual information, interpretability of models, and scalability.
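    A vector-space request classifier of the kind described can be sketched in a few lines. This is an illustration, not the authors' code: character n-gram TF-IDF stands in for their term weighting, a linear SVM stands in for their similarity-based labeling, and the requests and labels are invented.

        # Minimal sketch: vector-space classification of HTTP requests into
        # "Valid" and specific attack types. Illustrative data and model choices.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        http_requests = [
            "GET /index.html HTTP/1.1",
            "GET /search?q=books HTTP/1.1",
            "GET /item?id=1' OR '1'='1 HTTP/1.1",   # SQL injection
            "GET /../../etc/passwd HTTP/1.1",       # path traversal
        ]
        labels = ["Valid", "Valid", "SQLInjection", "PathTraversal"]

        clf = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # char n-grams
            LinearSVC(),
        )
        clf.fit(http_requests, labels)
        print(clf.predict(["GET /item?id=7' OR '1'='1 HTTP/1.1"]))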

  15. Characterizing bacterial communities in paper production-troublemakers revealed.

    PubMed

    Zumsteg, Anita; Urwyler, Simon K; Glaubitz, Joachim

    2017-08-01

    Biofilm formation is a major cause of reduced paper quality and increased downtime during paper manufacturing. This study uses Illumina next-generation sequencing to identify the microbial populations causing quality issues due to their presence in biofilms and slimes. The paper defects investigated contained traces of the films and/or slime of mainly two genera, Tepidimonas and Chryseobacterium. The Tepidimonas spp. found contributed on average 68% to the total bacterial population. Both genera have previously been described as associated with biofilms in paper mills. There was an indication that Tepidimonas spp. were present as a compact biofilm in the head box of one paper machine and were filtered out by the paper web during production. On the other hand, Tepidimonas spp. were also present to a large extent in the press and white waters of two nonproblematic paper machines. Therefore, the mere presence of a known biofilm producer alone is not sufficient to cause slime, and hence paper defects; other critical factors are additionally at play. For instance, we identified Acidovorax sp., an early colonizer of paper machines, which exhibits the ability to form extracellular DNA matrices for attachment and biofilm formation. © 2017 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.

  16. Machines, medication, modulation: circuits of dependency and self-care in Las Vegas.

    PubMed

    Schüll, Natasha Dow

    2006-06-01

    The intensive entertainment infrastructure of Las Vegas is overlaid with a robust therapeutic network for those who become addicted to its technologies. Although the objectives of gambling machines and addiction therapeutics are seemingly at odds--the first work to encourage play, the second to stop it--both gear their interventions around a model of the self as a continuum of behavioral potentials that can be externally modulated. For compulsive gamblers implicated in this circuit of modulation, pharmaceutical drugs that have been prescribed to dampen cravings for machine play sometimes function as intensifiers of its effects. Caught in an intractable play between technologies of harm and technologies of care, recovering gambling addicts are challenged to assemble a technical array through which they can maintain balance; health itself, for these individuals, becomes a state of managed dependency. This essay explores the shifting terms and changing stakes of subjectivity and health in the contemporary United States by way of ethnographic research on compulsive gamblers who live and work in Las Vegas. The analysis draws on interviews with gamblers as well as on observations in local self-help groups, directed group therapy sessions, and chat rooms of Internet recovery Web sites.

  17. sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.

    PubMed

    Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael

    2017-01-01

    High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of this huge amount of data. One of the reasons is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that small to mid-sized groups or laboratories often cannot afford. Web servers are an alternative that can be used if the data is not subject to privacy restrictions (which is very often an important issue with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome these problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a high number of preinstalled programs like sRNAbench for small RNA analysis without the need to maintain additional servers and/or operating systems.

  18. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    PubMed

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need to transfer the latest results in the field of machine learning to biomedical researchers. We propose a web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
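
    To make the idea of a discrete-time, ANN-based survival model concrete, the sketch below illustrates the standard person-period expansion with a small scikit-learn network. It is a generic illustration under assumed column names and toy data, not OSA's actual implementation.

```python
# Minimal sketch of a discrete-time survival model with a neural network:
# each subject is expanded into one row per interval survived, the binary
# target is "event in this interval", and the fitted classifier outputs a
# per-interval hazard. Column names (time, event, x1) are assumptions.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier

df = pd.DataFrame({                # toy data standing in for an uploaded set
    "time":  [3, 5, 2, 4],         # interval in which follow-up ended
    "event": [1, 0, 1, 1],         # 1 = event occurred, 0 = censored
    "x1":    [0.5, 1.2, 0.3, 2.0]  # a single predictor variable
})

# Person-period expansion: one row per subject per interval at risk.
rows = []
for _, r in df.iterrows():
    for t in range(1, int(r["time"]) + 1):
        rows.append({"interval": t, "x1": r["x1"],
                     "y": int(r["event"] == 1 and t == r["time"])})
pp = pd.DataFrame(rows)

# An ANN mapping (interval, covariates) -> hazard, as in the discrete-time
# setup the abstract describes.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(pp[["interval", "x1"]], pp["y"])

# Individual-level survival curve: product of (1 - hazard) over intervals.
hazard = clf.predict_proba(pd.DataFrame(
    {"interval": range(1, 6), "x1": [1.0] * 5}))[:, 1]
print(np.cumprod(1 - hazard))      # S(t) for t = 1..5
```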

  19. The dark side of marketing seemingly "Light" cigarettes: successful images and failed fact.

    PubMed

    Pollay, R W; Dewhirst, T

    2002-03-01

    To understand the development, intent, and consequences of US tobacco industry advertising for low machine-yield cigarettes. Analysis of trade sources and internal US tobacco company documents now available on various web sites created by corporations, litigation, or public health bodies. When introducing low-yield products, cigarette manufacturers were concerned about maintaining products with acceptable taste/flavour and feared consumers might become weaned from smoking. Several tactics were employed by cigarette manufacturers, leading consumers to perceive filtered and low machine-yield brands as safer relative to other brands. These tactics included using cosmetic (that is, ineffective) filters, loosening filters over time, using medicinal menthol, using high-tech imagery, using virtuous brand names and descriptors, adding a virtuous variant to a brand's product line, and generating misleading data on tar and nicotine yields. Advertisements for filtered and low-tar cigarettes were intended to reassure smokers concerned about the health risks of smoking, and to present the respective products as an alternative to quitting. Promotional efforts were successful in getting smokers to adopt filtered and low-yield cigarette brands. Corporate documents demonstrate that cigarette manufacturers recognised the inherent deceptiveness of cigarette brands described as "Light" or "Ultra-Light" because of low machine-measured yields.
