Science.gov

Sample records for online model server

  1. Design of Accelerator Online Simulator Server Using Structured Data

    SciTech Connect

    Shen, Guobao; Chu, Chungming; Wu, Juhao; Kraimer, Martin (Argonne)

    2012-07-06

    Model-based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high-level application environments is to use a built-in simulation engine and feed a realistic model into that engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An online simulator server is accessed via network-accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.
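    The core idea above, a server exposing model results as structured data that any client can consume, can be sketched as follows. This is an illustrative stand-in, not the actual PVData/PVAccess types or API; the table fields and values are hypothetical.

```python
# Hypothetical structured-data payload a model server might return:
# a labelled table of optics values (names and numbers are made up).
TWISS_TABLE = {
    "labels": ["element", "s", "beta_x", "beta_y"],
    "rows": [
        ["QF1", 0.0, 12.3, 4.2],
        ["QD1", 1.5, 4.1, 11.8],
    ],
}

def get_column(table, name):
    """Client-side helper: pull one named column out of the structure."""
    i = table["labels"].index(name)
    return [row[i] for row in table["rows"]]

print(get_column(TWISS_TABLE, "beta_x"))  # → [12.3, 4.1]
```

    Because clients only depend on the structure (labels plus rows), the same client code works no matter which simulation code produced the numbers.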

  2. Network characteristics for server selection in online games

    NASA Astrophysics Data System (ADS)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
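    The group selection problem described above (every member needs low latency, and the latency gap between members must stay fair) can be sketched directly. This is an illustrative sketch, not the paper's code; the thresholds and latency samples are hypothetical.

```python
# Hypothetical playability bounds (ms); real values are game-dependent.
FPS_LATENCY_MS = 100      # rough bound for first-person games
FAIRNESS_SPREAD_MS = 50   # max allowed latency gap between group members

def playable_servers(latencies, group, max_latency, max_spread):
    """latencies: {server: {player: ms}}. Return servers where every group
    member is under max_latency and the latency spread is fair, ranked by
    the group's worst-case latency."""
    ok = []
    for server, per_player in latencies.items():
        ms = [per_player[p] for p in group if p in per_player]
        if len(ms) < len(group):
            continue  # missing a measurement for someone in the group
        if max(ms) <= max_latency and max(ms) - min(ms) <= max_spread:
            ok.append(server)
    return sorted(ok, key=lambda s: max(latencies[s][p] for p in group))

samples = {
    "us-east": {"alice": 40, "bob": 60},
    "eu-west": {"alice": 140, "bob": 35},   # unfair: 105 ms spread
    "us-west": {"alice": 90, "bob": 95},
}
print(playable_servers(samples, ["alice", "bob"],
                       FPS_LATENCY_MS, FAIRNESS_SPREAD_MS))
# → ['us-east', 'us-west']
```

    Note how the fairness constraint, not just raw latency, removes eu-west even though one player would be happy there; this is exactly why groups face a reduced server set.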

  3. RCD+: Fast loop modeling server.

    PubMed

    López-Blanco, José Ramón; Canosa-Valls, Alejandro Jesús; Li, Yaohang; Chacón, Pablo

    2016-07-01

    Modeling loops is a critical and challenging step in protein modeling and prediction. We have developed a quick online service (http://rcd.chaconlab.org) for ab initio loop modeling combining a coarse-grained conformational search with full-atom refinement. Our original Random Coordinate Descent (RCD) loop closure algorithm has been greatly improved to enrich the sampling distribution towards near-native conformations. These improvements include a new workflow optimization, MPI parallelization, and fast backbone angle sampling based on neighbor-dependent Ramachandran probability distributions. The server starts by efficiently searching the vast conformational space using only the loop sequence information and the environment atomic coordinates. The generated closed loop models are subsequently ranked using a fast distance-orientation-dependent energy filter. Top-ranked loops are refined with the Rosetta energy function to obtain accurate all-atom predictions that can be interactively inspected in a user-friendly web interface. Using standard benchmarks, the average root mean squared deviation (RMSD) is 0.8 and 1.4 Å for 8- and 12-residue loops, respectively, in the challenging modeling scenario where the side chains of the loop environment are fully remodeled. These results are not only very competitive with those obtained by publicly available state-of-the-art methods, but are also obtained ∼10-fold faster. PMID:27151199
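    The "neighbor-dependent Ramachandran sampling" mentioned above means drawing a residue's backbone dihedrals from a distribution conditioned on its sequence neighbors. A minimal sketch of that idea, with a toy probability table standing in for the real Ramachandran statistics used by RCD+:

```python
import random

# Toy neighbor-dependent table: (prev_aa, aa, next_aa) -> weighted (phi, psi)
# candidates. These numbers are made up for illustration only.
NEIGHBOR_TABLE = {
    ("A", "G", "A"): [((-60.0, -45.0), 0.3), ((80.0, 0.0), 0.7)],
    ("A", "A", "A"): [((-63.0, -42.0), 0.9), ((-120.0, 130.0), 0.1)],
}
DEFAULT = [((-63.0, -42.0), 0.5), ((-120.0, 130.0), 0.5)]

def sample_dihedrals(sequence, rng=random):
    """Sample one (phi, psi) pair per interior residue of `sequence`,
    conditioning each draw on the residue's two sequence neighbors."""
    angles = []
    for i in range(1, len(sequence) - 1):
        key = (sequence[i - 1], sequence[i], sequence[i + 1])
        entries = NEIGHBOR_TABLE.get(key, DEFAULT)
        pairs = [p for p, _ in entries]
        weights = [w for _, w in entries]
        angles.append(rng.choices(pairs, weights=weights, k=1)[0])
    return angles

print(sample_dihedrals("AGA"))  # one (phi, psi) pair for the middle residue
```

    Conditioning on neighbors concentrates sampling in regions actually observed for that local sequence context, which is what enriches the search towards near-native conformations.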

  4. RCD+: Fast loop modeling server

    PubMed Central

    López-Blanco, José Ramón; Canosa-Valls, Alejandro Jesús; Li, Yaohang; Chacón, Pablo

    2016-01-01

    Modeling loops is a critical and challenging step in protein modeling and prediction. We have developed a quick online service (http://rcd.chaconlab.org) for ab initio loop modeling combining a coarse-grained conformational search with full-atom refinement. Our original Random Coordinate Descent (RCD) loop closure algorithm has been greatly improved to enrich the sampling distribution towards near-native conformations. These improvements include a new workflow optimization, MPI parallelization, and fast backbone angle sampling based on neighbor-dependent Ramachandran probability distributions. The server starts by efficiently searching the vast conformational space using only the loop sequence information and the environment atomic coordinates. The generated closed loop models are subsequently ranked using a fast distance-orientation-dependent energy filter. Top-ranked loops are refined with the Rosetta energy function to obtain accurate all-atom predictions that can be interactively inspected in a user-friendly web interface. Using standard benchmarks, the average root mean squared deviation (RMSD) is 0.8 and 1.4 Å for 8- and 12-residue loops, respectively, in the challenging modeling scenario where the side chains of the loop environment are fully remodeled. These results are not only very competitive with those obtained by publicly available state-of-the-art methods, but are also obtained ∼10-fold faster. PMID:27151199

  5. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    NASA Astrophysics Data System (ADS)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

    PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project, which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers JavaScript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies are further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
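    WCPS queries of the kind described above are declarative expressions evaluated server-side against the coverage store. As a rough illustration of their shape, here is a sketch that builds a query string extracting a single spectral band; the coverage name, axis label, and output format are hypothetical, and real PlanetServer queries depend on its coverage schema.

```python
# Illustrative WCPS query builder (names are assumptions, not PlanetServer's).
def band_query(coverage, band_index, fmt="image/png"):
    """Return a WCPS expression selecting one band of a hyperspectral
    coverage and encoding it in the requested format."""
    return (
        f"for c in ({coverage}) "
        f'return encode(c[band({band_index})], "{fmt}")'
    )

q = band_query("FRT0000C9DC", 233)
print(q)
```

    The point of the pattern is that only the requested subset (one band here, or a spatial/spectral window) travels over the network, which is what makes online analysis of huge hyperspectral cubes practical.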

  6. "Just Another Tool for Online Studies" (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies.

    PubMed

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here "Just Another Tool for Online Studies" (JATOS): an open-source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies written in JavaScript. JATOS is easy to install on all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to test the application and the feasibility of their studies within a browser environment before setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command-line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS' main features and implementation and provide a detailed tutorial along with example studies to help interested researchers set up their online studies. JATOS can be found at www.jatos.org.

  7. "Just Another Tool for Online Studies" (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies.

    PubMed

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here "Just Another Tool for Online Studies" (JATOS): an open-source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies written in JavaScript. JATOS is easy to install on all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to test the application and the feasibility of their studies within a browser environment before setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command-line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS' main features and implementation and provide a detailed tutorial along with example studies to help interested researchers set up their online studies. JATOS can be found at www.jatos.org. PMID:26114751

  8. “Just Another Tool for Online Studies” (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies

    PubMed Central

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here “Just Another Tool for Online Studies” (JATOS): an open-source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies written in JavaScript. JATOS is easy to install on all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to test the application and the feasibility of their studies within a browser environment before setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command-line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS’ main features and implementation and provide a detailed tutorial along with example studies to help interested researchers set up their online studies. JATOS can be found at www.jatos.org. PMID:26114751

  9. PHYML Online--a web server for fast maximum likelihood-based phylogenetic inference.

    PubMed

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-07-01

    PHYML Online is a web interface to PHYML, software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at http://atgc.lirmm.fr/phyml.
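    The nonparametric bootstrap option mentioned above works by resampling alignment columns with replacement to build pseudo-replicate alignments, each of which is then re-analysed by maximum likelihood. A minimal sketch of the resampling step (the tiny alignment is made up):

```python
import random

def bootstrap_alignment(alignment, rng=random):
    """alignment: list of equal-length sequences. Return one bootstrap
    replicate built by drawing columns with replacement."""
    n_cols = len(alignment[0])
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

aln = ["ACGT", "ACGA", "TCGT"]
rep = bootstrap_alignment(aln)
print(rep)  # same shape as the input, columns drawn with replacement
```

    Repeating this (typically 100-1000 times) and counting how often each clade reappears in the replicate trees yields the familiar bootstrap support values.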

  10. MO/DSD online information server and global information repository access

    NASA Technical Reports Server (NTRS)

    Nguyen, Diem; Ghaffarian, Kam; Hogie, Keith; Mackey, William

    1994-01-01

    Often in the past, standards and new technology information have been available only in hardcopy form, with reproduction and mailing costs proving rather significant. In light of NASA's current budget constraints and in the interest of efficient communications, the Mission Operations and Data Systems Directorate (MO&DSD) New Technology and Data Standards Office recognizes the need for an online information server (OLIS). This server would allow: (1) dissemination of standards and new technology information throughout the Directorate more quickly and economically; (2) online browsing and retrieval of documents that have been published for and by MO&DSD; and (3) searching for current and past study activities on related topics within NASA before issuing a task. This paper explores a variety of available information servers and searching tools, their current capabilities and limitations, and the application of these tools to MO&DSD. Most importantly, the discussion focuses on the way this concept could be easily applied toward improving dissemination of standards and new technologies and improving documentation processes.

  11. Telematics-based online client-server/client collaborative environment for radiotherapy planning simulations.

    PubMed

    Kum, Oyeon

    2007-11-01

    Customized cancer radiation treatment planning for each patient is very useful for both a patient and a doctor because it provides the ability to deliver higher doses to a more accurately defined tumor and at the same time lower doses to organs at risk and normal tissues. This can be realized by building an accurate planning simulation system to provide better treatment strategies based on each patient's tomographic data such as CT, MRI, PET, or SPECT. In this study, we develop a real-time online client-server/client collaborative environment between the client (health care professionals or hospitals) and the server/client under a secure network using telematics (the integrated use of telecommunications and medical informatics). The implementation is based on a point-to-point communication scheme between client and server/client following the WYSIWIS (what you see is what I see) paradigm. After uploading the patient tomographic data, the client is able to collaborate with the server/client for treatment planning. Consequently, the level of health care services can be improved, specifically for small radiotherapy clinics in rural/remote-country areas that do not possess much experience or equipment such as a treatment planning simulator. The telematics service of the system can also be used to provide continued medical education in radiotherapy. Moreover, the system is easy to use. A client can use the system if s/he is familiar with the Windows(TM) operating system because it is designed and built based on a user-friendly concept. This system does not require the client to continue hardware and software maintenance and updates. These are performed automatically by the server.

  12. RosettaAntibody: antibody variable region homology modeling server.

    PubMed

    Sircar, Aroop; Kim, Eric T; Gray, Jeffrey J

    2009-07-01

    The RosettaAntibody server (http://antibody.graylab.jhu.edu) predicts the structure of an antibody variable region given the amino-acid sequences of the respective light and heavy chains. In an initial stage, the server identifies and displays the most sequence homologous template structures for the light and heavy framework regions and each of the complementarity determining region (CDR) loops. Subsequently, the most homologous templates are assembled into a side-chain optimized crude model, and the server returns a picture and coordinate file. For users requesting a high-resolution model, the server executes the full RosettaAntibody protocol which additionally models the hyper-variable CDR H3 loop. The high-resolution protocol also relieves steric clashes by optimizing the CDR backbone torsion angles and by simultaneously perturbing the relative orientation of the light and heavy chains. RosettaAntibody generates 2000 independent structures, and the server returns pictures, coordinate files, and detailed scoring information for the 10 top-scoring models. The 10 models enable users to use rational judgment in choosing the best model or to use the set as an ensemble for further studies such as docking. The high-resolution models generated by RosettaAntibody have been used for the successful prediction of antibody-antigen complex structures.

  13. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as crystallographers to evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .

  14. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  15. The PDB_REDO server for macromolecular structure model optimization

    PubMed Central

    Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis

    2014-01-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  16. SuperLooper—a prediction server for the modeling of loops in globular and membrane proteins

    PubMed Central

    Hildebrand, Peter W.; Goede, Andrean; Bauer, Raphael A.; Gruening, Bjoern; Ismer, Jochen; Michalsky, Elke; Preissner, Robert

    2009-01-01

    SuperLooper provides the first online interface for the automatic, quick and interactive search and placement of loops in proteins (LIP). A database containing half a billion segments of water-soluble proteins with lengths up to 35 residues can be screened for candidate loops. Alternatively, a specified database containing 180 000 membrane loops in proteins (LIMP) can be searched. Loop candidates are scored based on sequence criteria and the root mean square deviation (RMSD) of the stem atoms. Searching LIP, the average global RMSD of the respective top-ranked loops to the original loops is benchmarked to be <2 Å for loops of up to six residues, or <3 Å for loops shorter than 10 residues. Other suitable conformations may be selected and directly visualized on the web server from a top-50 list. For user guidance, the sequence homology between the template and the original sequence, proline or glycine exchanges, and close contacts between a loop candidate and the remainder of the protein are denoted. For membrane proteins, the expansions of the lipid bilayer are automatically modeled using the TMDET algorithm. This allows the user to select the optimal membrane protein loop with respect to its relative orientation to the lipid bilayer. The server has been online since October 2007 and can be freely accessed at URL: http://bioinformatics.charite.de/superlooper/ PMID:19429894

  17. SuperLooper--a prediction server for the modeling of loops in globular and membrane proteins.

    PubMed

    Hildebrand, Peter W; Goede, Andrean; Bauer, Raphael A; Gruening, Bjoern; Ismer, Jochen; Michalsky, Elke; Preissner, Robert

    2009-07-01

    SuperLooper provides the first online interface for the automatic, quick and interactive search and placement of loops in proteins (LIP). A database containing half a billion segments of water-soluble proteins with lengths up to 35 residues can be screened for candidate loops. Alternatively, a specified database containing 180,000 membrane loops in proteins (LIMP) can be searched. Loop candidates are scored based on sequence criteria and the root mean square deviation (RMSD) of the stem atoms. Searching LIP, the average global RMSD of the respective top-ranked loops to the original loops is benchmarked to be <2 Å for loops of up to six residues, or <3 Å for loops shorter than 10 residues. Other suitable conformations may be selected and directly visualized on the web server from a top-50 list. For user guidance, the sequence homology between the template and the original sequence, proline or glycine exchanges, and close contacts between a loop candidate and the remainder of the protein are denoted. For membrane proteins, the expansions of the lipid bilayer are automatically modeled using the TMDET algorithm. This allows the user to select the optimal membrane protein loop with respect to its relative orientation to the lipid bilayer. The server has been online since October 2007 and can be freely accessed at URL: http://bioinformatics.charite.de/superlooper/.
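    The stem-atom RMSD used above to score loop candidates is a straightforward calculation once the candidate is placed in the target frame. A minimal sketch, assuming the coordinates are already superposed (no fitting step) and using made-up coordinates:

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equal-length lists of
    (x, y, z) coordinates, assumed to be in the same frame."""
    assert len(coords_a) == len(coords_b)
    sq = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(sq / len(coords_a))

# Two toy "stem" atoms, each displaced by 1 Å along z in the candidate.
stem_target = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0)]
stem_candidate = [(0.0, 0.0, 1.0), (3.8, 0.0, 1.0)]
print(round(rmsd(stem_target, stem_candidate), 3))  # → 1.0
```

    A low stem RMSD means the database segment's anchor geometry matches the gap in the target structure, so the loop can be grafted in with little strain.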

  18. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.

  19. A Predictive Performance Model to Evaluate the Contention Cost in Application Servers

    SciTech Connect

    Chen, Shiping; Gorton, Ian

    2002-12-04

    In multi-tier enterprise systems, application servers are key components that implement business logic and provide application services. To support a large number of simultaneous accesses from clients over the Internet and intranet, most application servers use replication and multi-threading to handle concurrent requests. While multiple processes and multiple threads enhance the processing bandwidth of servers, they also increase the contention for resources in application servers. This paper investigates this issue empirically based on a middleware benchmark. A cost model is proposed to estimate the overall performance of application servers, including the contention overhead. This model is then used to determine the optimal degree of concurrency of application servers for a specific client load. A case study based on CORBA is presented to validate our model and demonstrate its application.
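    The trade-off described above (more threads add bandwidth but also contention) can be illustrated with a toy cost model. This is a sketch of the general idea, not the paper's actual model: here the per-request time inflates quadratically with concurrency, and the optimal thread count is whatever maximizes throughput.

```python
def throughput(threads, service_ms=10.0, contention_ms=0.5):
    """Requests/second for a given thread count, with a hypothetical
    quadratic contention penalty added to the base service time."""
    per_request = service_ms + contention_ms * (threads - 1) ** 2
    return threads * 1000.0 / per_request

def optimal_threads(max_threads=64):
    """Pick the concurrency level that maximizes modeled throughput."""
    return max(range(1, max_threads + 1), key=throughput)

best = optimal_threads()
print(best, round(throughput(best), 1))  # → 5 277.8
```

    With these (made-up) parameters, throughput rises up to five threads and then falls as contention dominates, which is the qualitative behavior the paper's model is designed to capture and calibrate from benchmark data.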

  20. The data model of a PACS-based DICOM radiation therapy server

    NASA Astrophysics Data System (ADS)

    Law, Maria Y. Y.; Huang, H. K.; Zhang, Xiaoyan; Zhang, Jianguo

    2003-05-01

    Radiotherapy (RT) requires information and images from both diagnostic and treatment equipment. Standards for radiotherapy information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions). However, the contents of these objects require the incorporation of the RT workflow in a logical sequence. The first step is to trace the RT workflow. The second step is to direct all images and related information in their corresponding DICOM-RT objects into a DICOM RT server and then ultimately to an RT application server. Methods: In our design, the DICOM RT server was based on a PACS data model. The data model can be translated to a web-based server, with an RT application server built on top of the web server. In the process, the contents of each of the DICOM-RT objects were customized for the RT display windows. Results: Six display windows were designed and the data model in the RT application server was developed. The images and related information were grouped into the seven DICOM-RT objects in the sequence of their procedures, and customized for the display windows. This is an important step in organizing the data model in the application server for radiation therapy. Conclusion: Radiation therapy workflow study is a prerequisite for data model design that can enhance image-based healthcare delivery.

  1. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, the proliferation of electronic commerce services makes it challenging to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantees in electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. A robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.
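    Adaptive admission control of the kind referenced above is, at its simplest, a feedback loop: admit fewer requests when measured response time exceeds the target, more when there is slack. The sketch below is an illustrative proportional controller, not the paper's robust adaptive scheme; the gain, bounds, and measurements are hypothetical.

```python
def adjust_admission(rate, measured_ms, target_ms, gain=0.001):
    """Return a new admission fraction in [0.1, 1.0], nudged so that
    measured response time tracks the target."""
    rate -= gain * (measured_ms - target_ms)   # admit less when too slow
    return min(1.0, max(0.1, rate))

rate = 1.0
for measured in (250, 220, 180, 140):          # server recovering from overload
    rate = adjust_admission(rate, measured, target_ms=100)
    print(round(rate, 3))
```

    Each control step sheds a little more load while the server is overloaded; a full scheme like the paper's also has to handle model uncertainty and disturbances, which a fixed-gain loop like this does not.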

  2. Performance model of the Argonne Voyager multimedia server

    SciTech Connect

    Disz, T.; Olson, R.; Stevens, R.

    1997-07-01

    The Argonne Voyager Multimedia Server is being developed in the Futures Lab of the Mathematics and Computer Science Division at Argonne National Laboratory. As a network-based service for recording and playing multimedia streams, it is important that the Voyager system be capable of sustaining certain minimal levels of performance in order for it to be a viable system. In this article, the authors examine the performance characteristics of the server. As they examine the architecture of the system, they try to determine where bottlenecks lie, show actual vs potential performance, and recommend areas for improvement through custom architectures and system tuning.

  3. BeEP Server: using evolutionary information for quality assessment of protein structure models

    PubMed Central

    Palopoli, Nicolas; Lanzarotti, Esteban; Parisi, Gustavo

    2013-01-01

    The BeEP Server (http://www.embnet.qb.fcen.uba.ar/embnet/beep.php) is an online resource aimed at helping in the endgame of protein structure prediction. It is able to rank submitted structural models of a protein through an explicit use of evolutionary information, a criterion differing from the structural or energetic considerations commonly used in other assessment programs. The idea behind BeEP (Best Evolutionary Pattern) is to benefit from the substitution pattern derived from structural constraints present in a set of homologous proteins adopting a given protein conformation. The BeEP method uses a model of protein evolution that takes into account the structure of a protein to build site-specific substitution matrices. The suitability of these substitution matrices is assessed through maximum likelihood calculations from which position-specific and global scores can be derived. These scores estimate how well the structural constraints derived from each structural model are represented in a sequence alignment of homologous proteins. Our assessment on a subset of proteins from the Critical Assessment of techniques for protein Structure Prediction (CASP) experiment has shown that BeEP is capable of discriminating the models and selecting one or more native-like structures. Moreover, BeEP is not explicitly parameterized to find structural similarities between models and given targets, potentially helping to explore the conformational ensemble of the native state. PMID:23729471

  4. BeEP Server: Using evolutionary information for quality assessment of protein structure models.

    PubMed

    Palopoli, Nicolas; Lanzarotti, Esteban; Parisi, Gustavo

    2013-07-01

    The BeEP Server (http://www.embnet.qb.fcen.uba.ar/embnet/beep.php) is an online resource aimed to help in the endgame of protein structure prediction. It is able to rank submitted structural models of a protein through an explicit use of evolutionary information, a criterion differing from structural or energetic considerations commonly used in other assessment programs. The idea behind BeEP (Best Evolutionary Pattern) is to benefit from the substitution pattern derived from structural constraints present in a set of homologous proteins adopting a given protein conformation. The BeEP method uses a model of protein evolution that takes into account the structure of a protein to build site-specific substitution matrices. The suitability of these substitution matrices is assessed through maximum likelihood calculations from which position-specific and global scores can be derived. These scores estimate how well the structural constraints derived from each structural model are represented in a sequence alignment of homologous proteins. Our assessment on a subset of proteins from the Critical Assessment of techniques for protein Structure Prediction (CASP) experiment has shown that BeEP is capable of discriminating the models and selecting one or more native-like structures. Moreover, BeEP is not explicitly parameterized to find structural similarities between models and given targets, potentially helping to explore the conformational ensemble of the native state.

  5. RHIC injector complex online model status and plans

    SciTech Connect

    Schoefer,V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.

    2009-05-04

An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC [2]) as the model engine.

  6. Cooperative Server Clustering for a Scalable GAS Model on Petascale Cray XT5 Systems

    SciTech Connect

    Yu, Weikuan; Que, Xinyu; Tipparaju, Vinod; Graham, Richard L; Vetter, Jeffrey S

    2010-05-01

    Global Address Space (GAS) programming models are attractive because they retain the easy-to-use addressing model that is the characteristic of shared-memory style load and store operations. The scalability of GAS models depends directly on the design and implementation of runtime libraries on the targeted platforms. In this paper, we examine the memory requirement of a popular GAS run-time library, Aggregate Remote Memory Copy Interface (ARMCI) on petascale Cray XT 5 systems. Then we describe a new technique, cooperative server clustering, that enhances the memory scalability of ARMCI communication servers. In cooperative server clustering, ARMCI servers are organized into clusters, and cooperatively process incoming communication requests among them. A request intervention scheme is also designed to expedite the return of responses to the initiating processes. Our experimental results demonstrate that, with very little impact on ARMCI communication latency and bandwidth, cooperative server clustering is able to significantly reduce the memory requirement of ARMCI communication servers, thereby enabling highly scalable scientific applications. In particular, it dramatically reduces the total execution time of a scientific application, NWChem, by 45% on 2400 processes.

  7. Cooperative Server Clustering for a Scalable GAS Model on petascale cray XT5 Systems

    SciTech Connect

    Yu, Weikuan; Que, Xinyu; Graham, Richard L; Vetter, Jeffrey S

    2010-01-01

Global Address Space (GAS) programming models are attractive because they retain the easy-to-use addressing model that is the characteristic of shared-memory style load and store operations. The scalability of GAS models depends directly on the design and implementation of runtime libraries on the targeted platforms. In this paper, we examine the memory requirement of a popular GAS runtime library, Aggregate Remote Memory Copy Interface (ARMCI), on petascale Cray XT5 systems. Then we describe a new technique, cooperative server clustering, that enhances the memory scalability of ARMCI communication servers. In cooperative server clustering, ARMCI servers are organized into clusters, and cooperatively process incoming communication requests among them. A request intervention scheme is also designed to expedite the return of responses to the initiating processes. Our experimental results demonstrate that, with very little impact on ARMCI communication latency and bandwidth, cooperative server clustering is able to significantly reduce the memory requirement of ARMCI communication servers, thereby enabling highly scalable scientific applications. In particular, it dramatically reduces the total execution time of a scientific application, NWChem, by 45% on 2400 processes.

  8. Sharing limited Ethernet resources with a client-server model

    NASA Astrophysics Data System (ADS)

    Brownless, D. M.; Burton, P. D.

    1994-12-01

    The new control system proposed for the ISIS facility at Rutherford uses an Ethernet spine to provide mutual communications between disparate equipment, including the control computers. This paper describes the limitations imposed on the use of Ethernet in Local/Wide Area Networks and how a client-server based system can be used to circumvent them. The actual system we developed is discussed with particular reference to the problems we have faced, implementing data standards and the performance statistics attained.

  9. A distributed clients/distributed servers model for STARCAT

    NASA Technical Reports Server (NTRS)

    Pirenne, B.; Albrecht, M. A.; Durand, D.; Gaudet, S.

    1992-01-01

STARCAT, the Space Telescope ARchive and CATalogue user interface, has been around for a number of years. During this time it has been enhanced and augmented in a number of different fields. Here, we would like to dwell on a new capability allowing geographically distributed user interfaces to connect to geographically distributed data servers. This new concept permits users anywhere on the Internet running STARCAT on their local hardware to access, e.g., whichever of the 3 existing HST archive sites is available, to get information on the CFHT archive through a transparent connection to the CADC in BC, or to get the La Silla weather by connecting to the ESO database in Munich, all during the same session. Similarly, PreView (or quick look) images and spectra will also flow directly to the user from wherever they are available. Moving towards an 'X'-based STARCAT is another goal being pursued: a graphic/image server and a help/doc server are currently being added. They should further enhance user independence and access transparency.
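The ability to connect to "whichever of the 3 existing HST archive sites is available" amounts to ordered failover across mirrors. A minimal sketch of that selection logic, with placeholder host names and an injected connect function standing in for the real network code:

```python
def connect_first_available(servers, try_connect):
    """Return (host, session) for the first archive site that answers.

    `servers` is an ordered preference list; `try_connect` stands in for
    the real network layer and must raise ConnectionError on failure.
    """
    for host in servers:
        try:
            return host, try_connect(host)
        except ConnectionError:
            continue
    raise ConnectionError("no archive site reachable")

# Placeholder site names, not the actual archive hosts.
SITES = ["hst1.example.edu", "hst2.example.edu", "hst3.example.edu"]

def fake_connect(host):              # simulate: the first mirror is down
    if host == "hst1.example.edu":
        raise ConnectionError(host)
    return f"session@{host}"

chosen, session = connect_first_available(SITES, fake_connect)
```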

  10. Impact of malicious servers over trust and reputation models in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Verma, Vinod Kumar; Singh, Surinder; Pathak, N. P.

    2016-03-01

This article deals with the impact of malicious servers on different trust and reputation models in wireless sensor networks. First, we analysed five trust and reputation models, namely BTRM-WSN, EigenTrust, PeerTrust, PowerTrust and the linguistic fuzzy trust model. Further, we proposed a wireless sensor network design for the optimisation of these models. Finally, the influence of malicious servers on the behaviour of the above-mentioned trust and reputation models is discussed. Statistical analysis has been carried out to prove the validity of our proposal.
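The common core of such reputation models — clients rate transactions, a server's trust is an aggregate of its ratings, and clients then prefer the highest-trust server — can be illustrated with a toy simulation. The failure rates and the simple mean-rating aggregate below are our own assumptions for illustration, not parameters from the article or from any of the five named models.

```python
import random

def simulate(n_servers, frac_malicious, n_transactions, seed=1):
    """Toy reputation system: each transaction is rated 1 (satisfied) or 0;
    a server's trust is its mean rating. Malicious servers (the first
    `n_bad` indices) fail 90% of the time, honest ones 5% -- invented rates.
    Returns True if the top-trust server turns out to be honest."""
    rng = random.Random(seed)
    n_bad = int(n_servers * frac_malicious)
    ratings = [[] for _ in range(n_servers)]
    for _ in range(n_transactions):
        s = rng.randrange(n_servers)
        p_fail = 0.9 if s < n_bad else 0.05
        ratings[s].append(0 if rng.random() < p_fail else 1)
    trust = [sum(r) / len(r) if r else 0.5 for r in ratings]
    best = max(range(n_servers), key=lambda s: trust[s])
    return best >= n_bad          # honest servers live at indices >= n_bad

honest_pick = simulate(n_servers=20, frac_malicious=0.3, n_transactions=2000)
```

With 30% malicious servers and enough transactions, the rating gap makes the top-trust pick reliably honest; the article studies how the five real models degrade as that fraction grows.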

  11. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    NASA Astrophysics Data System (ADS)

    Stepanov, Sergey

    2013-03-01

X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data-fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
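Automated access of the kind mentioned — user software calling the server as a fitting engine — typically reduces to constructing the same HTTP request a browser form would send. A sketch of building such a request URL with the standard library; the host, script path and parameter names here are invented placeholders, since the real server defines its own form fields for each program.

```python
from urllib.parse import urlencode, urlunsplit

def build_server_request(host, script, params):
    """Build a GET URL for a hypothetical server-side calculation script.

    All names below are illustrative only; consult the actual server's
    form documentation for its real fields.
    """
    query = urlencode(params)
    return urlunsplit(("https", host, script, query, ""))

url = build_server_request(
    "x-server.example.org",           # placeholder host, not the real server
    "/cgi/ter_form.pl",               # hypothetical script path
    {"wavelength": 1.54, "substrate": "GaAs", "scan": "theta-2theta"},
)
```

The resulting URL can then be fetched with `urllib.request.urlopen` (or any HTTP client) and the returned page parsed for the computed curve.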

  12. A New Web-based Application Optimization Model in Multicore Web Server

    NASA Astrophysics Data System (ADS)

    You, Guohua; Zhao, Ying

More and more web servers adopt multi-core CPUs to improve performance as multi-core technology develops. However, web applications cannot exploit the potential of multi-core web servers efficiently because of the traditional request-processing algorithms and thread-scheduling strategies in the O/S. In this paper, a new web-based application optimization model is proposed, which classifies and schedules dynamic and static requests on a scheduling core and processes the dynamic requests on the other cores. Based on this model, a simulation program called SIM was developed. Experiments have been done to validate the new model, and the results show that it can effectively improve the performance of multi-core web servers and avoid the ping-pong effect.
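The core idea — one core classifies incoming requests and keeps the cheap static ones for itself while farming dynamic ones out to the remaining cores — can be sketched as follows. The classification by file extension and the round-robin dispatch are our own simplifications of the paper's scheme.

```python
from collections import deque
from itertools import cycle

STATIC_EXTENSIONS = {".html", ".css", ".js", ".png", ".jpg"}

def is_static(path):
    """Classify a request by its path extension (a simplifying assumption)."""
    return any(path.endswith(ext) for ext in STATIC_EXTENSIONS)

class SchedulingCore:
    """Toy version of the model: one scheduling core classifies requests,
    queues static ones for itself, and hands dynamic ones to the worker
    cores in round-robin order."""

    def __init__(self, n_worker_cores):
        self.static_queue = deque()
        self.worker_queues = [deque() for _ in range(n_worker_cores)]
        self._next = cycle(range(n_worker_cores))

    def dispatch(self, path):
        if is_static(path):
            self.static_queue.append(path)
        else:
            self.worker_queues[next(self._next)].append(path)

core = SchedulingCore(n_worker_cores=3)
for req in ["/index.html", "/search.php", "/logo.png", "/cart.php", "/api/user"]:
    core.dispatch(req)
```

Pinning each request class to fixed cores is what avoids the ping-pong effect the paper mentions: a request's working set stays in one core's cache instead of migrating with the thread.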

  13. THE RHIC/AGS ONLINE MODEL ENVIRONMENT: DESIGN AND OVERVIEW.

    SciTech Connect

    SATOGATA,T.; BROWN,K.; PILAT,F.; TAFTI,A.A.; TEPIKIAN,S.; VAN ZEIJTS,J.

    1999-03-29

    An integrated online modeling environment is currently under development for use by AGS and RHIC physicists and commissioners. This environment combines the modeling efforts of both groups in a CDEV [1] client-server design, providing access to expected machine optics and physics parameters based on live and design machine settings. An abstract modeling interface has been designed as a set of adapters [2] around core computational modeling engines such as MAD and UAL/Teapot++ [3]. This approach allows us to leverage existing survey, lattice, and magnet infrastructure, as well as easily incorporate new model engine developments. This paper describes the architecture of the RHIC/AGS modeling environment, including the application interface through CDEV and general tools for graphical interaction with the model using Tcl/Tk. Separate papers at this conference address the specifics of implementation and modeling experience for AGS and RHIC.

  14. viwish: a visualization server for protein modelling and docking.

    PubMed

    Klein, T; Ackermann, F; Posch, S

    1996-12-12

A visualization tool for proteins, viwish, based on the Tcl command language has been developed. The system is completely menu driven and can display arbitrarily many proteins in arbitrarily many windows. It is instantly usable, even for non-computer experts, and provides possibilities to modify menus, configurations, and windows. It may be used as a stand-alone molecular graphics package or as a graphics server for external programs. Communication with these client applications is established even across different machines (through the send command of Tk, an extension of Tcl). In addition, a wide range of chemical data such as molecular surfaces and 3D gridded samplings of chemical features can be displayed. The system is therefore especially useful for the development of algorithms that need visualization. viwish is distributed freely, including the source code.

  15. DNA analysis servers: plot.it, bend.it, model.it and IS.

    PubMed

    Vlahovicek, Kristian; Kaján, László; Pongor, Sándor

    2003-07-01

    The WWW servers at http://www.icgeb.trieste.it/dna/ are dedicated to the analysis of user-submitted DNA sequences; plot.it creates parametric plots of 45 physicochemical, as well as statistical, parameters; bend.it calculates DNA curvature according to various methods. Both programs provide 1D as well as 2D plots that allow localisation of peculiar segments within the query. The server model.it creates 3D models of canonical or bent DNA starting from sequence data and presents the results in the form of a standard PDB file, directly viewable on the user's PC using any molecule manipulation program. The recently established introns server allows statistical evaluation of introns in various taxonomic groups and the comparison of taxonomic groups in terms of length, base composition, intron type etc. The options include the analysis of splice sites and a probability test for exon-shuffling. PMID:12824394
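Parametric profiles of the plot.it/bend.it kind boil down to mapping each dinucleotide (or trinucleotide) step to a published physicochemical value and smoothing with a sliding window. The sketch below uses an invented "bendability" scale purely for illustration; the servers use curated data sets of 45 such parameters.

```python
# Invented per-dinucleotide values; NOT a published bendability scale.
TOY_BENDABILITY = {"AA": 0.1, "AT": 0.9, "TA": 0.8, "TT": 0.1,
                   "AC": 0.4, "CA": 0.5, "GC": 0.3, "CG": 0.6,
                   "AG": 0.4, "GA": 0.4, "CT": 0.4, "TC": 0.4,
                   "GG": 0.2, "CC": 0.2, "GT": 0.4, "TG": 0.5}

def profile(seq, window=3):
    """1D parametric profile: per-step values averaged over a sliding window."""
    values = [TOY_BENDABILITY[seq[i:i + 2]] for i in range(len(seq) - 1)]
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

p = profile("AATTA")   # 5-mer -> 4 dinucleotide steps
```

Peaks in such a profile are what the servers' 1D plots localise as "peculiar segments" within the query sequence.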

  16. QA-RecombineIt: a server for quality assessment and recombination of protein models

    PubMed Central

    Pawlowski, Marcin; Bogdanowicz, Albert; Bujnicki, Janusz M.

    2013-01-01

    QA-RecombineIt provides a web interface to assess the quality of protein 3D structure models and to improve the accuracy of models by merging fragments of multiple input models. QA-RecombineIt has been developed for protein modelers who are working on difficult problems, have a set of different homology models and/or de novo models (from methods such as I-TASSER or ROSETTA) and would like to obtain one consensus model that incorporates the best parts into one structure that is internally coherent. An advanced mode is also available, in which one can modify the operation of the fragment recombination algorithm by manually identifying individual fragments or entire models to recombine. Our method produces up to 100 models that are expected to be on the average more accurate than the starting models. Therefore, our server may be useful for crystallographic protein structure determination, where protein models are used for Molecular Replacement to solve the phase problem. To address the latter possibility, a special feature was added to the QA-RecombineIt server. The QA-RecombineIt server can be freely accessed at http://iimcb.genesilico.pl/qarecombineit/. PMID:23700309

  17. RosettaBackrub—a web server for flexible backbone protein structure modeling and design

    PubMed Central

    Lauck, Florian; Smith, Colin A.; Friedland, Gregory F.; Humphris, Elisabeth L.; Kortemme, Tanja

    2010-01-01

    The RosettaBackrub server (http://kortemmelab.ucsf.edu/backrub) implements the Backrub method, derived from observations of alternative conformations in high-resolution protein crystal structures, for flexible backbone protein modeling. Backrub modeling is applied to three related applications using the Rosetta program for structure prediction and design: (I) modeling of structures of point mutations, (II) generating protein conformational ensembles and designing sequences consistent with these conformations and (III) predicting tolerated sequences at protein–protein interfaces. The three protocols have been validated on experimental data. Starting from a user-provided single input protein structure in PDB format, the server generates near-native conformational ensembles. The predicted conformations and sequences can be used for different applications, such as to guide mutagenesis experiments, for ensemble-docking approaches or to generate sequence libraries for protein design. PMID:20462859

  18. THttpServer class in ROOT

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented in HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.

  19. Online modeling of the Fermilab accelerators

    SciTech Connect

    E. McCrory, O. Krivosheev, L. Michelotti and J-F. Ostiguy

    1999-11-22

Access through the Fermilab control system to beam physics models of the Fermilab accelerators has been implemented. The models run on Unix workstations, communicating with legacy VMS-based controls consoles via a relational database and TCP/IP. The client side (VMS) and the server side (Unix) are both implemented in object-oriented C++. The models allow scientists and operators in the control room to do beam physics calculations. Settings of real devices as input to the model are supported, and readings from beam diagnostics may be compared with model predictions.

  20. DelPhi Web Server: A comprehensive online suite for electrostatic calculations of biological macromolecules and their complexes.

    PubMed

    Sarkar, Subhra; Witham, Shawn; Zhang, Jie; Zhenirovskyy, Maxim; Rocchia, Walter; Alexov, Emil

    2013-01-01

Here we report a web server, the DelPhi web server, which utilizes the DelPhi program to calculate electrostatic energies and the corresponding electrostatic potential, ionic distributions and dielectric map. The server provides extra services to fix structural defects, such as missing atoms in the structural file, and allows for the generation of missing hydrogen atoms. The hydrogen placement and the corresponding DelPhi calculations can be done with user-selected force field parameters: Charmm22, Amber98 or OPLS. Upon completion of the calculations, the user is given the option to download the fixed and protonated structural file, together with the parameter and DelPhi output files for further analysis. Utilizing the Jmol viewer, the user can view the corresponding structural file, manipulate it and change the presentation. In addition, if the potential map is requested, the potential can be mapped onto the molecular surface. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  1. DelPhi Web Server: A comprehensive online suite for electrostatic calculations of biological macromolecules and their complexes

    PubMed Central

    Sarkar, Subhra; Witham, Shawn; Zhang, Jie; Zhenirovskyy, Maxim; Rocchia, Walter; Alexov, Emil

    2011-01-01

Here we report a web server, the DelPhi web server, which utilizes the DelPhi program to calculate electrostatic energies and the corresponding electrostatic potential, ionic distributions and dielectric map. The server provides extra services to fix structural defects, such as missing atoms in the structural file, and allows for the generation of missing hydrogen atoms. The hydrogen placement and the corresponding DelPhi calculations can be done with user-selected force field parameters: Charmm22, Amber98 or OPLS. Upon completion of the calculations, the user is given the option to download the fixed and protonated structural file, together with the parameter and DelPhi output files for further analysis. Utilizing the Jmol viewer, the user can view the corresponding structural file, manipulate it and change the presentation. In addition, if the potential map is requested, the potential can be mapped onto the molecular surface. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver. PMID:24683424

  2. Modeling Small Noncanonical RNA Motifs with the Rosetta FARFAR Server.

    PubMed

    Yesselman, Joseph D; Das, Rhiju

    2016-01-01

Noncanonical RNA motifs help define the vast complexity of RNA structure and function, and in many cases, these loops and junctions are on the order of only ten nucleotides in size. Unfortunately, despite their small size, there is no reliable method to determine the ensemble of lowest energy structures of junctions and loops at atomic accuracy. This chapter outlines straightforward protocols using a webserver for Rosetta Fragment Assembly of RNA with Full Atom Refinement (FARFAR) (http://rosie.rosettacommons.org/rna_denovo/submit) to model the 3D structure of small noncanonical RNA motifs for use in visualizing motifs and for further refinement or filtering with experimental data such as NMR chemical shifts. PMID:27665600

  3. Scientific Inquiry: A Model for Online Searching.

    ERIC Educational Resources Information Center

    Harter, Stephen P.

    1984-01-01

    Explores scientific inquiry as philosophical and behavioral model for online search specialist and information retrieval process. Nature of scientific research is described and online analogs to research concepts of variable, hypothesis formulation and testing, operational definition, validity, reliability, assumption, and cyclical nature of…

  4. A Model for Enhancing Online Course Development

    ERIC Educational Resources Information Center

    Knowles, Evelyn; Kalata, Kathleen

    2008-01-01

    In order to meet the growing demand for quality online education, Park University has adopted a model that provides a common framework for all of its online courses. Evelyn Knowles and Kathleen Kalata discuss the circumstances leading to the current system and describe the university's implementation of a course development process that ensures…

  5. An online educational atmospheric global circulation model

    NASA Astrophysics Data System (ADS)

    Navarro, T.; Schott, C.; Forget, F.

    2015-10-01

As part of online courses on exoplanets of the Observatoire de Paris, an online tool designed to visualize outputs of the Laboratoire de Météorologie Dynamique (LMD) Global Circulation Model (GCM) for various atmospheric circulation regimes has been developed. It includes the possibility for students to visualize 1D and 2D plots along with animations of atmospheric quantities such as temperature, winds, surface pressure, mass flux, etc. from a state-of-the-art model.

  6. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of the virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failures and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM, etc.) in a virtualized servers system. We also show numerical analysis on steady state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining both software rejuvenations at VM and VMM in a wise manner. PMID:25165732
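The quantities the paper reports — steady-state availability and downtime in hours per year — have a simple textbook special case worth keeping in mind as a sanity check: a single two-state (up/down) component. The SRN model captures far more structure (multiple VMs, Mandelbugs, aging, rejuvenation); the figures below are illustrative inputs, not numbers from the paper.

```python
HOURS_PER_YEAR = 8760.0

def steady_state_availability(mttf_hours, mttr_hours):
    """Classic two-state result: A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def downtime_hours_per_year(availability):
    """Expected annual downtime implied by a steady-state availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

# Illustrative inputs: a server that fails every ~2000 h and takes 2 h to repair.
a = steady_state_availability(mttf_hours=2000.0, mttr_hours=2.0)
d = downtime_hours_per_year(a)      # roughly 8.75 hours/year
```

Rejuvenation strategies of the kind the paper evaluates effectively raise the MTTF term (by pre-empting aging-related failures) at the cost of short, scheduled MTTR-like intervals.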

  7. How Much? Cost Models for Online Education.

    ERIC Educational Resources Information Center

    Lorenzo, George

    2001-01-01

    Reviews some of the research being done in the area of cost models for online education. Describes a cost analysis handbook; an activity-based costing model that was based on an economic model for traditional instruction at the Indiana University Purdue University Indianapolis; and blending other costing models. (LRW)

  8. Towards second-generation smart card-based authentication in health information systems: the secure server model.

    PubMed

    Hallberg, J; Hallberg, N; Timpka, T

    2001-01-01

    Conventional smart card-based authentication systems used in health care alleviate some of the security issues in user and system authentication. Existing models still do not cover all security aspects. To enable new protective measures to be developed, an extended model of the authentication process is presented. This model includes a new entity referred to as secure server. Assuming a secure server, a method where the smart card is aware of the status of the terminal integrity verification becomes feasible. The card can then act upon this knowledge and restrict the exposure of sensitive information to the terminal as required in order to minimize the risks. The secure server model can be used to illuminate the weaknesses of current approaches and the need for extensions which alleviate the resulting risks.

  9. CaspR: a web server for automated molecular replacement using homology modelling

    PubMed Central

    Claude, Jean-Baptiste; Suhre, Karsten; Notredame, Cédric; Claverie, Jean-Michel; Abergel, Chantal

    2004-01-01

    Molecular replacement (MR) is the method of choice for X-ray crystallography structure determination when structural homologues are available in the Protein Data Bank (PDB). Although the success rate of MR decreases sharply when the sequence similarity between template and target proteins drops below 35% identical residues, it has been found that screening for MR solutions with a large number of different homology models may still produce a suitable solution where the original template failed. Here we present the web tool CaspR, implementing such a strategy in an automated manner. On input of experimental diffraction data, of the corresponding target sequence and of one or several potential templates, CaspR executes an optimized molecular replacement procedure using a combination of well-established stand-alone software tools. The protocol of model building and screening begins with the generation of multiple structure–sequence alignments produced with T-COFFEE, followed by homology model building using MODELLER, molecular replacement with AMoRe and model refinement based on CNS. As a result, CaspR provides a progress report in the form of hierarchically organized summary sheets that describe the different stages of the computation with an increasing level of detail. For the 10 highest-scoring potential solutions, pre-refined structures are made available for download in PDB format. Results already obtained with CaspR and reported on the web server suggest that such a strategy significantly increases the fraction of protein structures which may be solved by MR. Moreover, even in situations where standard MR yields a solution, pre-refined homology models produced by CaspR significantly reduce the time-consuming refinement process. We expect this automated procedure to have a significant impact on the throughput of large-scale structural genomics projects. CaspR is freely available at http://igs-server.cnrs-mrs.fr/Caspr/. PMID:15215460

  10. BioDataServer: a SQL-based service for the online integration of life science data.

    PubMed

    Freier, Andreas; Hofestädt, Ralf; Lange, Matthias; Scholz, Uwe; Stephanik, Andreas

    2002-01-01

Regarding molecular biology, we see an exponential growth of data and knowledge. Among others, this fact is reflected in more than 300 molecular databases which are readily available on the Internet. The usage of these data requires integration tools capable of complex information fusion processes. This paper presents a novel concept for user-specific integration of life science data. Our approach is based on a mediator architecture in conjunction with freely adjustable data schemes. The implemented prototype is called BioDataServer and can be accessed on the Internet: http://integration.genophen.de. To allow comfortable use of the data sets resulting from the integration process, a SQL-based query language and an XML data format were developed and implemented.
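A mediator in this style pulls tables from several source databases into one integration schema that user SQL then runs against. A minimal self-contained sketch using in-memory SQLite as both the sources and the mediator; the table and column names are invented for illustration and have nothing to do with BioDataServer's actual schemas.

```python
import sqlite3

# Two toy "source databases" standing in for remote life-science resources.
genes_db = sqlite3.connect(":memory:")
genes_db.execute("CREATE TABLE gene (id TEXT, name TEXT)")
genes_db.executemany("INSERT INTO gene VALUES (?, ?)",
                     [("g1", "TP53"), ("g2", "BRCA1")])

path_db = sqlite3.connect(":memory:")
path_db.execute("CREATE TABLE pathway (gene_id TEXT, pathway TEXT)")
path_db.executemany("INSERT INTO pathway VALUES (?, ?)",
                    [("g1", "apoptosis"), ("g2", "DNA repair")])

def integrate(sources):
    """Copy every table from every source into one mediator database."""
    mediator = sqlite3.connect(":memory:")
    for db in sources:
        for (name,) in db.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"):
            cols = [c[1] for c in db.execute(f"PRAGMA table_info({name})")]
            mediator.execute(f"CREATE TABLE {name} ({', '.join(cols)})")
            rows = db.execute(f"SELECT * FROM {name}").fetchall()
            mediator.executemany(
                f"INSERT INTO {name} VALUES ({', '.join('?' * len(cols))})",
                rows)
    return mediator

# A cross-source SQL query now joins data that lived in separate databases.
m = integrate([genes_db, path_db])
result = m.execute("""SELECT g.name, p.pathway FROM gene g
                      JOIN pathway p ON g.id = p.gene_id
                      ORDER BY g.name""").fetchall()
```

The real system additionally handles heterogeneous remote sources and user-adjustable schemas; this materialising copy step is only the simplest possible mediator.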

  11. Four Models of On-Line Teaching.

    ERIC Educational Resources Information Center

    Roberts, Tim S.; Jones, David Thomas

    Four models of online teaching are currently being used within the Faculty of Informatics and Communications at Central Queensland University in Australia. The naive model, which is the most widely used, may be characterized as "putting lecture notes on the World Wide Web" with no opportunities for interaction or feedback. The standard model…

  12. HPC Server Performance and Power Consumption for Atmospheric Modeling on GPUs Configured with Different CPU Platforms

    NASA Astrophysics Data System (ADS)

    Posey, Stan; Messmer, Peter; Appleyard, Jeremy

    2015-04-01

Current trends in high performance computing (HPC) are moving towards the use of graphics processing units (GPUs) to achieve speedups through the extraction of fine-grain parallelism of application software. GPUs have been developed exclusively for computational tasks as massively-parallel co-processors to the CPU, and as of 2014 the latest NVIDIA GPU architecture can operate with as many as three CPU platforms. In addition to the conventional use of the x86 CPU architecture with GPUs starting in the mid-2000s, the POWER and ARM-64 architectures have recently become available as x86 alternatives. Today, computational efficiency and increased performance per energy-cost are key drivers behind HPC decisions to implement GPU-based servers for atmospheric modeling. The choice of a server CPU platform will influence performance and overall power consumption of a system, and also the available configurations of CPU-to-GPU ratio. It follows that such system design configurations continue to be a critical factor behind scientific decisions to implement models at higher resolutions and possibly with an increased use of ensembles. This presentation will examine the current state of GPU developments for atmospheric modeling with examples from the COSMO dycore and from various WRF physics, and for different CPU platforms. The examples provided will be relevant to science-scale HPC practice of CPU-GPU system configurations based on model resolution requirements of a particular simulation. Performance results will compare use of the latest available CPUs from the three available CPU architectures, both with and without GPU acceleration. Finally, an outlook is provided on GPU hardware, software, tools, and programmability for each of the available CPU platforms.

  13. Secure IRC Server

    2003-08-25

The IRCD is an IRC server that was originally distributed by the IRCD Hybrid developer team for use as a server for IRC messaging over the public Internet. By supporting the IRC protocol defined in the IRC RFC, IRCD allows users to create and join channels for group or one-to-one text-based instant messaging. It stores information about channels (e.g., whether a channel is public, secret, or invite-only, the topic set, membership) and users (who is online and what channels they are members of). It receives messages for a specific user or channel and forwards these messages to the targeted destination. Since server-to-server communication is also supported, these targeted destinations may be connected to different IRC servers. Messages are exchanged over TCP connections that remain open between the client and the server. The IRCD is being used within the Pervasive Computing Collaboration Environment (PCCE) as the 'chat server' for message exchange over public and private channels. After an LBNLSecureMessaging (PCCE chat) client has been authenticated, the client connects to IRCD with its assigned nickname or 'nick.' The client can then create or join channels for group discussions or one-to-one conversations. These channels can have an initial mode of public or invite-only, and the mode may be changed after creation. If a channel is public, anyone online can join the discussion; if a channel is invite-only, users can only join if existing members of the channel explicitly invite them. Users can be invited to any type of channel, and users may be members of multiple channels simultaneously. For use with the PCCE environment, the IRCD application (which was written in C) was ported to Linux and has been tested and installed under Linux Redhat 7.2. The source code was also modified with SSL so that all messages exchanged over the network are encrypted. This modified IRC server also verifies with an authentication server that the client is who he or she claims to be.
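The channel-membership rules described above (anyone may join a public channel; an invite-only channel requires a prior invitation from an existing member) can be modelled in a few lines. This is a behavioural sketch of the protocol rule, not IRCD's C implementation.

```python
class Channel:
    """Minimal model of IRC channel join semantics for public vs
    invite-only channels."""

    def __init__(self, name, invite_only=False):
        self.name = name
        self.invite_only = invite_only
        self.members = set()
        self.invited = set()

    def invite(self, by_nick, nick):
        """Only existing members may extend invitations."""
        if by_nick in self.members:
            self.invited.add(nick)

    def join(self, nick):
        """Refuse invite-only joins without a prior invitation
        (the case the IRC RFC reports as ERR_INVITEONLYCHAN, numeric 473)."""
        if self.invite_only and nick not in self.invited:
            return False
        self.members.add(nick)
        return True

pub = Channel("#lobby")
sec = Channel("#staff", invite_only=True)
sec.members.add("op")          # channel creator

pub.join("alice")              # public: anyone online may join
ok = sec.join("bob")           # refused: no invitation yet
sec.invite("op", "bob")
ok2 = sec.join("bob")          # accepted after the invitation
```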

  14. ACHESYM: an algorithm and server for standardized placement of macromolecular models in the unit cell

    PubMed Central

    Kowiel, Marcin; Jaskolski, Mariusz; Dauter, Zbigniew

    2014-01-01

    Despite the existence of numerous useful conventions in structural crystallography, for example for the choice of the asymmetric part of the unit cell or of reciprocal space, surprisingly no standards are in use for the placement of the molecular model in the unit cell, often leading to inconsistencies or confusion. A conceptual solution for this problem has been proposed for macromolecular crystal structures based on the idea of the anti-Cheshire unit cell. Here, a program and server (called ACHESYM; http://achesym.ibch.poznan.pl) are presented for the practical implementation of this concept. In addition, the first task of ACHESYM is to find an optimal (compact) macromolecular assembly if more than one polymer chain exists. ACHESYM processes PDB (atomic parameters and TLS matrices) and mmCIF (diffraction data) input files to produce a new coordinate set and to reindex the reflections and modify their phases, if necessary. PMID:25478846

  15. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1997-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  16. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1997-12-09

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  17. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1996-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  18. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1999-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  19. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1996-08-06

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  20. Technology and Online Education: Models for Change

    ERIC Educational Resources Information Center

    Cook, Catherine W.; Sonnenberg, Christian

    2014-01-01

    This paper contends that technology changes advance online education. A number of mobile computing and transformative technologies will be examined and incorporated into a descriptive study. The object of the study will be to design innovative mobile awareness models seeking to understand technology changes for mobile devices and how they can be…

  1. Teaching Online: A Professional Development Model.

    ERIC Educational Resources Information Center

    McCallie, Trey; McKinzie, LeAnn

    This paper describes the faculty training model utilized in the development and/or conversion of course materials to be delivered on the World Wide Web. A description of the online learning environment (WTOnline) is provided, as well as the process by which faculty members in the West Texas A&M University College of Education interact with that…

  2. Structural modeling of G-protein coupled receptors: An overview on automatic web-servers.

    PubMed

    Busato, Mirko; Giorgetti, Alejandro

    2016-08-01

    Despite the significant efforts and discoveries during the last few years in G protein-coupled receptor (GPCR) expression and crystallization, the receptors with known structures to date are limited to only a small fraction of human GPCRs. The lack of experimental three-dimensional structures of the receptors is a strong limitation that hampers a deep understanding of their function. Computational techniques are thus a valid alternative strategy to model three-dimensional structures. Indeed, recent advances in the field, together with extraordinary developments in crystallography, in particular its ability to capture GPCRs in different activation states, have led to encouraging results in the generation of accurate models. This has prompted the community of modelers to render their methods publicly available through dedicated databases and web-servers. Here, we present an extensive overview of these services, focusing on their advantages, drawbacks, and their role in successful applications. Future challenges in the field of GPCR modeling, such as the prediction of long loop regions and the modeling of receptor activation states, are presented as well.

  4. Drug-target interaction prediction: databases, web servers and computational models.

    PubMed

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based and machine learning-based methods. In particular, for the machine learning-based methods, much attention is paid to supervised and semi-supervised models, which differ essentially in their adoption of negative samples. Although significant improvements in drug-target interaction prediction have been obtained with many effective computational models, both network-based and machine learning-based methods have their respective disadvantages. Furthermore, we discuss future directions of network-based drug discovery and network approaches for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression problem based on quantitative bioactivity data.

  5. Aviation System Analysis Capability Quick Response System Report Server User's Guide

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen R.; Villani, James A.; Wingrove, Earl R., III

    1996-01-01

    This report is a user's guide for the Aviation System Analysis Capability Quick Response System (ASAC QRS) Report Server. The ASAC QRS is an automated online capability to access selected ASAC models and data repositories. It supports analysis by the aviation community. This system was designed by the Logistics Management Institute for the NASA Ames Research Center. The ASAC QRS Report Server allows users to obtain information stored in the ASAC Data Repositories.

  6. Online Instructors as Thinking Advisors: A Model for Online Learner Adaptation

    ERIC Educational Resources Information Center

    Benedetti, Christopher

    2015-01-01

    This article examines the characteristics and challenges of online instruction and presents a model for improving learner adaptation in an online classroom. Instruction in an online classroom presents many challenges, including learner individualization. Individual differences in learning styles and preferences are often not considered in the…

  7. The HADDOCK2.2 Web Server: User-Friendly Integrative Modeling of Biomolecular Complexes.

    PubMed

    van Zundert, G C P; Rodrigues, J P G L M; Trellet, M; Schmitz, C; Kastritis, P L; Karaca, E; Melquiond, A S J; van Dijk, M; de Vries, S J; Bonvin, A M J J

    2016-02-22

    The prediction of the quaternary structure of biomolecular macromolecules is of paramount importance for fundamental understanding of cellular processes and drug design. In the era of integrative structural biology, one way of increasing the accuracy of modeling methods used to predict the structure of biomolecular complexes is to include as much experimental or predictive information as possible in the process. This has been at the core of our information-driven docking approach HADDOCK. We present here the updated version 2.2 of the HADDOCK portal, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface. With well over 6000 registered users and 108,000 jobs served, an increasing fraction of which on grid resources, we hope that this timely upgrade will help the community to solve important biological questions and further advance the field. The HADDOCK2.2 Web server is freely accessible to non-profit users at http://haddock.science.uu.nl/services/HADDOCK2.2.

  9. Incorporating 3-dimensional models in online articles

    PubMed Central

    Cevidanes, Lucia H. S.; Ruellasa, Antonio C. O.; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-01-01

    Introduction: The aims of this article were to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods: Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results: All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article’s online version for viewing and downloading using the reader’s software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader’s software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions: When submitting manuscripts, authors can

  10. An evolving model of online bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang

    2013-12-01

    Understanding the structure and evolution of online bipartite networks is a significant task since they play a crucial role in various e-commerce services nowadays. Recently, various attempts have been made to propose different models, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power-law distribution, the so-called Mandelbrot’s law, which cannot be fully described by previous models. In this paper, we propose an evolving model that considers two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and that the region of the power-law tail increases with p. The proposed model might shed some light on the underlying laws governing the structure of real online bipartite networks.
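The hybrid attachment rule summarized above can be sketched in a few lines. This is our own simplified variant, not the authors' exact model: each arriving user links to objects chosen preferentially by degree with probability p and uniformly at random otherwise, and all function and parameter names are invented for the illustration.

```python
import random

def grow_bipartite(n_users, n_objects, p, edges_per_user=3, seed=0):
    """Toy bipartite growth: each arriving user creates edges to objects
    chosen preferentially by degree (prob. p) or uniformly (prob. 1 - p).
    Returns the resulting object degree list."""
    rng = random.Random(seed)
    obj_degree = [0] * n_objects
    endpoints = []  # one entry per existing edge endpoint -> degree-proportional pick
    for _ in range(n_users):
        for _ in range(edges_per_user):
            if endpoints and rng.random() < p:
                obj = rng.choice(endpoints)      # preferential attachment
            else:
                obj = rng.randrange(n_objects)   # random attachment
            obj_degree[obj] += 1
            endpoints.append(obj)
    return obj_degree

degrees = grow_bipartite(n_users=500, n_objects=100, p=0.7)
```

Varying p between 0 and 1 interpolates between a roughly uniform degree profile and a heavy-tailed one, which is the intuition behind the shifted power law discussed in the record.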

  11. CovalentDock Cloud: a web server for automated covalent docking

    PubMed Central

    Ouyang, Xuchang; Zhou, Shuo; Ge, Zemei; Li, Runtao; Kwoh, Chee Keong

    2013-01-01

    Covalent binding is an important mechanism by which many drugs gain their function. We developed a computational algorithm to model this chemical event and extended it into a web server, the CovalentDock Cloud, to make it accessible directly online without any local installation and configuration. It provides a simple yet user-friendly web interface to perform covalent docking experiments and analysis online. The web server accepts the structures of both the ligand and the receptor uploaded by the user or retrieved from online databases with a valid access ID. It identifies the potential covalent binding patterns, carries out the covalent docking experiments and provides visualization of the results for user analysis. This web server is free and open to all users at http://docking.sce.ntu.edu.sg/. PMID:23677616

  12. Dynamic online sewer modelling in Helsingborg.

    PubMed

    Hernebring, C; Jönsson, L E; Thorén, U B; Møller, A

    2002-01-01

    Within the last decade, the sewer system in Helsingborg, Sweden has been rehabilitated in many ways along with the reconstruction of the WWTP Oresundsverket in order to obtain a high degree of nitrogen and phosphorus removal. In that context a holistic view has been applied in order to optimise the corrective measures as seen from the effects in the receiving waters. A sewer catchment model has been used to evaluate several operation strategies and the effect of introducing RTC. Recently, a MOUSE ONLINE system was installed. In this phase the objective is to establish a stable communication with the SCADA system and to generate short-term flow forecasts. PMID:11936663

  13. Exploring a New Model for Preprint Server: A Case Study of CSPO

    ERIC Educational Resources Information Center

    Hu, Changping; Zhang, Yaokun; Chen, Guo

    2010-01-01

    This paper describes the introduction of an open-access preprint server in China covering 43 disciplines. The system includes mandatory deposit for state-funded research. The paper reports on the repository and its effectiveness, and outlines a novel process of peer review of preprints in the repository, which can be incorporated into the established…

  14. EXpectation Propagation LOgistic REgRession (EXPLORER): Distributed Privacy-Preserving Online Model Learning

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Wu, Yuan; Cui, Lijuan; Cheng, Samuel; Ohno-Machado, Lucila

    2013-01-01

    We developed an EXpectation Propagation LOgistic REgRession (EXPLORER) model for distributed privacy-preserving online learning. The proposed framework provides a high-level guarantee for protecting sensitive information, since the information exchanged between the server and the client is the encrypted posterior distribution of coefficients. Through experimental results, EXPLORER shows the same performance (e.g., discrimination, calibration, feature selection, etc.) as the traditional frequentist logistic regression model, but provides more flexibility in model updating. That is, EXPLORER can be updated one point at a time rather than having to retrain on the entire data set when new observations are recorded. The proposed EXPLORER supports asynchronous communication, which relieves the participants from coordinating with one another, and prevents service breakdown from the absence of participants or interrupted communications. PMID:23562651
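The one-point-at-a-time updating highlighted above can be illustrated with a much simpler online learner. The sketch below uses plain stochastic gradient descent on the logistic loss, not the expectation-propagation scheme or the encryption layer of EXPLORER itself, and all names are ours.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OnlineLogisticRegression:
    """Illustrates one-observation-at-a-time updating (plain SGD here,
    not EXPLORER's expectation-propagation posterior updates)."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict_proba(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))

    def update(self, x, y):
        # Gradient step on the log-loss for a single observation (y in {0, 1});
        # no retraining over the full data set is needed.
        err = y - self.predict_proba(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]

model = OnlineLogisticRegression(n_features=2)
for x, y in [([1.0, 0.0], 1), ([0.0, 1.0], 0)] * 200:
    model.update(x, y)  # stream observations one at a time
```

In a distributed, privacy-preserving setting the quantity exchanged would be an encrypted posterior over `w` rather than the raw weights, as the record describes.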

  15. An Online Adaptive Model for Location Prediction

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Theodoros; Anagnostopoulos, Christos; Hadjiefthymiades, Stathes

    Context-awareness is viewed as one of the most important aspects of the emerging pervasive computing paradigm. Mobile context-aware applications are required to sense and react to changing environment conditions. Such applications usually need to recognize, classify and predict context in order to act efficiently, beforehand, for the benefit of the user. In this paper, we propose a mobility prediction model which deals with context representation and location prediction of moving users. Machine Learning (ML) techniques are used for trajectory classification. Spatial and temporal online clustering is adopted. We rely on Adaptive Resonance Theory (ART) for location prediction. Location prediction is treated as a context classification problem. We introduce a novel classifier that applies a Hausdorff-like distance over the extracted trajectories to handle location prediction. Since our approach is time-sensitive, the Hausdorff distance is considered more advantageous than a simple Euclidean norm. A learning method is presented and evaluated. We compare ART with Offline kMeans and Online kMeans algorithms. Our findings are very promising for the use of the proposed model in mobile context-aware applications.
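As a hedged sketch of the trajectory-matching idea above, the following computes the symmetric Hausdorff distance between two point sequences and uses it for nearest-prototype classification. This is a generic textbook implementation, not the authors' time-sensitive "Hausdorff-like" variant, and the trajectory names are invented.

```python
def euclidean(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def hausdorff(traj_a, traj_b, dist=euclidean):
    """Symmetric Hausdorff distance between two trajectories,
    each given as a sequence of points."""
    def directed(p, q):
        # Worst-case distance from a point of p to its nearest point in q.
        return max(min(dist(a, b) for b in q) for a in p)
    return max(directed(traj_a, traj_b), directed(traj_b, traj_a))

def classify(trajectory, prototypes):
    """Assign a trajectory to the nearest prototype (dict name -> trajectory)."""
    return min(prototypes, key=lambda name: hausdorff(trajectory, prototypes[name]))

# Two toy prototype trajectories and a classification of a partial new one.
t_east = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
t_north = [(0.0, 5.0), (1.0, 5.0), (2.0, 5.0)]
label = classify([(0.0, 0.1), (1.0, 0.1)], {"east": t_east, "north": t_north})
```

Because the Hausdorff distance compares whole point sets rather than endpoints, a partial trajectory can already be matched against stored prototypes, which is what makes it attractive for online location prediction.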

  16. Consumer's Online Shopping Influence Factors and Decision-Making Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiangbin; Dai, Shiliang

    Previous research on online consumer behavior has mostly been confined to perceived risk, which is used to explain the barriers to purchasing online. However, perceived benefit is another important factor that influences consumers’ decisions when shopping online. As a result, an integrated consumer online shopping decision-making model is developed that contains three elements—Consumer, Product, and Web Site. This model proposes the relative factors that influence consumers’ intentions during the online shopping process, and divides them into two different dimensions—the mental level and the material level. We tested those factors with surveys of both online volunteers and offline paper respondents, with more than 200 samples. With the help of SEM, the experimental results show that the proposed model and method can be used to analyze a consumer’s online shopping decision-making process effectively.

  17. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  18. Development of an integrated modelling framework: comparing client-server and demand-driven control flow for model execution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Karssenberg, Derek; de Jong, Kor; de Kok, Jean-Luc; de Jong, Steven M.

    2014-05-01

    The construction of hydrological models at the catchment or global scale depends on the integration of component models representing various environmental processes, often operating at different spatial and temporal discretisations. A flexible construction of spatio-temporal model components, a means to specify aggregation or disaggregation to bridge discretisation discrepancies, ease of coupling these into complex integrated models, and support for stochastic modelling and the assessment of model outputs are the desired functionalities for the development of integrated models. These functionalities are preferably combined into one modelling framework such that domain specialists can perform exploratory model development without the need to change their working environment. We implemented an integrated modelling framework in the Python programming language, providing support for 1) model construction and 2) model execution. The framework enables modellers to represent spatio-temporal processes or to specify spatio-temporal (dis)aggregation with map algebra operations provided by the PCRaster library. Model algebra operations can be used by the modeller to specify the exchange of data and therefore the coupling of components. The framework determines the control flow for the ordered execution based on the time steps and couplings of the model components given by the modeller. We implemented two different control flow mechanisms. First, a client-server approach is used, with a central entity controlling the execution of the component models and steering the data exchange. Second, a demand-driven approach is used, which triggers the execution of a component model when data is requested by a coupled component model. We show that both control flow mechanisms allow for the execution of stochastic, multi-scale integrated models. We examine the implications of each control flow mechanism on the terminology used by the modeller to specify integrated models, and illustrate the
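The two control flow mechanisms compared above can be caricatured with a pair of toy coupled components: a central scheduler that orders execution and moves data (client-server style), versus components that pull their inputs on demand. The class and function names below are illustrative only, not the framework's API.

```python
# Hypothetical two-component coupling: rainfall feeds a runoff component.

class Rainfall:
    def run(self, t):
        return 2.0 * t  # toy forcing: rain depth at time step t

class Runoff:
    def __init__(self, rainfall):
        self.rainfall = rainfall
    def run(self, t, rain=None):
        # Demand-driven variant: pull the input when it is not supplied.
        if rain is None:
            rain = self.rainfall.run(t)
        return 0.6 * rain  # toy runoff coefficient

def central_scheduler(rainfall, runoff, steps):
    """Client-server style: a central entity orders execution of each
    component and steers the data exchange between them."""
    out = []
    for t in range(steps):
        rain = rainfall.run(t)           # scheduler invokes the upstream model
        out.append(runoff.run(t, rain))  # and hands its output downstream
    return out

def demand_driven(runoff, steps):
    """Demand-driven style: requesting the last component's output
    triggers execution of everything it depends on."""
    return [runoff.run(t) for t in range(steps)]

rainfall = Rainfall()
runoff = Runoff(rainfall)
```

Both mechanisms produce identical results here; the difference lies in where the execution order is decided, which is exactly the design trade-off the record examines.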

  19. Oceanotron server for marine in-situ observations : a thematic data model implementation as a basis for the extensibility

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Donnart, J. C.; Bregent, S.; Blower, J.; Griffith, G.

    2012-04-01

    Oceanotron (https://forge.ifremer.fr/plugins/mediawiki/wiki/oceanotron/index.php/Accueil) is an open-source data server dedicated to the dissemination of marine in-situ observations. For its extensibility it relies on an ocean business data model. IFREMER hosts the CORIOLIS marine in-situ data centre (http://www.coriolis.eu.org) and, as the French NODC (National Oceanographic Data Centre, http://www.ifremer.fr/sismer/index_UK.htm), some other in-situ observation databases. As such, IFREMER participates in numerous ocean data management projects. IFREMER wished to capitalize on its thematic data management expertise in a dedicated data dissemination server called Oceanotron. The development of the server, coordinated by IFREMER, started in 2010. Given the diversity of data repository formats (RDBMS, netCDF, ODV, MEDATLAS, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpenDAP, …), the architecture of the software relies on an ocean business data model dedicated to marine in-situ observation features. The ocean business data model builds on the CSML conceptual modelling work (http://csml.badc.rl.ac.uk/) and the UNIDATA Common Data Model (http://www.unidata.ucar.edu/software/netcdf-java/CDM/), and focuses on the most common marine observation features, which are: vertical profiles, point series, trajectories and points. The ocean business data model has been implemented in Java and can be used as an API. The Oceanotron server orchestrates different types of modules handling the ocean business data model objects: StorageUnits, which read specific data repository formats (netCDF/OceanSites, netCDF/ARGO, ...); TransformationUnits, which apply useful ocean-business-related transformations to the features (for example conversion of vertical coordinates from pressure in dB to meters below the sea surface); and FrontDesks, which get external requests and send results for interoperable protocols (OpenDAP, WMS, ...). These

  20. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    SciTech Connect

    Valassi, A.; Bartoldus, R.; Kalkhof, A.; Salnikov, A.; Wache, M.; /Mainz U., Inst. Phys.

    2012-04-19

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.
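The proxy-tree idea described above, caching plus multiplexing in front of a database tier, can be sketched generically. The classes below are illustrative stand-ins, not CORAL's actual interfaces, and the query keys are invented.

```python
class DatabaseServer:
    """Stands in for the server tier deployed close to the database."""
    def __init__(self, rows):
        self.rows = rows
        self.queries_served = 0  # counts hits that actually reach the database

    def query(self, key):
        self.queries_served += 1
        return self.rows[key]

class CachingProxy:
    """Stands in for one node of a proxy tree: a read-through cache that
    multiplexes many clients onto a single upstream connection."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}

    def query(self, key):
        if key not in self.cache:
            # Cache miss: forward the request toward the database tier.
            self.cache[key] = self.upstream.query(key)
        return self.cache[key]

# Chaining proxies forms the tree: clients talk to the leaf proxy only.
server = DatabaseServer({"conditions/run42": "magnet=on"})
proxy = CachingProxy(CachingProxy(server))
```

Repeated reads of the same key by thousands of trigger processes then cost one upstream query instead of thousands, which is the load-reduction argument behind the proxy tier.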

  1. University Business Models and Online Practices: A Third Way

    ERIC Educational Resources Information Center

    Rubin, Beth

    2013-01-01

    Higher Education is in a state of change, and the existing business models do not meet the needs of stakeholders. This article contrasts the current dominant business models of universities, comparing the traditional non-profit against the for-profit online model, examining the structural features and online teaching practices that underlie each.…

  2. SimRNAweb: a web server for RNA 3D structure modeling with optional restraints

    PubMed Central

    Magnus, Marcin; Boniecki, Michał J.; Dawson, Wayne; Bujnicki, Janusz M.

    2016-01-01

    RNA function in many biological processes depends on the formation of three-dimensional (3D) structures. However, RNA structure is difficult to determine experimentally, which has prompted the development of predictive computational methods. Here, we introduce a user-friendly online interface for modeling RNA 3D structures using SimRNA, a method that uses a coarse-grained representation of RNA molecules, utilizes the Monte Carlo method to sample the conformational space, and relies on a statistical potential to describe the interactions in the folding process. SimRNAweb makes SimRNA accessible to users who do not normally use high-performance computational facilities or are unfamiliar with command-line tools. The simplest input consists of an RNA sequence to fold RNA de novo. Alternatively, a user can provide a 3D structure in the PDB format, for instance a preliminary model built with some other technique, to jump-start the modeling close to the expected final outcome. The user can optionally provide secondary structure and distance restraints, and can freeze a part of the starting 3D structure. SimRNAweb can be used to model single RNA sequences and RNA-RNA complexes (up to 52 chains). The webserver is available at http://genesilico.pl/SimRNAweb. PMID:27095203
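The Monte Carlo sampling principle mentioned above can be demonstrated on a toy one-dimensional energy function. This is illustrative only: SimRNA itself moves coarse-grained RNA atoms and scores them with a statistical potential, whereas the sketch below uses a made-up quadratic energy and the standard Metropolis acceptance rule.

```python
import math, random
random.seed(1)

def energy(x):
    return (x - 2.0) ** 2              # toy landscape: single minimum at x = 2

def metropolis(steps=20000, temperature=0.5, step_size=0.5):
    x = -5.0                           # start far from the minimum
    samples = []
    for _ in range(steps):
        trial = x + random.uniform(-step_size, step_size)
        delta = energy(trial) - energy(x)
        # accept downhill moves always; uphill moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            x = trial
        samples.append(x)
    return samples

samples = metropolis()
mean_tail = sum(samples[-5000:]) / 5000   # average over the late samples
```

After a burn-in period the chain samples the Boltzmann distribution of the energy function, so the late samples concentrate around the minimum; the temperature parameter controls how readily uphill (unfolding-like) moves are accepted.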

  3. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area can be selected, or data with a particular range of pixel values. Queries on multiple surfaces can be
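A declarative WCPS query of the kind described above, selecting a spatial subset of one coverage and thresholding its pixel values, might look as follows. The coverage name `dtm_glasgow` and the subset ranges are invented for illustration, and the endpoint path in the comment is an assumption about how a rasdaman-backed service is typically exposed.

```python
wcps_query = """
for c in (dtm_glasgow)
return encode(
  c[Lat(55.80:55.90), Long(-4.35:-4.20)] > 120,
  "image/png")
""".strip()

# In practice this string would be sent to the server's WCPS endpoint, e.g.:
# requests.post("http://earthserver.bgs.ac.uk/rasdaman/ows",
#               data={"service": "WCS", "version": "2.0.1",
#                     "request": "ProcessCoverages", "query": wcps_query})
```

The result would be a PNG mask of the area where the terrain model exceeds 120 units; replacing the single coverage expression with arithmetic over several coverages gives the multi-coverage queries the abstract refers to.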

  4. EONS: an online synaptic modeling platform.

    PubMed

    Bouteiller, Jean-Marie C; Qiu, Yumei; Ziane, Mohammed B; Baudry, Michel; Berger, Theodore W

    2006-01-01

    Chemical synapses, although representing the smallest unit of communication between two neurons in the nervous system, constitute a complex ensemble of mechanisms. Understanding these mechanisms and the way synaptic transmission occurs is critical for our comprehension of CNS functions in general, and of learning and memory in particular. Here we describe a modeling platform called EONS (Elementary Object of Neural System), accessible online, which allows neuroscientists throughout the world to study qualitatively, but also quantitatively, the relative contributions of diverse mechanisms underlying synaptic efficacy: the relevance of each element that comprises a synapse, the interactions between these components and their subcellular distribution, as well as the influence of synaptic geometry (presynaptic terminal, cleft and postsynaptic density).

  5. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    PubMed

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting-edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine-readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0, for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  6. Modeling of protein-peptide interactions using the CABS-dock web server for binding site search and flexible docking.

    PubMed

    Blaszczyk, Maciej; Kurcinski, Mateusz; Kouza, Maksim; Wieteska, Lukasz; Debinski, Aleksander; Kolinski, Andrzej; Kmiecik, Sebastian

    2016-01-15

    Protein-peptide interactions play essential functional roles in living organisms and their structural characterization is a hot subject of current experimental and theoretical research. Computational modeling of the structure of protein-peptide interactions is usually divided into two stages: prediction of the binding site at a protein receptor surface, and then docking (and modeling) the peptide structure into the known binding site. This paper presents a comprehensive CABS-dock method for the simultaneous search of binding sites and flexible protein-peptide docking, available as a user-friendly web server. We present example CABS-dock results obtained in the default CABS-dock mode and using its advanced options that enable the user to increase the range of flexibility for chosen receptor fragments or to exclude user-selected binding modes from the docking search. Furthermore, we demonstrate a strategy to improve CABS-dock performance by assessing the quality of models with classical molecular dynamics. Finally, we discuss the promising extensions and applications of the CABS-dock method and provide a tutorial appendix for the convenient analysis and visualization of CABS-dock results. The CABS-dock web server is freely available at http://biocomp.chem.uw.edu.pl/CABSdock/.

  7. Planning for Online Education: A Systems Model

    ERIC Educational Resources Information Center

    Picciano, Anthony G.

    2015-01-01

    The purpose of this article is to revisit the basic principles of technology planning as applied to online education initiatives. While not meant to be an exhaustive treatment of the topic, the article is timely because many colleges and universities are considering the development and expansion of online education as part of their planning…

  8. Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System.

    PubMed

    Lee, Chengming; Chen, Rongshun

    2015-05-20

    Recently, saving the cooling power in servers by controlling the fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization in the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumption by the fan speed control. PIDNN with a time domain criterion is used to tune all online and optimized PID gains. The proposed controller was validated through experiments of step response when the server operated from the low to high power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight temperature response overshoot in the electronic components, which may provide a time-saving strategy for tuning the PID controller to control the server fan speed during low fan power consumption.
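The control loop at the heart of the abstract can be illustrated with a textbook discrete PID regulator driving a toy first-order thermal model. This is not the paper's PIDNN tuner: the plant constants, the hand-tuned gains, and the anti-windup clamp below are all made up for the sketch.

```python
def simulate(kp=0.5, ki=0.05, kd=0.02, setpoint=60.0, steps=400, dt=0.1):
    """Drive a toy component temperature to the setpoint via fan speed."""
    temp, integral, prev_err = 80.0, 0.0, None   # start hot (high-power state)
    for _ in range(steps):
        err = temp - setpoint                    # positive when too hot
        # integral term with a hand-tuned anti-windup clamp for this toy plant
        integral = min(4.0, max(0.0, integral + err * dt))
        deriv = 0.0 if prev_err is None else (err - prev_err) / dt
        prev_err = err
        # fan command saturated to [0, 1] (fraction of full speed)
        fan = min(1.0, max(0.0, kp * err + ki * integral + kd * deriv))
        # toy plant: constant heat load minus cooling proportional to fan speed
        temp += dt * (2.0 - 10.0 * fan)
    return temp

final_temp = simulate()
```

The paper's contribution is tuning such gains online with a neural network under a fan-power objective; the sketch only shows the underlying loop that is being tuned, settling the temperature at the setpoint.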

  9. Serving and Rendering Cluster-Based Ocean Model Output on a Geowall Using the Live Access Server

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Hermann, A. J.; Dobbins, E. L.

    2004-12-01

    Scientists at NOAA's Pacific Marine Environmental Laboratory are relying more and more on supercomputing platforms for their modeling efforts. Running ocean models on these large cluster machines poses problems in that domain sizes are increasing and tracking how the model dynamics are developing during a run requires high-bandwidth network time. In an effort to streamline this procedure, both server and 3-D rendering technology are utilized. Intermediate model results saved in netCDF file format can be served remotely to query model progress using the Live Access Server (LAS). In our implementation, a crontab script checks for model results, generates an XML data-file descriptor, and adds the data set to the list of those available for LAS to serve up. On top of the default product choices (2-D plots, data listings, etc.), the user can also choose one of two 3D file formats: either a VRML or a Vis5D file of the variable of interest. The LAS is built upon the Ferret data analysis package, with the ability to re-grid variables defined on curvilinear coordinate grids and to serve up Vis5D files. An alternate back end, written using the open-source Visualization Toolkit (VTK), can serve a VRML isosurface as well as current vector fields, keeping bandwidth low by utilizing topology-preserving polygon mesh decimation algorithms. Files served through our LAS system can be projected in passive stereo using a Geowall (www.geowall.org) by either Vis5D or ImmersaView. While ImmersaView offers the ability to animate through the VRML isosurfaces in collaboration with a remote researcher, Vis5D (an older-technology application) gives the user the ability to explore the data more thoroughly by allowing the scientist to change isosurface levels, or to probe the data using contour or vector slices. We will explore the possibility of using LAS as the server for the parallel, composite-rendering application ParaView.

  10. Online Education as Interactive Experience: Some Guiding Models.

    ERIC Educational Resources Information Center

    McLellan, Hilary

    1999-01-01

    Presents conceptual models and proven practices that are emerging at the convergence of economics, entertainment, and virtual community-building and applies them to the design of online courses. Discusses the experience economy; digital storytelling; social presence; personal space and computer interaction; and online technologies and types of…

  11. RMS ENVELOPE BACK-PROPAGATION IN THE XAL ONLINE MODEL

    SciTech Connect

    Allen, Christopher K; Sako, Hiroyuki; Ikegami, Masanori

    2009-01-01

    The ability to back-propagate RMS envelopes was added to the J-PARC XAL online model. Specifically, given an arbitrary downstream location, the online model can propagate the RMS envelopes backward to an arbitrary upstream location. This feature provides support for algorithms estimating upstream conditions from downstream data. The upgrade required significant refactoring, which we outline. We also show simulations using the new feature.
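The linear-optics identity behind envelope back-propagation can be checked numerically: if the RMS (covariance) matrix propagates forward as sigma_out = M sigma_in M^T, then back-propagation applies the inverse transfer matrix, sigma_in = M^-1 sigma_out M^-T. The sketch below is illustrative only, using a 2x2 drift matrix for one transverse plane and made-up beam moments; the XAL online model works with the full phase-space matrices.

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

# 2x2 transfer matrix of a drift of length L (one transverse plane)
L = 1.5
M = [[1.0, L], [0.0, 1.0]]
M_inv = [[1.0, -L], [0.0, 1.0]]            # exact inverse of a drift matrix

sigma_in = [[4.0e-6, 0.0], [0.0, 1.0e-6]]  # made-up upstream RMS matrix

# propagate downstream, then back-propagate to recover the upstream envelope
sigma_out = matmul(matmul(M, sigma_in), transpose(M))
sigma_back = matmul(matmul(M_inv, sigma_out), transpose(M_inv))
```

The round trip recovers the upstream matrix exactly, which is what lets algorithms estimate upstream conditions from downstream measurements.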

  12. Online Educational Delivery Models: A Descriptive View

    ERIC Educational Resources Information Center

    Hill, Phil

    2012-01-01

    Although there has been a long history of distance education, the creation of online education occurred just over a decade and a half ago--a relatively short time in academic terms. Early course delivery via the web had started by 1994, soon followed by a more structured approach using the new category of course management systems. Since that…

  13. The Targeted Open Online Course (TOOC) Model

    ERIC Educational Resources Information Center

    Baker, Credence; Gentry, James

    2014-01-01

    In an era of increasingly hyped Massive Open Online Courses (MOOCs) that seem to evoke feelings of both promise and peril for higher education, many institutions are struggling to find their niche among top-tier Ivy League schools offering courses to thousands of participants for free. While the effectiveness of MOOCs in terms of learning outcomes…

  14. BION web server: predicting non-specifically bound surface ions

    PubMed Central

    Alexov, Emil

    2013-01-01

    Motivation: Ions are an essential component of the cell and are frequently found bound to various macromolecules, in particular to proteins. The binding of an ion to a protein greatly affects the protein's biophysical characteristics and needs to be taken into account in any modeling approach. However, bound ion positions cannot be easily revealed experimentally, especially if the ions are loosely bound to the macromolecular surface. Results: Here, we report a web server, the BION web server, which addresses the demand for tools for predicting surface-bound ions for which specific interactions are not crucial and which are therefore difficult to predict. BION is an easy-to-use web server that requires only a coordinate file as input, and the user is provided with various, but easy to navigate, options. The coordinate file with predicted bound ions is displayed in the output and is available for download. Availability: http://compbio.clemson.edu/bion_server/ Supplementary information: Supplementary data are available at Bioinformatics online. Contact: ealexov@clemson.edu PMID:23380591

  15. jpHMM at GOBICS: a web server to detect genomic recombinations in HIV-1.

    PubMed

    Zhang, Ming; Schultz, Anne-Kathrin; Calef, Charles; Kuiken, Carla; Leitner, Thomas; Korber, Bette; Morgenstern, Burkhard; Stanke, Mario

    2006-07-01

    Detecting recombinations in the genome sequence of human immunodeficiency virus (HIV-1) is crucial for epidemiological studies and for vaccine development. Herein, we present a web server for subtyping and localization of phylogenetic breakpoints in HIV-1. Our software is based on a jumping profile Hidden Markov Model (jpHMM), a probabilistic generalization of the jumping-alignment approach proposed by Spang et al. The input data for our server is a partial or complete genome sequence from HIV-1; our tool assigns regions of the input sequence to known subtypes of HIV-1 and predicts phylogenetic breakpoints. jpHMM is available online at http://jphmm.gobics.de/.

  16. Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System

    PubMed Central

    Lee, Chengming; Chen, Rongshun

    2015-01-01

    Recently, saving the cooling power in servers by controlling the fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization in the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumption by the fan speed control. PIDNN with a time domain criterion is used to tune all online and optimized PID gains. The proposed controller was validated through experiments of step response when the server operated from the low to high power state. The results show that up to 14% of a server’s fan cooling power can be saved if the fan control permits a slight temperature response overshoot in the electronic components, which may provide a time-saving strategy for tuning the PID controller to control the server fan speed during low fan power consumption. PMID:26007725

  17. A Model for Measuring Effectiveness of an Online Course

    ERIC Educational Resources Information Center

    Mashaw, Bijan

    2012-01-01

    As a result of this research, a quantitative model and a procedure have been developed to create an online mentoring effectiveness index (EI). To develop the model, mentoring and teaching effectiveness are defined, and then the constructs and factors of effectiveness are identified. The model's construction is based on the theory that…

  18. Online Ph.D. Program Delivery Models and Student Success

    ERIC Educational Resources Information Center

    Jorissen, Shari L.; Keen, James P.; Riedel, Eric S.

    2015-01-01

    The purpose of this study was to provide information to an online university that offers Ph.D. programs in three formats: knowledge area modules (or KAM, a type of faculty-led, self-directed doctoral study), course-based model, and mixed model (a combination of the KAM and course-based models). The investigators sought to determine why students…

  19. Generic OPC UA Server Framework

    NASA Astrophysics Data System (ADS)

    Nikiel, Piotr P.; Farnham, Benjamin; Filimonov, Viatcheslav; Schlenker, Stefan

    2015-12-01

    This paper describes a new approach for generic design and efficient development of OPC UA servers. Development starts with creation of a design file, in XML format, describing an object-oriented information model of the target system or device. Using this model, the framework generates an executable OPC UA server application, which exposes the per-design OPC UA address space, without the developer writing a single line of code. Furthermore, the framework generates skeleton code into which the developer adds the necessary logic for integration to the target system or device. This approach allows both developers unfamiliar with the OPC UA standard, and advanced OPC UA developers, to create servers for the systems they are experts in while greatly reducing design and development effort as compared to developments based purely on COTS OPC UA toolkits. Higher level software may further benefit from the explicit OPC UA server model by using the XML design description as the basis for generating client connectivity configuration and server data representation. Moreover, having the XML design description at hand facilitates automatic generation of validation tools. In this contribution, the concept and implementation of this framework is detailed along with examples of actual production-level usage in the detector control system of the ATLAS experiment at CERN and beyond.
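The design-file-driven approach can be illustrated with a toy code generator: an XML model of a device is turned into a server skeleton into which the developer adds device logic. This is only an analogy in Python with invented element names; the real framework generates C++ against OPC UA toolkits and exposes a full OPC UA address space.

```python
import xml.etree.ElementTree as ET

# hypothetical design file describing one device class and its variables
design = """
<design>
  <class name="PowerSupply">
    <variable name="voltage" type="float"/>
    <variable name="current" type="float"/>
  </class>
</design>
"""

def generate_skeleton(xml_text):
    """Emit skeleton source code from the XML information model."""
    root = ET.fromstring(xml_text)
    lines = []
    for cls in root.findall("class"):
        lines.append(f"class {cls.get('name')}:")
        for var in cls.findall("variable"):
            # generated getter stub; the developer fills in the device logic
            lines.append(f"    def read_{var.get('name')}(self):")
            lines.append(f"        raise NotImplementedError  # returns {var.get('type')}")
    return "\n".join(lines)

skeleton = generate_skeleton(design)
```

Because the address-space structure is derived mechanically from the design file, higher-level software can reuse the same XML to generate client configuration and validation tools, as the abstract notes.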

  20. A last updating evolution model for online social networks

    NASA Astrophysics Data System (ADS)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

    As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model with the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve the network's resilience against intentional attacks. What is more, we also found that it has the “small-world effect”, which is an inherent property of most social networks. Simulation experiments based on this model show that the results and real-life data are consistent, which means that our model is valid.
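The general idea of combining degree preference with a recency weight can be sketched as a growth process. This is not the paper's exact model: the exponential decay weight, the parameters, and the rule that receiving a link counts as an update are assumptions made for illustration.

```python
import math, random
random.seed(7)

def grow(n_nodes, m=2, lam=0.2):
    """Grow a network; attachment favors high-degree, recently updated nodes."""
    degree = {0: 1, 1: 1}
    last_update = {0: 0, 1: 0}
    edges = [(0, 1)]
    for t in range(2, n_nodes):
        nodes = list(degree)
        # degree preference damped by time since the node's last update
        weights = [degree[v] * math.exp(-lam * (t - last_update[v]))
                   for v in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):
            targets.add(random.choices(nodes, weights=weights)[0])
        degree[t], last_update[t] = 0, t
        for v in targets:
            edges.append((t, v))
            degree[t] += 1
            degree[v] += 1
            last_update[v] = t          # receiving a link counts as an update
    return degree, edges

degree, edges = grow(200)
```

With lam = 0 this reduces to plain preferential attachment; a positive lam shifts links away from stale accounts, the mechanism the "last updating time" concept captures.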

  1. XAL-Based Applications and Online Models for LCLS

    SciTech Connect

    Chu, P.; Woodley, M.; Iverson, R.; Krejcik, P.; White, G.; Wu, J.; Gan, Q.; /Beijing, Inst. High Energy Phys.

    2009-12-11

    XAL, a high-level accelerator application framework originally developed at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory, has been adopted by the Linac Coherent Light Source (LCLS) project. The work includes proper relational database schema modification to better suit XAL configuration data requirements, the addition of new device types for LCLS online modeling purposes, a longitudinal coordinate system change to better represent the LCLS electron beam rather than the proton or ion beams of the original SNS XAL design, intensive benchmarking against MAD and the present SLC modeling system for the online model, and various new features for the XAL framework. Storing online model data in a relational database and providing universal access methods for other applications is also described here.

  2. Using servers to enhance control system capability

    SciTech Connect

    M. Bickley; B.A. Bowling; D.A. Bryan; J. van Zeijts; K.S. White; S. Witherspoon

    1999-03-01

    Many traditional control systems include a distributed collection of front end machines to control hardware. Back end tools are used to view, modify and record the signals generated by these front end machines. Software servers, which are a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data, and consolidation of functionality. In many cases, data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal, and can be used by any back end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in a control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front end hardware by functioning as concentrators. Rather than many back end tools connecting directly to the front end machines, increasing the work load of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers, and share the results of work performed to date.
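The derived-signal idea in the abstract can be sketched as a tiny middleware server that subscribes to raw front-end signals, computes a new signal from them, and republishes it so any back-end tool can use it like a native control-system signal. The signal names and the conversion below are invented for illustration (milliamps times MeV happens to give kilowatts).

```python
class SignalServer:
    """Middleware layer: caches raw signals and maintains derived ones."""
    def __init__(self):
        self.signals = {}
        self.derivations = {}       # name -> (input names, function)
    def define_derived(self, name, inputs, fn):
        self.derivations[name] = (inputs, fn)
    def publish(self, name, value):
        self.signals[name] = value  # update from a front-end machine
        self._recompute()
    def _recompute(self):
        for name, (inputs, fn) in self.derivations.items():
            if all(i in self.signals for i in inputs):
                self.signals[name] = fn(*(self.signals[i] for i in inputs))

server = SignalServer()
# hypothetical derived signal: beam power from current and energy readbacks
server.define_derived("beam_power_kw", ["current_ma", "energy_mev"],
                      lambda i, e: i * e)   # mA x MeV = kW
server.publish("current_ma", 100.0)
server.publish("energy_mev", 45.0)
power = server.signals["beam_power_kw"]
```

The calculation is done once in the server instead of redundantly in every back-end tool, and the same mechanism extends naturally to feedback: a derivation whose output is written back to a hardware setpoint.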

  3. USING SERVERS TO ENHANCE CONTROL SYSTEM CAPABILITY.

    SciTech Connect

    BICKLEY,M.; BOWLING,B.A.; BRYAN,D.A.; ZEIJTS,J.; WHITE,K.S.; WITHERSPOON,S.

    1999-03-29

    Many traditional control systems include a distributed collection of front end machines to control hardware. Back end tools are used to view, modify, and record the signals generated by these front end machines. Software servers, which are a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data, and consolidation of functionality. In many cases data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal, and can be used by any back end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in a control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front end hardware by functioning as concentrators. Rather than many back end tools connecting directly to the front end machines, increasing the work load of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers, and share the results of work performed to date.

  4. FULLY COUPLED "ONLINE" CHEMISTRY WITHIN THE WRF MODEL

    EPA Science Inventory

    A fully coupled "online" Weather Research and Forecasting/Chemistry (WRF/Chem) model has been developed. The air quality component of the model is fully consistent with the meteorological component; both components use the same transport scheme (mass and scalar preserving), the s...

  5. Keeping Our Network Safe: A Model of Online Protection Behaviour

    ERIC Educational Resources Information Center

    Lee, Doohwang; Larose, Robert; Rifon, Nora

    2008-01-01

    The objective of this study is to develop and test a model of online protection behaviour, particularly regarding the use of virus protection. Hypotheses are proposed concerning the predictors of the intention to engage in virus protection behaviour. Using a survey of 273 college students who use the Internet, a test of the hypotheses is conducted…

  6. An individuality model for online signatures using global Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Kholmatov, Alisher; Yanikoglu, Berrin

    2008-03-01

    The discriminative capability of a biometric is based on its individuality/uniqueness and is an important factor in choosing a biometric for a large-scale deployment. Individuality studies have been carried out rigorously for only certain biometrics, in particular fingerprint and iris, while work on establishing handwriting and signature individuality has been mainly at the feature level. In this study, we present a preliminary individuality model for online signatures using the Fourier domain representation of the signature. Using the normalized Fourier coefficients as global features describing the signature, we derive a formula for the probability of coincidentally matching a given signature. Estimating model parameters from a large database and making certain simplifying assumptions, the probability of two arbitrary signatures matching in 13 of the coefficients is calculated as 4.7x10^-4. When compared with the results of a verification algorithm that parallels the theoretical model, the results show that the theoretical model fits the random forgery test results fairly well. While online signatures are sometimes dismissed as not very secure, our results show that the probability of successfully guessing an online signature is very low. Combined with the fact that signature is a behavioral biometric with adjustable complexity, these results support the use of online signatures for biometric authentication.
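The global-feature idea can be sketched as follows: the signature trajectory is encoded as complex points, a DFT gives global descriptors, and normalized magnitudes are compared coefficient by coefficient. This is a simplification, not the paper's exact normalization; the toy trajectory and the matching tolerance are invented.

```python
import cmath

def fourier_descriptors(points, n_coeffs=13):
    """Normalized DFT magnitudes of a 2-D trajectory as global features."""
    z = [complex(x, y) for x, y in points]
    n = len(z)
    coeffs = []
    for k in range(1, n_coeffs + 1):   # skip k=0: removes translation
        c = sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
        coeffs.append(abs(c))
    total = sum(coeffs) or 1.0
    return [c / total for c in coeffs]  # normalizing removes uniform scale

def n_matching(desc_a, desc_b, tol=0.02):
    """Count coefficients that coincide within a tolerance."""
    return sum(abs(a - b) < tol for a, b in zip(desc_a, desc_b))

sig = [(t * 0.1, (t % 7) * 0.3) for t in range(50)]   # toy "signature"
shifted = [(x + 5.0, y + 2.0) for x, y in sig]        # translated copy
scaled = [(2 * x, 2 * y) for x, y in sig]             # uniformly scaled copy
```

Because the k = 0 term is dropped and the magnitudes are normalized, a translated or uniformly scaled copy matches in all 13 coefficients, while the individuality model asks how likely an unrelated signature is to do so by chance.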

  7. Designing a Predictive Model of Student Satisfaction in Online Learning

    ERIC Educational Resources Information Center

    Parahoo, Sanjai K; Santally, Mohammad Issack; Rajabalee, Yousra; Harvey, Heather Lea

    2016-01-01

    Higher education institutions consider student satisfaction to be one of the major elements in determining the quality of their programs. The objective of the study was to develop a model of student satisfaction to identify the influencers that emerged in online higher education settings. The study adopted a mixed method approach to identify…

  8. A Performance-Based Development Model for Online Faculty

    ERIC Educational Resources Information Center

    Fang, Berlin

    2007-01-01

    Faculty development in distance education does not happen in a vacuum. It is often interwoven with efforts to increase adoption of distance education programs and increase the effectiveness of online teaching. Training might not be the only way to meet these needs. This article presents a new faculty-development model, based on a systematic…

  9. Collaborative Online Teaching: A Model for Gerontological Social Work Education

    ERIC Educational Resources Information Center

    Fulton, Amy E.; Walsh, Christine A.; Azulai, Anna; Gulbrandsen, Cari; Tong, Hongmei

    2015-01-01

    Social work students and faculty are increasingly embracing online education and collaborative teaching. Yet models to support these activities have not been adequately developed. This paper describes how a team of instructors developed, delivered, and evaluated an undergraduate gerontological social work course using a collaborative online…

  10. New Model for Multimedia Interfaces to Online Public Access Catalogues.

    ERIC Educational Resources Information Center

    Pejtersen, Annelise Mark

    1992-01-01

    Describes the Book House, an interactive, multimedia online public access catalog (OPAC) developed in Denmark that uses icons, text, and animation. An alternative design model that addresses problems in OPACs is described; and database design, system navigation, use for fiction retrieval, and evaluation are discussed. (20 references) (MES)

  11. A Distributed Online Curriculum and Courseware Development Model

    ERIC Educational Resources Information Center

    Durdu, Pinar Onay; Yalabik, Nese; Cagiltay, Kursat

    2009-01-01

    A distributed online curriculum and courseware development model (DONC[superscript 2]) is developed and tested in this study. Courseware development teams which may work in different institutions who need to develop high quality, reduced cost, on time products will be the users of DONC[superscript 2]. The related features from the disciplines of…

  12. Improving Learning and Reducing Costs: New Models for Online Learning.

    ERIC Educational Resources Information Center

    Twigg, Carol A.

    2003-01-01

    Describes five course redesign models (supplemental, replacement, emporium, fully online, and buffet) used by grantees of the Program in Course Redesign sponsored by the Pew Charitable Trusts. The grants helped colleges redesign instruction using technology to achieve quality enhancements as well as cost savings. (EV)

  13. Building a Model Explaining the Social Nature of Online Learning

    ERIC Educational Resources Information Center

    Tsai, I-Chun; Kim, Bosung; Liu, Pei-Ju; Goggins, Sean P.; Kumalasari, Christiana; Laffey, James M.

    2008-01-01

    Based on a framework emphasizing the social nature of learning, this research examines a model of how social constructs affect satisfaction within online learning using path analysis for students in higher education. The social constructs evaluated in this study include sense of community (SOC), social ability (SA), perceived ease of use (PEU) and…

  14. The Divine Pedagogy as a Model for Online Education

    ERIC Educational Resources Information Center

    Gresham, John

    2006-01-01

    In addition to the pragmatic concerns that often drive the use of technology in theological education, there is a need to develop theological justification and direction for online education. Several Roman Catholic Church documents propose the "divine pedagogy," the manner in which God teaches the human race, as a model for catechesis or religious…

  15. Modeling Periodic Impulsive Effects on Online TV Series Diffusion

    PubMed Central

    Fang, Qiwen; Wang, Xi

    2016-01-01

Background Online broadcasting substantially affects the production, distribution, and profit of TV series. In addition, online word-of-mouth significantly affects the diffusion of TV series. Because on-demand streaming rates are the most important factor that influences the earnings of online video suppliers, streaming statistics and forecasting trends are valuable. In this paper, we investigate the effects of periodic impulsive stimulation and pre-launch promotion on on-demand streaming dynamics. We consider imbalanced audience feverish distribution using an impulsive susceptible-infected-removed (SIR)-like model. In addition, we perform a correlation analysis of online buzz volume based on Baidu Index data. Methods We propose a PI-SIR model to evolve audience dynamics and translate them into on-demand streaming fluctuations, which can be observed and comprehended by online video suppliers. Six South Korean TV series datasets are used to test the model. We develop a coarse-to-fine two-step fitting scheme to estimate the model parameters, first by fitting inter-period accumulation and then by fitting inner-period feverish distribution. Results We find that audience members display similar viewing habits. That is, they seek new episodes every update day but fade away. This outcome means that impulsive intensity plays a crucial role in on-demand streaming diffusion. In addition, the initial audience size and online buzz are significant factors. On-demand streaming fluctuation is highly correlated with online buzz fluctuation. Conclusion To stimulate audience attention and interpersonal diffusion, it is worthwhile to invest in promotion near update days. Strong pre-launch promotion is also a good marketing tool to improve overall performance. It is not advisable for online video providers to promote several popular TV series on the same update day. Inter-period accumulation is a feasible forecasting tool to predict the future trend of the on-demand streaming amount
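For readers who want to experiment with the idea, the toy below integrates a plain SIR-like system in which audience interest fades over time and a pulse of fresh susceptibles arrives on each update day. It is a minimal sketch of periodic impulsive stimulation, not the authors' PI-SIR model, and every parameter value is invented for illustration.

```python
def impulsive_sir(days=28, update_period=7, pulse=0.3, beta=2.0,
                  gamma=0.3, fade=0.3, s0=0.5, i0=0.05, dt=0.1):
    """Forward-Euler SIR-like integration: susceptibles fade away at
    rate 'fade' (interest decays) and a pulse of new susceptibles is
    injected on every update day (the periodic impulsive stimulus)."""
    s, i = s0, i0
    infected_by_day = []
    for day in range(days):
        if day > 0 and day % update_period == 0:
            s += pulse  # a new episode draws the audience back
        for _ in range(int(round(1.0 / dt))):
            new_inf = beta * s * i * dt
            s += -new_inf - fade * s * dt
            i += new_inf - gamma * i * dt
        infected_by_day.append(i)
    return infected_by_day

series = impulsive_sir()
```

With these toy parameters the active-viewer level should rise after each pulse and decay before the next, roughly the saw-tooth pattern the abstract describes for on-demand streaming.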

  16. A mathematical modelling approach for systems where the servers are almost always busy.

    PubMed

    Pagel, Christina; Richards, David A; Utley, Martin

    2012-01-01

    The design and implementation of new configurations of mental health services to meet local needs is a challenging problem. In the UK, services for common mental health disorders such as anxiety and depression are an example of a system running near or at capacity, in that it is extremely rare for the queue size for any given mode of treatment to fall to zero. In this paper we describe a mathematical model that can be applied in such circumstances. The model provides a simple way of estimating the mean and variance of the number of patients that would be treated within a given period of time given a particular configuration of services as defined by the number of appointments allocated to different modes of treatment and the referral patterns to and between different modes of treatment. The model has been used by service planners to explore the impact of different options on throughput, clinical outcomes, queue sizes, and waiting times. We also discuss the potential for using the model in conjunction with optimisation techniques to inform service design and its applicability to other contexts.
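The "servers are almost always busy" assumption makes throughput easy to reason about: every appointment slot is always filled, so treatment completions are the only source of randomness. The Monte Carlo sketch below, with invented parameter values and a single mode of treatment, estimates the kind of mean/variance figures the model provides analytically.

```python
import random

def completions(slots=10, weeks=52, p_complete=0.125, trials=2000, seed=0):
    """Monte Carlo sketch of an always-busy service: 'slots' appointments
    per week are always filled because the queue never empties. In each
    session a patient completes treatment with probability p_complete
    (geometric treatment length, mean 1/p_complete sessions); a finished
    patient's slot is refilled immediately. Returns the mean and variance
    of the number of patients treated in 'weeks' weeks. Parameter values
    are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        done = sum(1 for _ in range(weeks * slots)
                   if rng.random() < p_complete)
        counts.append(done)
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / (trials - 1)
    return mean, var

mean, var = completions()
# each slot-week is an independent completion trial, so the count is
# Binomial(slots*weeks, p): analytic mean = 65, variance ~ 56.9 here
```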

  17. The NEOS server.

    SciTech Connect

    Czyzyk, J.; Mesnier, M. P.; More, J. J.; Mathematics and Computer Science

    1998-07-01

    The Network-Enabled Optimization System (NEOS) is an Internet based optimization service. The NEOS Server introduces a novel approach for solving optimization problems. Users of the NEOS Server submit a problem and their choice of optimization solver over the Internet. The NEOS Server computes all information (for example, derivatives and sparsity patterns) required by the solver, links the optimization problem with the solver, and returns a solution.
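To make the submit-a-problem workflow concrete, the fragment below assembles a NEOS-style job document and shows, commented out because it needs network access, how it would be sent over the server's XML-RPC interface. The category, solver and field names follow published NEOS examples, but they should be verified against the current NEOS documentation before use.

```python
import xmlrpc.client

# A NEOS job is described by an XML document naming the problem
# category, the solver, and the input format; the layout below follows
# published NEOS examples but is illustrative, not authoritative.
job_xml = """<document>
<category>lp</category>
<solver>MOSEK</solver>
<inputMethod>LP</inputMethod>
<LP><![CDATA[
maximize
 obj: x + 2 y
subject to
 c1: x + y <= 4
end
]]></LP>
</document>"""

# Submission requires network access, so it is shown but not executed:
# neos = xmlrpc.client.ServerProxy("https://neos-server.org:3333")
# job_number, password = neos.submitJob(job_xml)
# status = neos.getJobStatus(job_number, password)
```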

  18. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed that would allow for a cost effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less than ideal environment, the performance statistics stated in this paper should give end-users a realistic idea of what performance they can expect in this type of setup.

  19. Data Transfer Software-SAS MetaData Server & Phoenix Integration Model Center

    SciTech Connect

    2010-04-15

    This software is a plug-in that interfaces between the Phoenix Integration's Model Center and the Base SAS 9.2 applications. The end use of the plug-in is to link input and output data that resides in SAS tables or MS SQL to and from "legacy" software programs without recoding. The potential end users are users who need to run legacy code and want data stored in a SQL database.

  20. A simple generative model of collective online behavior.

    PubMed

    Gleeson, James P; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A; Reed-Tsochas, Felix

    2014-07-22

    Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates--even when using purely observational data without experimental design--that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior.
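A toy version of the two-mechanism model is easy to simulate: each new adoption copies either a recent decision or one drawn from the whole history. The sketch below is an illustration of the idea, not the authors' calibrated model; all parameter values are invented.

```python
import random

def simulate(apps=5, steps=3000, theta=0.9, window=50, seed=1):
    """Toy two-mechanism choice model: with probability theta a new
    user copies one of the last 'window' decisions (recent popularity);
    otherwise they copy a decision drawn from the whole history
    (cumulative popularity)."""
    rng = random.Random(seed)
    history = [rng.randrange(apps) for _ in range(window)]  # seed installs
    for _ in range(steps):
        pool = history[-window:] if rng.random() < theta else history
        history.append(rng.choice(pool))
    return [history.count(app) for app in range(apps)]

counts = simulate()
# with theta close to 1, early random fluctuations are amplified and a
# few applications end up with most of the installs
```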

  2. Submersed Aquatic Vegetation Modeling Output Online

    USGS Publications Warehouse

    Yin, Yao; Rogala, Jim; Sullivan, John; Rohweder, Jason J.

    2005-01-01

Introduction The ability to predict the distribution of submersed aquatic vegetation in the Upper Mississippi River on the basis of physical or chemical variables is useful to resource managers. Wildlife managers have a keen interest in advanced estimates of food quantity such as American wildcelery (Vallisneria americana) population status to give out more informed advisories to hunters before the fall hunting season. Predictions for distribution of submerged aquatic vegetation beds can potentially increase hunter observance of voluntary avoidance zones where foraging birds are left alone to feed undisturbed. In years when submersed aquatic vegetation is predicted to be scarce in important wildlife habitats, managers can get the message out to hunters well before the hunting season (Jim Nissen, Upper Mississippi River National Wildlife and Fish Refuge, La Crosse District Manager, La Crosse, Wisconsin, personal communication). We developed a statistical model to predict the probability of occurrence of submersed aquatic vegetation in Pool 8 of the Upper Mississippi River on the basis of a few hydrological, physical, and geomorphic variables. Our model takes into consideration flow velocity, wind fetch, bathymetry, growing-season daily water level, and light extinction coefficient in the river (fig. 1) and calculates the probability of submersed aquatic vegetation existence in Pool 8 in individual 5- x 5-m grid cells. The model was calibrated using the data collected in 1998 (516 sites), 1999 (595 sites), and 2000 (649 sites) using a stratified random sampling protocol (Yin and others, 2000b). To validate the model, we chose the data from the Long Term Resource Monitoring Program (LTRMP) transect sampling in backwater areas (Rogers and Owens 1995; Yin and others, 2000a) and ran the model for each 5- x 5-m grid cell in every growing season from 1991 to 2001. We tallied all the cells and came up with an annual average percent frequency of submersed aquatic vegetation

  3. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

Lifelong machine learning (LML) models learn with experience maintaining a knowledge-base, without user intervention. Unlike traditional single-domain models they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half. PMID:27195004

  4. Visible Geology - Interactive online geologic block modelling

    NASA Astrophysics Data System (ADS)

    Cockett, R.

    2012-12-01

Geology is a highly visual science, and many disciplines require spatial awareness and manipulation. For example, interpreting cross-sections, geologic maps, or plotting data on a stereonet all require various levels of spatial abilities. These skills are often not focused on in undergraduate geoscience curricula and many students struggle with spatial relations, manipulations, and penetrative abilities (e.g. Titus & Horsman, 2009). A newly developed program, Visible Geology, allows for students to be introduced to many geologic concepts and spatial skills in a virtual environment. Visible Geology is a web-based, three-dimensional environment where students can create and interrogate their own geologic block models. The program begins with a blank model; users then add geologic beds (with custom thickness and color) and can add geologic deformation events like tilting, folding, and faulting. Additionally, simple intrusive dikes can be modelled, as well as unconformities. Students can also explore the interaction of geology with topography by drawing elevation contours to produce their own topographic models. Students can not only spatially manipulate their model, but can create cross-sections and boreholes to practice their visual penetrative abilities. Visible Geology is easy to access and use, with no downloads required, so it can be incorporated into current, paper-based, lab activities. Sample learning activities are being developed that target introductory and structural geology curricula with learning objectives such as relative geologic history, fault characterization, apparent dip and thickness, interference folding, and stereonet interpretation. Visible Geology provides a richly interactive and immersive environment for students to explore geologic concepts and practice their spatial skills. (Figure caption: Screenshot of Visible Geology showing folding and faulting interactions on a ridge topography.)

  5. Antibody modeling using the prediction of immunoglobulin structure (PIGS) web server [corrected].

    PubMed

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna; Tramontano, Anna

    2014-12-01

    Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (∼10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  6. GPCR & company: databases and servers for GPCRs and interacting partners.

    PubMed

    Kowalsman, Noga; Niv, Masha Y

    2014-01-01

G-protein-coupled receptors (GPCRs) are a large superfamily of membrane receptors that are involved in a wide range of signaling pathways. To fulfill their tasks, GPCRs interact with a variety of partners, including small molecules, lipids and proteins. They are accompanied by different proteins during all phases of their life cycle. Therefore, GPCR interactions with their partners are of great interest in basic cell-signaling research and in drug discovery. Due to the rapid development of computers and internet communication, knowledge and data can be easily shared within the worldwide research community via freely available databases and servers. These provide an abundance of biological, chemical and pharmacological information. This chapter describes the available web resources for investigating GPCR interactions. We review about 40 freely available databases and servers, and provide a few sentences about the essence and the data they supply. For simplification, the databases and servers were grouped under the following topics: general GPCR-ligand interactions; particular families of GPCRs and their ligands; GPCR oligomerization; GPCR interactions with intracellular partners; and structural information on GPCRs. In conclusion, a multitude of useful tools are currently available. Summary tables are provided to ease navigation between the numerous and partially overlapping resources. Suggestions for future enhancements of the online tools include the addition of links from general to specialized databases and enabling usage of a user-supplied template for GPCR structural modeling. PMID:24158806

  8. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery; however, it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability, and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
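The architectural difference is easy to see in miniature. The sketch below is a toy in-process publish-and-subscribe broker, not the NDDS or BridgeVIEW API: publishers and subscribers share only a topic name and never reference each other, which is the decoupled, direct-exchange style the abstract credits to publish-and-subscribe.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish-and-subscribe broker. Publishers and
    subscribers are coupled only through topic names, never directly."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
readings = []
# a hypothetical sensor topic, in the spirit of the instrumented incinerator
broker.subscribe("incinerator/temperature", readings.append)
broker.publish("incinerator/temperature", 872.5)
# readings == [872.5]
```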

  9. Online kernel principal component analysis: a reduced-order model.

    PubMed

    Honeine, Paul

    2012-09-01

    Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, the principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axe. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.
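The kernelised Oja update and the model-order cap can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the expansion term with the smallest |alpha| is discarded once the budget is exceeded, which is a much cruder criterion than the approximation-error bound the authors derive.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_oja(samples, eta=0.1, budget=20):
    """Online kernelised Oja's rule for the first principal function,
    stored as an expansion sum_j alpha_j * k(x_j, .). The model order
    is capped at 'budget' by dropping the smallest-|alpha| term."""
    dictionary, alphas = [], []
    for x in samples:
        if not dictionary:
            dictionary.append(x)
            alphas.append(1.0)  # bootstrap: w = phi(x_0)
            continue
        y = sum(a * rbf(xj, x) for a, xj in zip(alphas, dictionary))
        # Oja's rule w <- w + eta*y*(phi(x) - y*w), on the coefficients:
        alphas = [a * (1.0 - eta * y * y) for a in alphas]
        dictionary.append(x)
        alphas.append(eta * y)
        if len(dictionary) > budget:
            j = min(range(len(alphas)), key=lambda k: abs(alphas[k]))
            dictionary.pop(j)
            alphas.pop(j)
    return dictionary, alphas

points = [(math.sin(t / 5.0), math.cos(t / 5.0)) for t in range(60)]
dictionary, alphas = kernel_oja(points)
```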

  10. R3D Align web server for global nucleotide to nucleotide alignments of RNA 3D structures

    PubMed Central

    Rahrig, Ryan R.; Petrov, Anton I.; Leontis, Neocles B.; Zirbel, Craig L.

    2013-01-01

    The R3D Align web server provides online access to ‘RNA 3D Align’ (R3D Align), a method for producing accurate nucleotide-level structural alignments of RNA 3D structures. The web server provides a streamlined and intuitive interface, input data validation and output that is more extensive and easier to read and interpret than related servers. The R3D Align web server offers a unique Gallery of Featured Alignments, providing immediate access to pre-computed alignments of large RNA 3D structures, including all ribosomal RNAs, as well as guidance on effective use of the server and interpretation of the output. By accessing the non-redundant lists of RNA 3D structures provided by the Bowling Green State University RNA group, R3D Align connects users to structure files in the same equivalence class and the best-modeled representative structure from each group. The R3D Align web server is freely accessible at http://rna.bgsu.edu/r3dalign/. PMID:23716643

  11. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol

    PubMed Central

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-01-01

Summary: A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of the object can be manipulated by the user in real time until desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. Availability and implementation: The web server follows a client–server architecture built on PHP and HTML and utilizes DelPhi software. The computation is carried out on a supercomputer cluster and results are given back to the user via the http protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver. Contact: nsmith@clemson.edu, ealexov@clemson.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22531215

  12. Servers Made to Order

    SciTech Connect

    Anderson, Daryl L.

    2007-11-01

Virtualization is a hot buzzword right now, and it’s no wonder federal agencies are coming around to the idea of consolidating their servers and storage. Traditional servers sit idle for about 80% of their lifecycle yet draw nearly half their peak energy consumption, wasting both capacity and power. Server virtualization creates logical "machines" on a single physical server. At the Pacific Northwest National Laboratory in Richland, Washington, using virtualization technology is proving to be a cost-effective way to make better use of current server hardware resources while reducing hardware lifecycle costs and cooling demands, and saving precious data center space. And as an added bonus, virtualization also ties in with the Laboratory’s mission to be a responsible steward of the environment as well as the Department of Energy’s assets. This article explains why even the smallest IT shops can benefit from the Laboratory’s best practices.

  13. Feedback control by online learning an inverse model.

    PubMed

Waegeman, Tim; Wyffels, Francis; Schrauwen, Benjamin

    2012-10-01

    A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. This framework uses two learning modules, one for creating an inverse model, and the other for actually controlling the plant. Except for their inputs, they are identical. The inverse model learns by the exploration performed by the not yet fully trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller. The controller can be applied on a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework on several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made. PMID:24808008

  14. Remote diagnosis server

    NASA Technical Reports Server (NTRS)

    Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)

    2004-01-01

    A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it is observing, comprises a sensor for generating signals indicative of a characteristic of a component of the system, a network-interfaced sensor agent coupled to the sensor for receiving signals therefrom, a broker module coupled to the network for sending signals to and receiving signals from the sensor agent, a handler application connected to the broker module for transmitting signals to and receiving signals therefrom, a reasoner application in communication with the handler application for processing, and responding to signals received from the handler application, wherein the sensor agent, broker module, handler application, and reasoner applications operate simultaneously relative to each other, such that the present invention diagnosis server performs continuous monitoring and diagnosing of said components of the system in real time. The diagnosis server is readily adaptable to various different systems.

  15. Home media server content management

    NASA Astrophysics Data System (ADS)

    Tokmakoff, Andrew A.; van Vliet, Harry

    2001-07-01

With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy-to-use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.

  16. A Hybrid Evaluation Model for Evaluating Online Professional Development

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie; Zygouris-Coe, Vicky; Fiedler, Rebecca

    2007-01-01

    Online professional development is multidimensional. It encompasses: a) an online, web-based format; b) professional development; and most likely c) specific objectives tailored to and created for the respective online professional development course. Evaluating online professional development is therefore also multidimensional and as such both…

  17. The AtChem On-line model and Electronic Laboratory Notebook (ELN): A free community modelling tool with provenance capture

    NASA Astrophysics Data System (ADS)

    Young, J. C.; Boronska, K.; Martin, C. J.; Rickard, A. R.; Vázquez Moreno, M.; Pilling, M. J.; Haji, M. H.; Dew, P. M.; Lau, L. M.; Jimack, P. K.

    2010-12-01

AtChem On-line is a simple-to-use zero-dimensional box modelling toolkit, developed for use by laboratory, field and chamber scientists. Any set of chemical reactions can be simulated, in particular the whole Master Chemical Mechanism (MCM) or any subset of it. Parameters and initial data can be provided through a self-explanatory web form and the resulting model is compiled and run on a dedicated server. The core part of the toolkit, providing a robust solver for thousands of chemical reactions, is written in Fortran and uses SUNDIALS CVODE libraries. Chemical systems can be constrained at multiple, user-determined timescales; this has enabled studies of radical chemistry at one-minute timescales. AtChem On-line is free to use and requires no installation - a web browser, text editor and any compressing software is all the user needs. CPU and storage are provided by the server (input and output data are saved indefinitely). An off-line version is also being developed, which will provide batch processing, an advanced graphical user interface and post-processing tools, for example, Rate of Production Analysis (ROPA) and chain-length analysis. The source code is freely available for advanced users wishing to adapt and run the program locally. Data management, dissemination and archiving are essential in all areas of science. In order to do this in an efficient and transparent way, there is a critical need to capture high quality metadata/provenance for modelling activities. An Electronic Laboratory Notebook (ELN) has been developed in parallel with AtChem On-line as part of the EC EUROCHAMP-2 project. In order to use controlled chamber experiments to evaluate the MCM, we need to be able to archive, track and search information on all associated chamber model runs, so that they can be used in subsequent mechanism development. Therefore it would be extremely useful if experiment and model metadata/provenance could be easily and automatically stored electronically
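The kind of problem such a toolkit solves can be illustrated with a deliberately tiny box model: two first-order reactions integrated with forward Euler. The real toolkit hands thousands of MCM rate equations to the stiff CVODE solver instead; the species and rate constants here are invented purely for illustration.

```python
def box_model(k1=1.0, k2=0.5, dt=0.001, t_end=5.0):
    """Zero-dimensional box-model sketch: two first-order reactions
    A -> B (rate k1) and B -> C (rate k2), integrated with forward
    Euler. Concentrations are dimensionless and start at A=1."""
    a, b, c = 1.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        da = -k1 * a
        db = k1 * a - k2 * b
        dc = k2 * b
        a += da * dt
        b += db * dt
        c += dc * dt
        t += dt
    return a, b, c

a, b, c = box_model()
# the rate terms cancel pairwise, so a + b + c stays (numerically) at 1
```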

  18. Time dependent optimal switching controls in online selling models

    SciTech Connect

    Bradonjic, Milan; Cohen, Albert

    2010-01-01

    We present a method to incorporate dishonesty in online selling via a stochastic optimal control problem. In our framework, the seller wishes to maximize her average wealth level W at a fixed time T of her choosing. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is analyzed for a basic case. For more general models, the admissible control set is restricted to a jump process that switches between extreme values. We propose a new approach, where the optimal control problem is reduced to a multivariable optimization problem.

  19. Modeling infectious diseases dissemination through online role-playing games.

    PubMed

    Balicer, Ran D

    2007-03-01

    As mathematical modeling of infectious diseases becomes increasingly important for developing public health policies, a novel platform for such studies might be considered. Millions of people worldwide play interactive online role-playing games, forming complex and rich networks among their virtual characters. An unexpected outbreak of an infective communicable disease (unplanned by the game creators) recently occurred in this virtual world. This outbreak holds surprising similarities to real-world epidemics. It is possible that these virtual environments could serve as a platform for studying the dissemination of infectious diseases, and as a testing ground for novel interventions to control emerging communicable diseases.

  20. Online watershed boundary delineation: sharing models through Spatial Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Squividant, H.; Bera, R.; Aurousseau, P.; Cudennec, C.

    2015-05-01

    The proposal in this paper is to make accessible the hydrology analysis tools that were developed by our research team in the past years through an interoperable Spatial Data Infrastructure. To this aim we chose to develop add-ons for the geOrchestra OGC-compliant platform. Such add-ons trigger algorithms and retrieve their output in real time through OGC standard WPS. We then introduce a watershed WPS add-on and its functioning modes. In so doing we exemplify the fact that the use of OGC standards make it straightforward (and transparent to the user operating a common web browser) to remotely trigger a process on a distant server, then apply it to distant data present on a remote cartographic server, and drop the outcome onto a third-party cartographic server while visualizing it all on a browser.
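
    The remote-triggering pattern the authors describe reduces to a standard OGC WPS Execute request. A sketch of constructing one as a key-value-pair GET URL; the endpoint, process identifier and inputs below are hypothetical, not taken from the paper:

```python
from urllib.parse import urlencode

# Hypothetical geOrchestra WPS endpoint and watershed process name
endpoint = "https://example.org/wps"
params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "watershed:delineate",
    "datainputs": "outlet_x=350000;outlet_y=6790000",
}
url = endpoint + "?" + urlencode(params)
print(url)
```

    The standardization is the point: any OGC-compliant client, including a plain web browser, can issue such a request against a remote server and hand the result to a third-party cartographic service.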

  1. On-line modeling tutorials for emergency managers

    SciTech Connect

    Newsom, D.E.

    1988-01-01

    The Federal Emergency Management Agency (FEMA) maintains two simulation models of interest to Federal, State, and local government managers: MESORAD, a model of radiological plume dispersion and dose; and IDYNEV, a model of traffic movement during evacuation. Users of these models include government staff and technical consultants with simulation experience ranging from extensive to none. To train these users, Argonne National Laboraory has developed two on-line tutorials. The tutorials provide a self-paced, interactive mode of learning about the models. Though user manuals about the models exist, the tutorials afford self-contained instruction to users who lack access to the manuals. The tutorials describe: dose assessment and transportation analysis using computer models; the input parameters needed by the models; how to use forms management software to prepare the data; and how to run the models and view outputs. The tutorials have evolved with upgrades to the models, including the need at various times to emulate three different forms management packages. The tutorials have been used for individual study; in continuing education courses at FEMA's Emergency Management Institute; and are being considered for college classroom use. Persons trained in using the models have applied them to actual emergency planning problems for nuclear power plants. 4 refs., 3 figs., 2 tabs.

  2. Responses and Influences: A Model of Online Information Use for Learning

    ERIC Educational Resources Information Center

    Hughes, Hilary

    2006-01-01

    Introduction: Explores the complexity of online information use for learning in the culturally-diverse, information and communication technologies-intensive, higher education context. It presents a Model of responses and influences in online information use for learning, which aims to increase awareness of the complexity of online information use…

  3. A Model for Developing High-Quality Online Courses: Integrating a Systems Approach with Learning Theory

    ERIC Educational Resources Information Center

    Puzziferro, Maria; Shelton, Kaye

    2008-01-01

    As the demand for online education continues to increase, institutions are faced with developing process models for efficient, high-quality online course development. This paper describes a systems, team-based, approach that centers on an online instructional design theory ("Active Mastery Learning") implemented at Colorado State University-Global…

  4. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not yet widely available, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain and store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked with this library because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library included with the ELROS distribution also has to be linked in, since the server uses the DAP library functions to communicate with directory servers.
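
    The server's core job is bookkeeping: a cache of certificates and CRLs with stale entries evicted. A toy in-memory sketch of that data structure, in Python rather than the C/ELROS of the original, with hypothetical names:

```python
import time

class CertCache:
    """Toy certificate cache with per-entry expiry; a sketch of the
    structure such a server manages, not the original implementation."""

    def __init__(self):
        self._certs = {}  # subject DN -> (certificate, expiry timestamp)

    def put(self, subject, cert, ttl=3600):
        self._certs[subject] = (cert, time.time() + ttl)

    def get(self, subject):
        entry = self._certs.get(subject)
        if entry is None:
            return None
        cert, expiry = entry
        if time.time() > expiry:
            del self._certs[subject]  # evict stale entry
            return None
        return cert

cache = CertCache()
cache.put("cn=alice", "ALICE-CERT")
```

    The real server additionally falls back to X.500 directory lookups (via DAP) on a cache miss; here a miss simply returns None.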

  5. RNATOPS-W: a web server for RNA structure searches of genomes

    PubMed Central

    Wang, Yingfeng; Huang, Zhibin; Wu, Yong; Malmberg, Russell L.; Cai, Liming

    2009-01-01

    Summary: RNATOPS-W is a web server to search sequences for RNA secondary structures including pseudoknots. The server accepts an annotated RNA multiple structural alignment as a structural profile and genomic or other sequences to search. It is built upon RNATOPS, a command-line C++ software package for the same purpose, in which filters to speed up search are manually selected. RNATOPS-W improves upon RNATOPS by adding the function of automatic selection of a hidden Markov model (HMM) filter and also a friendly user interface for selection of a substructure filter by the user. In addition, RNATOPS-W complements existing RNA secondary structure search web servers that either use built-in structure profiles or are not able to detect pseudoknots. RNATOPS-W inherits the efficiency of RNATOPS in detecting large, complex RNA structures. Availability: The web server RNATOPS-W is available at the web site www.uga.edu/RNA-Informatics/?f=software&p=RNATOPS-w. The underlying search program RNATOPS can be downloaded at www.uga.edu/RNA-Informatics/?f=software&p=RNATOPS. Contact: cai@cs.uga.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19269988

  6. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models are a good starting point for beginners, and for more advanced purposes users will be able to access and employ models from the BioModels Database as well. PMID:24006914
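
    Stripped of SBML specifics, composing two models amounts to a union of their species and reactions. A toy sketch with models as plain dicts; a real tool would merge SBML documents (e.g. via libSBML), and the model contents below are illustrative:

```python
def compose(m1, m2):
    """Merge two toy models: union of species, union of reactions
    (reactions in m2 override same-named ones in m1)."""
    return {
        "species": sorted(set(m1["species"]) | set(m2["species"])),
        "reactions": {**m1["reactions"], **m2["reactions"]},
    }

glycolysis = {"species": ["Glc", "G6P"], "reactions": {"HK": "Glc -> G6P"}}
ppp = {"species": ["G6P", "6PG"], "reactions": {"G6PD": "G6P -> 6PG"}}
merged = compose(glycolysis, ppp)
```

    The hard part the tool handles, and this sketch does not, is recognizing that "G6P" in both models denotes the same entity; real model merging hinges on resolving such shared species and units.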

  7. MAVID multiple alignment server.

    PubMed

    Bray, Nicolas; Pachter, Lior

    2003-07-01

    MAVID is a multiple alignment program suitable for many large genomic regions. The MAVID web server allows biomedical researchers to quickly obtain multiple alignments for genomic sequences and to subsequently analyse the alignments for conserved regions. MAVID has been successfully used for the alignment of closely related species such as primates and also for the alignment of more distant organisms such as human and fugu. The server is fast, capable of aligning hundreds of kilobases in less than a minute. The multiple alignment is used to build a phylogenetic tree for the sequences, which is subsequently used as a basis for identifying conserved regions in the alignment. The server can be accessed at http://baboon.math.berkeley.edu/mavid/.

  8. Modelling Influence and Opinion Evolution in Online Collective Behaviour

    PubMed Central

    Gend, Pascal; Rentfrow, Peter J.; Hendrickx, Julien M.; Blondel, Vincent D.

    2016-01-01

    Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data is scarce, data from previous participants is used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed, based on a specific control experiment. More than two thirds of the prediction errors are found to be due to the unpredictability of the human judgment revision process rather than to model imperfection. PMID:27336834
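
    The influenceability parameter admits a one-line consensus-model sketch: each revision moves a judgment toward the peer mean by a per-individual fraction. This is a generic DeGroot-style update; the paper's exact functional form may differ:

```python
def revise(own, peers, influenceability):
    """DeGroot-style judgment revision: move the individual's own
    judgment toward the mean of the peer judgments by a fraction
    given by that individual's influenceability (0 = ignore peers,
    1 = adopt the peer mean outright)."""
    social = sum(peers) / len(peers)
    return (1.0 - influenceability) * own + influenceability * social

# e.g. an individual with influenceability 0.5 moves halfway:
revised = revise(10.0, [20.0, 20.0], 0.5)  # -> 15.0
```

    Fitting the influenceability per participant from past revisions, then predicting new revisions with it, is the validation exercise the abstract describes.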

  9. Online decision support based on modeling with the aim of increased irrigation efficiency

    NASA Astrophysics Data System (ADS)

    Dövényi-Nagy, Tamás; Bakó, Károly; Molnár, Krisztina; Rácz, Csaba; Vasvári, Gyula; Nagy, János; Dobos, Attila

    2015-04-01

    to allow the integration of several publicly available models and algorithms adapted to local climate (Rácz et al., 2013). The service, the server-side framework, scripts and the front-end, providing access to the measured and modelled data, are based on our own developments or on freely available and/or open-source software and services such as Apache, PHP, MySQL and the Google Maps API. MetAgro is intended to serve three different areas of usage: research, education and practice. Users in these areas differ in educational background, knowledge of models and ability to access relevant input data. The system and its interfaces must reflect these differences, which is achieved through graceful degradation of the modelling: choosing the location of the farm and the crop already gives some general results, and with every additional parameter supplied the results become more reliable. The 'MetAgro' system provides a basis for improved decision-making with regard to irrigation on cropland. Based on experience and feedback, the online application has proved useful in the design and practice of reasonable irrigation. In addition to its use in irrigation practice, MetAgro is also a valuable tool for research and education.

  10. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    SciTech Connect

    PORTER, REID B.; LOVELAND, ROHAN; ROSTEN, ED

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables, related to the objects appearance (e.g., orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate, and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far. The best appearance model should maximize the future performance of the tracking system, and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.
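
    The core idea, choosing the features that best separate the target vehicle from the background across all instances seen so far, can be sketched with a simple two-sample separation score. This is a generic Fisher-style ratio; the paper's actual selection criterion is not specified in the abstract, and the data below are illustrative:

```python
from statistics import mean, stdev

def discriminability(pos, neg):
    """Two-sample separation score for one feature: distance between
    class means, normalized by the pooled spread (0 if degenerate)."""
    s = stdev(pos + neg)
    return abs(mean(pos) - mean(neg)) / s if s else 0.0

def select_features(pos_samples, neg_samples, k):
    """Keep the k features that best separate target from background.
    pos_samples/neg_samples: dicts mapping feature name -> values."""
    scores = {f: discriminability(pos_samples[f], neg_samples[f])
              for f in pos_samples}
    return sorted(scores, key=scores.get, reverse=True)[:k]

pos = {"a": [5.0, 6.0, 5.0], "b": [1.0, 2.0, 1.0]}   # target instances
neg = {"a": [0.0, 1.0, 0.0], "b": [1.0, 1.0, 2.0]}   # background samples
best = select_features(pos, neg, k=1)
```

    Re-running the selection each time a new instance of the vehicle arrives makes the selection "on-line": the appearance model is rebuilt from everything seen so far rather than adapted frame to frame.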

  11. Modelling human mobility patterns using photographic data shared online.

    PubMed

    Barchiesi, Daniele; Preis, Tobias; Bishop, Steven; Moat, Helen Susannah

    2015-08-01

    Humans are inherently mobile creatures. The way we move around our environment has consequences for a wide range of problems, including the design of efficient transportation systems and the planning of urban areas. Here, we gather data about the position in space and time of about 16 000 individuals who uploaded geo-tagged images from locations within the UK to the Flickr photo-sharing website. Inspired by the theory of Lévy flights, which has previously been used to describe the statistical properties of human mobility, we design a machine learning algorithm to infer the probability of finding people in geographical locations and the probability of movement between pairs of locations. Our findings are in general agreement with official figures in the UK and on travel flows between pairs of major cities, suggesting that online data sources may be used to quantify and model large-scale human mobility patterns.
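
    Heavy-tailed (Pareto-distributed) step lengths are the defining ingredient of a Lévy flight. A minimal inverse-CDF sampler; the exponent and cutoff below are illustrative, not fitted values from the paper:

```python
import random

def levy_steps(n, alpha=1.5, x_min=1.0, seed=42):
    """Sample n Pareto(alpha) step lengths >= x_min by inverting the
    CDF F(x) = 1 - (x_min / x)**alpha; occasional very long steps
    give the trajectory its characteristic Levy-flight look."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

steps = levy_steps(1000)
```

    The machine-learning approach in the paper goes the other way: rather than assuming this distribution outright, it infers location and movement probabilities from the geo-tagged photo data, with Lévy-flight theory as the inspiration.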

  12. Modelling human mobility patterns using photographic data shared online

    PubMed Central

    Barchiesi, Daniele; Preis, Tobias; Bishop, Steven; Moat, Helen Susannah

    2015-01-01

    Humans are inherently mobile creatures. The way we move around our environment has consequences for a wide range of problems, including the design of efficient transportation systems and the planning of urban areas. Here, we gather data about the position in space and time of about 16 000 individuals who uploaded geo-tagged images from locations within the UK to the Flickr photo-sharing website. Inspired by the theory of Lévy flights, which has previously been used to describe the statistical properties of human mobility, we design a machine learning algorithm to infer the probability of finding people in geographical locations and the probability of movement between pairs of locations. Our findings are in general agreement with official figures in the UK and on travel flows between pairs of major cities, suggesting that online data sources may be used to quantify and model large-scale human mobility patterns. PMID:26361545

  13. A Design of Product Collaborative Online Configuration Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguo; Zheng, Jin; Zeng, Qian

    According to the actual needs of mass customization, product personalization and collaborative design, this paper analyzes the working mechanism of modular product configuration technology and puts forward an information model of a modular product family. Combining case-based reasoning (CBR) with constraint satisfaction problem (CSP) solving techniques, we design an algorithm for product configuration and analyze its time complexity. Taking a car chassis as the application object, we provide a prototype online configuration system. Using this system, designers can make appropriate changes to existing designs in accordance with demand, accelerating all aspects of product development and shortening the product cycle. The system also provides strong technical support for enterprises seeking to improve their market competitiveness.

  14. Dali server update

    PubMed Central

    Holm, Liisa; Laakso, Laura M.

    2016-01-01

    The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo. PMID:27131377

  15. Dali server update.

    PubMed

    Holm, Liisa; Laakso, Laura M

    2016-07-01

    The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo.

  16. Online Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Meyer Jordan, Bradley, IV; The, Lih-Sin; Robbins, Stuart

    2004-05-01

    Nuclear-reaction network codes are important to astronomers seeking to explore nucleosynthetic implications of astrophysical models and to nuclear physicists seeking to understand the role of nuclear properties or reaction rates in element formation. However, many users do not have the time or inclination to download and compile the codes, to manage the requisite input files, or to explore the often complex output with their own graphics programs. To help make nucleosynthesis calculations more readily available, we have placed the Clemson Nucleosynthesis code on the world-wide web at http://www.ces.clemson.edu/physics/nucleo/nuclearNetwork. At this web site, any Internet user may set his or her own reaction network, nuclear properties and reaction rates, and thermodynamic trajectories. The user then submits the nucleosynthesis calculation, which runs on a dedicated server professionally maintained at Clemson University. Once the calculation is completed, the user may explore the results through dynamically produced and downloadable tables and graphs. Online help guides the user through the necessary steps. We hope this web site will prove a user-friendly and helpful tool for professional scientists as well as for students seeking to explore element formation.
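
    At its heart, a reaction network code integrates coupled abundance ODEs along a thermodynamic trajectory. A deliberately tiny sketch, a single hypothetical decay A -> B integrated with forward Euler, where a real network couples thousands of rates with a stiff solver:

```python
# Toy one-reaction "network": decay A -> B with rate constant lam.
lam = 0.1            # decay rate in 1/s (illustrative)
dt, steps = 0.001, 10000   # integrate to t = 10 s
a, b = 1.0, 0.0      # initial abundances
for _ in range(steps):
    da = -lam * a * dt
    a += da
    b -= da          # whatever leaves A lands in B: abundance is conserved
# analytic check: a should be close to exp(-lam * t) = exp(-1) ~ 0.3679
```

    The web service wraps exactly this kind of loop, at scale, behind a form: the user supplies the network, rates and trajectory, and the server returns tables and plots of the evolved abundances.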

  17. Evaluation of on-line DEMs for flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Sanders, Brett F.

    2007-08-01

    Recent and highly accurate topographic data should be used for flood inundation modeling, but this is not always feasible given time and budget constraints, so the utility of several on-line digital elevation models (DEMs) is examined with a set of steady and unsteady test problems. DEMs are used to parameterize a 2D hydrodynamic flood simulation algorithm and predictions are compared with published flood maps and observed flood conditions. DEMs based on airborne light detection and ranging (LiDAR) are preferred because of their horizontal resolution, vertical accuracy (~0.1 m) and the ability to separate bare earth from built structures and vegetation. DEMs based on airborne interferometric synthetic aperture radar (IfSAR) have good horizontal resolution, but gridded elevations reflect built structures and vegetation, so further processing may be required to permit flood modeling. IfSAR and shuttle radar topography mission (SRTM) DEMs suffer from radar speckle, or noise, so flood plains may appear with non-physical relief and predicted flood zones may include non-physical pools. DEMs based on national elevation data (NED) are remarkably smooth in comparison to IfSAR and SRTM, but using NED, flood predictions overestimate flood extent in comparison to all other DEMs, including LiDAR, the most accurate. This study highlights the utility of SRTM as a global source of terrain data for flood modeling.

  18. Improvement plans for the RHIC/AGS on-line model environments

    SciTech Connect

    Brown,K.A.; Ahrens, L.; Beebe-Wang, J.; Morris, J.; Nemesure, S.; Robert-Demolaize, G.; Satogata, T.; Schoefer, V.; Tepikian, S.

    2009-08-31

    The on-line models for Relativistic Ion Collider (RHIC) and the RHIC pre-injectors (the AGS and the AGS Booster) can be thought of as containing our best collective knowledge of these accelerators. As we improve these on-line models we are building the framework to have a sophisticated model-based controls system. Currently the RHIC on-line model is an integral part of the controls system, providing the interface for tune control, chromaticity control, and non-linear chromaticity control. What we discuss in this paper is our vision of the future of the on-line model environment for RHIC and the RHIC preinjectors. Although these on-line models are primarily used as Courant-Snyder parameter calculators using live machine settings, we envision expanding these environments to encompass many other problem domains.

  19. Best Practices for Designing Online Learning Environments for 3D Modeling Curricula: A Delphi Study

    ERIC Educational Resources Information Center

    Mapson, Kathleen Harrell

    2011-01-01

    The purpose of this study was to develop an inventory of best practices for designing online learning environments for 3D modeling curricula. Due to the instructional complexity of three-dimensional modeling, few have sought to develop this type of course for online teaching and learning. Considering this, the study aimed to collectively aggregate…

  20. New Model, New Strategies: Instructional Design for Building Online Wisdom Communities

    ERIC Educational Resources Information Center

    Gunawardena, Charlotte N.; Ortegano-Layne, Ludmila; Carabajal, Kayleigh; Frechette, Casey; Lindemann, Ken; Jennings, Barbara

    2006-01-01

    We discuss the development of an instructional design model, WisCom (Wisdom Communities), based on socio-constructivist and sociocultural learning philosophies and distance education principles for the development of online wisdom communities, and the application and evaluation of the model in an online graduate course in the USA. The WisCom model…

  1. Logic Models as a Way to Support Online Students and Their Projects

    ERIC Educational Resources Information Center

    Strycker, Jesse

    2016-01-01

    As online enrollment continues to grow, students may need additional pedagogical supports to increase their likelihood of success in online environments that don't offer the same supports as those found in face to face classrooms. Logic models are a way to provide such support to students by helping to model project expectations, allowing students…

  2. Introducing the R2D2 Model: Online Learning for the Diverse Learners of This World

    ERIC Educational Resources Information Center

    Bonk, Curtis J.; Zhang, Ke

    2006-01-01

    The R2D2 method--read, reflect, display, and do--is a new model for designing and delivering distance education, and in particular, online learning. Such a model is especially important to address the diverse preferences of online learners of varied generations and varied Internet familiarity. Four quadrants can be utilized separately or as part…

  3. Disconfirming User Expectations of the Online Service Experience: Inferred versus Direct Disconfirmation Modeling.

    ERIC Educational Resources Information Center

    O'Neill, Martin; Palmer, Adrian; Wright, Christine

    2003-01-01

    Disconfirmation models of online service measurement seek to define service quality as the difference between user expectations of the service to be received and perceptions of the service actually received. Two such models, inferred and direct disconfirmation, for measuring quality of the online experience are compared (WebQUAL, SERVQUAL). Findings…

  4. SHIR competitive information diffusion model for online social media

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Diao, Su-Meng; Zhu, Yi-Xiang; Liu, Qing

    2016-11-01

    In online social media, opinion divergence and differentiation generally exist as a result of individuals' extensive participation and personalization. In this paper, a Susceptible-Hesitated-Infected-Removed (SHIR) model is proposed to study the dynamics of competitive dual-information diffusion. The proposed model extends the classical SIR model by adding hesitators as a neutral state in the dual-information competition. Both hesitators and stable spreaders facilitate information dissemination. Examining the impacts of the diffusion parameters, we find that the final density of stiflers increases monotonically as the infection rate increases and the removal rate decreases, and that the advantaged information with the larger stable transition rate dominates the overall influence of the dual information. The density of disadvantaged-information spreaders grows slightly with an increase in its stable transition rate, while the total spreader density of the dual information and the relaxation time remain almost unchanged. Moreover, simulations imply that the final result of the competition is closely related to the ratio of the stable transition rates of the dual information. If the stable transition rates are nearly the same, a slight reduction of the smaller one brings a significant disadvantage in its propagation coverage. Additionally, the relationship between the ratio of final stiflers and the ratio of stable transition rates exhibits a power-law characteristic.
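
    The compartment flow S -> H -> I -> R can be sketched as mean-field ODEs integrated with forward Euler. This is a single-information simplification (the paper models two competing pieces of information), and the rate names and values are illustrative, not taken from the paper:

```python
def simulate_shir(beta=0.5, delta=0.3, gamma=0.1, dt=0.01, steps=5000):
    """Mean-field SHIR sketch: susceptibles (S) who hear the information
    become hesitators (H), hesitators become stable spreaders (I), and
    spreaders eventually become stiflers (R)."""
    s, h, i, r = 0.99, 0.0, 0.01, 0.0
    for _ in range(steps):
        new_h = beta * s * i * dt   # S meets a spreader, starts hesitating
        new_i = delta * h * dt      # hesitator commits to spreading
        new_r = gamma * i * dt      # spreader loses interest, stifles
        s -= new_h
        h += new_h - new_i
        i += new_i - new_r
        r += new_r
    return s, h, i, r

s, h, i, r = simulate_shir()
```

    In the competitive dual-information version, two such chains share the susceptible pool and the hesitator state couples them; the stable transition rates (delta here) then decide which information dominates.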

  5. Pathological Buying Online as a Specific Form of Internet Addiction: A Model-Based Experimental Investigation.

    PubMed

    Trotzke, Patrick; Starcke, Katrin; Müller, Astrid; Brand, Matthias

    2015-01-01

    The study aimed to investigate different factors of vulnerability for pathological buying in the online context and to determine whether online pathological buying has parallels to a specific Internet addiction. According to a model of specific Internet addiction by Brand and colleagues, potential vulnerability factors may consist of a predisposing excitability from shopping and as mediating variable, specific Internet use expectancies. Additionally, in line with models on addiction behavior, cue-induced craving should also constitute an important factor for online pathological buying. The theoretical model was tested in this study by investigating 240 female participants with a cue-reactivity paradigm, which was composed of online shopping pictures, to assess excitability from shopping. Craving (before and after the cue-reactivity paradigm) and online shopping expectancies were measured. The tendency for pathological buying and online pathological buying were screened with the Compulsive Buying Scale (CBS) and the Short Internet Addiction Test modified for shopping (s-IATshopping). The results demonstrated that the relationship between individual's excitability from shopping and online pathological buying tendency was partially mediated by specific Internet use expectancies for online shopping (model's R² = .742, p < .001). Furthermore, craving and online pathological buying tendencies were correlated (r = .556, p < .001), and an increase in craving after the cue presentation was observed solely in individuals scoring high for online pathological buying (t(28) = 2.98, p < .01, d = 0.44). Both screening instruments were correlated (r = .517, p < .001), and diagnostic concordances as well as divergences were indicated by applying the proposed cut-off criteria. In line with the model for specific Internet addiction, the study identified potential vulnerability factors for online pathological buying and suggests potential parallels. The presence of craving in

  6. Pathological Buying Online as a Specific Form of Internet Addiction: A Model-Based Experimental Investigation.

    PubMed

    Trotzke, Patrick; Starcke, Katrin; Müller, Astrid; Brand, Matthias

    2015-01-01

    The study aimed to investigate different factors of vulnerability for pathological buying in the online context and to determine whether online pathological buying has parallels to a specific Internet addiction. According to a model of specific Internet addiction by Brand and colleagues, potential vulnerability factors may consist of a predisposing excitability from shopping and as mediating variable, specific Internet use expectancies. Additionally, in line with models on addiction behavior, cue-induced craving should also constitute an important factor for online pathological buying. The theoretical model was tested in this study by investigating 240 female participants with a cue-reactivity paradigm, which was composed of online shopping pictures, to assess excitability from shopping. Craving (before and after the cue-reactivity paradigm) and online shopping expectancies were measured. The tendency for pathological buying and online pathological buying were screened with the Compulsive Buying Scale (CBS) and the Short Internet Addiction Test modified for shopping (s-IATshopping). The results demonstrated that the relationship between individual's excitability from shopping and online pathological buying tendency was partially mediated by specific Internet use expectancies for online shopping (model's R² = .742, p < .001). Furthermore, craving and online pathological buying tendencies were correlated (r = .556, p < .001), and an increase in craving after the cue presentation was observed solely in individuals scoring high for online pathological buying (t(28) = 2.98, p < .01, d = 0.44). Both screening instruments were correlated (r = .517, p < .001), and diagnostic concordances as well as divergences were indicated by applying the proposed cut-off criteria. In line with the model for specific Internet addiction, the study identified potential vulnerability factors for online pathological buying and suggests potential parallels. The presence of craving in

  7. Microsoft SQL Server 6.0® Workbook

    SciTech Connect

    Augustenborg, E.C.

    1996-09-01

    This workbook was prepared for introductory training in the use of Microsoft SQL Server Version 6.0. The examples are all taken from the PUBS database that Microsoft distributes for training purposes or from the Microsoft Online Documentation. The merits of the relational database are presented.

  8. OPC Data Acquisition Server for CPDev Engineering Environment

    NASA Astrophysics Data System (ADS)

    Rzońca, Dariusz; Sadolewski, Jan; Trybus, Bartosz

    An OPC server has been created for the CPDev engineering environment; it provides classified process data to OPC client applications. Hierarchical Coloured Petri nets are used at the design stage to model the server's communications with CPDev target controllers. The implementation involves a universal interface for data acquisition via different communication protocols such as Modbus or .NET Remoting.
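The protocol-independent acquisition layer described above can be sketched as a small interface; the class names and register addresses below are hypothetical illustrations, not CPDev's actual API:

```python
from abc import ABC, abstractmethod

class AcquisitionChannel(ABC):
    """Protocol-agnostic interface for reading process variables."""
    @abstractmethod
    def read(self, address: str) -> float:
        ...

class ModbusChannel(AcquisitionChannel):
    """Stub standing in for a real Modbus transport."""
    def __init__(self, registers):
        self.registers = registers  # address -> raw value

    def read(self, address: str) -> float:
        return float(self.registers[address])

def poll(channel: AcquisitionChannel, addresses):
    """The server polls every configured address through one interface,
    regardless of which protocol backs the channel."""
    return {a: channel.read(a) for a in addresses}

channel = ModbusChannel({"40001": 21.5, "40002": 7.0})
print(poll(channel, ["40001", "40002"]))  # {'40001': 21.5, '40002': 7.0}
```

A .NET Remoting-backed channel would subclass the same interface, which is the point of the universal acquisition layer.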

  9. A Maturity Model for Online Classes across Academic Disciplines

    ERIC Educational Resources Information Center

    Neequaye, Barbara Burris

    2013-01-01

    The number of academic institutions offering courses online has increased with courses being offered across almost all academic disciplines. Faculty members are often confronted with the responsibility of converting a face-to-face course to an online course while simultaneously dealing with new technologies and the interrelationship between the…

  10. Learning online community citizenship behavior: a socio-cognitive model.

    PubMed

    Joe, Sheng-Wuu; Lin, Chieh-Peng

    2008-06-01

    This study postulates personal and environmental factors as key drivers of online community citizenship behavior (OCCB). OCCB reveals that the individual chooses to perform a behavior that is beneficial to others. Empirical results confirm the applicability of social cognitive theory (SCT) in online communities.

  11. Modeling Best Practice through Online Learning: Building Relationships

    ERIC Educational Resources Information Center

    Cerniglia, Ellen G.

    2011-01-01

    Students may fear that they will feel unsupported and isolated when engaged in online learning. They don't know how they will be able to build relationships with their teacher and classmates solely based on written words, without facial expressions, tone of voice, and other nonverbal communication cues. Traditionally, online learning required…

  12. Optimizing Success: A Model for Persistence in Online Education

    ERIC Educational Resources Information Center

    Glazer, Hilda R.; Murphy, John A.

    2015-01-01

    The first-year experience for students enrolled in an online degree program, particularly the orientation and the first course experience, is critical to success and completion. The experience of one online university in improving persistence through enhancing orientation and the first academic course is presented. Factors impacting persistence…

  13. Examining Workload Models in Online and Blended Teaching

    ERIC Educational Resources Information Center

    Tynan, Belinda; Ryan, Yoni; Lamont-Mills, Andrea

    2015-01-01

    Over the past decade, most Australian universities have moved increasingly towards "blended" and online course delivery for both undergraduate and graduate programs. In almost all cases, elements of online teaching are part of routine teaching loads. Yet detailed and accurate workload data associated with "e-teaching" are not…

  14. A Structural Equation Model of Predictors of Online Learning Retention

    ERIC Educational Resources Information Center

    Lee, Youngju; Choi, Jaeho

    2013-01-01

    This study examined the effects of internal academic locus of control (ALOC), learning strategies, flow experience, and student satisfaction on student retention in online learning courses. A total number of 282 adult students at the Korea National Open University participated in the study by completing an online survey adopted from previous…

  15. Effective Live Online Faculty Development Workshops: One Model

    ERIC Educational Resources Information Center

    Blyth, Russell D.; May, Michael K.; Rainbolt, Julianne G.

    2006-01-01

    This article describes live, online faculty development workshops that show faculty how to use software packages (to date, GAP and Maple) in teaching college-level mathematics. The authors' primary goal in this article is to encourage others in any discipline to run similar online workshops by providing a resource for their successful operation,…

  16. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model focuses on available server management in network environments. The (remote) backup servers are connected by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network is a way to use a public network infrastructure to connect long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, in this enhanced model the availability of the auxiliary machines changes with each activation. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
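The effect of per-activation backup availability on overall service availability can be illustrated with a small Monte-Carlo sketch; the probabilities and the two-state failure logic below are illustrative assumptions, not the paper's analytical model:

```python
import random

def availability(p_main_fail, p_backup_ok, trials=100_000, seed=1):
    """Estimate service availability for a main server backed by a
    remote spare whose readiness is re-drawn on every activation.
    All probabilities are illustrative placeholders."""
    random.seed(seed)
    up = 0
    for _ in range(trials):
        if random.random() >= p_main_fail:    # main survives the period
            up += 1
        elif random.random() < p_backup_ok:   # backup activates in time
            up += 1
    return up / trials

print(availability(0.05, 0.9))  # close to 0.95 + 0.05 * 0.9 = 0.995
```

The closed-form value here is trivial; the simulation form is what generalizes once maintenance windows and multiple spares with varying availability enter the model.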

  17. Modeling a multivariable reactor and on-line model predictive control.

    PubMed

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper, and on-line model predictive control (MPC) is implemented on the rig. The reactor has three variables (temperature, pH, and dissolved oxygen) with nonlinear dynamics and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables, and the model parameters are estimated from real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers, and the advantage of the nonlinear MPC over the PID is clearly shown.
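A receding-horizon loop of this kind can be sketched in a few lines; the toy scalar plant, the brute-force candidate-input search, and the setpoint below are illustrative stand-ins, not the paper's reactor model or optimizer:

```python
def plant(x, u):
    """Toy nonlinear discrete-time model (illustrative only)."""
    return 0.9 * x + 0.5 * u - 0.05 * x * abs(x)

def mpc_step(x, setpoint, horizon=5,
             candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Pick the constant input that minimizes the multistep-ahead
    tracking cost -- a crude stand-in for the optimizer in real MPC."""
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = plant(xp, u)            # multistep-ahead prediction
            cost += (xp - setpoint) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x = 0.0
for _ in range(30):
    x = plant(x, mpc_step(x, setpoint=2.0))  # apply first input, re-plan
print(round(x, 2))  # settles near the setpoint
```

In the paper's scheme the prediction step additionally runs an extended Kalman filter to estimate unmeasured states before each re-plan.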

  18. Pathological Buying Online as a Specific Form of Internet Addiction: A Model-Based Experimental Investigation

    PubMed Central

    Trotzke, Patrick; Starcke, Katrin; Müller, Astrid; Brand, Matthias

    2015-01-01

    The study aimed to investigate different factors of vulnerability for pathological buying in the online context and to determine whether online pathological buying has parallels to a specific Internet addiction. According to a model of specific Internet addiction by Brand and colleagues, potential vulnerability factors may consist of a predisposing excitability from shopping and as mediating variable, specific Internet use expectancies. Additionally, in line with models on addiction behavior, cue-induced craving should also constitute an important factor for online pathological buying. The theoretical model was tested in this study by investigating 240 female participants with a cue-reactivity paradigm, which was composed of online shopping pictures, to assess excitability from shopping. Craving (before and after the cue-reactivity paradigm) and online shopping expectancies were measured. The tendency for pathological buying and online pathological buying were screened with the Compulsive Buying Scale (CBS) and the Short Internet Addiction Test modified for shopping (s-IATshopping). The results demonstrated that the relationship between individual’s excitability from shopping and online pathological buying tendency was partially mediated by specific Internet use expectancies for online shopping (model’s R² = .742, p < .001). Furthermore, craving and online pathological buying tendencies were correlated (r = .556, p < .001), and an increase in craving after the cue presentation was observed solely in individuals scoring high for online pathological buying (t(28) = 2.98, p < .01, d = 0.44). Both screening instruments were correlated (r = .517, p < .001), and diagnostic concordances as well as divergences were indicated by applying the proposed cut-off criteria. In line with the model for specific Internet addiction, the study identified potential vulnerability factors for online pathological buying and suggests potential parallels. The presence of craving in

  19. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on the fly by approximating a shape instance (usually derived from appearance cues) with a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy faces two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances the repository contains, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferable to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time-consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient. PMID:23286160
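A block-coordinate descent dictionary update of this general kind can be sketched as follows. This follows the standard online dictionary learning update on accumulated code statistics, not the authors' exact code; the shapes, the least-squares stand-in for sparse coding, and all names are illustrative:

```python
import numpy as np

def update_dictionary(D, A, B, n_iter=1):
    """Block-coordinate descent over dictionary columns, given the
    accumulated statistics A = sum(a a^T) and B = sum(x a^T) of the
    codes a and training samples x seen so far."""
    for _ in range(n_iter):
        for j in range(D.shape[1]):
            if A[j, j] < 1e-12:
                continue  # atom never used; leave it unchanged
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(1.0, np.linalg.norm(u))  # keep atoms bounded
    return D

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50))   # new batch of training shapes (flattened)
D = rng.normal(size=(8, 5))    # existing dictionary (e.g. from K-SVD)
codes = np.linalg.lstsq(D, X, rcond=None)[0]  # stand-in for sparse coding
A, B = codes @ codes.T, X @ codes.T
D = update_dictionary(D, A, B)
print(D.shape)  # (8, 5)
```

Because only the running statistics A and B are kept, the dictionary can absorb new training shapes without revisiting old ones, which is the scalability argument the abstract makes.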

  20. Suicide prevention by online support groups: an action theory-based model of emotional first aid.

    PubMed

    Gilat, Itzhak; Shahar, Golan

    2009-01-01

    In the last two decades, online support groups have become a valuable source of help for individuals in suicidal crisis. Their attractiveness is attributed to features that enhance help-seeking and self-disclosure such as availability, anonymity, and use of written communication. However, online support groups also suffer from limitations and potential risks as agents of suicide prevention. The Israeli Association for Emotional First Aid (ERAN) has developed a practical model that seeks to maximize the benefits and minimize the risks of online suicide prevention. The model applies the Action Theory concepts whereby individuals shape their own environment. The present paper presents the model, which is based on an online support group combined with personal chat and a telephonic help line. The online support group is moderated by paraprofessionals who function as both process regulators and support providers. The principles and practice of the model are described, the theoretical rationale is presented, and directions for future research are suggested.

  1. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; Letsche, Todd A.

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  2. Client/Server Architecture Promises Radical Changes.

    ERIC Educational Resources Information Center

    Freeman, Grey; York, Jerry

    1991-01-01

    This article discusses the emergence of the client/server paradigm for the delivery of computer applications, its emergence in response to the proliferation of microcomputers and local area networks, the applicability of the model in academic institutions, and its implications for college campus information technology organizations. (Author/DB)

  3. Designing an Assessment Model for Implementing a Quality Online Degree Program

    ERIC Educational Resources Information Center

    Dobbs, Rita L.; Allen, W. Clayton

    2004-01-01

    This paper will include critical information for administrators and faculty in higher education for developing an assessment model for an online degree program. Determining the assessment process for the program before offering courses will ensure a smooth transition into the online environment so that faculty and administrators will know that the…

  4. Online Synchronous vs. Asynchronous Software Training through the Behavioral Modeling Approach: A Longitudinal Field Experiment

    ERIC Educational Resources Information Center

    Chen, Charlie C.; Shaw, Ruey-shiang

    2006-01-01

    The continued and increasing use of online training raises the question of whether the most effective training methods applied in live instruction will carry over to different online environments in the long run. Behavior Modeling (BM) approach--teaching through demonstration--has been proven as the most effective approach in a face-to-face (F2F)…

  5. Open Online Language Courses: The Multi-Level Model of the Spanish N(ottingham)OOC

    ERIC Educational Resources Information Center

    Goria, Cecilia; Lagares, Manuel

    2015-01-01

    Research into open education has identified a "high number of participants" and "unpredictable mixed abilities" as factors responsible for the relatively weak presence of language Massive Open Online Courses (MOOCs). This contribution presents a model for open online language courses that aims to bridge this gap. The tangible…

  6. Structural Equation Modeling towards Online Learning Readiness, Academic Motivations, and Perceived Learning

    ERIC Educational Resources Information Center

    Horzum, Mehmet Baris; Kaymak, Zeliha Demir; Gungoren, Ozlem Canan

    2015-01-01

    The relationship between online learning readiness, academic motivations, and perceived learning was investigated via structural equation modeling in the research. The population of the research consisted of 750 students who studied using the online learning programs of Sakarya University. 420 of the students who volunteered for the research and…

  7. Talking about Reading as Thinking: Modeling the Hidden Complexities of Online Reading Comprehension

    ERIC Educational Resources Information Center

    Coiro, Julie

    2011-01-01

    This article highlights four cognitive processes key to online reading comprehension and how one might begin to transform existing think-aloud strategy models to encompass the challenges of reading for information on the Internet. Informed by principles of cognitive apprenticeship and an emerging taxonomy of online reading comprehension…

  8. New Learning Models: The Evolution of Online Learning into Innovative K-12 Blended Programs

    ERIC Educational Resources Information Center

    Patrick, Susan

    2011-01-01

    The author traces the growth of K-12 online learning in the United States from its modest genesis in the mid-1990s with 50,000 students to the more than 4 million enrollments today, the fastest scaling ever of any innovation in K-12 education. The evolution from one-size-fits-all online courses to innovative, blended instructional models that are…

  9. Client/server study

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar; Marcus, Robert; Brewster, Stephen

    1995-01-01

    The goal of this project is to find cost-effective and efficient strategies/solutions to integrate existing databases, manage the network, and improve user productivity in a move towards client/server and an Integrated Desktop Environment (IDE) at NASA LeRC. The project consisted of two tasks: (1) data collection, and (2) database development/integration. Under task 1, survey questionnaires and a database were developed, and an investigation of commercially available tools for automated data collection and network management was performed. As requirements evolved, the main focus became task 2, which involved the following subtasks: (1) gathering and analysis of database user requirements; (2) database analysis and design, making recommendations for modifying existing data structures into a relational database or proposing a common interface to access heterogeneous databases (INFOMAN system, CCNS equipment list, CCNS software list, USERMAN, and other databases); (3) establishment of a client/server test bed at Central State University (CSU); (4) investigation of multi-database integration technologies/products for the IDE at NASA LeRC; and (5) development of prototypes using CASE tools (Object/View) for representative scenarios accessing multiple databases and tables in a client/server environment. Both CSU and NASA LeRC have benefited from this project. The CSU team investigated and prototyped cost-effective, practical solutions to facilitate NASA LeRC's move to a more productive environment. CSU students utilized new products and gained skills that could be a great resource for future NASA needs.

  10. Frame architecture for video servers

    NASA Astrophysics Data System (ADS)

    Venkatramani, Chitra; Kienzle, Martin G.

    1999-11-01

    Video is inherently frame-oriented, and most applications, such as commercial video processing, require manipulating video in terms of frames. However, typical video servers treat videos as byte streams and perform random access based on approximate byte offsets supplied by the client. They do not provide a frame- or timecode-oriented API, which is essential for many applications. This paper describes a frame-oriented architecture for video servers and its implementation in the context of IBM's VideoCharger server. The latter part of the paper describes an application that uses the frame architecture and provides fast- and slow-motion scanning capabilities to the server.
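The frame- and timecode-oriented access the paper argues for amounts to resolving timecodes to exact byte offsets on the server side rather than accepting approximate offsets from clients. A minimal sketch, assuming a constant frame rate and constant-size frames (a real server would consult a per-frame index for variable-bitrate video):

```python
FPS = 30                   # assumed constant frame rate
BYTES_PER_FRAME = 15_000   # assumed constant-bitrate encoding

def timecode_to_frame(tc: str, fps: int = FPS) -> int:
    """Map an 'HH:MM:SS:FF' timecode to an absolute frame number."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frame_to_offset(frame: int, frame_size: int = BYTES_PER_FRAME) -> int:
    """A frame-oriented server resolves the frame number to an exact
    byte offset, instead of asking the client to guess one."""
    return frame * frame_size

f = timecode_to_frame("00:01:00:15")
print(f, frame_to_offset(f))  # 1815 27225000
```

Fast- and slow-motion scanning then reduce to stepping the frame number by a rate factor and re-resolving the offset per frame served.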

  11. WebRASP: a server for computing energy scores to assess the accuracy and stability of RNA 3D structures

    PubMed Central

    Norambuena, Tomas; Cares, Jorge F.; Capriotti, Emidio; Melo, Francisco

    2013-01-01

    Summary: The understanding of the biological role of RNA molecules has changed. Although it is widely accepted that RNAs play important regulatory roles without necessarily coding for proteins, the functions of many of these non-coding RNAs are unknown. Thus, determining or modeling the 3D structure of RNA molecules, as well as assessing their accuracy and stability, has become of great importance for characterizing their functional activity. Here, we introduce a new web application, WebRASP, that uses knowledge-based potentials for scoring RNA structures based on distance-dependent pairwise atomic interactions. This web server allows users to upload a structure in PDB format, select several options to visualize the structure, and calculate the energy profile. The server contains online help, tutorials and links to other related resources. We believe this server will be a useful tool for predicting and assessing the quality of RNA 3D structures. Availability and implementation: The web server is available at http://melolab.org/webrasp. It has been tested on the most popular web browsers and requires the Java plugin for Jmol visualization. Contact: fmelo@bio.puc.cl PMID:23929030

  12. PACS image security server

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.

    2004-04-01

    Medical image security in a PACS environment has become a pressing issue as communication of images increasingly extends over open networks, and hospitals are currently hard-pressed by the Health Insurance Portability and Accountability Act (HIPAA) to be HIPAA compliant in ensuring health data security. Other security-related guidelines and technical standards also continue to come to public attention in healthcare. However, there is no infrastructure or systematic method to implement and deploy these standards in a PACS. In this paper, we first review the DICOM Part 15 standard for secure communications of medical images and the HIPAA impacts on PACS security, as well as our previous work on image security. We then outline a security infrastructure in a HIPAA-mandated PACS environment using a dedicated PACS image security server. The server manages its own database of all image security information. It acts as an image authority for checking and certifying the image origin and integrity upon request by a user, as a secure DICOM gateway for outside connections, and also as a PACS operation monitor for HIPAA-supporting information.
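The integrity-certification role of such a security server can be sketched with a keyed digest: record a digest when an image is archived, and re-compute it on request to certify the image is unmodified. The key and data below are placeholders, and all DICOM-specific handling is omitted:

```python
import hashlib
import hmac

SECRET = b"server-side-signing-key"  # hypothetical key held only by the security server

def sign_image(pixel_bytes: bytes) -> str:
    """Record an integrity digest when the image is archived."""
    return hmac.new(SECRET, pixel_bytes, hashlib.sha256).hexdigest()

def verify_image(pixel_bytes: bytes, recorded: str) -> bool:
    """Re-compute the digest on request and compare in constant time."""
    return hmac.compare_digest(sign_image(pixel_bytes), recorded)

image = b"\x00\x7f" * 1024  # stand-in for DICOM pixel data
tag = sign_image(image)
print(verify_image(image, tag))         # True
print(verify_image(image + b"x", tag))  # False: any tampering is detected
```

A production design would use asymmetric signatures (as in DICOM's secure-communication profiles) so that verification does not require sharing the signing secret.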

  13. PEP-FOLD: an updated de novo structure prediction server for both linear and disulfide bonded cyclic peptides.

    PubMed

    Thévenet, Pierre; Shen, Yimin; Maupetit, Julien; Guyon, Frédéric; Derreumaux, Philippe; Tufféry, Pierre

    2012-07-01

    In the context of renewed interest in peptides as therapeutics, it is important to have an on-line resource for 3D structure prediction of peptides with well-defined structures in aqueous solution. We present an updated version of PEP-FOLD allowing the treatment of both linear and disulfide-bonded cyclic peptides of 9-36 amino acids. The server makes it possible to define disulfide bonds and any residue-residue proximity under the guidance of the biologist. Using a benchmark of 34 cyclic peptides with one, two and three disulfide bonds, the best PEP-FOLD models deviate by an average RMS of 2.75 Å from the full NMR structures. Using a benchmark of 37 linear peptides, PEP-FOLD locates lowest-energy conformations deviating by 3 Å RMS from the NMR rigid cores. This evolution of PEP-FOLD comes as a new on-line service superseding the previous server. The server is available at: http://bioserv.rpbs.univ-paris-diderot.fr/PEP-FOLD.

  14. Managing Staff Development for Online Education: A Situated Learning Model.

    ERIC Educational Resources Information Center

    Taylor, Janet A.

    2003-01-01

    Describes the implementation and management of staff development for online education underpinned by the principles of situated learning. Describes technological, human resource, pedagogical, and management initiatives and presents a case study of how a small regional institution changed to being an internationally recognized e-university. (EV)

  15. An Inclusive Approach to Online Learning Environments: Models and Resources

    ERIC Educational Resources Information Center

    Rutherford, Aline Germain; Kerr, Barbara

    2008-01-01

    The impact of ever-increasing numbers of online courses on the demographic composition of classes has meant that the notions of diversity, multiculturality and globalization are now key aspects of curriculum planning. With the internationalization and globalization of education, and faced with rising needs for an increasingly educated and more…

  16. Programmatic, Systematic, Automatic: An Online Course Accessibility Support Model

    ERIC Educational Resources Information Center

    Bastedo, Kathleen; Sugar, Amy; Swenson, Nancy; Vargas, Jessica

    2013-01-01

    Over the past few years, there has been a noticeable increase in the number of requests for online course material accommodations at the University of Central Florida (UCF). In response to these requests, UCF's Center for Distributed Learning (CDL) formed new teams, reevaluated its processes, and initiated a partnership with UCF's…

  17. Interdisciplinary Gerontology Education Online: A Developmental Process Model

    ERIC Educational Resources Information Center

    St. Hill, Halcyon; Edwards, Nancy

    2004-01-01

    Distance education online in gerontology in academic settings is designed to reflect content relevant to gerontology practices, academic standards, teaching strategies, and technology that embrace content delivery while enhancing learning. A balance with community services and needs for older adult populations, academic integrity, stakeholders,…

  18. Designing Online Workshops: Using an Experiential Learning Model

    ERIC Educational Resources Information Center

    Lynch, Sherry K.; Kogan, Lori R.

    2004-01-01

    This article describes 4 online workshops designed to assist college students with improving their time management, textbook reading, memory and concentration, and overall academic performance. These workshops were created to work equally well with imaginative, analytic, common-sense, and dynamic learners. Positive student feedback indicated that…

  19. Free Textbooks: An Online Company Tries a Controversial Publishing Model

    ERIC Educational Resources Information Center

    Rampell, Catherine

    2008-01-01

    The high prices of textbooks, which are approaching $1,000 per year for an average student, have those students and their professors crying for mercy. Flat World Knowledge, a new digital-textbook publisher, has the answer to this problem. Starting next year, the publisher will offer online, peer-reviewed, interactive, user-editable textbooks, free…

  20. Enhanced Online Access Requires Redesigned Delivery Options and Cost Models

    ERIC Educational Resources Information Center

    Stern, David

    2007-01-01

    Rapidly developing online information technologies provide dramatically new capabilities and opportunities, and place new responsibilities on all involved to recreate networks for scholarly communication. Collaborations between all segments of the information network are made possible and necessary as we attempt to find a balanced and mutually…

  1. A Model for Social Presence in Online Classrooms

    ERIC Educational Resources Information Center

    Wei, Chun-Wang; Chen, Nian-Shing; Kinshuk,

    2012-01-01

    It is now possible to create flexible learning environments without time and distance barriers on the internet. However, research has shown that learners typically experience isolation and alienation in online learning environments. These negative experiences can be reduced by enhancing social presence. In order to better facilitate the perceived…

  2. Trayectorias: A New Model for Online Task-Based Learning

    ERIC Educational Resources Information Center

    Ros i Sole, Cristina; Mardomingo, Raquel

    2004-01-01

    This paper discusses a framework for designing online tasks that capitalizes on the possibilities that the Internet and the Web offer for language learning. To present such a framework, we draw from constructivist theories (Brooks and Brooks, 1993) and their application to educational technology (Newby, Stepich, Lehman and Russell, 1996; Jonassen,…

  3. A new hybrid model for exploring the adoption of online nursing courses.

    PubMed

    Tung, Feng-Cheng; Chang, Su-Chao

    2008-04-01

    With the advancement in educational technology and internet access in recent years, nursing academia is searching for ways to widen nurses' educational opportunities, and online nursing courses are drawing more attention as well. Online nursing courses are very important e-learning tools for nursing students. This research combines innovation diffusion theory and the technology acceptance model, and adds two research variables, perceived financial cost and computer self-efficacy, to propose a new hybrid technology acceptance model for studying nursing students' behavioral intentions to use online nursing courses. Based on 267 questionnaires collected from six universities in Taiwan, the results strongly support this new hybrid technology acceptance model in predicting nursing students' behavioral intentions to use online nursing courses. The research finds that compatibility, perceived usefulness, perceived ease of use, perceived financial cost and computer self-efficacy are critical factors for nursing students' behavioral intentions to use online nursing courses. By explaining these behavioral intentions from a user's perspective, the findings help in developing more user-friendly online nursing courses and also provide insight into the best way to promote new e-learning tools for nursing students. Compatibility is found to be the most important variable affecting the behavioral intention to use online nursing courses.

  4. Addressing Diverse Learner Preferences and Intelligences with Emerging Technologies: Matching Models to Online Opportunities

    ERIC Educational Resources Information Center

    Zhang, Ke; Bonk, Curtis J.

    2008-01-01

    This paper critically reviews various learning preferences and human intelligence theories and models with a particular focus on the implications for online learning. It highlights a few key models, Gardner's multiple intelligences, Fleming and Mills' VARK model, Honey and Mumford's Learning Styles, and Kolb's Experiential Learning Model, and…

  5. NEOS server 4.0 administrative guide.

    SciTech Connect

    Dolan, E. D.

    2001-07-13

    The NEOS Server 4.0 provides a general Internet-based client/server as a link between users and software applications. The administrative guide covers the fundamental principles behind the operation of the NEOS Server, installation and troubleshooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver administrators such as maintaining security, providing usage instructions, and enforcing reasonable restrictions on jobs. The administrative guide is intended both as an introduction to the NEOS Server and as a reference for use when running the Server.

  6. Purge Lock Server

    2012-08-21

    The software provides a simple web API to allow users to request a time window during which a file will not be removed from cache. HPSS provides the concept of a "purge lock": when a purge lock is set on a file, the file will not be removed from disk and will not enter tape-only state. Many network file protocols assume a file is on disk, so it is good to purge lock a file before transferring it using one of those protocols. HPSS's purge lock system is very coarse grained, though: a file is either purge locked or not. Nothing enforces quotas, timely unlocking of purge locks, or management of the races inherent in multiple users wanting to lock/unlock the same file. The Purge Lock Server lets you, through a simple REST API, specify a list of files to purge lock and an expire time, and the system will ensure things happen properly.
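    The lock-with-expiry semantics described above can be sketched as a small in-memory registry. This is an illustrative sketch under assumed semantics, not HPSS's or the Purge Lock Server's actual code; the class and method names are hypothetical:

    ```python
    import time

    class PurgeLockRegistry:
        """Minimal sketch of timed purge locks (hypothetical API)."""

        def __init__(self):
            self._locks = {}  # path -> absolute expiry timestamp

        def lock(self, paths, expire_at):
            """Lock each file until the given expiry time."""
            for p in paths:
                # Re-locking an already locked file keeps the later expiry.
                self._locks[p] = max(self._locks.get(p, 0.0), expire_at)

        def is_locked(self, path, now=None):
            """True while the lock is held; expired locks are dropped lazily."""
            now = time.time() if now is None else now
            expiry = self._locks.get(path)
            if expiry is None or expiry <= now:
                self._locks.pop(path, None)
                return False
            return True

    reg = PurgeLockRegistry()
    reg.lock(["/hpss/data/run42.dat"], expire_at=1000.0)
    print(reg.is_locked("/hpss/data/run42.dat", now=500.0))   # still locked
    print(reg.is_locked("/hpss/data/run42.dat", now=1500.0))  # expired
    ```

    Timed expiry is what addresses the "timely unlocking" gap the abstract mentions: a lock cannot outlive its window.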

  7. 78 FR 48472 - Hewlett Packard Company; Enterprise Storage Servers and Networking (Tape) Group; Formerly D/B/A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-08

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF LABOR Employment and Training Administration Hewlett Packard Company; Enterprise Storage Servers and Networking..., Enterprise Storage Servers and Networking (Tape) Group (formerly d/b/a Enterprise Group, HP Storage,...

  8. SPAM Detection Server Model Inspired by the Dionaea Muscipula Closure Mechanism: An Alternative Approach for Natural Computing Challenges

    NASA Astrophysics Data System (ADS)

    de Souza Pereira Lopes, Rodrigo Arthur; Carrari R. Lopes, Lia; Mustaro, Pollyana Notargiacomo

    Natural computing has been a rapidly evolving field in the last few years. Focusing on the interesting behaviours offered by nature and biological processes, this work applies the metaphor of the carnivorous plant "Dionaea muscipula" as a complementary defence system against a recurring problem with the internet and e-mail: spam. The metaphor model presents relevant aspects for further implementation and debate.

  9. PROMALS web server for accurate multiple protein sequence alignments.

    PubMed

    Pei, Jimin; Kim, Bong-Hyun; Tang, Ming; Grishin, Nick V

    2007-07-01

    Multiple sequence alignments are essential in homology inference, structure modeling, functional prediction and phylogenetic analysis. We developed a web server that constructs multiple protein sequence alignments using PROMALS, a progressive method that improves alignment quality by using additional homologs from PSI-BLAST searches and secondary structure predictions from PSIPRED. PROMALS shows higher alignment accuracy than other advanced methods, such as MUMMALS, ProbCons, MAFFT and SPEM. The PROMALS web server takes FASTA format protein sequences as input. The output includes a colored alignment augmented with information about sequence grouping, predicted secondary structures and positional conservation. The PROMALS web server is available at: http://prodata.swmed.edu/promals/ PMID:17452345
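    The PROMALS web server takes FASTA-format protein sequences as input. A minimal parser for preparing such input might look like the following; this is an illustrative sketch, not part of the PROMALS code base:

    ```python
    def parse_fasta(text):
        """Parse FASTA-formatted text into (header, sequence) pairs."""
        records, header, seq = [], None, []
        for line in text.splitlines():
            line = line.strip()
            if not line:
                continue
            if line.startswith(">"):
                if header is not None:
                    records.append((header, "".join(seq)))
                header, seq = line[1:], []
            else:
                seq.append(line)  # sequences may wrap over several lines
        if header is not None:
            records.append((header, "".join(seq)))
        return records

    demo = """>seq1
    MKTAYIAKQR
    QISFVKSHFS
    >seq2
    MKTAYIDKQR
    """
    print(parse_fasta(demo))
    ```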

  10. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    NASA Astrophysics Data System (ADS)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014; Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury; other Solar System bodies are going to be progressively added. Derived parameters such as summary products and indices can be produced through WCPS queries, as can derived imagery colour-combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind virtual globe (e.g. Hogan, 2011) as its visualisation engine, and the array database Rasdaman Community Edition as its core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible at http://planetserver.eu, and its code base is going to be made available on GitHub.
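    As a sketch of how a scientific question becomes a WCPS query, the following assembles a per-pixel band-ratio request over a coverage. The coverage and band names are hypothetical, and real PlanetServer queries may differ in detail:

    ```python
    def band_ratio_query(coverage, band_a, band_b):
        """Build a WCPS query computing a per-pixel band ratio, encoded as PNG."""
        return (
            f"for c in ({coverage}) "
            f"return encode((float) c.{band_a} / c.{band_b}, \"png\")"
        )

    # Hypothetical hyperspectral coverage and band identifiers.
    q = band_ratio_query("mars_crism_cube", "band_233", "band_78")
    print(q)
    ```

    The same pattern extends to summary products and spectral indices: the arithmetic over bands changes, while the `for ... return encode(...)` frame stays the same.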

  11. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  12. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of the OGC WMS 1.1.1 as a fastCGI client, using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are done on a back server. The server has explicit support for a colocated tiled WMS, including rapid response of black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use, and depending on the storage format used, it has better performance than other available implementations. The WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of the GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
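    A WMS 1.1.1 GetMap request of the kind this server answers is a plain key-value URL. The sketch below assembles one; the endpoint and layer names are hypothetical:

    ```python
    from urllib.parse import urlencode

    def getmap_url(base, layers, bbox, width, height, fmt="image/jpeg"):
        """Assemble a WMS 1.1.1 GetMap request URL."""
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.1.1",
            "REQUEST": "GetMap",
            "LAYERS": ",".join(layers),          # comma-separated layer list
            "SRS": "EPSG:4326",                  # WMS 1.1.1 uses SRS, not CRS
            "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
            "WIDTH": width,
            "HEIGHT": height,
            "FORMAT": fmt,                       # e.g. image/jpeg or image/png
        }
        return f"{base}?{urlencode(params)}"

    url = getmap_url("https://example.org/wms", ["global_mosaic"],
                     (-180, -90, 180, 90), 512, 256)
    print(url)
    ```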

  13. Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill.

    PubMed

    Raftery, Adrian E; Kárný, Miroslav; Ettler, Pavel

    2010-02-01

    We consider the problem of online prediction when it is uncertain what the best prediction model to use is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the "correct" model to vary over time. The state space and Markov chain models are both specified in terms of forgetting, leading to a highly parsimonious representation. As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging. The method is applied to the problem of predicting the output strip thickness for a cold rolling mill, where the output is measured with a time delay. We found that when only a small number of physically motivated models were considered and one was clearly best, the method quickly converged to the best model, and the cost of model uncertainty was small; indeed DMA performed slightly better than the best physical model. When model uncertainty and the number of models considered were large, our method ensured that the penalty for model uncertainty was small. At the beginning of the process, when control is most difficult, we found that DMA over a large model space led to better predictions than the single best performing physically motivated model. We also applied the method to several simulated examples, and found that it recovered both constant and time-varying regression parameters and model specifications quite well.
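    The forgetting-based update at the heart of DMA can be sketched in a few lines: each step first flattens the model weights toward uniformity (the Markov-chain forgetting step), then reweights them by each model's predictive likelihood of the new observation (the Bayes step). This is a simplified illustration of the idea, not the authors' implementation:

    ```python
    def dma_update(weights, likelihoods, alpha=0.99):
        """One Dynamic Model Averaging step.

        weights     -- current model probabilities (sum to 1)
        likelihoods -- predictive density of the new observation under each model
        alpha       -- forgetting factor; alpha = 1 recovers recursive
                       (standard) Bayesian model averaging
        """
        # Forgetting: raise weights to alpha and renormalize, which lets
        # the "correct" model change over time.
        flattened = [w ** alpha for w in weights]
        s = sum(flattened)
        flattened = [w / s for w in flattened]
        # Bayes update with the observed likelihoods.
        posterior = [w * l for w, l in zip(flattened, likelihoods)]
        s = sum(posterior)
        return [w / s for w in posterior]

    w = [0.5, 0.5]
    for lik in [(0.9, 0.2), (0.8, 0.3), (0.7, 0.4)]:
        w = dma_update(w, lik)
    print(w)  # weight shifts toward the better-predicting first model
    ```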

  14. Revision and Validation of a Culturally-Adapted Online Instructional Module Using Edmundson's CAP Model: A DBR Study

    ERIC Educational Resources Information Center

    Tapanes, Marie A.

    2011-01-01

    In the present study, the Cultural Adaptation Process Model was applied to an online module to include adaptations responsive to the online students' culturally-influenced learning styles and preferences. The purpose was to provide the online learners with a variety of course material presentations, where the e-learners had the opportunity to…

  15. Toward a Social Conflict Evolution Model: Examining the Adverse Power of Conflictual Social Interaction in Online Learning

    ERIC Educational Resources Information Center

    Xie, Kui; Miller, Nicole C.; Allison, Justin R.

    2013-01-01

    This case study examined an authentic online learning phenomenon where social conflict, including harsh critique and negative tone, weaved throughout peer-moderated online discussions in an online class. Opening coding and content analysis were performed on 1306 message units and course artifacts. The results revealed that a model of social…

  16. A model of spatial data interoperability on Oracle Spatial

    NASA Astrophysics Data System (ADS)

    Zhao, Qiansheng; Huang, Quanyi; Guo, Jiming; Wen, Renqiang

    2009-10-01

    It has been suggested that the future of GIS data sharing might look like this: each small county or town hosts its own online GIS, and each uses software and a data model selected to best meet its own needs. This paper gives a model based on Oracle Spatial in which, within a local government or enterprise, spatial data is stored centrally with metadata interoperability, enabling organizations to use the proper tool for the job while eliminating complicated data transfers and duplication across the enterprise or between departments. MapInfo and ArcGIS have been made to work together on the same Oracle Spatial database using triggers and stored procedures. For sharing between departments or enterprises, a three-tier solution is given: spatial data server, application server and application client. The application server is a mediation system; this model uses Oracle Application Server in that role, and through it the application client sends WMS or WFS requests and gets the map service for the background application. The three-tier model exposes a GIS portal, an online GIS for external applications, which any client conforming to the WMS or WFS specification can request.

  17. The SDSS data archive server

    SciTech Connect

    Neilsen, Eric H., Jr.; /Fermilab

    2007-10-01

    The Sloan Digital Sky Survey (SDSS) Data Archive Server (DAS) provides public access to data files produced by the SDSS data reduction pipeline. This article discusses challenges in the public distribution of data of this volume and complexity, and how the project addressed them. The SDSS is an astronomical survey covering roughly one quarter of the night sky. It contains images of this area, a catalog of almost 300 million objects detected in those images, and spectra of more than a million of these objects. The catalog of objects includes a variety of data on each object. These data include not only basic information but also fit parameters for a variety of models, classifications by sophisticated object classification algorithms, statistical parameters, and more. If the survey contains the spectrum of an object, the catalog includes a variety of other parameters derived from its spectrum. Data processing and catalog generation, described more completely in the SDSS Early Data Release paper, consist of several stages: collection of imaging data, processing of imaging data, selection of spectroscopic targets from catalogs generated from the imaging data, collection of spectroscopic data, processing of spectroscopic data, and loading of processed data into a database. Each of these stages is itself a complex process. For example, the software that processes the imaging data determines and removes some instrumental signatures in the raw images to create 'corrected frames', models the point spread function, models and removes the sky background, detects objects, measures object positions, measures the radial profile and other morphological parameters for each object, measures the brightness of each object using a variety of methods, classifies the objects, calibrates the brightness measurements against survey standards, and produces a variety of quality assurance plots and diagnostic tables. The complexity of the spectroscopic data

  18. Modelling unsupervised online-learning of artificial grammars: linking implicit and statistical learning.

    PubMed

    Rohrmeier, Martin A; Cross, Ian

    2014-07-01

    Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies.
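    The alternating judge-then-learn evaluation described above can be sketched for an n-gram model as follows. This is an illustrative bigram version; the paper's models and scoring are more elaborate:

    ```python
    from collections import Counter

    class OnlineBigramLearner:
        """Sketch of unsupervised online learning with a bigram (n=2) model."""

        def __init__(self):
            self.bigrams = Counter()

        def familiarity(self, seq):
            """Mean count of the sequence's bigrams under the current model."""
            pairs = list(zip(seq, seq[1:]))
            return sum(self.bigrams[p] for p in pairs) / max(len(pairs), 1)

        def judge_then_learn(self, seq):
            """Score the stimulus, then learn from it, as in the evaluation
            framework: judgment precedes learning of the same item."""
            score = self.familiarity(seq)
            self.bigrams.update(zip(seq, seq[1:]))
            return score

    m = OnlineBigramLearner()
    print(m.judge_then_learn("XVXJ"))  # 0.0 on first exposure
    print(m.judge_then_learn("XVXJ"))  # higher after learning from the item
    ```

    Note how ungrammatical strings that share many chunks with grammatical ones would also accrue familiarity, which is exactly the online-learning effect the paper models.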

  19. On-line and Model-based Approaches to the Visual Control of Action

    PubMed Central

    Zhao, Huaiyong; Warren, William H.

    2014-01-01

    Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the control of locomotion and manual actions, which indicates that (a) an internal world model is neither sufficient nor necessary to control action at normal levels of performance; (b) current visual information is necessary and sufficient to control action at normal levels; and (c) under certain conditions (e.g. occlusion) action is controlled by less accurate, simple strategies such as heuristics, visual-motor mappings, or spatial memory. We conclude that the strong model-based hypothesis is not sustainable. Action is normally controlled on-line when current information is available, consistent with the strong on-line control hypothesis. In exceptional circumstances, action is controlled by weak, context-specific, off-line strategies. This hybrid solution is comprehensive, parsimonious, and able to account for a variety of tasks under a range of visual conditions. PMID:25454700

  20. Supplemental Instruction Online: As Effective as the Traditional Face-to-Face Model?

    NASA Astrophysics Data System (ADS)

    Hizer, Suzanne E.; Schultz, P. W.; Bray, Richard

    2016-10-01

    Supplemental Instruction (SI) is a well-recognized model of academic assistance with a history of empirical evidence demonstrating increases in student grades and decreases in failure rates across many higher education institutions. However, as college students become more accustomed to learning in online venues, what is not known is whether an SI program offered online could benefit students similarly to SI sessions that occur in face-to-face settings. The in-person (traditional) SI program at California State University San Marcos has demonstrated increases in grades and lower fail rates for courses being supported in science and math. Students enrolled in four biology courses who participated in online SI received increases in academic performance similar to the students in the courses who attended traditional SI. Both the online and traditional SI participating students had higher course grades and lower fail rates as compared to students who did not participate in either form of SI. Self-selection, as measured by past cumulative college grade point average, did not differ between students who attended either form of SI or who did not attend. Student perceptions of online SI were generally positive and appeared to offer an alternative path to receive this valuable academic assistance for some students. Overall, results are promising that the highly effective traditional model can be translated to an online environment.

  1. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
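    The price-performance ratio used above is simply sustained FLOPS divided by system price. A quick check with the C90 figure follows; the price is a placeholder chosen to reproduce the stated 160 FLOPS per dollar, not an actual 1994 list price:

    ```python
    def ppr(flops, price_dollars):
        """Price-performance ratio: sustained FLOPS per dollar."""
        return flops / price_dollars

    # 460 MFLOPS at a hypothetical $2.875M system price -> 160 FLOPS/$.
    print(ppr(460e6, 2.875e6))
    ```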

  2. Using the Constructivist Tridimensional Design Model for Online Continuing Education for Health Care Clinical Faculty

    ERIC Educational Resources Information Center

    Seo, Kay Kyeong-Ju; Engelhard, Chalee

    2014-01-01

    This article presents a new paradigm for continuing education of Clinical Instructors (CIs): the Constructivist Tridimensional (CTD) model for the design of an online curriculum. Based on problem-based learning, self-regulated learning, and adult learning theory, the CTD model was designed to facilitate interactive, collaborative, and authentic…

  3. RISK ASSESSMENT ANALYSES USING EPA'S ON-LINE SITE-SPECIFIC TRANSPORT MODELS AND FIELD DATA

    EPA Science Inventory

    EPA has developed a suite of on-line calculators and transport models to aid in risk assessment for subsurface contamination. The calculators (www.epa.gov/athens/onsite) provide several levels of tools and data. These include tools for generating commonly-used model input param...

  4. The Robust Learning Model (RLM): A Comprehensive Approach to a New Online University

    ERIC Educational Resources Information Center

    Neumann, Yoram; Neumann, Edith F.

    2010-01-01

    This paper outlines the components of the Robust Learning Model (RLM) as a conceptual framework for creating a new online university offering numerous degree programs at all degree levels. The RLM is a multi-factorial model based on the basic belief that successful learning outcomes depend on multiple factors employed together in a holistic…

  5. An online trajectory module (version 1.0) for the nonhydrostatic numerical weather prediction model COSMO

    NASA Astrophysics Data System (ADS)

    Miltenberger, A. K.; Pfahl, S.; Wernli, H.

    2013-11-01

    A module to calculate online trajectories has been implemented into the nonhydrostatic limited-area weather prediction and climate model COSMO. Whereas offline trajectories are calculated with wind fields from model output, which is typically available every one to six hours, online trajectories use the simulated resolved wind field at every model time step (typically less than a minute) to solve the trajectory equation. As a consequence, online trajectories much better capture the short-term temporal fluctuations of the wind field, which is particularly important for mesoscale flows near topography and convective clouds, and they do not suffer from temporal interpolation errors between model output times. The numerical implementation of online trajectories in the COSMO-model is based upon an established offline trajectory tool and takes full account of the horizontal domain decomposition that is used for parallelization of the COSMO-model. Although a perfect workload balance cannot be achieved for the trajectory module (due to the fact that trajectory positions are not necessarily equally distributed over the model domain), the additional computational costs are found to be fairly small for the high-resolution simulations described in this paper. The computational costs may, however, vary strongly depending on the number of trajectories and trace variables. Various options have been implemented to initialize online trajectories at different locations and times during the model simulation. As a first application of the new COSMO-model module, an Alpine north foehn event in summer 1987 has been simulated with horizontal resolutions of 2.2, 7 and 14 km. It is shown that low-tropospheric trajectories calculated offline with one- to six-hourly wind fields can significantly deviate from trajectories calculated online. Deviations increase with decreasing model grid spacing and are particularly large in regions of deep convection and strong orographic flow distortion. 
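    The trajectory equation the module solves, dx/dt = u(x, t), can be sketched with a forward-Euler integrator; evaluating the wind at every (small) model time step is what distinguishes online from offline trajectories. The scheme below is illustrative only (the COSMO module uses its own integration scheme and the model's decomposed wind field):

    ```python
    def advect(pos, wind, dt, steps):
        """Forward-Euler trajectory integration: x_{n+1} = x_n + u(x_n, t_n) * dt.

        pos   -- initial position vector
        wind  -- callable wind(pos, t) returning a velocity vector
        dt    -- time step in seconds (a model time step for online trajectories)
        steps -- number of integration steps
        """
        x, t = list(pos), 0.0
        path = [tuple(x)]
        for _ in range(steps):
            u = wind(x, t)
            x = [xi + ui * dt for xi, ui in zip(x, u)]
            t += dt
            path.append(tuple(x))
        return path

    # Constant 10 m/s zonal wind, 60 s model time step, 3 steps.
    print(advect((0.0, 0.0), lambda x, t: (10.0, 0.0), 60.0, 3))
    ```

    With hourly offline wind fields, `dt` would be 3600 s and short-lived wind fluctuations between outputs would be invisible, which is the deviation the paper quantifies.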

  6. Modeling the Biodegradability of Chemical Compounds Using the Online CHEmical Modeling Environment (OCHEM).

    PubMed

    Vorberg, Susann; Tetko, Igor V

    2014-01-01

    Biodegradability describes the capacity of substances to be mineralized by free-living bacteria. It is a crucial property in estimating a compound's long-term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660.
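    One common way to form a consensus model is to average the probability outputs of the individual models; OCHEM's exact aggregation may differ, so treat this as an illustrative sketch:

    ```python
    def consensus_predict(probs, threshold=0.5):
        """Average the class probabilities from several classifiers and
        threshold the mean to get a 'readily biodegradable' label."""
        p = round(sum(probs) / len(probs), 6)
        return p, p >= threshold

    # Hypothetical outputs from three models built on different descriptor sets.
    print(consensus_predict([0.8, 0.6, 0.7]))  # (0.7, True)
    ```

    Averaging also yields a natural per-compound accuracy estimate: the spread of the individual predictions signals how much the models disagree.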

  7. Online coupled regional meteorology-chemistry models in Europe: current status and prospects

    NASA Astrophysics Data System (ADS)

    Baklanov, A.; Schluenzen, K. H.; Suppan, P.; Baldasano, J.; Brunner, D.; Aksoyoglu, S.; Carmichael, G.; Douros, J.; Flemming, J.; Forkel, R.; Galmarini, S.; Gauss, M.; Grell, G.; Hirtl, M.; Joffre, S.; Jorba, O.; Kaas, E.; Kaasik, M.; Kallos, G.; Kong, X.; Korsholm, U.; Kurganskiy, A.; Kushta, J.; Lohmann, U.; Mahura, A.; Manders-Groot, A.; Maurizi, A.; Moussiopoulos, N.; Rao, S. T.; Savage, N.; Seigneur, C.; Sokhi, R.; Solazzo, E.; Solomos, S.; Sørensen, B.; Tsegas, G.; Vignati, E.; Vogel, B.; Zhang, Y.

    2013-05-01

    The simulation of the coupled evolution of atmospheric dynamics, pollutant transport, chemical reactions and atmospheric composition is one of the most challenging tasks in environmental modelling, climate change studies, and weather forecasting for the next decades as they all involve strongly integrated processes. Weather strongly influences air quality (AQ) and atmospheric transport of hazardous materials, while atmospheric composition can influence both weather and climate by directly modifying the atmospheric radiation budget or indirectly affecting cloud formation. Until recently, however, due to the scientific complexities and lack of computational power, atmospheric chemistry and weather forecasting have developed as separate disciplines, leading to the development of separate modelling systems that are only loosely coupled. The continuous increase in computer power has now reached a stage that enables us to perform online coupling of regional meteorological models with atmospheric chemical transport models. The focus on integrated systems is timely, since recent research has shown that meteorology and chemistry feedbacks are important in the context of many research areas and applications, including numerical weather prediction (NWP), AQ forecasting as well as climate and Earth system modelling. However, the relative importance of online integration and its priorities, requirements and levels of detail necessary for representing different processes and feedbacks can greatly vary for these related communities: (i) NWP, (ii) AQ forecasting and assessments, (iii) climate and earth system modelling. Additional applications are likely to benefit from online modelling, e.g.: simulation of volcanic ash or forest fire plumes, pollen warnings, dust storms, oil/gas fires, geo-engineering tests involving changes in the radiation balance. The COST Action ES1004 - European framework for online integrated air quality and meteorology modelling (EuMetChem) - aims at

  8. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    SciTech Connect

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
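    The efficiency metric defined above is the average compute rate divided by the average power draw. A minimal sketch, with hypothetical benchmark numbers:

    ```python
    def efficiency(compute_rate, power_watts):
        """Server efficiency: average compute rate (operations per second)
        divided by average energy consumption rate (watts)."""
        return compute_rate / power_watts

    # Hypothetical figures for two servers under the same benchmark load.
    print(efficiency(5000.0, 250.0))  # 20.0 ops/s per watt
    print(efficiency(5200.0, 300.0))  # lower efficiency despite higher throughput
    ```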

  9. ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL

    EPA Science Inventory

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  10. ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL

    EPA Science Inventory

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  11. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  12. Using the Community of Inquiry Model to Investigate Students' Knowledge Construction in Asynchronous Online Discussions

    ERIC Educational Resources Information Center

    Liu, Chien-Jen; Yang, Shu Ching

    2014-01-01

    This study used the Community of Inquiry (CoI) model proposed by Garrison to investigate students' level of knowledge construction in asynchronous discussions. The participants included 36 senior students (27 males) majoring in information management. The students attended 18 weeks of an online information ethics course. In this study, four types…

  13. Mentoring Professors: A Model for Developing Quality Online Instructors and Courses in Higher Education

    ERIC Educational Resources Information Center

    Barczyk, Casimir; Buckenmeyer, Janet; Feldman, Lori

    2010-01-01

    This article presents a four-stage model for mentoring faculty in higher education to deliver high quality online instruction. It provides a timeline that shows the stages of program implementation. Known as the Distance Education Mentoring Program, its major outcomes include certified instructors, student achievement, and the attainment of a…

  14. Online Discussion and College Student Learning: Toward a Model of Influence

    ERIC Educational Resources Information Center

    Johnson, Genevieve M.; Howell, Andrew J.; Code, Jillianne R.

    2005-01-01

    As technology revolutionizes instruction, conceptual models of influence are necessary to guide implementation and evaluation of specific applications such as online peer discussion. Students in an educational psychology course analyzed five case studies that applied and integrated course content. Some students (n= 42) used "WebCT Discussions" to…

  15. Assessing Readiness for Online Education--Research Models for Identifying Students at Risk

    ERIC Educational Resources Information Center

    Wladis, Claire; Conway, Katherine M.; Hachey, Alyse C.

    2016-01-01

    This study explored the interaction between student characteristics and the online environment in predicting course performance and subsequent college persistence among students in a large urban U.S. university system. Multilevel modeling, propensity score matching, and the KHB decomposition method were used. The most consistent pattern observed…

  16. Taking the Epistemic Step: Toward a Model of On-Line Access to Conversational Implicatures

    ERIC Educational Resources Information Center

    Breheny, Richard; Ferguson, Heather J.; Katsos, Napoleon

    2013-01-01

    There is a growing body of evidence showing that conversational implicatures are rapidly accessed in incremental utterance interpretation. To date, studies showing incremental access have focussed on implicatures related to linguistic triggers, such as "some" and "or". We discuss three kinds of on-line model that can account for this data. A model…

  17. Analysis of Feedback Processes in Online Group Interaction: A Methodological Model

    ERIC Educational Resources Information Center

    Espasa, Anna; Guasch, Teresa; Alvarez, Ibis M.

    2013-01-01

    The aim of this article is to present a methodological model to analyze students' group interaction to improve their essays in online learning environments, based on asynchronous and written communication. In these environments teacher and student scaffolds for discussion are essential to promote interaction. One of these scaffolds can be the…

  18. Phenomenological Study of Business Models Used to Scale Online Enrollment at Institutions of Higher Education

    ERIC Educational Resources Information Center

    Williams, Dana E.

    2012-01-01

    The purpose of this qualitative phenomenological study was to explore factors for selecting a business model for scaling online enrollment by institutions of higher education. The goal was to explore the lived experiences of academic industry experts involved in the selection process. The research question for this study was: What were the lived…

  19. The Development of a Content Analysis Model for Assessing Students' Cognitive Learning in Asynchronous Online Discussions

    ERIC Educational Resources Information Center

    Yang, Dazhi; Richardson, Jennifer C.; French, Brian F.; Lehman, James D.

    2011-01-01

    The purpose of this study was to develop and validate a content analysis model for assessing students' cognitive learning in asynchronous online discussions. It adopted a fully mixed methods design, in which qualitative and quantitative methods were employed sequentially for data analysis and interpretation. Specifically, the design was a…

  20. A Distributed Model for Managing Academic Staff in an International Online Academic Programme

    ERIC Educational Resources Information Center

    Kalman, Yoram M.; Leng, Paul H.

    2007-01-01

    Online delivery of programmes of Higher Education typically involves a distributed community of students interacting with a single university site, at which the teachers, learning resources and administration of the programme are located. The alternative model, of a fully "Virtual University", which assumes no physical campus, poses problems of…

  1. Creating and Testing a Model for Tutors and Participants to Support the Collaborative Construction of Knowledge Online

    ERIC Educational Resources Information Center

    Seddon, Kathy; Postlethwaite, Keith

    2007-01-01

    This paper describes the construction and testing of a model designed to inform contributors to online collaborative dialogues about the nature of their contribution, and to guide the input from tutors who facilitate these dialogues. In particular, the model was designed to assist reflection on learning behaviours in online dialogues by…

  2. A decision-making process model of young online shoppers.

    PubMed

    Lin, Chin-Feng; Wang, Hui-Fang

    2008-12-01

    Based on the concepts of brand equity, means-end chain, and Web site trust, this study proposes a novel model called the consumption decision-making process of adolescents (CDMPA) to understand adolescents' Internet consumption habits and behavioral intention toward particular sporting goods. The findings of the CDMPA model can help marketers understand adolescents' consumption preferences and habits for developing effective Internet marketing strategies.

  3. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success. PMID:18037727
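The abstract frames server allocation as analogous to honeybee forager allocation among flower patches. As a toy sketch of the general idea only (this is not the authors' algorithm): each service advertises in proportion to its share of recent revenue, and a fraction of servers drifts toward the more profitable service each round. The function names and parameters are invented for illustration:

```python
# Toy sketch loosely inspired by the honeybee forager-allocation analogy
# described above. NOT the authors' algorithm: servers move toward the more
# profitable service in proportion to its revenue share each round.

def reallocate(allocation, revenue_per_request, demand, switch_rate=0.1):
    """One reallocation round. allocation maps service -> server count."""
    # Profitability signal per service (revenue weighted by request demand).
    profit = {s: revenue_per_request[s] * demand[s] for s in allocation}
    total = sum(profit.values()) or 1.0
    n_servers = sum(allocation.values())
    new_alloc = {}
    for s in allocation:
        # Each service's target is its share of total profit; move only a
        # fraction (switch_rate) of the gap to model reallocation cost.
        target = profit[s] / total * n_servers
        new_alloc[s] = allocation[s] + int(round(switch_rate * (target - allocation[s])))
    return new_alloc

alloc = {"web": 50, "video": 50}
alloc = reallocate(alloc, {"web": 1.0, "video": 3.0}, {"web": 100, "video": 100})
print(alloc)  # "video" gains servers at "web"'s expense
```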

  5. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    SciTech Connect

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    2015-01-01

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  6. Exploring the Earth System through online interactive models

    NASA Astrophysics Data System (ADS)

    Coogan, L. A.

    2013-12-01

    Upper-level Earth Science students commonly have a strong background of mathematical training from Math courses; however, their ability to use mathematical models to solve Earth Science problems is often limited. Their difficulty comes, in part, from the nature of the subject matter. There is a large body of background 'conceptual' and 'observational' understanding and knowledge required in the Earth Sciences before in-depth quantification becomes useful. For example, it is difficult to answer questions about geological processes until you can identify minerals and rocks and understand the general geodynamic implications of their associations. However, science is fundamentally quantitative. To become scientists, students have to translate their conceptual understanding into quantifiable models. Thus, it is desirable for students to become comfortable with using mathematical models to test hypotheses. With the aim of helping to bridge the gap between conceptual understanding and quantification, I have started to build an interactive teaching website based around quantitative models of Earth System processes. The site is aimed at upper-level undergraduate students and spans a range of topics that will continue to grow as time allows. The mathematical models are all built for the students, allowing them to spend their time thinking about how the 'model world' changes in response to their manipulation of the input variables. The web site is divided into broad topics or chapters (Background, Solid Earth, Ocean and Atmosphere, Earth history); within each chapter there are different subtopics (e.g. Solid Earth: Core, Mantle, Crust), each with individual webpages. Each webpage, or topic, starts with an introduction, followed by an interactive model whose inputs the students control with sliders while watching how the results change. This interaction between student and model is guided by a series of multiple choice questions that

  7. Creating a GIS data server on the World Wide Web: The GISST example

    SciTech Connect

    Pace, P.J.; Evers, T.K.

    1996-01-01

    In an effort to facilitate user access to Geographic Information Systems (GIS) data, the GIS and Computer Modeling Group from the Computational Physics and Engineering Division at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee (TN), has developed a World Wide Web server named GISST. The server incorporates a highly interactive and dynamic forms-based interface to browse and download a variety of GIS data types. This paper describes the server's design considerations, development, resulting implementation and future enhancements.

  8. Hybrid metrology implementation: server approach

    NASA Astrophysics Data System (ADS)

    Osorio, Carmen; Timoney, Padraig; Vaid, Alok; Elia, Alex; Kang, Charles; Bozdog, Cornel; Yellai, Naren; Grubner, Eyal; Ikegami, Toru; Ikeno, Masahiko

    2015-03-01

    Hybrid metrology (HM) is the practice of combining measurements from multiple toolset types in order to enable or improve metrology for advanced structures. HM is implemented in two phases: Phase-1 includes readiness of the infrastructure to transfer processed data from the first toolset to the second. Phase-2 infrastructure allows simultaneous transfer and optimization of raw data between toolsets such as spectra, images, traces - co-optimization. We discuss the extension of Phase-1 to include direct high-bandwidth communication between toolsets using a hybrid server, enabling seamless fab deployment and further laying the groundwork for Phase-2 high volume manufacturing (HVM) implementation. An example of the communication protocol shows the information that can be used by the hybrid server, differentiating its capabilities from that of a host-based approach. We demonstrate qualification and production implementation of the hybrid server approach using CD-SEM and OCD toolsets for complex 20nm and 14nm applications. Finally we discuss the roadmap for Phase-2 HM implementation through use of the hybrid server.

  9. Peer Assessment with Online Tools to Improve Student Modeling

    ERIC Educational Resources Information Center

    Atkins, Leslie J.

    2012-01-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to…

  10. Towards a Social Networks Model for Online Learning & Performance

    ERIC Educational Resources Information Center

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  11. Understanding the Effectiveness of Online Peer Assessment: A Path Model

    ERIC Educational Resources Information Center

    Lu, Jingyan; Zhang, Zhidong

    2012-01-01

    Peer assessment has been implemented in schools as both a learning tool and an assessment tool. Earlier studies have explored the effectiveness of peer assessment from different perspectives, such as domain knowledge and skills, peer assessment skills, and attitude changes. However, there is no holistic model describing the effects of cognitive…

  12. Promoting Continuous Quality Improvement in Online Teaching: The META Model

    ERIC Educational Resources Information Center

    Dittmar, Eileen; McCracken, Holly

    2012-01-01

    Experienced e-learning faculty members share strategies for implementing a comprehensive postsecondary faculty development program essential to continuous improvement of instructional skills. The high-impact META Model (centered around Mentoring, Engagement, Technology, and Assessment) promotes information sharing and content creation, and fosters…

  13. Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report

    NASA Technical Reports Server (NTRS)

    Lee, Gordon

    1993-01-01

    The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. To that end, an adaptive fuzzy logic controller was developed that requires neither a model structure nor parameter constraints for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to traditional PID controllers, and software has been introduced to aid in the development of fuzzy logic rule-bases. The advantages of fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base, and the fact that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that must be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed that requires no system model or model structure. The rule-base is defined to approximate a state-feedback controller, while a second fuzzy logic algorithm varies the parameters of the defining controller on-line. Results indicate the approach is viable for on-line adaptive control of systems whose models are too complex or uncertain for more classical control techniques.

  14. Providing web servers and training in Bioinformatics: 2010 update on the Bioinformatics Links Directory.

    PubMed

    Brazas, Michelle D; Yamada, Joseph T; Ouellette, B F Francis

    2010-07-01

    The Links Directory at Bioinformatics.ca continues its collaboration with Nucleic Acids Research to jointly publish and compile a freely accessible, online collection of tools, databases and resource materials for bioinformatics and molecular biology research. The July 2010 Web Server issue of Nucleic Acids Research adds an additional 115 web server tools and 7 updates to the directory at http://bioinformatics.ca/links_directory/, bringing the total number of servers listed close to an impressive 1500 links. The Bioinformatics Links Directory represents an excellent community resource for locating bioinformatic tools and databases to aid one's research, and in this context bioinformatic education needs and initiatives are discussed. A complete list of all links featured in this Nucleic Acids Research 2010 Web Server issue can be accessed online at http://bioinformatics.ca/links_directory/narweb2010/. The 2010 update of the Bioinformatics Links Directory, which includes the Web Server list and summaries, is also available online at the Nucleic Acids Research website, http://nar.oxfordjournals.org/.

  15. Nuke@ - a nuclear information internet server

    SciTech Connect

    Slone, B.J. III.; Richardson, C.E.

    1994-12-31

    To facilitate Internet communications between nuclear utilities, vendors, agencies, and other interested parties, an Internet server is being established. This server will provide the nuclear industry with its first file-transfer protocol (ftp) connection point, its second mail server, and a potential telnet connection location.

  16. Peer Assessment with Online Tools to Improve Student Modeling

    NASA Astrophysics Data System (ADS)

    Atkins, Leslie J.

    2012-11-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to be an aid to sense-making rather than meeting seemingly arbitrary requirements set by the instructor. By giving students the authority to develop their own models and establish requirements for their diagrams, the sense that these are arbitrary requirements diminishes and students are more likely to see modeling as a sense-making activity. The practice of peer assessment can help students take ownership; however, it can be difficult for instructors to manage. Furthermore, it is not without risk: students can be reluctant to critique their peers, they may view this as the job of the instructor, and there is no guarantee that students will employ greater rigor and precision as a result of peer assessment. In this article, we describe one approach for peer assessment that can establish norms for diagrams in a way that is student driven, where students retain agency and authority in assessing and improving their work. We show that such an approach does indeed improve students' diagrams and abilities to assess their own work, without sacrificing students' authority and agency.

  17. An epidemic model of rumor diffusion in online social networks

    NASA Astrophysics Data System (ADS)

    Cheng, Jun-Jun; Liu, Yun; Shen, Bo; Yuan, Wei-Guo

    2013-01-01

    So far, in some standard rumor spreading models, the transition probability from ignorants to spreaders is treated as a constant. From a practical perspective, however, whether an individual is infected by a neighboring spreader depends greatly on the trustworthiness of the tie between them. To address this, we introduce a stochastic epidemic model of rumor diffusion in which the infection probability is defined as a function of the strength of ties. Moreover, we investigate numerically the behavior of the model on a real scale-free social site with exponent γ = 2.2. We verify that the strength of ties plays a critical role in the rumor diffusion process. Specifically, preferentially selecting weak ties does not make the rumor spread faster or wider, but the efficiency of diffusion is greatly affected after removing them. Another significant finding is that the maximum number of spreaders max(S) is very sensitive to the immune probability μ and the decay probability v. We show that a smaller μ or v leads to a larger spreading of the rumor, and their relationship can be described by the function ln(max(S)) = Av + B, in which the intercept B and the slope A can be fitted well as power-law functions of μ. Our findings may offer some useful insights, helping to guide practical applications and reduce the damage caused by rumors.
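The core idea above, an ignorant-to-spreader transition probability that depends on tie strength rather than being constant, can be sketched in a few lines. The network, tie strengths and parameters below are invented for illustration and are not those of the paper:

```python
import random

# Minimal sketch of the idea above: the ignorant -> spreader transition
# probability is a function of the tie strength to the spreader, not a
# constant. Network, weights and parameters are illustrative only.

random.seed(42)

def infect_prob(tie_strength, beta=0.5):
    """Infection probability increasing with tie strength in [0, 1]."""
    return beta * tie_strength

# Edges as (node_u, node_v, tie_strength).
edges = [(0, 1, 0.9), (0, 2, 0.2), (1, 3, 0.7), (2, 3, 0.4), (3, 4, 0.8)]
neighbors = {}
for u, v, w in edges:
    neighbors.setdefault(u, []).append((v, w))
    neighbors.setdefault(v, []).append((u, w))

state = {n: "ignorant" for n in neighbors}
state[0] = "spreader"  # seed the rumor at node 0

# One spreading step: each current spreader tries to infect each neighbor.
for node, s in list(state.items()):
    if s != "spreader":
        continue
    for nb, w in neighbors[node]:
        if state[nb] == "ignorant" and random.random() < infect_prob(w):
            state[nb] = "spreader"

print(sum(1 for s in state.values() if s == "spreader"), "spreaders after one step")
```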

  18. Rumor spreading model with consideration of forgetting mechanism: A case of online blogging LiveJournal

    NASA Astrophysics Data System (ADS)

    Zhao, Laijun; Wang, Qin; Cheng, Jingjing; Chen, Yucheng; Wang, Jiajia; Huang, Wei

    2011-07-01

    Rumor is an important form of social interaction, and its spreading has a significant impact on people’s lives. In the age of the Web, people are using electronic media more frequently than ever before, and blogging has become one of the main online social interactions. It is therefore essential to understand the evolution of rumor spreading on a homogeneous network when the forgetting mechanism of spreaders is taken into account. Here we study a rumor spreading model on an online social blogging platform called LiveJournal. In comparison with the Susceptible-Infected-Removed (SIR) model, we provide a more detailed and realistic description of the rumor spreading process by combining a forgetting mechanism with the SIR model of epidemics. A mathematical model is presented, and numerical solutions of the model are used to analyze the factors affecting rumor spreading, such as the average degree, the forgetting rate and the stifling rate. Our results show that there exists a threshold for the average degree of LiveJournal, above which the influence of the rumor reaches saturation. The forgetting mechanism and the stifling rate exert great influence on rumor spreading in online social networks. These results can guide people’s behavior from both theoretical and practical perspectives.
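The extension described above, an SIR-type rumor model in which spreaders also spontaneously forget, can be sketched numerically. The equations and parameter values below are a plausible illustration of such a model, not the paper's exact formulation:

```python
# Numerical sketch of an SIR-type rumor model with a forgetting mechanism,
# in the spirit of the abstract above: spreaders become stiflers through
# contact (stifling rate sigma) and spontaneously (forgetting rate delta).
# Equations and parameter values are illustrative, not the paper's.

def simulate(k=6, lam=0.8, sigma=0.2, delta=0.05, dt=0.01, steps=5000):
    """k: average degree, lam: spreading rate, sigma: stifling rate,
    delta: forgetting rate. Returns final (ignorant, spreader, stifler)
    densities after forward-Euler integration."""
    i, s, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        di = -k * lam * i * s                              # ignorants infected
        ds = k * lam * i * s - k * sigma * s * (s + r) - delta * s
        dr = k * sigma * s * (s + r) + delta * s           # stiflers gained
        i, s, r = i + di * dt, s + ds * dt, r + dr * dt
    return i, s, r

i, s, r = simulate()
print(f"final densities: ignorant={i:.3f} spreader={s:.3f} stifler={r:.3f}")
```

Because the three rates sum to zero at every step, the densities remain normalized; the spreader density rises, peaks, and then decays to near zero as forgetting and stifling take over.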

  19. SAbPred: a structure-based antibody prediction server

    PubMed Central

    Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.

    2016-01-01

    SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379

  20. Evaluation of surface ozone simulated by the WRF/CMAQ online modelling system

    NASA Astrophysics Data System (ADS)

    Marougianni, Garyfalia; Katragkou, Eleni; Giannaros, Theodoros; Poupkou, Anastasia; Melas, Dimitris; Zanis, Prodromos; Feidas, Haralambos

    2013-04-01

    In this work we evaluate the online model WRF/CMAQ with respect to surface ozone and compare its performance with an off-line modelling system (WRF/CAMx) that has been used operationally by the Aristotle University of Thessaloniki (AUTH) for chemical weather forecasting in the Mediterranean. The online model consists of the mesoscale meteorological model WRF3.3 and the air quality model CMAQ5.0.1, which are coupled at every time-step. The modelling domain covers Europe with a resolution of 30 km (an identical projection for the meteorological and chemistry simulations, to avoid interpolation errors), and CMAQ has 17 vertical layers extending up to 15 km. Anthropogenic emissions are prepared according to the SNAP nomenclature, and biogenic emissions are provided by the Natural Emission Model (NEMO) developed by AUTH. A 2-month simulation covering June-July 2010 is performed with WRF/CMAQ. Average monthly concentration values obtained from the MACCII service (IFS-Mozart) are used as chemical boundary conditions for the simulations; for the WRF simulations, boundary conditions are provided by the ECMWF. The same boundaries, chemical mechanism (CBV), emissions and model setup are used in the off-line WRF/CAMx to allow a more direct comparison of model results. To evaluate the performance of the WRF/CMAQ online model, simulated ozone concentrations are compared against near-surface ozone measurements from the EMEP network. The model has also been validated against the climatic observational database compiled in the framework of the GEOCLIMA project (http://www.geoclima.eu/). In the evaluation analysis, only stations that fulfill the criterion of 75% data availability for near-surface ozone are used. Various statistical metrics are used for the model evaluation, including the correlation coefficient (R), normalized standard deviation (NSD) and modified normalized mean bias (MNMB). The final aim is to investigate whether the state-of-the-art WRF
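The three evaluation metrics named in the abstract can be computed from paired model/observation series using their common definitions: Pearson correlation R, NSD as the ratio of model to observed standard deviation, and MNMB = (2/N) Σ (f − o)/(f + o). The ozone values below are made up for illustration:

```python
import math

# Sketch of the three model-evaluation metrics named above, using their
# common definitions. The ozone values are invented for illustration.

def mean(x):
    return sum(x) / len(x)

def std(x):
    m = mean(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))

def pearson_r(f, o):
    """Pearson correlation coefficient between forecasts f and observations o."""
    mf, mo = mean(f), mean(o)
    cov = sum((a - mf) * (b - mo) for a, b in zip(f, o)) / len(f)
    return cov / (std(f) * std(o))

def nsd(f, o):
    """Normalized standard deviation: sigma_model / sigma_obs."""
    return std(f) / std(o)

def mnmb(f, o):
    """Modified normalized mean bias: (2/N) * sum (f - o) / (f + o)."""
    return 2.0 / len(f) * sum((a - b) / (a + b) for a, b in zip(f, o))

model_o3 = [48.0, 55.0, 61.0, 40.0, 52.0]   # simulated surface ozone, ppb
obs_o3   = [45.0, 58.0, 57.0, 42.0, 50.0]   # observed surface ozone, ppb
print(f"R={pearson_r(model_o3, obs_o3):.3f}  "
      f"NSD={nsd(model_o3, obs_o3):.3f}  MNMB={mnmb(model_o3, obs_o3):+.3f}")
```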

  1. Online and offline peer led models against bullying and cyberbullying.

    PubMed

    Palladino, Benedetta Emanuela; Nocentini, Annalaura; Menesini, Ersilia

    2012-11-01

    The aim of the present study is to describe and evaluate an ongoing peer-led model against bullying and cyberbullying carried out with Italian adolescents. The evaluation of the project used an experimental design consisting of a pre-test and a post-test. Participants were 375 adolescents (20.3% males) enrolled in 9th to 13th grades. The experimental group comprised 231 students with 42 peer educators, and the control group comprised 144 students. Results showed a significant decrease in the experimental group as compared to the control group for all variables except cyberbullying. Moreover, in the experimental group we found a significant increase in adaptive coping strategies such as problem solving and a significant decrease in maladaptive coping strategies such as avoidance; these changes mediate the changes in the behavioural variables. In particular, the decrease in avoidance predicts the decrease in victimization and cybervictimization for peer educators and for the other students in the experimental classes, whereas the increase in problem solving predicts the decrease in cyberbullying only in the peer educators group. Results are discussed in light of recent reviews on the evidence-based efficacy of peer-led models.

  2. Constructs of Student-Centered Online Learning on Learning Satisfaction of a Diverse Online Student Body: A Structural Equation Modeling Approach

    ERIC Educational Resources Information Center

    Ke, Fengfeng; Kwak, Dean

    2013-01-01

    The present study investigated the relationships between constructs of web-based student-centered learning and the learning satisfaction of a diverse online student body. Hypotheses on the constructs of student-centered learning were tested using structural equation modeling. The results indicated that five key constructs of student-centered…

  3. When Disney Meets the Research Park: Metaphors and Models for Engineering an Online Learning Community of Tomorrow

    ERIC Educational Resources Information Center

    Chenail, Ronald J.

    2004-01-01

    It is suggested that educators look to an environment in which qualitative research can be learned in more flexible and creative ways--an online learning community known as the Research Park Online (RPO). This model, based upon Walt Disney's 1966 plan for his "Experimental Prototype Community of Tomorrow" (EPCOT) and university cooperative…

  4. Using Structural Equation Modeling to Validate Online Game Players' Motivations Relative to Self-Concept and Life Adaptation

    ERIC Educational Resources Information Center

    Yang, Shu Ching; Huang, Chiao Ling

    2013-01-01

    This study aimed to validate a systematic instrument to measure online players' motivations for playing online games (MPOG) and examine how the interplay of differential motivations impacts young gamers' self-concept and life adaptation. Confirmatory factor analysis determined that a hierarchical model with a two-factor structure of…

  5. Students' Performance at Tutorial Online of Social Studies through the Use of Learning Cycle Model

    ERIC Educational Resources Information Center

    Farisi, Mohammad Imam

    2014-01-01

    The purpose of the study is to describe students' performance in the tutorial online (tuton) of Social Studies through developing the 5Es--Engage, Explore, Explain, Elaborate, and Evaluate--Learning Cycle Model (the 5Es-LCM). The study, conducted on the UT-Online portal, used the Research and Development (R&D) method. The research subjects consisted…

  6. Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models.

    PubMed

    Stiglic, Gregor; Pajnkihar, Majda

    2015-01-01

    Classical paper-and-pencil risk assessment questionnaires are often accompanied by online versions to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by directly transforming paper risk calculators into online versions while ignoring the more complex and accurate calculations that online calculators make possible. We empirically compare the risk estimation performance of four major diabetes risk calculators and two more advanced predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999-2012 were used to evaluate performance in detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance among the classical paper-and-pencil tests, with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% of persons selected for screening (47% for pre-diabetes). Our results demonstrate a significant difference in performance, with the additional benefit of fewer persons selected for screening, when statistical methods are used. The best overall AUC was obtained for diabetes risk prediction using logistic regression, with an AUC of 0.775 (0.734) and an average of 34% (48%) of persons selected for screening. However, generalized boosted regression models might be a better option from an economic point of view, as their proportion of persons selected for screening, 30% (47%), is significantly lower for diabetes risk assessment than that of logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were
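
    The abstract's central comparison hinges on the AUC. As a hedged illustration (not the paper's code or data), the snippet below implements the rank-based AUC and shows how a finer-grained model score can outrank a coarse questionnaire-style point score; all labels and scores are invented toy values.

```python
# Hedged sketch: illustrates the Area Under the ROC Curve (AUC), the metric
# used in the abstract, on invented toy data. Not the authors' code.

def auc(labels, scores):
    """Rank-based AUC: probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: 1 = diabetes, 0 = no diabetes (illustrative values only).
labels      = [1,   1,   1,    0,   0,   0,    0,   1]
paper_score = [4,   5,   3,    4,   2,   1,    3,   5]     # coarse questionnaire points
model_score = [0.8, 0.9, 0.55, 0.4, 0.1, 0.05, 0.3, 0.7]  # finer-grained probabilities

print(round(auc(labels, paper_score), 3))  # → 0.875 (ties in the coarse score cost AUC)
print(round(auc(labels, model_score), 3))  # → 1.0
```

    Coarse point scales produce many ties, which is one mechanism behind the performance gap the study reports between paper-style and model-based scoring.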

  8. Online, On Demand Access to Coastal Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Long, J.; Bristol, S.; Long, D.; Thompson, S.

    2014-12-01

    Process-based numerical models for coastal waves, water levels, and sediment transport are initialized with digital elevation models (DEM) constructed by interpolating and merging bathymetric and topographic elevation data. These gridded surfaces must seamlessly span the land-water interface and may cover large regions where the individual raw data sources are collected at widely different spatial and temporal resolutions. In addition, the datasets are collected from different instrument platforms with varying accuracy and may or may not overlap in coverage. The lack of available tools and difficulties in constructing these DEMs lead scientists to 1) rely on previously merged, outdated, or over-smoothed DEMs; 2) discard more recent data that covers only a portion of the DEM domain; and 3) use inconsistent methodologies to generate DEMs. The objective of this work is to address the immediate need of integrating land and water-based elevation data sources and streamline the generation of a seamless data surface that spans the terrestrial-marine boundary. To achieve this, the U.S. Geological Survey (USGS) is developing a web processing service to format and initialize geoprocessing tasks designed to create coastal DEMs. The web processing service is maintained within the USGS ScienceBase data management system and has an associated user interface. Through the map-based interface, users define a geographic region that identifies the bounds of the desired DEM and a time period of interest. This initiates a query for elevation datasets within federal science agency data repositories. A geoprocessing service is then triggered to interpolate, merge, and smooth the data sources, creating a DEM based on user-defined configuration parameters. Uncertainty and error estimates for the DEM are also returned by the geoprocessing service. Upon completion, the information management platform provides access to the final gridded data derivative and saves the configuration parameters.
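
    As a hedged sketch of the interpolate-and-merge step described above (not the USGS geoprocessing service itself), the snippet below merges scattered topographic and bathymetric samples into one seamless grid using simple inverse-distance weighting; the sample points and weighting power are illustrative assumptions.

```python
# Hedged sketch (not the USGS service): merge scattered topographic (land, z > 0)
# and bathymetric (water, z < 0) samples into one seamless elevation grid via
# inverse-distance-weighted (IDW) interpolation. All data are illustrative.

def idw(x, y, samples, power=2):
    """Interpolate elevation at (x, y) from (sx, sy, sz) samples."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return sz                      # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * sz
        den += w
    return num / den

# Topo (positive) and bathy (negative) samples merged into one input cloud.
samples = [(0, 0, 5.0), (0, 3, 2.0),      # land survey points
           (3, 0, -4.0), (3, 3, -6.0)]    # ship-sounding points

grid = [[idw(x, y, samples) for x in range(4)] for y in range(4)]
# The interpolated surface crosses zero (the shoreline) between the land and
# water samples, giving a seamless land-water DEM on the toy grid.
print(grid[0][0], grid[0][3])  # → 5.0 -4.0 (exact hits at sample locations)
```

    Production DEM services use far more careful gridding (anisotropic interpolation, smoothing, uncertainty propagation), but the merge-then-interpolate structure is the same.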

  9. IIR filtering based adaptive active vibration control methodology with online secondary path modeling using PZT actuators

    NASA Astrophysics Data System (ADS)

    Boz, Utku; Basdogan, Ipek

    2015-12-01

    Structural vibration is a major cause of noise, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. Active vibration control (AVC) is an effective approach to reducing vibrations in these structures. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves to the varying dynamics of the structure during operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires a reference signal that mimics the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output can interfere with the reference signal, and the secondary path dynamics may change during operation. The interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be tracked using an online modeling technique. In this paper, an IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies and show that the FULMS approach with online secondary path modeling achieves greater vibration rejection and a higher convergence rate than its FXLMS counterpart.
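
    The FXLMS update described above can be sketched in a few lines. This is a hedged toy simulation, not the authors' FULMS controller: the secondary path is assumed to be a known, memoryless gain rather than being modeled online, and all signals, gains and step sizes are invented.

```python
# Hedged, minimal FXLMS sketch (illustration only, not the paper's FULMS method):
# an adaptive FIR filter drives a secondary path so its output cancels a tonal
# disturbance at the error sensor. Paths and parameters are toy assumptions.
import math

L = 8                    # adaptive FIR filter length
w = [0.0] * L            # controller weights
xbuf = [0.0] * L         # reference-signal history
fxbuf = [0.0] * L        # filtered-reference history
mu = 0.01                # LMS step size
s_gain = 0.7             # assumed (known) memoryless secondary-path gain

errs = []
for n in range(4000):
    x = math.sin(2 * math.pi * 0.05 * n)        # reference mimics the disturbance
    d = 1.5 * x                                  # disturbance via primary path
    xbuf = [x] + xbuf[:-1]
    y = sum(wi * xi for wi, xi in zip(w, xbuf))  # controller output
    e = d + s_gain * y                           # residual at the error sensor
    fx = s_gain * x                              # reference filtered by S-hat
    fxbuf = [fx] + fxbuf[:-1]
    w = [wi - mu * e * fxi for wi, fxi in zip(w, fxbuf)]  # filtered-X LMS update
    errs.append(e * e)

# Error power collapses as the filter converges to cancel the tone.
print(sum(errs[:200]) > 100 * sum(errs[-200:]))
```

    The paper's FULMS variant replaces the FIR controller with an IIR one (feeding back past control outputs) and additionally estimates `s_gain` online instead of assuming it known.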

  10. A low-order coupled chemistry meteorology model for testing online and offline data assimilation schemes

    NASA Astrophysics Data System (ADS)

    Haussaire, J.-M.; Bocquet, M.

    2015-08-01

    Bocquet and Sakov (2013) introduced a low-order model based on the coupling of the chaotic Lorenz-95 model, which simulates winds along a mid-latitude circle, with the transport of a tracer species advected by this zonal wind field. This model, named L95-T, can serve as a playground for testing data assimilation schemes with an online model. Here, the tracer part of the model is extended to a reduced photochemistry module. This coupled chemistry meteorology model (CCMM), the L95-GRS model, mimics continental and transcontinental transport and the photochemistry of ozone, volatile organic compounds and nitrogen oxides. Its numerical implementation is described. The model is shown to reproduce the major physical and chemical processes being considered. L95-T and L95-GRS are specifically designed for, and useful in, testing advanced data assimilation schemes, such as the iterative ensemble Kalman smoother (IEnKS), which combines the best of ensemble and variational methods. These models provide useful insights prior to the implementation of data assimilation methods on larger models. We illustrate their use with data assimilation schemes in preliminary yet instructive numerical experiments. In particular, online and offline data assimilation strategies can be conveniently tested and discussed with this low-order CCMM. The impact of observed chemical species concentrations on the wind field can be quantitatively estimated. The impacts of the chaotic wind dynamics and of the non-chaotic but highly nonlinear chemical species dynamics on the data assimilation strategies are illustrated.
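
    For readers unfamiliar with the underlying dynamics, a minimal sketch of the uncoupled Lorenz-95 "wind" component follows; the forcing F = 8 and RK4 time stepping are standard choices in the data assimilation literature, not details taken from this paper.

```python
# Hedged sketch of the Lorenz-95 "wind" ring that L95-T / L95-GRS build on:
#   dx_k/dt = (x_{k+1} - x_{k-2}) * x_{k-1} - x_k + F,  indices periodic.
# F = 8 and RK4 stepping are conventional choices, not taken from the paper.

F = 8.0
N = 40  # variables around the mid-latitude circle

def tendency(x):
    # Python's negative indexing gives the periodic boundary for free.
    return [(x[(k + 1) % N] - x[k - 2]) * x[k - 1] - x[k] + F for k in range(N)]

def rk4_step(x, dt=0.05):
    k1 = tendency(x)
    k2 = tendency([xi + dt / 2 * ki for xi, ki in zip(x, k1)])
    k3 = tendency([xi + dt / 2 * ki for xi, ki in zip(x, k2)])
    k4 = tendency([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

x = [F] * N
x[0] += 0.01                 # a tiny perturbation of the fixed point
for _ in range(500):         # 500 steps of dt = 0.05: chaos fully develops
    x = rk4_step(x)
print(max(x) - min(x) > 1.0)  # trajectory has left the uniform state
```

    L95-T couples a tracer advected by this wind field; L95-GRS further attaches a reduced photochemistry module to the tracer, which is what makes the model a miniature CCMM.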

  11. Adventures in the evolution of a high-bandwidth network for central servers

    SciTech Connect

    Swartz, K.L.; Cottrell, L.; Dart, M.

    1994-08-01

    In a small network, clients and servers may all be connected to a single Ethernet without significant performance concerns. As the number of clients on a network grows, the necessity of splitting the network into multiple sub-networks, each with a manageable number of clients, becomes clear. Less obvious is what to do with the servers. Group file servers on subnets and multihomed servers offer only partial solutions -- many other types of servers do not lend themselves to a decentralized model, and tend to collect on another, well-connected but overloaded Ethernet. The higher speed of FDDI seems to offer an easy solution, but in practice both expense and interoperability problems render FDDI a poor choice. Ethernet switches appear to permit cheaper and more reliable networking to the servers while providing an aggregate network bandwidth greater than a simple Ethernet. This paper studies the evolution of the server networks at SLAC. Difficulties encountered in the deployment of FDDI are described, as are the tools and techniques used to characterize the traffic patterns on the server network. Performance of Ethernet, FDDI, and switched Ethernet networks is analyzed, as are reliability and maintainability issues for these alternatives. The motivations for re-designing the SLAC general server network to use a switched Ethernet instead of FDDI are described, as are the reasons for choosing FDDI for the farm and firewall networks at SLAC. Guidelines are developed which may help in making this choice for other networks.

  12. ModelView for ModelDB: online presentation of model structure

    PubMed Central

    McDougal, Robert A.; Morse, Thomas M.; Hines, Michael L.; Shepherd, Gordon M.

    2015-01-01

    ModelDB (modeldb.yale.edu), a searchable repository of source code of more than 900 published computational neuroscience models, seeks to promote model reuse and reproducibility. Code sharing is a first step; however, model source code is often large and not easily understood. To aid users, we have developed ModelView, a web application for ModelDB that presents a graphical view of model structure augmented with contextual information for NEURON and NEURON-runnable (e.g. NeuroML, PyNN) models. Web presentation provides a rich, simulator-independent environment for interacting with graphs. The necessary data is generated by combining manual curation, text-mining the source code, querying ModelDB, and simulator introspection. Key features of the user interface along with the data analysis, storage, and visualization algorithms are explained. With this tool, researchers can examine and assess the structure of hundreds of models in ModelDB in a standardized presentation without installing any software, downloading the model, or reading model source code. PMID:25896640

  13. ModelView for ModelDB: Online Presentation of Model Structure.

    PubMed

    McDougal, Robert A; Morse, Thomas M; Hines, Michael L; Shepherd, Gordon M

    2015-10-01

    ModelDB ( modeldb.yale.edu ), a searchable repository of source code of more than 950 published computational neuroscience models, seeks to promote model reuse and reproducibility. Code sharing is a first step; however, model source code is often large and not easily understood. To aid users, we have developed ModelView, a web application for ModelDB that presents a graphical view of model structure augmented with contextual information for NEURON and NEURON-runnable (e.g. NeuroML, PyNN) models. Web presentation provides a rich, simulator-independent environment for interacting with graphs. The necessary data is generated by combining manual curation, text-mining the source code, querying ModelDB, and simulator introspection. Key features of the user interface along with the data analysis, storage, and visualization algorithms are explained. With this tool, researchers can examine and assess the structure of hundreds of models in ModelDB in a standardized presentation without installing any software, downloading the model, or reading model source code.

  14. Applying the Dualistic Model of Passion to Post-Secondary Online Instruction: A Comparative Study

    ERIC Educational Resources Information Center

    Greenberger, Scott W.

    2013-01-01

    With the growth of online education, online student attrition and failure rates will continue to be a concern for post-secondary institutions. Although many factors may contribute to such phenomena, the role of the online instructor is clearly an important factor. Exploring how online instructors perceive their role as online teachers,…

  15. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis users are still restricted to downloading the selected data sets locally, and increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts

  17. Building Component Library: An Online Repository to Facilitate Building Energy Model Creation; Preprint

    SciTech Connect

    Fleming, K.; Long, N.; Swindler, A.

    2012-05-01

    This paper describes the Building Component Library (BCL), the U.S. Department of Energy's (DOE) online repository of building components that can be directly used to create energy models. This comprehensive, searchable library consists of components and measures as well as the metadata which describes them. The library is also designed to allow contributors to easily add new components, providing a continuously growing, standardized list of components for users to draw upon.

  18. The Ohio River Basin Energy Facility siting model. Volume 2: Sites and on-line dates

    NASA Astrophysics Data System (ADS)

    Fowler, G. L.; Bailey, R. E.; Jansen, S. D.; Randolph, J. C.; Jones, W. W.; Gordon, S. I.

    1981-09-01

    The siting model developed for the Ohio River Basin Energy Study, specifically designed for regional policy analysis, is presented. The region includes 423 counties in an area that consists of all of Kentucky and substantial portions of Illinois, Indiana, Ohio, Pennsylvania, and West Virginia. Detailed schedules of county-level sites and on-line dates for coal-fired and nuclear-fueled generating unit additions for each ORBES scenario are included.

  19. Online collaboration and model sharing in volcanology via VHub.org

    NASA Astrophysics Data System (ADS)

    Valentine, G.; Patra, A. K.; Bajo, J. V.; Bursik, M. I.; Calder, E.; Carn, S. A.; Charbonnier, S. J.; Connor, C.; Connor, L.; Courtland, L. M.; Gallo, S.; Jones, M.; Palma Lizana, J. L.; Moore-Russo, D.; Renschler, C. S.; Rose, W. I.

    2013-12-01

    VHub (short for VolcanoHub, accessible at vhub.org) is an online platform for barrier-free access to high-end modeling, simulation, and collaboration in research and training related to volcanoes, the hazards they pose, and risk mitigation. The underlying concept is to provide a platform, building upon the successful HUBzero software infrastructure (hubzero.org), that enables workers to collaborate online and to easily share information, modeling and analysis tools, and educational materials with colleagues around the globe. Collaboration occurs around several different points: (1) modeling and simulation; (2) data sharing; (3) education and training; (4) volcano observatories; and (5) project-specific groups. VHub promotes modeling and simulation in two ways: (1) some models can be implemented on VHub for online execution, so VHub can serve as a central warehouse for such models and broaden their dissemination; and (2) VHub provides a platform that supports the more complex CFD models by enabling the sharing of code development and problem-solving knowledge, benchmarking datasets, and the development of validation exercises. VHub also provides a platform for sharing data and datasets. The VHub development team is implementing the iRODS data-sharing middleware (see irods.org). iRODS allows a researcher to access data located at participating data sources around the world (a cloud of data) as if the data were housed in a single virtual database. Projects associated with VHub are also going to introduce data-driven workflow tools to support multistage analysis processes where computing and data are integrated for model validation, hazard analysis, etc. Audio-video recordings of seminars, PowerPoint slide sets, and educational simulations are all items that can be placed onto VHub for use by the community or by selected collaborators. An important point is that the manager of a given educational resource (or any other

  20. Online and Certifiable Spectroscopy Courses Using Information and Communication Tools. a Model for Classrooms and Beyond

    NASA Astrophysics Data System (ADS)

    Krishnan, Mangala Sunder

    2015-06-01

    Online education tools, flipped (reverse) class models for teaching and learning, and pedagogic and andragogic approaches to self-learning have become quite mature in the last few years because of the revolution in video, interactive software and social learning tools. Open Educational Resources of dependable quality and variety are also becoming available throughout the world, making the current era truly a renaissance period for higher education using the Internet. In my presentation, I shall highlight structured course content preparation online in several areas of spectroscopy, as well as the design and development of virtual lab tools and kits for studying optical spectroscopy. Both elementary and advanced courses on molecular spectroscopy are currently under development jointly with researchers in other institutions in India. I would like to explore participation from teachers throughout the world in the teaching-learning process using flipped class methods for topics such as experimental and theoretical microwave spectroscopy of semi-rigid and non-rigid molecules, molecular complexes and aggregates. In addition, courses in Raman and infrared spectroscopy experimentation and advanced electronic spectroscopy courses are also envisaged for free, online access. The National Programme on Technology Enhanced Learning (NPTEL) and the National Mission on Education through Information and Communication Technology (NMEICT) are two large Government of India funded initiatives for producing certified and self-learning courses with financial support for moderated discussion forums. The learning tools and interactive presentations so developed can be used in classrooms throughout the world using the flipped mode of teaching. They are very much sought after by learners and researchers who are in other areas of learning but want to contribute to research and development through inter-disciplinary learning. NPTEL is currently experimenting with Massive Open Online Course (MOOC

  1. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1991-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  2. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1992-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  3. The NASA Technical Report Server

    NASA Astrophysics Data System (ADS)

    Nelson, M. L.; Gottlich, G. L.; Bianco, D. J.; Paulson, S. S.; Binkley, R. L.; Kellogg, Y. D.; Beaumont, C. J.; Schmunk, R. B.; Kurtz, M. J.; Accomazzi, A.; Syed, O.

    The National Aeronautics and Space Act of 1958 established the National Aeronautics and Space Administration (NASA) and charged it to "provide for the widest practicable and appropriate dissemination of information concerning...its activities and the results thereof". The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems.

  4. Client-Server Password Recovery

    NASA Astrophysics Data System (ADS)

    Chmielewski, Łukasz; Hoepman, Jaap-Henk; van Rossum, Peter

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the password. These protocols can be easily adapted to the personal entropy setting [7], where a user can recover a password only if he can answer a large enough subset of personal questions.
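
    The personal-entropy setting mentioned above is commonly built on threshold secret sharing. The following hedged toy (not one of the paper's protocols) splits a recovery secret with Shamir secret sharing so that any 3 of 5 correct answers suffice; the field prime, the parameters and the answer-hashing step are illustrative assumptions.

```python
# Hedged toy version of the "personal entropy" idea: the recovery secret is
# split with Shamir secret sharing so answering any t of n personal questions
# reconstructs it. Parameters are illustrative, not from the paper.
import hashlib, random

P = 2 ** 127 - 1      # a Mersenne prime defining the field GF(P)
t, n = 3, 5           # any 3 of 5 correct answers suffice

def share(secret, t, n):
    """Evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = share(secret, t, n)
# In a full scheme each share would be encrypted under the hash of one answer:
key_1 = hashlib.sha256(b"first pet: rex").digest()   # illustrative only
print(reconstruct(shares[:3]) == secret)             # → True: any t shares suffice
```

    With fewer than t shares the polynomial is information-theoretically undetermined, which is why partial knowledge of the answers reveals nothing about the password-recovery secret.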

  5. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model focuses on reliability and availability in security respects. The (remote) backup servers are connected by a VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with a main unreliable machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.
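
    A hedged toy Monte Carlo (not the paper's stochastic-optimization model) can illustrate the reliability benefit of an immediate remote backup: the main server fails at random, the backup takes over after a short swap delay, and long-run availability is estimated; all rates are invented.

```python
# Hedged toy simulation, not the paper's model: a main server fails at random
# exponential times, a VPN-connected remote backup takes over after a short
# swap delay and serves while the main server is repaired. Rates are invented.
import random

random.seed(7)
mtbf, swap, repair = 100.0, 1.0, 10.0  # mean time between failures, failover, repair

def availability(horizon=1_000_000.0):
    t = up = 0.0
    while t < horizon:
        run = random.expovariate(1 / mtbf)   # main server runs, then fails
        up += run
        t += run + swap                      # brief outage while backup takes over
        up += repair                         # backup serves during the repair
        t += repair                          # main restored; the cycle repeats
    return up / t

a = availability()
# Renewal-reward estimate: roughly (mtbf + repair) / (mtbf + swap + repair) ~ 0.991.
print(0.98 < a < 1.0)  # → True
```

    Without the backup, the whole repair interval would be downtime (availability about mtbf / (mtbf + repair) ~ 0.91 with these rates), which is the gap the defense model is designed to close.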

  6. LassoProt: server to analyze biopolymers with lassos

    PubMed Central

    Dabrowski-Tumanski, Pawel; Niemyska, Wanda; Pasznik, Pawel; Sulkowska, Joanna I.

    2016-01-01

    The LassoProt server, http://lassoprot.cent.uw.edu.pl/, enables analysis of biopolymers with entangled configurations called lassos. The server offers various ways of visualizing lasso configurations, as well as their time trajectories, with all the results and plots downloadable. A broad spectrum of applications makes LassoProt a useful tool for biologists, biophysicists, chemists, polymer physicists and mathematicians. The server and our methods have been validated on the whole PDB, and the results constitute the database of proteins with complex lassos, supported with basic biological data. This database can serve as a source of information about protein geometry and entanglement-function correlations, as a reference set in protein modeling, and for many other purposes. PMID:27131383

  7. LassoProt: server to analyze biopolymers with lassos.

    PubMed

    Dabrowski-Tumanski, Pawel; Niemyska, Wanda; Pasznik, Pawel; Sulkowska, Joanna I

    2016-07-01

    The LassoProt server, http://lassoprot.cent.uw.edu.pl/, enables analysis of biopolymers with entangled configurations called lassos. The server offers various ways of visualizing lasso configurations, as well as their time trajectories, with all the results and plots downloadable. A broad spectrum of applications makes LassoProt a useful tool for biologists, biophysicists, chemists, polymer physicists and mathematicians. The server and our methods have been validated on the whole PDB, and the results constitute the database of proteins with complex lassos, supported with basic biological data. This database can serve as a source of information about protein geometry and entanglement-function correlations, as a reference set in protein modeling, and for many other purposes.

  8. The Online Theology Classroom: Strategies for Engaging a Community of Distance Learners in a Hybrid Model of Online Education

    ERIC Educational Resources Information Center

    Hege, Brent A. R.

    2011-01-01

    One factor contributing to success in online education is the creation of a safe and vibrant virtual community and sustained, lively engagement with that community of learners. In order to create and engage such a community instructors must pay special attention to the relationship between technology and pedagogy, specifically in terms of issues…

  9. MARSIS data and simulation exploited using array databases: PlanetServer/EarthServer for sounding radars

    NASA Astrophysics Data System (ADS)

    Cantini, Federico; Pio Rossi, Angelo; Orosei, Roberto; Baumann, Peter; Misev, Dimitar; Oosthoek, Jelmer; Beccati, Alan; Campalani, Piero; Unnithan, Vikram

    2014-05-01

parallel computing has been developed and tested on a Tier 0 class HPC cluster computer located at CINECA, Bologna, Italy, to produce accurate simulations for the entire MARSIS dataset. Although the necessary computational resources have not yet been secured, through the HPC cluster at Jacobs University in Bremen it was possible to simulate a significant subset of orbits covering the area of the Medusae Fossae Formation (MFF), a seemingly soft, easily eroded deposit that extends for nearly 1,000 km along the equator of Mars (e.g. Watters et al., 2007; Carter et al., 2009). Besides the MARSIS data, simulations of the MARSIS surface clutter signal are included in the database to further improve its scientific value. Simulations will be available through the project portal to end users/scientists and they will eventually be provided in the PSA/PDS archives. References: Baumann, P. On the management of multidimensional discrete data. VLDB J. 4 (3), 401-444, Special Issue on Spatial Database Systems, 1994. Carter, L. M., Campbell, B. A., Watters, T. R., Phillips, R. J., Putzig, N. E., Safaeinili, A., Plaut, J., Okubo, C., Egan, A. F., Biccari, D., Orosei, R. (2009). Shallow radar (SHARAD) sounding observations of the Medusae Fossae Formation, Mars. Icarus, 199(2), 295-302. Nouvel, J.-F., Herique, A., Kofman, W., Safaeinili, A. 2004. Radar signal simulation: Surface modeling with the Facet Method. Radio Science 39, 1013. Oosthoek, J.H.P, Flahaut J., Rossi, A. P., Baumann, P., Misev, D., Campalani, P., Unnithan, V. (2013) PlanetServer: Innovative Approaches for the Online Analysis of Hyperspectral Satellite Data from Mars, Advances in Space Research. DOI: 10.1016/j.asr.2013.07.002 Picardi, G., and 33 colleagues 2005. Radar Soundings of the Subsurface of Mars. Science 310, 1925-1928. Rossi, A. P., Baumann, P., Oosthoek, J., Beccati, A., Cantini, F., Misev, D. Orosei, R., Flahaut, J., Campalani, P., Unnithan, V. (2014),Geophys. Res. Abs., Vol. 16, #EGU2014-5149, this meeting. 
Watters, T. R

  11. Web Server Security on Open Source Environments

    NASA Astrophysics Data System (ADS)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of their attackers. Until now this kind of defense was a privilege of the few; out-budgeted, low-cost solutions left defenders vulnerable to the rise of innovative attack methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never have imagined fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century in adopting open solutions to e-commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to identify the best-known problems in data handling and consequently to propose the most appealing techniques for facing these challenges through an open solution.

  12. Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna

    2015-04-01

Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology. The European Union ESPAS, Japanese IUGONET and GFZ ISDC data servers were developed for the ingestion, archiving and distribution of geo- and space-science domain data. The main parts of the data managed by these servers are related to near-Earth space and geomagnetic field data. A smart mashup of the data servers would allow seamless browsing of, and access to, data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the semantic Web approach. Besides the mashup of domain and terminological ontologies, the options for connecting data managed by relational databases using D2R server and SPARQL technology will especially be addressed. A successful realization of the data server mashup will not only have a positive impact on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org
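The mashup idea described above can be illustrated with a toy federated lookup: two data servers expose their catalogues as RDF-style triples, and a shared vocabulary term lets a client join them. This is a minimal sketch; all identifiers (`espas:ds1`, `term:geomagnetic-field`, etc.) are hypothetical, and a real deployment would answer SPARQL queries over D2R endpoints rather than scan in-memory lists.

```python
# Toy semantic-web mashup: two "data servers" expose records as
# (subject, predicate, object) triples; a shared subject term joins them.
# All identifiers are illustrative, not actual ESPAS/IUGONET URIs.

ESPAS_TRIPLES = [
    ("espas:ds1", "dct:subject", "term:geomagnetic-field"),
    ("espas:ds1", "dct:title", "Ground magnetometer series"),
]
IUGONET_TRIPLES = [
    ("iugonet:obs7", "dct:subject", "term:geomagnetic-field"),
    ("iugonet:obs7", "dct:title", "Kyoto Dst index"),
]

def mashup_by_subject(term, *sources):
    """Federated query: collect titles of datasets tagged with `term`."""
    titles = []
    for triples in sources:
        matching = {s for s, p, o in triples
                    if p == "dct:subject" and o == term}
        titles += [o for s, p, o in triples
                   if s in matching and p == "dct:title"]
    return sorted(titles)

result = mashup_by_subject("term:geomagnetic-field",
                           ESPAS_TRIPLES, IUGONET_TRIPLES)
```

The point of the sketch is that once both servers map their relational records onto a shared ontology term, a single query spans them without either side changing its storage backend.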

  13. Cloud microphysics modification with an online coupled COSMO-MUSCAT regional model

    NASA Astrophysics Data System (ADS)

    Sudhakar, D.; Quaas, J.; Wolke, R.; Stoll, J.; Muehlbauer, A. D.; Tegen, I.

    2015-12-01

Abstract: The quantification of clouds, aerosols, and aerosol-cloud interactions in models continues to be a challenge (IPCC, 2013). In this context, a two-moment bulk microphysical scheme is used to understand aerosol-cloud interactions in the regional model COSMO (Consortium for Small Scale Modeling). The two-moment scheme in COSMO has been especially designed to represent aerosol effects on the microphysics of mixed-phase clouds (Seifert et al., 2006). To improve the model's predictive skill, the radiation scheme has been coupled with the two-moment microphysical scheme. Further, the cloud microphysics parameterization has been modified by coupling COSMO with MUSCAT (MultiScale Chemistry Aerosol Transport model, Wolke et al., 2004). In this study, we discuss initial results from the online-coupled COSMO-MUSCAT model system with the modified two-moment parameterization scheme, along with the COSP (CFMIP Observational Simulator Package) satellite simulator. This online coupled model system aims to improve the representation of sub-grid scale processes in the regional weather prediction context. The constant aerosol concentration used in the Seifert and Beheng (2006) parameterization in the COSMO model has been replaced by the aerosol concentration derived from the MUSCAT model. The cloud microphysical processes from the modified two-moment scheme are compared with the stand-alone COSMO model. To validate the robustness of the model simulation, the coupled model system is integrated with the COSP satellite simulator (Muhlbauer et al., 2012). Further, the simulations are compared with MODIS (Moderate Resolution Imaging Spectroradiometer) and ISCCP (International Satellite Cloud Climatology Project) satellite products.

  14. OneRTM: an online real-time modelling platform for the next generation of numerical environmental modelling

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Kingdon, Andrew

    2014-05-01

Numerical modelling has been applied in many fields to better understand and predict the behaviours of different processes. In our increasingly dynamic world there is an imperative to identify potential stresses and threats in the environment and to respond quickly with sound decisions. However, the limitations in traditional modelling methodologies make it difficult to respond quickly to rapidly developing environmental events, such as floods, droughts and pollution incidents. For example, it is both time consuming and costly to keep model data up-to-date and also to disseminate model results and modelled output datasets to end-users. Crucially, it is difficult for people who have limited numerical modelling skills to understand and interact with models and modelled results. In response to these challenges, a proof-of-concept online real-time modelling platform (OneRTM) has been developed as a mechanism for maintaining and disseminating numerical models and datasets. This automatically keeps models current for the most recent input data and links models based on data flow; it makes models and modelled datasets (historic, real-time and forecasted) immediately available via the internet as easy-to-understand dynamic GIS layers and graphs; and it provides online modelling functions to allow non-modellers to manipulate models, including running pre-defined scenarios with a few mouse clicks. OneRTM has been successfully applied and tested in Chalk groundwater flow modelling in the Thames Basin, UK. The system hosts and links groundwater recharge and groundwater flow models in the case study area, and automatically publishes the latest groundwater level layers on the internet once the current weather datasets become available. It also provides online functions for generating groundwater hydrographs and running groundwater abstraction scenarios. 
Although OneRTM is currently tested using groundwater flow modelling as an example, it could be further developed into a platform

  15. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, generally speaking, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: - StorageUnits: which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format). - FrontDesks: which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: - TransformationUnits: which perform ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to metres under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron FrontDesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner-interoperability level enables ocean business expertise to be capitalized in software development without being indentured to
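The plugin layering described above can be sketched as a small pipeline: a storage plugin reads profiles, transformation plugins rework features, and a front desk answers requests by chaining them. This is a hedged illustration only; the class and method names (`StorageUnit`, `DbarToDepth`, `FrontDesk`, `get`) are assumptions, not the actual oceanotron (Java) API, and the dbar-to-depth conversion uses a rough near-surface approximation.

```python
# Sketch of the oceanotron-style plugin layering: StorageUnits read a
# repository, TransformationUnits rework features, FrontDesks serve them.
# Names and the conversion factor are illustrative assumptions.

class StorageUnit:
    """Reads vertical profiles from some repository format."""
    def __init__(self, profiles):
        self._profiles = profiles          # {station: [(pressure_db, temp)]}
    def read(self, station):
        return list(self._profiles[station])

class DbarToDepth:
    """TransformationUnit: pressure in dbar -> approximate depth in metres."""
    def apply(self, profile):
        # Roughly 1 dbar ~ 1.01 m near the surface; a crude stand-in for
        # a proper (latitude-dependent) hydrostatic conversion.
        return [(round(p * 1.01, 2), t) for p, t in profile]

class FrontDesk:
    """Answers external requests by chaining storage and transforms."""
    def __init__(self, storage, transforms):
        self.storage, self.transforms = storage, transforms
    def get(self, station):
        profile = self.storage.read(station)
        for tr in self.transforms:
            profile = tr.apply(profile)
        return profile

storage = StorageUnit({"ARGO-42": [(0.0, 18.5), (100.0, 12.1)]})
desk = FrontDesk(storage, [DbarToDepth()])
profile = desk.get("ARGO-42")
```

Because the three plugin types only meet through the shared feature model (here, a list of (coordinate, value) pairs), each can be swapped independently, which is the design point the abstract makes.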

  16. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    USGS Publications Warehouse

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  17. An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu

    PubMed Central

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723

  18. Using a Comprehensive Model to Test and Predict the Factors of Online Learning Effectiveness

    ERIC Educational Resources Information Center

    He, Minyan

    2013-01-01

    As online learning is an important part of higher education, the effectiveness of online learning has been tested with different methods. Although the literature regarding online learning effectiveness has been related to various factors, a more comprehensive review of the factors may result in broader understanding of online learning…

  19. National Medical Terminology Server in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary-to-tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  20. UniTree Name Server internals

    SciTech Connect

    Mecozzi, D.; Minton, J.

    1996-01-01

The UniTree Name Server (UNS) is one of several servers which make up the UniTree storage system. The Name Server is responsible for mapping names to capabilities. Names are generally human-readable ASCII strings of any length. Capabilities are unique 256-bit identifiers that point to files, directories, or symbolic links. The Name Server implements a UNIX-style hierarchical directory structure to facilitate name-to-capability mapping. The principal task of the Name Server is to manage the directories which make up the UniTree directory structure. The principal clients of the Name Server are the FTP Daemon, NFS and a few UniTree utility routines. However, the Name Server is a generalized server and will accept messages from any client. The purpose of this paper is to describe the internal workings of the UniTree Name Server. In cases where it seems appropriate, the motivation for a particular choice of algorithm as well as a description of the algorithm itself will be given.
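The name-to-capability mapping described above can be sketched as a nested-dictionary directory tree whose leaves are opaque 256-bit identifiers. This mirrors only the idea, under stated assumptions: the class and method names are hypothetical, and the real UNS message protocol, symbolic links, and persistence are not modelled.

```python
import os

# Minimal sketch of name-to-capability mapping: a UNIX-style directory
# tree (nested dicts) whose leaves are opaque 256-bit capabilities.
# Illustrative only; not the UniTree Name Server's actual interface.

class NameServer:
    def __init__(self):
        self._root = {}                     # nested dicts = directories

    def _walk(self, names):
        node = self._root
        for name in names:
            node = node.setdefault(name, {})
        return node

    def create(self, path):
        """Bind a fresh 256-bit capability to `path` and return it."""
        *dirs, leaf = path.strip("/").split("/")
        cap = os.urandom(32)                # 32 bytes = 256 random bits
        self._walk(dirs)[leaf] = cap
        return cap

    def lookup(self, path):
        """Resolve a human-readable name back to its capability."""
        *dirs, leaf = path.strip("/").split("/")
        return self._walk(dirs)[leaf]

ns = NameServer()
cap = ns.create("/home/alice/notes.txt")
```

The separation matters because clients such as an FTP daemon only ever hold capabilities; the directory structure exists solely so humans can use names.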

  1. WSKE: Web Server Key Enabled Cookies

    NASA Astrophysics Data System (ADS)

    Masone, Chris; Baek, Kwang-Hyun; Smith, Sean

    In this paper, we present the design and prototype of a new approach to cookie management: if a server deposits a cookie only after authenticating itself via the SSL handshake, the browser will return the cookie only to a server that can authenticate itself, via SSL, to the same keypair. This approach can enable usable but secure client authentication. This approach can improve the usability of server authentication by clients. This approach is superior to the prior work on Active Cookies in that it defends against both DNS spoofing and IP spoofing—and does not require binding a user's interaction with a server to individual IP addresses.
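The cookie policy described above can be sketched by keying each stored cookie to a fingerprint of the public key that set it, and releasing it only to a server presenting the same key. This is a hedged simplification: key material is simulated with byte strings, no SSL handshake is performed, and the names (`CookieJar`, `set_cookie`, `get_cookie`) are illustrative, not the paper's implementation.

```python
import hashlib

# Sketch of the WSKE idea: a cookie is bound to the SSL public key that
# deposited it, and is returned only to a server authenticating with the
# same keypair. Keys are simulated as byte strings; no real TLS here.

def fingerprint(pubkey_bytes):
    return hashlib.sha256(pubkey_bytes).hexdigest()

class CookieJar:
    def __init__(self):
        self._jar = {}                      # domain -> (key_fp, cookie)

    def set_cookie(self, domain, server_pubkey, cookie):
        self._jar[domain] = (fingerprint(server_pubkey), cookie)

    def get_cookie(self, domain, server_pubkey):
        """Release the cookie only if the presenting key matches."""
        stored = self._jar.get(domain)
        if stored and stored[0] == fingerprint(server_pubkey):
            return stored[1]
        return None      # DNS/IP spoofing with a different key gets nothing

jar = CookieJar()
jar.set_cookie("bank.example", b"real-server-key", "session=abc")
ok = jar.get_cookie("bank.example", b"real-server-key")
spoofed = jar.get_cookie("bank.example", b"attacker-key")
```

This shows why the scheme resists DNS and IP spoofing: a redirected client still refuses to release the cookie unless the attacker can authenticate under the original keypair.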

  2. Performance analysis of a fault-tolerant distributed multimedia server

    NASA Astrophysics Data System (ADS)

    Derryberry, Barbara

    1998-12-01

The evolving demands of networks to support Webtone, H.323, AIN and other advanced services require multimedia servers that can deliver a number of value-added capabilities, such as negotiating protocols, delivering network services, and responding to QoS requests. The server is one of the primary limiters on network capacity. The next generation server must be based upon a flexible, robust, scalable, and reliable platform to keep pace with the revolutionary rate of service demand and development while continuing to provide the same dependability that voice networks have provided for decades. A new distributed platform, which is based upon the Totem fault-tolerant messaging system, is described. Processor and network resources are modeled and analyzed. Quantitative results are presented that assess this platform in terms of messaging capacity and performance for various architecture and design options, including processing technologies and fault-tolerance modes. The impacts of fault-tolerant messaging are identified based upon analytical modeling of the proposed server architecture.

  3. Online model-based diagnosis to support autonomous operation of an advanced life support system

    NASA Technical Reports Server (NTRS)

    Biswas, Gautam; Manders, Eric-Jan; Ramirez, John; Mahadevan, Nagabhusan; Abdelwahed, Sherif

    2004-01-01

    This article describes methods for online model-based diagnosis of subsystems of the advanced life support system (ALS). The diagnosis methodology is tailored to detect, isolate, and identify faults in components of the system quickly so that fault-adaptive control techniques can be applied to maintain system operation without interruption. We describe the components of our hybrid modeling scheme and the diagnosis methodology, and then demonstrate the effectiveness of this methodology by building a detailed model of the reverse osmosis (RO) system of the water recovery system (WRS) of the ALS. This model is validated with real data collected from an experimental testbed at NASA JSC. A number of diagnosis experiments run on simulated faulty data are presented and the results are discussed.

  4. Online Detection and Modeling of Safety Boundaries for Aerospace Applications Using Bayesian Statistics

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    The behavior of complex aerospace systems is governed by numerous parameters. For safety analysis it is important to understand how the system behaves with respect to these parameter values. In particular, understanding the boundaries between safe and unsafe regions is of major importance. In this paper, we describe a hierarchical Bayesian statistical modeling approach for the online detection and characterization of such boundaries. Our method for classification with active learning uses a particle filter-based model and a boundary-aware metric for best performance. From a library of candidate shapes incorporated with domain expert knowledge, the location and parameters of the boundaries are estimated using advanced Bayesian modeling techniques. The results of our boundary analysis are then provided in a form understandable by the domain expert. We illustrate our approach using a simulation model of a NASA neuro-adaptive flight control system, as well as a system for the detection of separation violations in the terminal airspace.
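The particle-filter ingredient of the approach can be illustrated in one dimension: treat the safe/unsafe boundary as an unknown threshold, let particles be candidate thresholds, and reweight them by how well they classify labelled samples. This is purely a toy sketch; the paper's hierarchical Bayesian model, shape library, and boundary-aware metric are not reproduced, and all numbers below are invented for illustration.

```python
import random

# Toy particle-filter boundary detection in 1-D: the true safe/unsafe
# boundary is a threshold, particles are candidate thresholds, and each
# particle is weighted by its classification accuracy on labelled data.
# Illustrative only; not the paper's hierarchical Bayesian method.

random.seed(0)
TRUE_BOUNDARY = 0.6
# Labelled samples: parameter value x is "safe" iff x < TRUE_BOUNDARY.
samples = [(x / 100.0, x / 100.0 < TRUE_BOUNDARY) for x in range(100)]

particles = [random.random() for _ in range(500)]
for _ in range(20):
    # Weight = fraction of samples a candidate threshold classifies right.
    weights = [sum((x < b) == safe for x, safe in samples) / len(samples)
               for b in particles]
    # Resample proportionally to weight, then jitter (regularised PF).
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [b + random.gauss(0, 0.01) for b in particles]

estimate = sum(particles) / len(particles)   # posterior-mean boundary
```

After a few resampling rounds the particle cloud concentrates near the true threshold, which is the mechanism the paper generalises to multi-parameter boundary shapes.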

  5. Online model-based diagnosis to support autonomous operation of an advanced life support system.

    PubMed

    Biswas, Gautam; Manders, Eric-Jan; Ramirez, John; Mahadevan, Nagabhusan; Abdelwahed, Sherif

    2004-01-01

    This article describes methods for online model-based diagnosis of subsystems of the advanced life support system (ALS). The diagnosis methodology is tailored to detect, isolate, and identify faults in components of the system quickly so that fault-adaptive control techniques can be applied to maintain system operation without interruption. We describe the components of our hybrid modeling scheme and the diagnosis methodology, and then demonstrate the effectiveness of this methodology by building a detailed model of the reverse osmosis (RO) system of the water recovery system (WRS) of the ALS. This model is validated with real data collected from an experimental testbed at NASA JSC. A number of diagnosis experiments run on simulated faulty data are presented and the results are discussed. PMID:15880907

  6. PlanetServer/EarthServer: Big Data analytics in Planetary Science

    NASA Astrophysics Data System (ADS)

    Pio Rossi, Angelo; Oosthoek, Jelmer; Baumann, Peter; Beccati, Alan; Cantini, Federico; Misev, Dimitar; Orosei, Roberto; Flahaut, Jessica; Campalani, Piero; Unnithan, Vikram

    2014-05-01

Planetary data are freely available on PDS/PSA archives and alike (e.g. Heather et al., 2013). Their exploitation by the community is somewhat limited by the variable availability of calibrated/higher level datasets. An additional complexity of these multi-experiment, multi-mission datasets is related to the heterogeneity of the data themselves, rather than their volume. Orbital data - so far - are best suited for inclusion in array databases (Baumann et al., 1994). Most lander- or rover-based remote sensing experiments (and possibly in-situ ones as well) are suitable for similar approaches, although the complexity of coordinate reference systems (CRS) is higher in the latter case. PlanetServer, the Planetary Service of the EC FP7 e-infrastructure project EarthServer (http://earthserver.eu), is a state-of-the-art online data exploration and analysis system based on the Open Geospatial Consortium (OGC) standards for Mars orbital data. It provides access to topographic, panchromatic, multispectral and hyperspectral calibrated data. While its core focus has been on hyperspectral data analysis through the OGC Web Coverage Processing Service (Oosthoek et al., 2013; Rossi et al., 2013), the Service progressively expanded to host also sounding radar data (Cantini et al., this volume). Additionally, both single swath and mosaicked imagery and topographic data are being added to the Service, deriving from the HRSC experiment (e.g. Jaumann et al., 2007; Gwinner et al., 2009). The current Mars-centric focus can be extended to other planetary bodies, and most components are general purpose ones, making possible its application to the Moon, Mercury or alike. The Planetary Service of EarthServer is accessible on http://www.planetserver.eu References: Baumann, P. (1994) VLDB J. 4 (3), 401-444, Special Issue on Spatial Database Systems. Cantini, F. et al. (2014) Geophys. Res. Abs., Vol. 16, #EGU2014-3784, this volume Heather, D., et al.(2013) EuroPlanet Sci. Congr. #EPSC2013-626 Gwinner, K

  7. Development of the acquisition model of online information resources at Faculty of Medicine Library, Khon Kaen University.

    PubMed

    Thanapaisal, Soodjai; Thanapaisal, Chaiwit

    2013-09-01

The Faculty of Medicine Library, Khon Kaen University started to acquire online information resources in 2001 with subscriptions to 2 databases. Nowadays it has 29 subscriptions and the expenses on online information resources reach 17 million baht, more than 70 percent of the information resources budget, serving the academic purposes of the Faculty of Medicine. The problems of online information resources acquisition fall into 4 categories and lead to 4 aspects shaping the model of the acquisition, compared or benchmarked with 4 selected medical school libraries in Bangkok, Chiang Mai, and Songkhla, and discussed with some other Thai and foreign libraries. The acquisition model of online information resources is developed from those problems and proposed for the Faculty of Medicine Library, Khon Kaen University, as well as for any medical libraries which prefer it.

  8. An online spatio-temporal prediction model for dengue fever epidemic in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Ming-Hung; Yu, Hwa-Lung; Angulo, Jose; Christakos, George

    2013-04-01

Dengue Fever (DF) is one of the most serious vector-borne infectious diseases in tropical and subtropical areas. DF epidemics occur in Taiwan annually, especially during the summer and fall seasons. Kaohsiung city has been one of the major DF hotspots for decades. The emergence and re-emergence of DF epidemics is complex and can be influenced by various factors, including the space-time dynamics of human and vector populations and virus serotypes, as well as the associated uncertainties. This study integrates a stochastic space-time "Susceptible-Infected-Recovered" model under the Bayesian maximum entropy framework (BME-SIR) to perform real-time prediction of disease diffusion across space-time. The proposed model is applied for spatiotemporal prediction of the DF epidemic in Kaohsiung city during 2002, when the historical series of high DF cases was recorded. The online prediction by the BME-SIR model updates the parameters of the SIR model and infected cases across districts over time. Results show that the proposed model is robust to the initial guess of unknown model parameters, i.e. transmission and recovery rates, which can depend upon the virus serotypes and various human interventions. This study shows that spatial diffusion can be well characterized by the BME-SIR model, especially in the districts surrounding the disease outbreak locations. The prediction performance at DF hotspots, i.e. Cianjhen and Sanmin, can be degraded by the implementation of various disease control strategies during the epidemics. The proposed online disease prediction BME-SIR model can provide governmental agencies with a valuable reference to identify, control, and efficiently prevent DF spread across space-time in a timely manner.
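The dynamical core that BME-SIR embeds can be sketched as a plain discrete-time SIR update; the Bayesian maximum entropy estimation step and the space-time coupling across districts are not reproduced here, and the rate values are illustrative placeholders, not fitted Kaohsiung parameters.

```python
# Minimal deterministic SIR update (the dynamical core of SIR-type
# epidemic prediction). beta/gamma values are illustrative only.

def sir_step(s, i, r, beta=0.4, gamma=0.2):
    new_inf = beta * s * i        # mass-action transmission
    new_rec = gamma * i           # recovery
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Fractions of the population in each compartment.
s, i, r = 0.99, 0.01, 0.0
peak = i
for _ in range(200):
    s, i, r = sir_step(s, i, r)
    peak = max(peak, i)
```

With these placeholder rates (basic reproduction number beta/gamma = 2) the infected fraction rises to a peak and then burns out, the qualitative behaviour whose parameters BME-SIR updates online as new case counts arrive.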

  9. An information diffusion model based on retweeting mechanism for online social media

    NASA Astrophysics Data System (ADS)

    Xiong, Fei; Liu, Yun; Zhang, Zhen-jiang; Zhu, Jiang; Zhang, Ying

    2012-06-01

    To characterize information propagation on online microblogs, we propose a diffusion model (SCIR) with four possible states: susceptible, contacted, infected and refractory. Agents that have read the information but have not decided whether to spread it stay in the contacted state; they may become infected or refractory, and both the infected and refractory states are stable. Results show that during the evolution process, more contacted agents appear in scale-free networks than in regular lattices. The degree-based density of infected agents increases monotonically with the degree, but a larger average network degree does not always imply a shorter relaxation time.
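
    A minimal well-mixed simulation of the four SCIR states (ignoring network structure, with invented rates) can illustrate the transitions described above:

```python
import random

# Toy well-mixed SCIR dynamics: a susceptible agent becomes contacted after
# exposure to the information; a contacted agent then turns infected (spreads
# it) with probability p or refractory with probability q per step, and both
# of those states are absorbing. All rates and sizes are illustrative, not
# values from the paper.
def scir_run(n=1000, i0=5, contact=0.3, p=0.2, q=0.1, steps=200, seed=1):
    random.seed(seed)
    s, c, i, r = n - i0, 0, i0, 0
    history = [(s, c, i, r)]
    for _ in range(steps):
        # each susceptible is exposed with probability proportional to the
        # current infected fraction
        exposed = sum(1 for _ in range(s) if random.random() < contact * i / n)
        new_i = new_r = 0
        for _ in range(c):
            u = random.random()
            if u < p:
                new_i += 1
            elif u < p + q:
                new_r += 1
        s -= exposed
        c += exposed - new_i - new_r
        i += new_i
        r += new_r
        history.append((s, c, i, r))
    return history

hist = scir_run()
```

    Replacing the uniform exposure probability with per-edge contagion on a scale-free graph is what produces the degree-dependent infection densities reported in the abstract.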

  10. A robust model for on-line handwritten Japanese text recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Bilan; Zhou, Xiang-Dong; Liu, Cheng-Lin; Nakagawa, Masaki

    2009-01-01

    This paper describes a robust model for on-line handwritten Japanese text recognition. The method evaluates the likelihood of candidate segmentation paths by combining scores of character pattern size, inner gap, character recognition, single-character position, pair-character position, likelihood of candidate segmentation point and linguistic context. The path score is insensitive to the number of candidate patterns and the optimal path can be found by the Viterbi search. In experiments of handwritten Japanese sentence recognition, the proposed method yielded superior performance.
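
    The path evaluation above reduces to dynamic programming over candidate segmentation points. A generic Viterbi-style sketch, with a made-up lexicon of candidate-pattern log-scores standing in for the paper's combined score, might look like:

```python
import math

# Toy Viterbi-style segmentation: find the best-scoring split of a string into
# candidate patterns. The real path score combines size, gap, recognition,
# position and language-model terms; here one hypothetical log-score per
# candidate substring stands in for that sum.
SCORES = {"on": -1.0, "line": -1.2, "online": -1.5, "o": -3.0, "n": -3.0,
          "li": -2.5, "ne": -2.5}

def best_segmentation(text, max_len=6):
    n = len(text)
    best = [(-math.inf, None)] * (n + 1)   # (score, backpointer) per position
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            piece = text[start:end]
            if piece in SCORES and best[start][0] + SCORES[piece] > best[end][0]:
                best[end] = (best[start][0] + SCORES[piece], start)
    segs, pos = [], n                       # backtrack the optimal path
    while pos > 0:
        start = best[pos][1]
        segs.append(text[start:pos])
        pos = start
    return segs[::-1], best[n][0]

segs, score = best_segmentation("online")
```

    Because the score of a path is a sum over its segments, the search stays linear in the number of candidate segmentation points regardless of how many candidate patterns each point spawns, which is the insensitivity property the abstract highlights.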

  11. RNA-Redesign: a web server for fixed-backbone 3D design of RNA.

    PubMed

    Yesselman, Joseph D; Das, Rhiju

    2015-07-01

    RNA is rising in importance as a design medium for interrogating fundamental biology and for developing therapeutic and bioengineering applications. While there are several online servers for design of RNA secondary structure, there are no tools available for the rational design of 3D RNA structure. Here we present RNA-Redesign (http://rnaredesign.stanford.edu), an online 3D design tool for RNA. This resource utilizes fixed-backbone design to optimize the sequence identity and nucleobase conformations of an RNA to match a desired backbone, analogous to fundamental tools that underlie rational protein engineering. The resulting sequences suggest thermostabilizing mutations that can be experimentally verified. Further, sequence preferences that differ between natural and computationally designed sequences can suggest whether natural sequences possess functional constraints besides folding stability, such as cofactor binding or conformational switching. Finally, for biochemical studies, the designed sequences can suggest experimental tests of 3D models, including concomitant mutation of base triples. In addition to the designs generated, detailed graphical analysis is presented through an integrated and user-friendly environment.

  12. RNA-Redesign: a web server for fixed-backbone 3D design of RNA

    PubMed Central

    Yesselman, Joseph D.; Das, Rhiju

    2015-01-01

    RNA is rising in importance as a design medium for interrogating fundamental biology and for developing therapeutic and bioengineering applications. While there are several online servers for design of RNA secondary structure, there are no tools available for the rational design of 3D RNA structure. Here we present RNA-Redesign (http://rnaredesign.stanford.edu), an online 3D design tool for RNA. This resource utilizes fixed-backbone design to optimize the sequence identity and nucleobase conformations of an RNA to match a desired backbone, analogous to fundamental tools that underlie rational protein engineering. The resulting sequences suggest thermostabilizing mutations that can be experimentally verified. Further, sequence preferences that differ between natural and computationally designed sequences can suggest whether natural sequences possess functional constraints besides folding stability, such as cofactor binding or conformational switching. Finally, for biochemical studies, the designed sequences can suggest experimental tests of 3D models, including concomitant mutation of base triples. In addition to the designs generated, detailed graphical analysis is presented through an integrated and user-friendly environment. PMID:25964298

  13. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-science data.

  14. The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Paulson, Sharon S.; Binkley, Robert L.; Kellogg, Yvonne D.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service. The NTRS is largely constructed with freely available software running on existing hardware, and the resulting additional exposure for the body of literature contained ensures that NASA's institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  15. Assessment of Energy Removal Impacts on Physical Systems: Hydrodynamic Model Domain Expansion and Refinement, and Online Dissemination of Model Results

    SciTech Connect

    Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping

    2010-08-01

    In this report we describe 1) the expansion of the PNNL hydrodynamic model domain to include the continental shelf along the coasts of Washington, Oregon, and Vancouver Island; and 2) the approach and progress in developing online/Internet dissemination of model results and outreach efforts in support of the Puget Sound Operational Forecast System (PS-OPF). Submittal of this report completes the work on Task 2.1.2, Effects of Physical Systems, Subtask 2.1.2.1, Hydrodynamics, for fiscal year 2010 of the Environmental Effects of Marine and Hydrokinetic Energy project.

  16. The PhyloFacts FAT-CAT web server: ortholog identification and function prediction using fast approximate tree classification.

    PubMed

    Afrasiabi, Cyrus; Samad, Bushra; Dineen, David; Meacham, Christopher; Sjölander, Kimmen

    2013-07-01

    The PhyloFacts 'Fast Approximate Tree Classification' (FAT-CAT) web server provides a novel approach to ortholog identification using subtree hidden Markov model-based placement of protein sequences to phylogenomic orthology groups in the PhyloFacts database. Results on a data set of microbial, plant and animal proteins demonstrate FAT-CAT's high precision at separating orthologs and paralogs and robustness to promiscuous domains. We also present results documenting the precision of ortholog identification based on subtree hidden Markov model scoring. The FAT-CAT phylogenetic placement is used to derive a functional annotation for the query, including confidence scores and drill-down capabilities. PhyloFacts' broad taxonomic and functional coverage, with >7.3 M proteins from across the Tree of Life, enables FAT-CAT to predict orthologs and assign function for most sequence inputs. Four pipeline parameter presets are provided to handle different sequence types, including partial sequences and proteins containing promiscuous domains; users can also modify individual parameters. PhyloFacts trees matching the query can be viewed interactively online using the PhyloScope Javascript tree viewer and are hyperlinked to various external databases. The FAT-CAT web server is available at http://phylogenomics.berkeley.edu/phylofacts/fatcat/.

  17. A Web Server for MACCS Magnetometer Data

    NASA Technical Reports Server (NTRS)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  18. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    PubMed

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. its druggability, is of major interest in the target identification phase of drug discovery, and pocket druggability investigations therefore represent a key step in compound clinical progression projects. Current computational druggability prediction models are tied to a single pocket estimation method, despite the uncertainties of pocket estimation. In this paper, we propose PockDrug-Server to predict pocket druggability, effective on both (i) pockets estimated with ligand guidance (extracted by proximity to a ligand in a holo protein structure) and (ii) pockets estimated solely from protein structure information (based on the atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results across different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets under different estimation methods and outperformed recent druggability models on apo pockets. It can be run on one or a set of apo/holo proteins using the different pocket estimation methods offered by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr.

  19. On-line updating of a distributed flow routing model - River Vistula case study

    NASA Astrophysics Data System (ADS)

    Karamuz, Emilia; Romanowicz, Renata; Napiorkowski, Jaroslaw

    2015-04-01

    This paper presents an application of methods of on-line updating in the River Vistula flow forecasting system. All flow-routing codes make simplifying assumptions and consider only a reduced set of the processes known to occur during a flood. Hence, all models are subject to a degree of structural error that is typically compensated for by calibration of the friction parameters. Calibrated parameter values are not, therefore, physically realistic, as in estimating them we also make allowance for a number of distinctly non-physical effects, such as model structural error and any energy losses or flow processes which occur at sub-grid scales. Calibrated model parameters are therefore area-effective, scale-dependent values which are not drawn from the same underlying statistical distribution as the equivalent at-a-point parameter of the same name. The aim of this paper is the derivation of real-time updated, on-line flow forecasts at certain strategic locations along the river, over a specified time horizon into the future, based on information on the behaviour of the flood wave upstream and available on-line measurements at a site. Depending on the length of the river reach and the slope of the river bed, a realistic forecast lead time, obtained in this manner, may range from hours to days. The information upstream can include observations of river levels and/or rainfall measurements. The proposed forecasting system will integrate distributed modelling, acting as a spatial interpolator with lumped parameter Stochastic Transfer Function models. Daily stage data from gauging stations are typically available at sites 10-60 km apart and test only the average routing performance of hydraulic models and not their ability to produce spatial predictions. Application of a distributed flow routing model makes it possible to interpolate forecasts both in time and space. 
This work was partly supported by the project "Stochastic flood forecasting system (The River Vistula reach

  20. Pathological tremor and voluntary motion modeling and online estimation for active compensation.

    PubMed

    Bo, Antônio Padilha Lanari; Poignet, Philippe; Geny, Christian

    2011-04-01

    This paper presents an algorithm that performs online tremor characterization from motion sensor measurements while filtering out the voluntary motion performed by the patient. In order to estimate both nonstationary signals simultaneously in a stochastic filtering framework, the pathological tremor was represented by a time-varying harmonic model and the voluntary motion by an auto-regressive moving-average (ARMA) model. Since the problem is nonlinear, an extended Kalman filter (EKF) was used. The developed solution was evaluated on simulated signals and on experimental data from patients with different pathologies. The results were also comprehensively compared with alternative techniques proposed in the literature, demonstrating the better performance of the proposed method. The algorithm presented in this paper may be an important tool in the design of active tremor compensation systems.
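
    The full method puts the tremor frequency itself in the EKF state and adds an ARMA voluntary-motion model; as a simplified, hedged illustration, the sketch below tracks a fixed-frequency tremor sinusoid buried in noise with a plain linear Kalman filter whose state is the in-phase/quadrature pair of the oscillation. All signal and noise parameters are invented.

```python
import math, random

def rotate_cov(P, c, s):
    """Return F P F^T for the rotation F = [[c, -s], [s, c]]."""
    f = [[c, -s], [s, c]]
    FP = [[sum(f[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(FP[i][k] * f[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

def kalman_harmonic(measurements, omega, dt, q=1e-4, r_var=0.05):
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    x = [0.0, 0.0]                       # in-phase / quadrature state
    P = [[1.0, 0.0], [0.0, 1.0]]
    est = []
    for z in measurements:
        # predict: rotate the oscillator state by omega*dt
        x = [c * x[0] - s * x[1], s * x[0] + c * x[1]]
        P = rotate_cov(P, c, s)
        P[0][0] += q
        P[1][1] += q
        # update with the scalar measurement z = x[0] + noise
        S = P[0][0] + r_var
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        est.append(x[0])
    return est

random.seed(0)
dt, omega = 0.01, 2 * math.pi * 5.0      # hypothetical 5 Hz tremor at 100 Hz
true = [math.sin(omega * k * dt) for k in range(400)]
noisy = [v + random.gauss(0.0, 0.2) for v in true]
est = kalman_harmonic(noisy, omega, dt)
```

    Making omega part of the state (so the predict step becomes nonlinear and needs the EKF's Jacobian linearization) is the step that lets the real algorithm follow a time-varying tremor frequency.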

  1. Online NIR Analysis and Prediction Model for Synthesis Process of Ethyl 2-Chloropropionate

    PubMed Central

    Zhang, Wei; Song, Hang; Lu, Jing; Liu, Wen; Nie, Lirong; Yao, Shun

    2015-01-01

    Online near-infrared spectroscopy was used as a process analysis technique in the synthesis of ethyl 2-chloropropionate for the first time. A partial least squares regression (PLSR) quantitative model of the product solution concentration was then established and optimized. The correlation coefficient (R2) of the PLSR calibration model was 0.9944, and the root mean square error of calibration (RMSEC) was 0.018105 mol/L; these values indicate that the quantitative calibration model performed well. Moreover, the root mean square error of prediction (RMSEP) on the validation set was 0.036429 mol/L. The results were very similar to those of offline gas chromatographic analysis, confirming that the method is valid. PMID:26366175
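
    The reported figures of merit are plain root-mean-square errors computed over the calibration and validation sets; a sketch with made-up concentration values (not data from the study):

```python
import math

def rmse(actual, predicted):
    """Root mean square error, the quantity behind RMSEC (calibration set)
    and RMSEP (validation/prediction set)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def r_squared(actual, predicted):
    """Coefficient of determination of the calibration fit."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical calibration-set concentrations in mol/L
calib_actual    = [0.50, 0.80, 1.10, 1.40]
calib_predicted = [0.52, 0.79, 1.08, 1.43]
rmsec = rmse(calib_actual, calib_predicted)
r2 = r_squared(calib_actual, calib_predicted)
```

    In the paper the predictions come from the PLSR model applied to the NIR spectra; here they are invented numbers used only to show how the two statistics are formed.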

  2. Interfaces for Distributed Systems of Information Servers.

    ERIC Educational Resources Information Center

    Kahle, Brewster; And Others

    1992-01-01

    Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…

  3. Optimizing the NASA Technical Report Server.

    ERIC Educational Resources Information Center

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    Modifying the NASA Technical Report Server (NTRS), a World Wide Web report distribution NASA technical publications service, has enhanced its performance, protocol support, and human interfacing. This article discusses the original and revised NTRS architecture, sequential and parallel query methods, and wide area information server (WAIS) uniform…

  4. Get the Word Out with List Servers

    ERIC Educational Resources Information Center

    Goldberg, Laurence

    2006-01-01

    In this article, the author details the use of electronic mail list servers in his school district. In the district of about 7,300 students in suburban Philadelphia (Abington SD), electronic mail list servers are now being used, along with other methods of communication, to disseminate information quickly and widely. They began by manually maintaining…

  5. You're a What? Process Server

    ERIC Educational Resources Information Center

    Torpey, Elka

    2012-01-01

    In this article, the author talks about the role and functions of a process server. The job of a process server is to hand deliver legal documents to the people involved in court cases. These legal documents range from a summons to appear in court to a subpoena for producing evidence. Process serving can involve risk, as some people take out their…

  6. CCTOP: a Consensus Constrained TOPology prediction web server.

    PubMed

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, and it correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example client-side script is provided.
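
    CCTOP's actual consensus is an HMM-based fusion of predictions and experimental constraints, but the underlying idea of combining per-residue topology labels from several predictors can be sketched as a simple majority vote; the three toy predictions below are invented.

```python
from collections import Counter

# Majority-vote consensus over aligned per-residue topology strings
# (i = inside, M = membrane, o = outside). A stand-in for the HMM fusion
# that CCTOP really performs; these predictions are made up.
predictions = [
    "iiiMMMMooooMMMMiii",
    "iiMMMMMooooMMMMiii",
    "iiiMMMMoooooMMMiii",
]

def consensus(preds):
    assert len({len(p) for p in preds}) == 1, "predictions must be aligned"
    # vote column by column across the predictors
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*preds))

cons = consensus(predictions)
```

    An HMM-based consensus improves on this by enforcing grammar constraints (e.g. membrane segments of plausible length, alternating sides) and by weighting sources by reliability, which a column-wise vote cannot do.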

  7. CCTOP: a Consensus Constrained TOPology prediction web server

    PubMed Central

    Dobson, László; Reményi, István; Tusnády, Gábor E.

    2015-01-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, and it correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example client-side script is provided. PMID:25943549

  8. Using Online Space Weather Modeling Resources in a Capstone Undergraduate Course

    NASA Astrophysics Data System (ADS)

    Liemohn, M.

    2012-04-01

    The University of Michigan offers a senior-undergraduate-level course entitled "Space Weather Modeling," taken by all of the space weather concentration students in the Atmospheric, Oceanic, and Space Sciences department. This is the capstone course of our undergraduate series, applying the foundational knowledge from the previous courses to an integrative large-scale numerical modeling study. A fraction of the graduate students also take this course. Because state-of-the-art modeling capabilities are well beyond what is possible in a single term of programming, the course uses available online model resources, in particular the Community Coordinated Modeling Center (CCMC), a multi-agency facility hosted by NASA's Goddard Space Flight Center. Students learn not only how to use the codes, but also which equations to solve to model a specific region of space and the various numerical approaches for implementing those equations within a code. The course is project-based, consisting of multiple written reports and oral presentations, and technical communication skills are an important component of the grading rubric. Students learn how to conduct a numerical modeling study by critiquing several space weather modeling journal articles, and then carry out their own studies with several of the available codes. In the end, they are familiar enough with the available models to know their ranges of validity and applicability for a wide array of space weather applications.

  9. Differential surface models for tactile perception of shape and on-line tracking of features

    NASA Technical Reports Server (NTRS)

    Hemami, H.

    1987-01-01

    Tactile perception of shape involves an on-line controller and a shape perceptor. The purpose of the on-line controller is to maintain gliding or rolling contact with the surface, and collect information, or track specific features of the surface such as edges of a certain sharpness. The shape perceptor uses the information to perceive, estimate the parameters of, or recognize the shape. The differential surface model depends on the information collected and on the a priori information known about the robot and its physical parameters. These differential models are certain functionals that are projections of the dynamics of the robot onto the surface gradient or onto the tangent plane. A number of differential properties may be directly measured from present day tactile sensors. Others may have to be indirectly computed from measurements. Others may constitute design objectives for distributed tactile sensors of the future. A parameterization of the surface leads to linear and nonlinear sequential parameter estimation techniques for identification of the surface. Many interesting compromises between measurement and computation are possible.

  10. Security Enhancement for Authentication of Nodes in MANET by Checking the CRL Status of Servers

    NASA Astrophysics Data System (ADS)

    Irshad, Azeem; Noshairwan, Wajahat; Shafiq, Muhammad; Khurram, Shahzada; Irshad, Ehtsham; Usman, Muhammad

    MANET security is becoming an increasing challenge for researchers. The lack of infrastructure gives rise to authentication problems in these networks. Most TTP- and non-TTP-based schemes seem impractical for adoption in MANETs. A hybrid key-management scheme addressed these issues effectively through pre-assigned offline logins, issuing certificates on that basis using 4G services. However, the scheme did not take into account the CRL status of servers; if it is embedded, nodes need to check the servers' CRL status frequently to authenticate any node and must place external messages outside the MANET, which leads to overheads. We have tried to reduce these by introducing an online MANET Authority responsible for issuing certificates in light of the servers' CRL status, renewing them, and performing key verification within the MANET, which greatly reduces external messages.

  11. Human behavior in online social systems

    NASA Astrophysics Data System (ADS)

    Grabowski, A.

    2009-06-01

    We present and study data concerning human behavior in four online social systems: (i) an Internet community of friends of over 10⁷ people, (ii) a music community website with over 10⁶ users, (iii) a gamers’ community server with over 5 × 10⁶ users and (iv) a booklovers’ website with over 2.5 × 10⁵ users. The purpose of those systems is different; however, their properties are very similar. We have found that the distribution of human activity (e.g., the sum of books read or songs played) has the form of a power law. Moreover, the relationship between human activity and time has a power-law form, too. We present a simple interest-driven model of the evolution of such systems which explains the emergence of two scaling regimes.
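
    A power-law activity distribution is commonly checked by fitting a straight line to the distribution on log-log axes; the sketch below recovers a hypothetical exponent of 2 from ideal data (the exponent is illustrative, not a value reported for these communities).

```python
import math

def fit_loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x); for P(x) ~ x^(-alpha)
    the fitted slope is -alpha."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Ideal (noise-free) activity counts following a pure power law
xs = list(range(1, 101))
ys = [1000.0 * x ** -2.0 for x in xs]
alpha = -fit_loglog_slope(xs, ys)
```

    With empirical counts, logarithmic binning or a maximum-likelihood estimator is preferred over a raw log-log fit, since the tail of the histogram is noisy.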

  12. The Argonne Voyager multimedia server

    SciTech Connect

    Disz, T.; Judson, I.; Olson, R.; Stevens, R.

    1997-07-01

    With the growing presence of multimedia-enabled systems, one will see an integration of collaborative computing concepts into the everyday environments of future scientific and technical workplaces. Desktop teleconferencing is in common use today, while more complex desktop teleconferencing technology that relies on the availability of multipoint (greater than two nodes) enabled tools is now starting to become available on PCs. A critical problem when using these collaboration tools is the inability to easily archive multistream, multipoint meetings and make the content available to others. Ideally one would like the ability to capture, record, playback, index, annotate and distribute multimedia stream data as easily as one currently handles text or still image data. While the ultimate goal is still some years away, the Argonne Voyager project is aimed at exploring and developing media server technology needed to provide a flexible virtual multipoint recording/playback capability. In this article the authors describe the motivating requirements, architecture implementation, operation, performance, and related work.

  13. Stress Caused by On-Line Collaboration in E-Learning: A Developing Model

    ERIC Educational Resources Information Center

    Allan, John; Lawless, Naomi

    2003-01-01

    On-line collaboration is becoming increasingly common in education and in organisations. It was believed that this could in itself cause stress for collaborators. An analysis of on-line learning diaries, phone interviews and questionnaires indicated that on-line collaboration could cause stress, and this stress was linked to the dependency of the…

  14. College Students' Choice Modeling of Taking On-Line International Business Courses

    ERIC Educational Resources Information Center

    Yeh, Robert S.

    2006-01-01

    To understand students' choice behavior of taking on-line international business courses, a survey study is conducted to collect information regarding students' actual choices of taking on-line courses and potential factors that may have impacts on students' choices of online learning. Potential factors such as enrollment status, demographic…

  15. A conceptual model for analysing informal learning in online social networks for health professionals.

    PubMed

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  16. Modeling the intonation of discourse segments for improved online dialog act tagging.

    PubMed

    Vivek Kumar Rangarajan, Sridhar; Narayanan, Shrikanth; Bangalore, Srinivas

    2008-01-01

    Prosody is an important cue for identifying dialog acts. In this paper, we show that modeling the sequence of acoustic-prosodic values as n-gram features with a maximum entropy model for dialog act (DA) tagging can perform better than conventional approaches that use a coarse representation of the prosodic contour through acoustic correlates of prosody. We also propose a discriminative framework that exploits preceding context in the form of lexical and prosodic cues from previous discourse segments. Such a scheme facilitates online DA tagging and offers robustness in the decoding process, unlike greedy decoding schemes that can potentially propagate errors. Using only lexical and prosodic cues from the 3 previous utterances, we achieve a DA tagging accuracy of 72%, compared to the best-case scenario with accurate knowledge of the previous DA tag, which results in 74% accuracy.

  17. Use of whole building simulation in on-line performance assessment: Modeling and implementation issues

    SciTech Connect

    Haves, Philip; Salsbury, Tim; Claridge, David; Liu, Mingsheng

    2001-06-15

    The application of model-based performance assessment at the whole building level is explored. The information requirements for a simulation to predict the actual performance of a particular real building, as opposed to estimating the impact of design options, are addressed with particular attention to common sources of input error and important deficiencies in most simulation models. The role of calibrated simulations is discussed. The communication requirements for passive monitoring and active testing are identified and the possibilities for using control system communications protocols to link on-line simulation and energy management and control systems are discussed. The potential of simulation programs to act as "plug-and-play" components on building control networks is discussed.

  18. HydroShare: An online, collaborative environment for the sharing of hydrologic data and models (Invited)

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Arrigo, J.; Hooper, R. P.; Valentine, D. W.; Maidment, D. R.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. HydroShare will use the integrated Rule-Oriented Data System (iRODS) to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
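The Resource concept described above, with system metadata separated from science metadata plus type-specific elements, can be sketched as a simple data model. The field names below are illustrative assumptions, not HydroShare's actual schema.

```python
from dataclasses import dataclass, field

# Sketch of the Resource idea: system metadata common to every resource is
# kept separate from science metadata, with a slot for type-specific
# elements (e.g. model execution info). Field names are illustrative.

@dataclass
class SystemMetadata:
    resource_id: str
    owner: str
    created: str          # ISO date

@dataclass
class ScienceMetadata:
    title: str
    keywords: list

@dataclass
class Resource:
    system: SystemMetadata
    science: ScienceMetadata
    type_specific: dict = field(default_factory=dict)

r = Resource(
    SystemMetadata("res-001", "alice", "2013-12-01"),
    ScienceMetadata("Streamflow observations, Logan River", ["hydrology", "streamflow"]),
    {"model_engine": "SWAT", "requires": ["weather-forcing"]},
)
print(r.science.title)
```

Separating the two metadata blocks lets system-side rules (storage, access control, cataloging) evolve independently of the science description each community contributes.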

  19. Micro-level dynamics of the online information propagation: A user behavior model based on noisy spiking neurons.

    PubMed

    Lymperopoulos, Ilias N; Ioannou, George D

    2016-10-01

    We develop and validate a model of the micro-level dynamics underlying the formation of macro-level information propagation patterns in online social networks. In particular, we address the dynamics at the level of the mechanism regulating a user's participation in an online information propagation process. We demonstrate that this mechanism can be realistically described by the dynamics of noisy spiking neurons driven by endogenous and exogenous, deterministic and stochastic stimuli representing the influence modulating one's intention to be an information spreader. Depending on the dynamically changing influence characteristics, time-varying propagation patterns emerge reflecting the temporal structure, strength, and signal-to-noise ratio characteristics of the stimulation driving the online users' information sharing activity. The proposed model constitutes an overarching, novel, and flexible approach to the modeling of the micro-level mechanisms whereby information propagates in online social networks. As such, it can be used for a comprehensive understanding of the online transmission of information, a process integral to the sociocultural evolution of modern societies. The proposed model is highly adaptable and suitable for the study of the propagation patterns of behavior, opinions, and innovations among others.
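A minimal sketch of the mechanism the authors describe: a user's intention to share behaves like the membrane potential of a noisy leaky integrate-and-fire neuron, accumulating deterministic influence plus stochastic input until a threshold triggers a "share" event. All parameter values (leak, threshold, noise level) are illustrative assumptions, not the paper's fitted dynamics.

```python
import random

# A user's "intention to share" modeled as a noisy leaky integrate-and-fire
# unit: deterministic influence plus Gaussian noise is leakily integrated,
# and a threshold crossing emits a share event. Parameters are illustrative.

def simulate_user(influence, noise_std=0.05, leak=0.1, threshold=1.0, seed=42):
    rng = random.Random(seed)
    v, shares = 0.0, []
    for t, drive in enumerate(influence):
        v += -leak * v + drive + rng.gauss(0.0, noise_std)  # leaky integration
        if v >= threshold:       # threshold crossing: the user shares
            shares.append(t)
            v = 0.0              # reset after the event
    return shares

strong = simulate_user([0.4] * 20)   # sustained strong influence
weak = simulate_user([0.05] * 20)    # weak influence, rarely crosses
print(len(strong), len(weak))
```

Varying the temporal structure and signal-to-noise ratio of the influence signal is what, in the paper's terms, shapes the emergent propagation patterns.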

  1. CoCAR: An Online Synchronous Training Model for Empowering ICT Capacity of Teachers of Chinese as a Foreign Language

    ERIC Educational Resources Information Center

    Lan, Yu-Ju; Chang, Kuo-En; Chen, Nian-Shing

    2012-01-01

    In response to the need to cultivate pre-service Chinese as a foreign language (CFL) teachers' information and communication technology (ICT) competency in online synchronous environments, this research adopted a three-stage cyclical model named "cooperation-based cognition, action, and reflection" (CoCAR). The model was implemented in an 18-week…

  2. Measures of Quality in Online Education: An Investigation of the Community of Inquiry Model and the Net Generation

    ERIC Educational Resources Information Center

    Shea, Peter; Bidjerano, Temi

    2008-01-01

    The goal of this article is to present and validate an instrument that reflects the Community of Inquiry Model (Garrison, Anderson, & Archer, 2000, 2001) and inquire into whether the instrument and the model it reflects explain variation in levels of student learning and satisfaction with online courses in a higher education context. Additionally…

  3. An Online Approach for Training International Climate Scientists to Use Computer Models

    NASA Astrophysics Data System (ADS)

    Yarker, M. B.; Mesquita, M. D.; Veldore, V.

    2013-12-01

    With the mounting evidence presented in the work of the IPCC (2007), climate change has been acknowledged by the international community as a significant challenge to sustainable development. It is important that scientists in developing countries have access to knowledge and tools so that well-informed decisions can be made about the mitigation of and adaptation to climate change. However, training researchers to use climate modeling techniques and data analysis has become a challenge, because current capacity building approaches train researchers through short-term workshops, which require a large amount of funding. It has also been observed that many participants who recently completed capacity building courses still view climate and weather models as a metaphorical 'black box', where data goes in and results come out, and there is evidence that these participants lack a basic understanding of the climate system. Both of these issues limit the ability of some scientists to go beyond running a model based on rote memorization of the process. As a result, they are unable to solve problems regarding run-time errors, and thus cannot determine whether or not their model simulation is reasonable. Current research in the field of science education indicates that there are effective strategies to teach learners about science models. They involve having the learner work with, experiment with, modify, and apply models in a way that is significant and informative to the learner. It has also been noted that in the case of computational models, the installation and set-up process alone can be time consuming and confusing for new users, which can hinder their ability to concentrate on using, experimenting with, and applying the model to real-world scenarios. Therefore, developing an online version of capacity building is an alternative to workshop training programs: it makes use of new technologies and allows for a long-term educational process in a way…

  4. Online Higher Education Instruction to Foster Critical Thinking When Assessing Environmental Issues - the Brownfield Action Model

    NASA Astrophysics Data System (ADS)

    Bower, Peter; Liddicoat, Joseph; Dittrick, Diane; Maenza-Gmelch, Terryanne; Kelsey, Ryan

    2013-04-01

    According to the Environmental Protection Agency, there are presently over half a million brownfields in the United States, but this number only includes sites for which an Environmental Site Assessment has been conducted. The actual number of brownfields is certainly into the millions and constitutes one of the major environmental issues confronting all communities today. Taught in part online for more than a decade in environmental science courses at over a dozen colleges, universities, and high schools in the United States, Brownfield Action (BA) is an interactive, web-based simulation that combines scientific expertise, constructivist education philosophy, and multimedia to advance the teaching of environmental science (Bower et al., 2011). In the online simulation and classroom, students form geotechnical consulting companies, conduct environmental site assessment investigations, and work collaboratively to solve a problem in environmental forensics. The BA model contains interdisciplinary scientific and social information that are integrated within a digital learning environment that encourages students to construct their knowledge as they learn by doing. As such, the approach improves the depth and coherence of students understanding of the course material. Like real-world environmental consultants, students are required to develop and apply expertise from a wide range of fields, including environmental science and engineering as well as journalism, medicine, public health, law, civics, economics, and business management. The overall objective is for students to gain an unprecedented appreciation of the complexity, ambiguity, and risk involved in any environmental issue or crisis.

  5. Strategies for Teaching Regional Climate Modeling: Online Professional Development for Scientists and Decision Makers

    NASA Astrophysics Data System (ADS)

    Walton, P.; Yarker, M. B.; Mesquita, M. D. S.; Otto, F. E. L.

    2014-12-01

    There is a clear role for climate science in supporting decision making at a range of scales and in a range of contexts: from global to local, from policy to industry. However clear that role may be, there is also a clear discrepancy in the understanding of how to use the science and its associated tools (such as climate models). Despite the large body of literature on the science, there is clearly a need for greater support in how to apply it appropriately. Moreover, access to high-quality professional development courses can be problematic due to geographic, financial, and time constraints. In an attempt to address this gap, we independently developed two online professional courses focused on helping participants use and apply two regional climate models, WRF and PRECIS. Both courses were designed to support participants' learning through tutor-led programs that covered the basic scientific principles of regional climate modeling and how to apply model outputs. The fundamental differences between the two courses are: 1) the WRF modeling course expected participants to design their own research question that was then run on a version of the model, whereas 2) the PRECIS course concentrated on the principles of regional modeling and how the climate science informed the modeling process. The two courses were developed to utilise the cost and time management benefits associated with eLearning, with the recognition that this mode of teaching can also be accessed internationally, providing professional development courses in countries that may not be able to provide their own. The development teams saw it as critical that the courses reflected sound educational theory, to ensure that participants had the maximum opportunity to learn successfully. In particular, the role of reflection is central to both course structures, helping participants make sense of the science in relation to their own situation. This paper details the different…

  6. Performance measurements of single server fuzzy queues with unreliable server using left and right method

    NASA Astrophysics Data System (ADS)

    Mueen, Zeina; Ramli, Razamin; Zaibidi, Nerda Zura

    2015-12-01

    There are a number of real-life systems that can be described as queuing systems, and this paper presents a queuing model applied to a manufacturing system example. The queuing model considered is depicted in a fuzzy environment with retrial queues and an unreliable server. The stability condition of this model is investigated, and the performance measures are obtained by adopting the left and right method. The new approach adopted in this study merges the existing α-cut interval and nonlinear programming techniques, and a numerical example is considered to explain the methodology. The numerical example graphically demonstrates the flexibility of the method, showing the exact real mean value of customers in the system as well as the expected waiting times.
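The left and right method can be illustrated on a plain M/M/1 queue (the paper's model additionally has retrials and server breakdowns, which are omitted here). For each α-cut, the fuzzy arrival and service rates become crisp intervals, and the monotonicity of the performance measure in each rate gives its left and right endpoints. The triangular fuzzy numbers below are illustrative assumptions.

```python
# The "left and right" idea on a plain M/M/1 queue: for each α-cut the fuzzy
# arrival and service rates become crisp intervals; because the mean queue
# length L = λ/(μ - λ) is increasing in λ and decreasing in μ, its interval
# endpoints follow directly.

def alpha_cut(tri, alpha):
    """α-cut interval [left, right] of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def mm1_mean_customers(lam, mu):
    """Mean number in a stable M/M/1 system: L = λ / (μ - λ)."""
    return lam / (mu - lam)

def fuzzy_L(lam_tri, mu_tri, alpha):
    lam_lo, lam_hi = alpha_cut(lam_tri, alpha)
    mu_lo, mu_hi = alpha_cut(mu_tri, alpha)
    return (mm1_mean_customers(lam_lo, mu_hi),   # smallest λ, largest μ
            mm1_mean_customers(lam_hi, mu_lo))   # largest λ, smallest μ

lam = (2.0, 3.0, 4.0)   # fuzzy arrival rate
mu = (5.0, 6.0, 7.0)    # fuzzy service rate
print(fuzzy_L(lam, mu, alpha=1.0))   # (1.0, 1.0): the crisp value 3/(6-3)
print(fuzzy_L(lam, mu, alpha=0.0))   # widest interval, at the support
```

Sweeping α from 0 to 1 reconstructs the membership function of the fuzzy performance measure; nonlinear programming replaces the closed-form endpoints when the measure is not monotone in the rates.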

  7. An online trajectory module (version 1.0) for the non-hydrostatic numerical weather prediction model COSMO

    NASA Astrophysics Data System (ADS)

    Miltenberger, A. K.; Pfahl, S.; Wernli, H.

    2013-02-01

    A module to calculate online trajectories has been implemented into the non-hydrostatic limited-area weather prediction and climate model COSMO. Whereas offline trajectories are calculated with wind fields from model output, which is typically available every one to six hours, online trajectories use the simulated wind field at every model time step (typically less than a minute) to solve the trajectory equation. As a consequence, online trajectories much better capture the short-term temporal fluctuations of the wind field, which is particularly important for mesoscale flows near topography and convective clouds, and they do not suffer from temporal interpolation errors between model output times. The numerical implementation of online trajectories in the COSMO model is based upon an established offline trajectory tool and takes full account of the horizontal domain decomposition that is used for parallelization of the COSMO model. Although a perfect workload balance cannot be achieved for the trajectory module (due to the fact that trajectory positions are not necessarily equally distributed over the model domain), the additional computational costs are fairly small for high-resolution simulations. Various options have been implemented to initialize online trajectories at different locations and times during the model simulation. As a first application of the new COSMO module an Alpine North Föhn event in summer 1987 has been simulated with horizontal resolutions of 2.2 km, 7 km, and 14 km. It is shown that low-tropospheric trajectories calculated offline with one- to six-hourly wind fields can significantly deviate from trajectories calculated online. Deviations increase with decreasing model grid spacing and are particularly large in regions of deep convection and strong orographic flow distortion. On average, for this particular case study, horizontal and vertical positions between online and offline trajectories differed by 50-190 km and 150-750 m
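The online-versus-offline distinction can be reproduced in a toy setting: integrating the same trajectory equation dx/dt = u(x, t) while refreshing the wind every model step (online) or only at coarse output intervals (offline), which aliases short-period fluctuations. The analytic wind field and resolutions below are illustrative assumptions, not a COSMO configuration.

```python
import math

# Toy online-vs-offline trajectory comparison: the wind is refreshed either
# every model step ("online") or only at coarse output intervals ("offline").

def wind(t):
    """Mean flow of 10 m/s with a fluctuation of period 10 time units."""
    return 10.0 + 5.0 * math.sin(2.0 * math.pi * t / 10.0)

def integrate(dt, n_steps, refresh_every):
    """Forward-Euler trajectory; the wind is held fixed between refreshes."""
    x, u = 0.0, wind(0.0)
    for k in range(n_steps):
        if k % refresh_every == 0:
            u = wind(k * dt)     # sample the wind at this step
        x += u * dt
    return x

online = integrate(dt=0.1, n_steps=400, refresh_every=1)    # every step
offline = integrate(dt=0.1, n_steps=400, refresh_every=60)  # every 6 units
print(round(online, 2), round(offline, 2), round(online - offline, 2))
```

Because the coarse sampling interval does not resolve the period-10 fluctuation, the offline trajectory accumulates a position error of several percent over four periods, mirroring the km-scale deviations reported in the abstract.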

  8. Scaffolding Students' Online Critiquing of Expert- and Peer-generated Molecular Models of Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Chang, Hsin-Yi; Chang, Hsiang-Chi

    2013-08-01

    In this study, we developed online critiquing activities using an open-source computer learning environment. We investigated how well the activities scaffolded students to critique molecular models of chemical reactions made by scientists, peers, and a fictitious peer, and whether the activities enhanced the students' understanding of science models and chemical reactions. The activities were implemented in an eighth-grade class with 28 students in a public junior high school in southern Taiwan. The study employed mixed research methods. Data collected included pre- and post-instructional assessments, post-instructional interviews, and students' electronic written responses and oral discussions during the critiquing activities. The results indicated that these activities guided the students to produce overall quality critiques. Also, the students developed a more sophisticated understanding of chemical reactions and scientific models as a result of the intervention. Design considerations for effective model critiquing activities are discussed based on observational results, including the use of peer-generated artefacts for critiquing to promote motivation and collaboration, coupled with critiques of scientific models to enhance students' epistemological understanding of model purpose and communication.

  9. Migrating an Online Service to WAP - A Case Study.

    ERIC Educational Resources Information Center

    Klasen, Lars

    2002-01-01

    Discusses mobile access, via the wireless application protocol (WAP), to online services offered in Sweden through InfoTorg. Topics include the Swedish online market; filtering HTML data from an Internet/Web server into WML (wireless markup language); mobile phone technology; microbrowsers; the WAP protocol; and future possibilities. (LRW)

  10. The RAMI On-line Model Checker (ROMC): A tool for the automated evaluation of canopy reflectance models.

    NASA Astrophysics Data System (ADS)

    Widlowski, J.-L.; Robustelli, M.; Taberner, M.; Pinty, B.; Rami Participants, All

    The Radiative transfer Model Intercomparison (RAMI) exercise was first launched in 1999, and again in 2002 and 2005. RAMI aims at evaluating the performance of canopy reflectance models in the absence of any absolute reference truth. It does so by intercomparing models over a large ensemble of test cases under a variety of spectral and illumination conditions. A series of criteria can be applied to select an ensemble of mutually agreeing 3-D Monte Carlo models to provide a surrogate truth against which all other models can then be compared. We will present an overview of the RAMI activities and show how the results of the latest phase have led to the development of the RAMI On-line Model Checker (ROMC). This tool allows both model developers and users to evaluate the performance of their canopy reflectance models (a) against previous RAMI test cases, whose results have already been published in the literature, and (b) against test cases that are similar to the RAMI cases but for which no results will be known a priori. As such, the ROMC allows models to be debugged and/or validated autonomously on a limited number of test cases. RAMI-certified graphics that document a model's performance can be downloaded for future use in scientific presentations and/or publications.

  11. Methodology to model the energy and greenhouse gas emissions of electronic software distributions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2012-01-17

    A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. In order to counteract the use of high level, top-down modeling efforts, and to increase result accuracy, a focus upon device details and data routes was taken. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides ESD and physical distribution options. The ESD method included the calculation of power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO₂e of server and networking devices was proportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model which revealed potential CO₂e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
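The structure of such a model (server, per-hop network, and client phases, each scaled by a grid emission factor) can be sketched with a back-of-the-envelope calculation. Every number below is an illustrative assumption, not a value from the paper.

```python
# Back-of-the-envelope sketch of the ESD energy-model structure: server,
# per-hop network, and client download phases, converted to CO2e via a grid
# emission factor. All numbers are illustrative assumptions.

GRID_KG_CO2E_PER_KWH = 0.5   # assumed grid emission factor

def esd_energy_kwh(file_gb, server_kwh_per_gb=0.01,
                   hop_kwh_per_gb=0.002, n_hops=12,
                   client_kwh_per_gb=0.005):
    server = file_gb * server_kwh_per_gb            # data center share
    network = file_gb * hop_kwh_per_gb * n_hops     # scales with data hops
    client = file_gb * client_kwh_per_gb            # browsing + download
    return server + network + client

def esd_co2e_kg(file_gb, **kw):
    return esd_energy_kwh(file_gb, **kw) * GRID_KG_CO2E_PER_KWH

energy = esd_energy_kwh(4.0)   # a 4 GB software download
print(round(energy, 3), round(esd_co2e_kg(4.0), 3))   # 0.156 kWh, 0.078 kg
```

The paper's emphasis on server utilization corresponds here to shrinking `server_kwh_per_gb`: a highly virtualized, well-utilized server amortizes its fixed power over far more delivered gigabytes.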

  12. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  13. Web server with ATMEGA 2560 microcontroller

    NASA Astrophysics Data System (ADS)

    Răduca, E.; Ungureanu-Anghel, D.; Nistor, L.; Haţiegan, C.; Drăghici, S.; Chioncel, C.; Spunei, E.; Lolea, R.

    2016-02-01

    This paper presents the design and construction of a Web server to command, control, and monitor remotely a wide range of industrial or personal equipment and/or sensors. The server runs custom software, which can be written by users and can work with many types of operating system. The authors realized the Web server on two platforms, a UC board and a network board. The source code was written in the open-source Arduino 1.0.5 language.
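The request/response pattern such a board implements (URL paths mapped to sensor reads or actuator commands) can be sketched with the standard library; the endpoint name and the stand-in sensor value are assumptions for illustration, not the authors' firmware.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of the command/monitor loop such a server implements: GET paths map
# to sensor reads (or, analogously, actuator commands). Endpoint names and
# the fake sensor value are illustrative assumptions.

SENSORS = {"temperature": 21.5}   # stand-in for a real sensor read

class BoardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sensor/temperature":
            body, status = str(SENSORS["temperature"]).encode(), 200
        else:
            body, status = b"unknown endpoint", 404
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):   # keep the console quiet
        pass

def serve(host="", port=8080):
    """Blocking entry point, e.g. serve() on the deployed device."""
    HTTPServer((host, port), BoardHandler).serve_forever()
```

On a microcontroller the same loop is written against the board's Ethernet library, but the protocol-level behavior (parse the request path, emit a small plain-text response) is identical.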

  14. The Matpar Server on the HP Exemplar

    NASA Technical Reports Server (NTRS)

    Springer, Paul

    2000-01-01

    This presentation reviews the design of Matlab extensions for parallel processing on a parallel system. Matlab was found to be too slow on many large problems, and with the Next Generation Space Telescope requiring greater capability, work began in early 1996 on parallel extensions to Matlab, called Matpar. This presentation reviews the architecture, functionality, and design of Matpar. The design utilizes a client/server strategy, with the client code written in C and the object-oriented server code written in C++. The client/server approach provides Matpar with ease of use and good speed.

  15. The HADDOCK web server for data-driven biomolecular docking.

    PubMed

    de Vries, Sjoerd J; van Dijk, Marc; Bonvin, Alexandre M J J

    2010-05-01

    Computational docking is the prediction or modeling of the three-dimensional structure of a biomolecular complex, starting from the structures of the individual molecules in their free, unbound form. HADDOCK is a popular docking program that takes a data-driven approach to docking, with support for a wide range of experimental data. Here we present the HADDOCK web server protocol, facilitating the modeling of biomolecular complexes for a wide community. The main web interface is user-friendly, requiring only the structures of the individual components and a list of interacting residues as input. Additional web interfaces allow the more advanced user to exploit the full range of experimental data supported by HADDOCK and to customize the docking process. The HADDOCK server has access to the resources of a dedicated cluster and of the e-NMR GRID infrastructure. Therefore, a typical docking run takes only a few minutes to prepare and a few hours to complete.

  16. Implementing Online Physical Education

    ERIC Educational Resources Information Center

    Mohnsen, Bonnie

    2012-01-01

    Online physical education, although seemingly an oxymoron, appears to be the wave of the future at least for some students. The purpose of this article is to explore research and options for online learning in physical education and to examine a curriculum, assessment, and instructional model for online learning. The article examines how physical…

  17. A Hierarchical Neuronal Model for Generation and Online Recognition of Birdsongs

    PubMed Central

    Yildiz, Izzet B.; Kiebel, Stefan J.

    2011-01-01

    The neuronal system underlying learning, generation and recognition of song in birds is one of the best-studied systems in the neurosciences. Here, we use these experimental findings to derive a neurobiologically plausible, dynamic, hierarchical model of birdsong generation and transform it into a functional model of birdsong recognition. The generation model consists of neuronal rate models and includes critical anatomical components like the premotor song-control nucleus HVC (proper name), the premotor nucleus RA (robust nucleus of the arcopallium), and a model of the syringeal and respiratory organs. We use Bayesian inference of this dynamical system to derive a possible mechanism for how birds can efficiently and robustly recognize the songs of their conspecifics in an online fashion. Our results indicate that the specific way birdsong is generated enables a listening bird to robustly and rapidly perceive embedded information at multiple time scales of a song. The resulting mechanism can be useful for investigating the functional roles of auditory recognition areas and providing predictions for future birdsong experiments. PMID:22194676
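A toy version of the generative hierarchy the authors describe: a chain of HVC "clock" units activates in sequence, and the currently active unit drives RA rate units through fixed weights. Sizes, weights, and time constants are illustrative assumptions; the actual model (and its Bayesian inversion for online recognition) is far richer.

```python
import random

# Toy generative hierarchy: a chain of HVC "clock" units fires in sequence,
# and the active unit drives a few RA rate units through fixed random
# weights. All sizes and weights are illustrative assumptions.

def hvc_chain(n_units, steps_per_unit):
    """Yield the index of the HVC unit active at each time step."""
    for k in range(n_units):
        for _ in range(steps_per_unit):
            yield k

def ra_rates(n_hvc=4, n_ra=3, steps_per_unit=2, seed=0):
    rng = random.Random(seed)
    w = [[rng.uniform(0.0, 1.0) for _ in range(n_ra)] for _ in range(n_hvc)]
    return [w[k][:] for k in hvc_chain(n_hvc, steps_per_unit)]  # RA follows HVC

song = ra_rates()
print(len(song))   # 8 time steps: 4 HVC units x 2 steps each
```

In the paper, recognition inverts this kind of generative model: a listener infers which HVC state sequence best predicts the incoming sound, which is why the hierarchical temporal structure aids robust online perception.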

  20. GrayStarServer: Server-side Spectrum Synthesis with a Browser-based Client-side User Interface

    NASA Astrophysics Data System (ADS)

    Short, C. Ian

    2016-10-01

    We present GrayStarServer (GSS), a stellar atmospheric modeling and spectrum synthesis code of pedagogical accuracy that is accessible in any web browser on commonplace computational devices and that runs on a timescale of a few seconds. The addition of spectrum synthesis annotated with line identifications extends the functionality and pedagogical applicability of GSS beyond that of its predecessor, GrayStar3 (GS3). The spectrum synthesis is based on a line list acquired from the NIST atomic spectra database, and the GSS post-processing and user interface client allows the user to inspect the plain text ASCII version of the line list, as well as to apply macroscopic broadening. Unlike GS3, GSS carries out the physical modeling on the server side in Java, and communicates with the JavaScript and HTML client via an asynchronous HTTP request. We also describe other improvements beyond GS3, such as a more physical treatment of background opacity and atmospheric physics, the comparison of key results with those of the Phoenix code, and the use of the HTML <canvas> element for higher quality plotting and rendering of results. We also present LineListServer, a Java code for converting custom ASCII line lists in NIST format to the byte data type file format required by GSS so that users can prepare their own custom line lists. We propose a standard for marking up and packaging model atmosphere and spectrum synthesis output for data transmission and storage that will facilitate a web-based approach to stellar atmospheric modeling and spectrum synthesis. We describe some pedagogical demonstrations and exercises enabled by easily accessible, on-demand, responsive spectrum synthesis. GSS may serve as a research support tool by providing quick spectroscopic reconnaissance. GSS may be found at www.ap.smu.ca/~ishort/OpenStars/GrayStarServer/grayStarServer.html, and source tarballs for local installations of both GSS and LineListServer may be found at www.ap.smu.ca/~ishort/OpenStars/.

  1. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms

    PubMed Central

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-01-01

    Next-generation sequencing (NGS) is revolutionising marker development and the rapidly increasing amount of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study, we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST) for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and 2 distinct ecotypes. PMID:24296905

  2. On-Line Model-Based System For Nuclear Plant Monitoring

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Lefteri H.; Lee, G. W.; Ragheb, Magdi; McDonough, T.; Niziolek, F.; Parker, M.

    1989-03-01

    A prototypical on-line model-based system, LASALLE1, developed at the University of Illinois in collaboration with the Illinois Department of Nuclear Safety (IDNS), is described. Its main purpose is to interpret about 300 signals, updated every two minutes at IDNS from the LaSalle Nuclear Power Plant, and to diagnose possible abnormal conditions. It is written in VAX/VMS OPS5 and operates in both on-line and testing modes. Its knowledge base encodes operator and plant actions pertaining to Emergency Operating Procedure (EOP) A-01, a procedure driven by the reactor's coolant level and pressure signals, whose purpose is to shut down the reactor, maintain adequate core cooling, and reduce reactor pressure and temperature to cold shutdown conditions (about 90 to 200 °F). The monitoring of the procedure is performed from the perspective of Emergency Preparedness. Two major issues are addressed in this system. The first is the management of the system's short-term, or working, memory. LASALLE1 must reach its inferences, display its conclusions, and update a message file every two minutes, before a new set of data arrives from the plant. This was achieved by superimposing additional layers of control over the inferencing strategies inherent in OPS5 and by developing special rules for the management of used or outdated information. The second issue is the representation of information and its uncertainty. The concepts of information granularity and performance-level, which are based on a coupling of Probability Theory and the theory of Fuzzy Sets, are used for this purpose. The estimation of the performance-level incorporates a mathematical methodology which accounts for two types of uncertainty encountered in monitoring physical systems: random uncertainty, in the form of probability density functions generated by observations, measurements and sensor data, and fuzzy uncertainty, represented by membership functions based on symbolic

  3. New Development of the Online Integrated Climate-Chemistry model framework (RegCM-CHEM4)

    NASA Astrophysics Data System (ADS)

    Zakey, A. S.; Shalaby, A. K.; Solmon, F.; Giorgi, F.; Tawfik, A. B.; Steiner, A. L.; Baklanov, A.

    2012-04-01

    The RegCM-CHEM4 is a new online integrated climate-chemistry model based on the regional climate model RegCM4. RegCM4, developed at the Abdus Salam International Centre for Theoretical Physics (ICTP), is a hydrostatic, sigma-coordinate model. Tropospheric gas-phase chemistry is integrated into the climate model using a condensed version of the Carbon Bond Mechanism (CBM-Z), with lumped species that represent broad categories of organics based on carbon bond structure. The computationally rapid radical balance method (RBM) is coupled to the gas-phase mechanism as a chemical solver. Photolysis rates are determined as a function of meteorological and chemical inputs and interpolated from an array of pre-determined values based on the Tropospheric Ultraviolet-Visible Model (TUV), with cloud cover corrections. Cloud optical depths and cloud altitudes from RegCM-CHEM4 are used in the photolysis calculations, thereby directly coupling the photolysis rates and chemical reactions to meteorological conditions at each model time step. In this study, we evaluate the model over Europe on two different time scales: (1) an event-based analysis of the ozone episode associated with the heat wave of August 2003 and (2) a climatological analysis of a six-year simulation (2000-2005). For the episode analysis, model simulations show good agreement with European Monitoring and Evaluation Program (EMEP) observations of hourly ozone over different regions of Europe and capture ozone concentrations during and after the summer 2003 heat wave event. Analysis of the full six years of simulation indicates that the coupled chemistry-climate model can reproduce the seasonal cycle of ozone, with an overestimation of ozone in non-event years of 5-15 ppb depending on the geographic region. Overall, the ozone and ozone precursor evaluation shows the feasibility of using RegCM-CHEM4 for decadal-length simulations of chemistry-climate interactions.
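The table-lookup photolysis scheme described above can be sketched as linear interpolation in solar zenith angle followed by a cloud-cover correction; all grid values and the damping factor below are invented placeholders, not actual TUV output:

```python
from bisect import bisect_left

# Hypothetical clear-sky photolysis rates J [1/s] tabulated against solar zenith angle (deg),
# standing in for the pre-computed TUV array; the values are invented placeholders.
SZA_GRID = [0.0, 30.0, 60.0, 90.0]
J_CLEAR = [8.0e-5, 6.0e-5, 2.0e-5, 0.0]

def photolysis_rate(sza, cloud_fraction, cloud_factor=0.5):
    """Linear interpolation in the table, then a simple cloud-cover damping (illustrative)."""
    sza = min(max(sza, SZA_GRID[0]), SZA_GRID[-1])   # clamp to the tabulated range
    i = bisect_left(SZA_GRID, sza)
    if SZA_GRID[i] == sza:
        j_clear = J_CLEAR[i]
    else:
        x0, x1 = SZA_GRID[i - 1], SZA_GRID[i]
        y0, y1 = J_CLEAR[i - 1], J_CLEAR[i]
        j_clear = y0 + (y1 - y0) * (sza - x0) / (x1 - x0)
    return j_clear * (1.0 - cloud_factor * cloud_fraction)
```

Because the lookup is cheap, it can be re-evaluated at every model time step with the current cloud fields, which is the coupling the abstract describes.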

  4. PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference

    PubMed Central

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-01-01

    PHYML Online is a web interface to PHYML, a program that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at . PMID:15980534

  5. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    PubMed

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time, predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883
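At prediction time, a discrete-time survival model of the kind OSA produces reduces to converting per-interval hazards into a survival curve; a minimal sketch with illustrative hazards (in OSA these would be the ANN's outputs for one individual):

```python
# Turn discrete-time hazards into a survival curve: S(t_k) = prod_{j<=k} (1 - h_j).
# The hazards below are illustrative; in OSA they would come from the trained ANN.

def survival_curve(hazards):
    """Cumulative survival probabilities from per-interval hazard probabilities."""
    surv, s = [], 1.0
    for h in hazards:
        s *= 1.0 - h
        surv.append(s)
    return surv

curve = survival_curve([0.05, 0.10, 0.20])  # three follow-up intervals
```

Hazard curves fall out of the same numbers directly, and predicted hazards from a fitted Cox model could be fed through the same routine.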

  6. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science

    PubMed Central

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time, predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883

  7. The network-enabled optimization system server

    SciTech Connect

    Mesnier, M.P.

    1995-08-01

    Mathematical optimization is a technology under constant change and advancement, drawing upon the most efficient and accurate numerical methods to date. Further, these methods can be tailored for a specific application or generalized to accommodate a wider range of problems. This perpetual change creates an ever growing field, one that is often difficult to stay abreast of. Hence, the impetus behind the Network-Enabled Optimization System (NEOS) server, which aims to provide users, both novice and expert, with a guided tour through the expanding world of optimization. The NEOS server is responsible for bridging the gap between users and the optimization software they seek. More specifically, the NEOS server will accept optimization problems over the Internet and return a solution to the user either interactively or by e-mail. This paper discusses the current implementation of the server.

  8. Process modeling and bottleneck mining in online peer-review systems.

    PubMed

    Premchaiswadi, Wichian; Porouhan, Parham

    2015-01-01

    This paper is divided into three main parts. In the first part of the study, we captured, collected and formatted an event log describing the handling of reviews for proceedings of an international conference in Thailand. In the second part, we used several process mining techniques in order to discover process models, social, organizational, and hierarchical structures from the proceeding's event log. In the third part, we detected the deviations and bottlenecks of the peer review process by comparing the observed events (i.e., authentic dataset) with a pre-defined model (i.e., master map). Finally, we investigated the performance information as well as the total waiting time in order to improve the effectiveness and efficiency of the online submission and peer review system for the prospective conferences and seminars. Consequently, the main goals of the study were as follows: (1) to convert the collected event log into the appropriate format supported by process mining analysis tools, (2) to discover process models and to construct social networks based on the collected event log, and (3) to find deviations, discrepancies and bottlenecks between the collected event log and the master pre-defined model. The results showed that although each paper was initially sent to three different reviewers, it was not always possible to make a decision after the first round of reviewing; therefore, additional reviewers were invited. In total, all the accepted and rejected manuscripts were reviewed by an average of 3.9 and 3.2 expert reviewers, respectively. Moreover, obvious violations of the rules and regulations relating to careless or inappropriate peer review of a manuscript, committed by the editorial board and other staff, were identified. Nine blocks of activity in the authentic dataset were not completely compatible with the activities defined in the master model.
Also, five of the activity traces were not correctly enabled, and seven activities were missed within the
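Deviation detection of the kind described above amounts to replaying each observed trace against the transitions the master model allows; a minimal sketch with hypothetical review activities (not the authors' actual model, which has many more steps):

```python
# Hypothetical master model of the peer-review flow: the direct successions it allows.
# Activity names are invented for illustration only.
ALLOWED = {
    ("submit", "invite reviewers"),
    ("invite reviewers", "collect reviews"),
    ("collect reviews", "decide"),
}

def deviations(trace):
    """Return the (from, to) steps of an observed trace that the master model forbids."""
    return [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in ALLOWED]

ok = deviations(["submit", "invite reviewers", "collect reviews", "decide"])
bad = deviations(["submit", "collect reviews", "decide"])  # skipped the invitation step
```

Real conformance checkers replay against a Petri net rather than a succession set, but the principle of flagging steps the model does not enable is the same.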

  9. A client/server system for remote diagnosis of cardiac arrhythmias.

    PubMed

    Tong, D A; Gajjala, V; Widman, L E

    1995-01-01

    Health care practitioners are often faced with the task of interpreting complex heart rhythms from electrocardiograms (ECGs) produced by 12-lead ECG machines, ambulatory (Holter) monitoring systems, and intensive-care unit monitors. Usually, the practitioner caring for the patient does not have specialized training in cardiology or in ECG interpretation, and commercial programs that interpret 12-lead ECGs have been well-documented in the medical literature to perform poorly at analyzing cardiac rhythm. We believe that a system capable of providing comprehensive ECG interpretation as well as access to online consultations will be beneficial to the health care system. We hypothesized that we could develop a client-server based telemedicine system capable of providing access to (1) an on-line knowledge-based system for remote diagnosis of cardiac arrhythmias and (2) an on-line cardiologist for real-time interactive consultation using readily available resources on the Internet. Furthermore, we hypothesized that Macintosh and Microsoft Windows-based personal computers running an X server could function as the delivery platform for the developed system. Although we were successful in developing such a system that runs efficiently on a UNIX-based workstation, current personal computer X server software is not capable of running the system efficiently.

  10. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization on the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  11. Modeling the Relationship between Transportation-Related Carbon Dioxide Emissions and Hybrid-Online Courses at a Large Urban University

    ERIC Educational Resources Information Center

    Little, Matthew; Cordero, Eugene

    2014-01-01

    Purpose: This paper aims to investigate the relationship between hybrid classes (where a per cent of the class meetings are online) and transportation-related CO[subscript 2] emissions at a commuter campus similar to San José State University (SJSU). Design/methodology/approach: A computer model was developed to calculate the number of trips to…

  12. A Model for Semi-Informal Online Learning Communities: A Case Study of the NASA INSPIRE Project

    ERIC Educational Resources Information Center

    Keesee, Amanda Glasgow

    2011-01-01

    Scope and Method of Study: The purpose of this study was to develop a model of informal online learning communities based on theory, research and practice. Case study methodology was used to examine the NASA Interdisciplinary National Science Project Incorporating Research and Education Experience (INSPIRE) Project as an example of a successful…

  13. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers

    ERIC Educational Resources Information Center

    Dodd, Bucky J.

    2013-01-01

    Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…

  14. The Interdependence of the Factors Influencing the Perceived Quality of the Online Learning Experience: A Causal Model

    ERIC Educational Resources Information Center

    Peltier, James W.; Schibrowsky, John A.; Drago, William

    2007-01-01

    A structural model of the drivers of online education is proposed and tested. The findings help to identify the interrelated nature of the lectures delivered via technology outside of the traditional classroom, the importance of mentoring, the need to develop course structure, the changing roles for instructors and students, and the importance of…

  15. Toward a Model of Sources of Influence in Online Education: Cognitive Learning and the Effects of Web 2.0

    ERIC Educational Resources Information Center

    Carr, Caleb T.; Zube, Paul; Dickens, Eric; Hayter, Carolyn A.; Barterian, Justin A.

    2013-01-01

    To explore the integration of education processes into social media, we tested an initial model of student learning via interactive web tools and theorized three sources of influence: interpersonal, intrapersonal, and masspersonal. Three-hundred thirty-seven students observed an online lecture and then completed a series of scales. Structural…

  16. The 3 x 2 Achievement Goal Model in Predicting Online Student Test Anxiety and Help-Seeking

    ERIC Educational Resources Information Center

    Yang, Yan; Taylor, Jeff; Cao, Li

    2016-01-01

    This study investigates the utility of the new 3 × 2 achievement goal model in predicting online student test anxiety and help-seeking. Achievement goals refer to students' general aims for participating in learning and the standard by which they judge their achievement (Pintrich, 2000). According to Elliot and his colleagues (2011), there are six…

  17. An Online Process Model of Second-Order Cultivation Effects: How Television Cultivates Materialism and Its Consequences for Life Satisfaction

    ERIC Educational Resources Information Center

    Shrum, L. J.; Lee, Jaehoon; Burroughs, James E.; Rindfleisch, Aric

    2011-01-01

    Two studies investigated the interrelations among television viewing, materialism, and life satisfaction, and their underlying processes. Study 1 tested an online process model for television's cultivation of materialism by manipulating level of materialistic content. Viewing level influenced materialism, but only among participants who reported…

  18. A Theoretical Model and Analysis of the Effect of Self-Regulation on Attrition from Voluntary Online Training

    ERIC Educational Resources Information Center

    Sitzmann, Traci

    2012-01-01

    A theoretical model is presented that examines self-regulatory processes and trainee characteristics as predictors of attrition from voluntary online training in order to determine who is at risk of dropping out and the processes that occur during training that determine when they are at risk of dropping out. Attrition increased following declines…

  19. Community of Inquiry: A Useful Model for Examining Educational Interactions in Online Graduate Education Courses at Christian Colleges

    ERIC Educational Resources Information Center

    Bartruff, Elizabeth Ann

    2009-01-01

    Using the Community of Inquiry (COI) model as a framework, this case study analyzed the interactions of teacher and students in an online graduate level education course at a small Christian college in the Pacific Northwest. Using transcript content analysis, communication between participants was coded as either contributing to the social,…

  20. User Evaluation of the NASA Technical Report Server Recommendation Service

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Bollen, Johan; Calhoun, JoAnne R.; Mackey, Calvin E.

    2004-01-01

    We present the user evaluation of two recommendation server methodologies implemented for the NASA Technical Report Server (NTRS). One methodology for generating recommendations uses log analysis to identify co-retrieval events on full-text documents. For comparison, we used the Vector Space Model (VSM) as the second methodology. We calculated cosine similarities and used the top 10 most similar documents (based on metadata) as 'recommendations'. We then ran an experiment with NASA Langley Research Center (LaRC) staff members to gather their feedback on which method produced the most 'quality' recommendations. We found that in most cases VSM outperformed log analysis of co-retrievals. However, analyzing the data revealed the evaluations may have been structurally biased in favor of the VSM generated recommendations. We explore some possible methods for combining log analysis and VSM generated recommendations and suggest areas of future work.
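The VSM methodology above can be sketched as cosine similarity over term-weight vectors with a top-k cut; the toy metadata below is illustrative, not actual NTRS data:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between sparse term-weight vectors (dict: term -> weight)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query, corpus, k=10):
    """IDs of the top-k documents most similar to `query` by cosine over metadata terms."""
    ranked = sorted(corpus, key=lambda d: cosine(query, corpus[d]), reverse=True)
    return ranked[:k]

# Toy metadata vectors; NTRS used real title/abstract metadata and k = 10.
corpus = {
    "doc1": {"wind": 1.0, "tunnel": 1.0},
    "doc2": {"wind": 1.0, "turbine": 1.0},
    "doc3": {"orbit": 1.0, "debris": 1.0},
}
top = recommend({"wind": 1.0, "tunnel": 1.0}, corpus, k=2)
```

The co-retrieval method needs no such vectors at all, only logs of which documents were fetched in the same session, which is why the two approaches can complement each other.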

  1. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  2. User Evaluation of the NASA Technical Report Server Recommendation Service

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Bollen, Johan; Calhoun, JoAnne R.; Mackey, Calvin E.

    2004-01-01

    We present the user evaluation of two recommendation server methodologies implemented for the NASA Technical Report Server (NTRS). One methodology for generating recommendations uses log analysis to identify co-retrieval events on full-text documents. For comparison, we used the Vector Space Model (VSM) as the second methodology. We calculated cosine similarities and used the top 10 most similar documents (based on metadata) as 'recommendations'. We then ran an experiment with NASA Langley Research Center (LaRC) staff members to gather their feedback on which method produced the most 'quality' recommendations. We found that in most cases VSM outperformed log analysis of co-retrievals. However, analyzing the data revealed the evaluations may have been structurally biased in favor of the VSM generated recommendations. We explore some possible methods for combining log analysis and VSM generated recommendations and suggest areas of future work.

  3. Utilizing a language model to improve online dynamic data collection in P300 spellers.

    PubMed

    Mainsah, Boyla O; Colwell, Kenneth A; Collins, Leslie M; Throckmorton, Chandra S

    2014-07-01

    P300 spellers provide a means of communication for individuals with severe physical limitations, especially those with locked-in syndrome, such as amyotrophic lateral sclerosis. However, P300 speller use is still limited by relatively low communication rates due to the multiple data measurements that are required to improve the signal-to-noise ratio of event-related potentials for increased accuracy. Therefore, the amount of data collection has competing effects on accuracy and spelling speed. Adaptively varying the amount of data collection prior to character selection has been shown to improve spelling accuracy and speed. The goal of this study was to optimize a previously developed dynamic stopping algorithm that uses a Bayesian approach to control data collection by incorporating a priori knowledge via a language model. Participants (n = 17) completed online spelling tasks using the dynamic stopping algorithm, with and without a language model. The addition of the language model resulted in improved participant performance from a mean theoretical bit rate of 46.12 bits/min at 88.89% accuracy to 54.42 bits/min at 90.36% accuracy.
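The Bayesian dynamic stopping idea above can be sketched as repeated posterior updates seeded with a language-model prior, halting once one character is sufficiently probable; all probabilities below are illustrative placeholders, not the authors' classifier outputs:

```python
# Minimal sketch of Bayesian dynamic stopping seeded with a language-model prior.
# All probabilities are illustrative placeholders, not the authors' classifier outputs.

def posterior_update(prior, likelihoods):
    """One Bayes step over candidate characters: p(c|x) is proportional to p(x|c) * p(c)."""
    post = {c: prior[c] * likelihoods.get(c, 1e-12) for c in prior}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

def spell_character(prior, evidence_stream, threshold=0.9):
    """Collect evidence until one character's posterior exceeds threshold (or evidence runs out)."""
    post = dict(prior)
    flashes = 0
    for likelihoods in evidence_stream:
        post = posterior_update(post, likelihoods)
        flashes += 1
        if max(post.values()) >= threshold:
            break
    best = max(post, key=post.get)
    return best, flashes, post[best]

# Language-model prior after "TH": 'E' is far more likely than 'Q'.
prior = {"E": 0.7, "Q": 0.1, "A": 0.2}
evidence = [{"E": 0.8, "Q": 0.3, "A": 0.4},  # per-flash likelihoods from the EEG classifier
            {"E": 0.9, "Q": 0.2, "A": 0.3}]
best, flashes, confidence = spell_character(prior, evidence)
```

With a flat prior the same evidence would take more flashes to cross the threshold, which is exactly the speed gain the language model buys.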

  4. Case analysis online: a strategic management case model for the health industry.

    PubMed

    Walsh, Anne; Bearden, Eithne

    2004-01-01

    Despite the plethora of methods and tools available to support strategic management, the challenge for health executives in the next century will relate to their ability to access and interpret data from multiple and intricate communication networks. Integrated digital networks and satellite systems will expand the scope and ease of sharing information between business divisions, and networked systems will facilitate the use of virtual case discussions across universities. While the internet is frequently used to support clinical decisions in the healthcare industry, few executives rely upon the internet for strategic analysis. Although electronic technologies can easily synthesize data from multiple information channels, research as well as technical issues may deter their application in strategic analysis. As digital models transform access to information, online models may become increasingly relevant in designing strategic solutions. While there are various pedagogical models available to support the strategic management process, this framework was designed to enhance strategic analysis through the application of technology and electronic research. A strategic analysis framework, which incorporated internet research and case analysis in a strategic management course, is described along with design and application issues that emerged during the case analysis process.

  5. Using GOMS models and hypertext to create representations of medical procedures for online display

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne

    1991-01-01

    This study investigated two methods to improve the organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selection rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS models are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between-subjects design with the following independent variables: procedure organization (GOMS-model based vs. medical-textbook based) and navigation type (hierarchical vs. linear, booklike). After naive subjects studied the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect': when asked to do a free recall of a procedure, subjects who had studied a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.

  6. The Medicago truncatula gene expression atlas web server

    PubMed Central

    2009-01-01

    Background Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix GeneChip Medicago Genome Array data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose. Description The Medicago truncatula Gene Expression Atlas (MtGEA) web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip® Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or full, in a tabular form compatible with common analytical and visualization software. The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible at: http
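Co-expression discovery as offered by such an atlas can be sketched as ranking genes by correlation of their expression profiles across experiments; the toy atlas and gene names below are illustrative, not actual array data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two expression profiles measured over the same experiments."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def coexpressed(gene, atlas, cutoff=0.9):
    """Other genes whose profiles correlate with `gene` at or above the cutoff."""
    ref = atlas[gene]
    return sorted(g for g, prof in atlas.items() if g != gene and pearson(ref, prof) >= cutoff)

# Toy atlas: expression of three hypothetical genes across four experiments.
atlas = {
    "MtA": [1.0, 2.0, 3.0, 4.0],
    "MtB": [2.0, 4.0, 6.0, 8.0],   # rises with MtA
    "MtC": [4.0, 3.0, 2.0, 1.0],   # anti-correlated with MtA
}
partners = coexpressed("MtA", atlas)
```

Real atlases typically offer a choice of correlation measures and cutoffs, but the ranking principle is the same.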

  7. On-line Meteorology-Chemistry/Aerosols Modelling and Integration for Risk Assessment: Case Studies

    NASA Astrophysics Data System (ADS)

    Bostanbekov, Kairat; Mahura, Alexander; Nuterman, Roman; Nurseitov, Daniyar; Zakarin, Edige; Baklanov, Alexander

    2016-04-01

    On regional level, and especially in areas with potential diverse sources of industrial pollutants, the risk assessment of impact on environment and population is critically important. During normal operations, the risk is minimal. However, during accidental situations, the risk is increased due to releases of harmful pollutants into different environments such as water, soil, and atmosphere, where they undergo continuous transformation and transport. In this study, the Enviro-HIRLAM (Environment High Resolution Limited Area Model) was adapted and employed for assessment of scenarios with accidental and continuous emissions of sulphur dioxide (SO2) for selected case studies during January of 2010. The following scenarios were considered: (i) a control reference run; (ii) an accidental release (due to a short-term, 1-day fire at an oil storage facility) at the city of Atyrau (Kazakhstan) near the northern part of the Caspian Sea; and (iii) doubling of the original continuous emissions from three locations of metallurgical enterprises on the Kola Peninsula (Russia). The implemented aerosol microphysics module M7 treats 5 aerosol types (sulphates, sea salt, dust, black carbon and organic carbon) distributed across 7 size modes. Removal processes for aerosols include gravitational settling and wet deposition. As Enviro-HIRLAM is an on-line integrated model, meteorological and chemical processes are modelled simultaneously at each time step. The modelled spatio-temporal variations of meteorological and chemical patterns are analyzed for both the European and Kazakhstan domains. The results of evaluating sulphur dioxide concentration and deposition over the main populated cities, selected regions, and countries are presented using GIS tools. As an outcome, the results of Enviro-HIRLAM modelling for the accidental release near the Caspian Sea are integrated into the RANDOM (Risk Assessment of Nature Detriment due to Oil spill Migration) system.

  8. Online Course Effectiveness: A Model for Innovative Research in Counselor Education

    ERIC Educational Resources Information Center

    Cicco, Gina

    2013-01-01

    This article will discuss the need for experimental research to document the effectiveness of online counseling skills courses. There are relatively few published studies that have investigated faculty and student performance outcomes when counseling skills and techniques courses are taught through a completely online modality. Various studies…

  9. Testing a Model to Predict Online Cheating--Much Ado about Nothing

    ERIC Educational Resources Information Center

    Beck, Victoria

    2014-01-01

    Much has been written about student and faculty opinions on academic integrity in testing. Currently, concerns appear to focus more narrowly on online testing, generally based on anecdotal assumptions that online students are more likely to engage in academic dishonesty in testing than students in traditional on-campus courses. To address such…

  10. Online Calibration Methods for the DINA Model with Independent Attributes in CD-CAT

    ERIC Educational Resources Information Center

    Chen, Ping; Xin, Tao; Wang, Chun; Chang, Hua-Hua

    2012-01-01

    Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate the new items continuously. However, until now no reference has publicly become available about online calibration for CD-CAT. Thus, this study investigates the…

  11. Can the Current Model of Higher Education Survive MOOCs and Online Learning?

    ERIC Educational Resources Information Center

    Lucas, Henry C., Jr.

    2013-01-01

    The debate about online education--and Massive Open Online Courses (MOOCs) in particular--generates much confusion because there are so many options for how these technologies can be applied. Institutes of higher education and colleges have to examine these changes or face the risk of no longer being in control of their own fate. To survive, they…

  12. A Proposed Model for Authenticating Knowledge Transfer in Online Discussion Forums

    ERIC Educational Resources Information Center

    Tucker, Jan P.; YoungGonzaga, Stephanie; Krause, Jaclyn

    2014-01-01

    Discussion forums are often utilized in the online classroom to build a sense of community, encourage collaboration and exchange, and measure time on task. A review of the literature revealed that there is little research that examines the role of the online discussion forum as a mechanism for knowledge transfer. Researchers reviewed 21 course…

  13. Learning from e-Family History: A Model of Online Family Historian Research Behaviour

    ERIC Educational Resources Information Center

    Friday, Kate

    2014-01-01

    Introduction: This paper reports on doctoral research which investigated the online research behaviour of family historians, from the overall perspective of local studies collections and developing online services for family historians. Method: A hybrid (primarily ethnographic) study was employed using qualitative diaries and shadowing, to examine…

  14. Applying a Model of Communicative Influence in Education in Closed Online and Offline Courses

    ERIC Educational Resources Information Center

    Carr, Caleb T.

    2014-01-01

    This research explores communicative influences on cognitive learning and educational affect in online and offline courses limited to only enrolled students. A survey was conducted of students (N = 147) enrolled in online and offline courses within a single department during Summer, 2013. Respondents were asked about their classroom communication…

  15. A Blended Model: Simultaneously Teaching a Quantitative Course Traditionally, Online, and Remotely

    ERIC Educational Resources Information Center

    Lightner, Constance A.; Lightner-Laws, Carin A.

    2016-01-01

    As universities seek to bolster enrollment through distance education, faculty are tasked with maintaining comparable teaching/learning standards in traditional, blended, and online courses. Research has shown that there is an achievement gap between students taking courses exclusively offered online versus those enrolled in face-to-face classes.…

  16. Evaluating Two Models of Collaborative Tests in an Online Introductory Statistics Course

    ERIC Educational Resources Information Center

    Björnsdóttir, Auðbjörg; Garfield, Joan; Everson, Michelle

    2015-01-01

    This study explored the use of two different types of collaborative tests in an online introductory statistics course. A study was designed and carried out to investigate three research questions: (1) What is the difference in students' learning between using consensus and non-consensus collaborative tests in the online environment?, (2) What is…

  17. Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement

    ERIC Educational Resources Information Center

    Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos

    2015-01-01

    How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…

  18. Talk in Virtual Contexts: Reflecting on Participation and Online Learning Models

    ERIC Educational Resources Information Center

    Thorpe, Mary; McCormick, Robert; Kubiak, Chris; Carmichael, Patrick

    2007-01-01

    Computer-mediated conferencing has been adopted, particularly for purposes of online course provision, as a method that can deliver community. Widespread interest in a communities-of-practice approach within both informal and formal learning has strengthened perceptions of the value of creating a community online. A case study of asynchronous…

  19. A Conceptual Model for Understanding Self-Directed Learning in Online Environments

    ERIC Educational Resources Information Center

    Song, Liyan; Hill, Janette R.

    2007-01-01

    Research indicates that online learning often situates control of implementation with the learner. Recently, scholars have turned attention to the importance of self-directed learning (SDL) skills for online learning environments. Existing frameworks for understanding SDL focus primarily on process and personal attributes in face-to-face settings.…

  20. Prototyping an online wetland ecosystem services model using open model sharing standards

    USGS Publications Warehouse

    Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.

    2011-01-01

    Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from variant sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with Open Geospatial Consortium (OGC) Web Processing Services (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America. © 2010 Elsevier Ltd.
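    The WPS sharing mechanism described above exposes each disciplinary model as a web-callable process. A minimal sketch in Python of composing a WPS 1.0.0 key-value-pair Execute request (the endpoint URL, process identifier, and input names are hypothetical, not the paper's actual services):

    ```python
    from urllib.parse import urlencode

    def wps_execute_url(base_url, process_id, inputs):
        """Build a WPS 1.0.0 KVP Execute request URL.

        `inputs` maps input identifiers to literal values; WPS KVP encodes
        them as semicolon-separated key=value pairs in DataInputs.
        """
        data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": process_id,
            "DataInputs": data_inputs,
        }
        return base_url + "?" + urlencode(params)

    # Hypothetical endpoint and process for a wetland water-storage model
    url = wps_execute_url("http://example.org/wps",
                          "WaterStorage",
                          {"wetland_id": "PPR-001", "year": "2010"})
    print(url)
    ```

    A client that discovers a process via GetCapabilities/DescribeProcess can invoke it this way without knowing anything about the model's implementation language or host.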

  1. Model-based fault detection and identification with online aerodynamic model structure selection

    NASA Astrophysics Data System (ADS)

    Lombaerts, T.

    2013-12-01

    This publication describes a recursive algorithm for the approximation of time-varying nonlinear aerodynamic models by means of a joint adaptive selection of the model structure and parameter estimation. This procedure is called adaptive recursive orthogonal least squares (AROLS) and is an extension and modification of the previously developed ROLS procedure. This algorithm is particularly useful for model-based fault detection and identification (FDI) of aerospace systems. After the failure, a completely new aerodynamic model can be elaborated recursively with respect to structure as well as parameter values. The performance of the identification algorithm is demonstrated on a simulation data set.
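    The full AROLS procedure additionally selects the model structure online; the parameter-estimation half of such a scheme is ordinary recursive least squares with a forgetting factor, sketched here in Python (the regressors, forgetting factor, and test system are illustrative assumptions, not the paper's algorithm):

    ```python
    import numpy as np

    def rls_update(theta, P, x, y, lam=0.99):
        """One recursive least-squares step with forgetting factor `lam`.

        theta: current parameter estimate, shape (n,)
        P:     current inverse-information (covariance) matrix, shape (n, n)
        x:     regressor vector for the new sample, shape (n,)
        y:     measured output for the new sample
        """
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        theta = theta + k * (y - x @ theta)
        P = (P - np.outer(k, Px)) / lam  # covariance downdate
        return theta, P

    # Identify y = 2*x1 - 3*x2 from streaming, lightly noisy data
    rng = np.random.default_rng(0)
    theta = np.zeros(2)
    P = np.eye(2) * 1e3                  # large P = uninformative prior
    for _ in range(500):
        x = rng.normal(size=2)
        y = 2.0 * x[0] - 3.0 * x[1] + 1e-3 * rng.normal()
        theta, P = rls_update(theta, P, x, y)
    print(theta)  # converges near [2, -3]
    ```

    The forgetting factor below 1 is what makes the estimate track time-varying aerodynamics: old samples are exponentially discounted, so post-failure data quickly dominate the fit.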

  2. Online calculation of global marine halocarbon emissions in the chemistry climate model EMAC

    NASA Astrophysics Data System (ADS)

    Lennartz, Sinikka T.; Krysztofiak-Tong, Gisèle; Sinnhuber, Björn-Martin; Marandino, Christa A.; Tegtmeier, Susann; Krüger, Kirstin; Ziska, Franziska; Quack, Birgit

    2015-04-01

    Marine-produced trace gases such as dibromomethane (CH2Br2), bromoform (CHBr3) and methyl iodide (CH3I) significantly impact tropospheric and stratospheric chemistry. Marine emissions are the dominant source of halocarbons to the atmosphere, so it is crucial to represent them accurately in order to model their impact on atmospheric chemistry. Chemistry climate models are a frequently used tool for quantifying the influence of halocarbons on ozone depletion. In these model simulations, marine emissions of halocarbons have mainly been prescribed from established emission climatologies, thus neglecting the interaction with the actual state of the atmosphere in the model. Here, we calculate marine halocarbon emissions online for the first time by coupling the submodel AIRSEA to the chemistry climate model EMAC. Our method combines prescribed water concentrations with varying atmospheric concentrations derived from the model instead of using fixed emission climatologies. This method has a number of conceptual and practical advantages, as the modelled emissions can respond consistently to changes in temperature, wind speed, possible sea ice cover and atmospheric concentration in the model. Differences between the climatology-based and the new approach (2-18%) result from consideration of the actual, time-varying state of the atmosphere and the consideration of air-side transfer velocities. Extensive comparison to observations from aircraft, ships and ground stations reveals that interactively computing the air-sea flux from prescribed water concentrations leads to equally or more accurate atmospheric concentrations in the model compared to using constant emission climatologies. The effect of considering the actual state of the atmosphere is largest for gases with concentrations close to equilibrium in the surface ocean, such as CH2Br2. Halocarbons with comparably long atmospheric lifetimes, e.g. CH2Br2, are reflected more accurately in EMAC when compared to time
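    The interactive flux computation rests on a standard two-layer bulk formula. A schematic Python version (the numbers and the dimensionless gas-over-water Henry constant convention are assumptions for illustration, not AIRSEA's actual code):

    ```python
    def total_transfer_velocity(k_w, k_a, henry):
        """Water-side-referenced total transfer velocity K, from the
        two-layer resistance model: 1/K = 1/k_w + 1/(H * k_a).
        Including the air-side resistance (the k_a term) matters for
        soluble gases, one of the points made in the study above."""
        return 1.0 / (1.0 / k_w + 1.0 / (henry * k_a))

    def air_sea_flux(c_water, c_air, henry, k_w, k_a):
        """Bulk flux F = K * (C_w - C_a/H); positive means outgassing.
        H is the dimensionless gas-over-water Henry solubility."""
        K = total_transfer_velocity(k_w, k_a, henry)
        return K * (c_water - c_air / henry)

    # Illustrative numbers only: supersaturated surface water outgasses
    flux = air_sea_flux(c_water=2.0, c_air=1.0, henry=1.0,
                        k_w=1e-5, k_a=1e-2)
    print(flux > 0)  # sea-to-air flux
    ```

    The atmospheric concentration C_a entering this formula is what the online coupling supplies from the model state each time step, rather than from a fixed climatology.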

  3. Response of different regional online coupled models to aerosol-radiation interactions

    NASA Astrophysics Data System (ADS)

    Forkel, Renate; Balzarini, Alessandra; Brunner, Dominik; Baró, Rocio; Curci, Gabriele; Hirtl, Marcus; Honzak, Luka; Jiménez-Guerrero, Pedro; Jorba, Oriol; Pérez, Juan L.; Pirovano, Guido; San José, Roberto; Schröder, Wolfram; Tuccella, Paolo; Werhahn, Johannes; Wolke, Ralf; Žabkar, Rahela

    2016-04-01

    The importance of aerosol-meteorology interactions and their representation in online coupled regional atmospheric chemistry-meteorology models was investigated in COST Action ES1004 (EuMetChem, http://eumetchem.info/). Case study results from different models (COSMO-Muscat, COSMO-ART, and different configurations of WRF-Chem), which were applied for Europe as a coordinated exercise for the year 2010, are analyzed with respect to inter-model variability and the response of the different models to direct and indirect aerosol-radiation interactions. The main focus was on two episodes - the Russian heat wave and wildfires episode in July/August 2010 and a period in October 2010 with enhanced cloud cover and rain, including an episode of Saharan dust transport to Europe. In terms of physical plausibility, the decrease in downward solar radiation and daytime temperature due to the direct aerosol effect is robust for all model configurations. The same holds for the pronounced decrease in cloud water content and increase in solar radiation for cloudy conditions and very low aerosol concentrations that was found for WRF-Chem when aerosol-cloud interactions were considered. However, when the differences were tested for statistical significance, no significant differences in mean solar radiation and mean temperature between the baseline case and the simulations including the direct and indirect effect from simulated aerosol concentrations were found over Europe for the October episode. Also for the fire episode, differences between mean temperature and radiation from the simulations with and without the direct aerosol effect were not significant for the major part of the modelling domain. Only for the region with high fire emissions in Russia were the differences in mean solar radiation and temperature due to the direct effect found to be significant during the second half of the fire episode - however, only at a significance level of 0.1. The few observational data indicate that

  4. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins

    PubMed Central

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-01-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method despite pocket estimation uncertainties. In this paper, we propose ‘PockDrug-Server’ to predict pocket druggability, efficient on both (i) estimated pockets guided by ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one or a set of apo/holo proteins using the different pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651

  5. Chemical Weather Forecasting using the online fully integrated modeling system RAMS/ICLAMS - Comparison with the offline approach

    NASA Astrophysics Data System (ADS)

    Kushta, Jonilda; Astitha, Marina; Solomos, Stavros; Kallos, George

    2013-04-01

    In the framework of chemical weather forecasting, the online approach consists of the coupled treatment of chemical parameters, simultaneously with the meteorological parameters, in a single integrated modelling system. This approach offers the possibility to simulate the links and feedbacks between atmospheric processes that are traditionally neglected in air quality models. These links include direct and indirect effects of gases and aerosols on radiation, clouds and precipitation, which in turn re-modify atmospheric composition in a two-way interactive pattern. Both the meteorological and chemical components are expected to benefit from this approach. The extent to which this improvement can justify a thorough migration to integrated systems is the subject of the current work. In this study we discuss the performance of the online Integrated Community Limited Area Modelling System (RAMS/ICLAMS) and compare the results with the offline approach using the CAMx model, for a month-long summertime test period. The area under consideration is Europe and the Greater Mediterranean Region (GMR). In both online and offline simulations the same meteorological driver (RAMS) has been used. The comparability of the two models is achieved by implementing the same chemical mechanisms, meteorological fields, emissions, and initial and boundary conditions. The differences in the model configurations are also taken into account in the comparison of the two modelling approaches. In this presentation, the advantages and disadvantages of simulating the regional atmospheric chemical composition using the online versus the offline approach are analyzed and discussed.

  6. A novel rumor diffusion model considering the effect of truth in online social media

    NASA Astrophysics Data System (ADS)

    Sun, Ling; Liu, Yun; Zeng, Qing-An; Xiong, Fei

    2015-12-01

    In this paper, we propose a model to investigate how truth affects rumor diffusion in online social media. Our model reveals a relation between rumor and truth — namely, when a rumor is diffusing, the truth about the rumor also diffuses with it. Two patterns by which agents identify the rumor, self-identification and passive learning, are taken into account. Combining theoretical proof and simulation analysis, we find that the threshold value of rumor diffusion is negatively correlated with the connectivity between nodes in the network and with the probability β of agents knowing the truth. Increasing β can reduce the maximum density of rumor spreaders and slow down the generation of new rumor spreaders. On the other hand, we conclude that the best rumor diffusion strategy must balance the probability of forwarding the rumor against the probability of agents losing interest in it. A high rumor spread rate λ would lead to a surge in truth dissemination, which greatly limits the diffusion of the rumor. Furthermore, in the case of unknown λ, increasing β can effectively reduce the maximum proportion of agents who do not know the truth, but cannot narrow the rumor diffusion range in a certain interval of β.
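    The qualitative effect of β can be reproduced with a toy agent-based simulation, sketched below in Python. This is an illustrative simplification under assumed parameters, not the paper's exact model (the self-identification channel, for instance, is folded into a single truth-learning probability):

    ```python
    import random

    def simulate(n=200, p_edge=0.05, lam=0.3, beta=0.1, steps=50, seed=1):
        """Toy rumor-vs-truth spreading on an Erdos-Renyi random graph.

        States: 0 = ignorant, 1 = rumor spreader, 2 = knows the truth.
        Each step, spreaders pass the rumor to ignorant neighbours with
        probability lam; independently, any agent not yet knowing the
        truth learns it with probability beta and stops spreading.
        """
        random.seed(seed)
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p_edge:
                    adj[i].append(j)
                    adj[j].append(i)
        state = [0] * n
        state[0] = 1  # seed the rumor at one node
        for _ in range(steps):
            new = state[:]
            for i in range(n):
                if state[i] == 1:
                    for j in adj[i]:
                        if state[j] == 0 and random.random() < lam:
                            new[j] = 1
                if new[i] != 2 and random.random() < beta:
                    new[i] = 2  # learning the truth silences the rumor
            state = new
        return state

    final = simulate()
    print("spreaders:", final.count(1), "know truth:", final.count(2))
    ```

    Re-running with a larger `beta` shows the claimed effect: the peak spreader count drops and the truth-knowing fraction saturates faster.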

  7. Trust-Based Access Control Model from Sociological Approach in Dynamic Online Social Network Environment

    PubMed Central

    Kim, Seungjoo

    2014-01-01

    There has been an explosive increase in the population of the OSN (online social network) in recent years. The OSN provides users with many opportunities to communicate among friends and family. Further, it facilitates developing new relationships with previously unknown people having similar beliefs or interests. However, the OSN can expose users to adverse effects such as privacy breaches, the disclosing of uncontrolled material, and the disseminating of false information. Traditional access control models such as MAC, DAC, and RBAC are applied to the OSN to address these problems. However, these models are not suitable for the dynamic OSN environment because user behavior in the OSN is unpredictable and static access control imposes a burden on the users to change the access control rules individually. We propose a dynamic trust-based access control for the OSN to address the problems of traditional static access control. Moreover, we provide novel criteria for evaluating trust factors from a sociological approach, along with a method for calculating dynamic trust values. The proposed method can monitor negative behavior and modify access permission levels dynamically to prevent the indiscriminate disclosure of information. PMID:25374943

  8. Trust-based access control model from sociological approach in dynamic online social network environment.

    PubMed

    Baek, Seungsoo; Kim, Seungjoo

    2014-01-01

    There has been an explosive increase in the population of the OSN (online social network) in recent years. The OSN provides users with many opportunities to communicate among friends and family. Further, it facilitates developing new relationships with previously unknown people having similar beliefs or interests. However, the OSN can expose users to adverse effects such as privacy breaches, the disclosing of uncontrolled material, and the disseminating of false information. Traditional access control models such as MAC, DAC, and RBAC are applied to the OSN to address these problems. However, these models are not suitable for the dynamic OSN environment because user behavior in the OSN is unpredictable and static access control imposes a burden on the users to change the access control rules individually. We propose a dynamic trust-based access control for the OSN to address the problems of traditional static access control. Moreover, we provide novel criteria for evaluating trust factors from a sociological approach, along with a method for calculating dynamic trust values. The proposed method can monitor negative behavior and modify access permission levels dynamically to prevent the indiscriminate disclosure of information.
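    The core mechanism, a trust score that reacts asymmetrically to behavior and gates permission levels, can be sketched in a few lines of Python. The coefficients and thresholds here are illustrative assumptions, not values from the paper:

    ```python
    def update_trust(trust, positive, negative, gain=0.05, penalty=0.2):
        """Adjust a trust score in [0, 1]: a small reward per positive
        interaction and a larger penalty per negative one, so the score
        reacts quickly to misbehavior (an assumed weighting)."""
        trust += gain * positive - penalty * negative
        return max(0.0, min(1.0, trust))  # clamp to the valid range

    def permission_level(trust):
        """Map a trust score to a coarse access level (assumed cutoffs)."""
        if trust >= 0.8:
            return "full"
        if trust >= 0.5:
            return "limited"
        return "blocked"

    # A fully trusted friend misbehaves twice: access degrades dynamically
    t = update_trust(1.0, positive=0, negative=2)
    print(t, permission_level(t))
    ```

    Because the update runs on every observed interaction, no manual rule editing by the user is needed, which is the burden the static models above impose.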

  9. PiRaNhA: a server for the computational prediction of RNA-binding residues in protein sequences

    PubMed Central

    Murakami, Yoichi; Spriggs, Ruth V.; Nakamura, Haruki; Jones, Susan

    2010-01-01

    The PiRaNhA web server is a publicly available online resource that automatically predicts the location of RNA-binding residues (RBRs) in protein sequences. The goal of functional annotation of sequences in the field of RNA binding is to provide predictions of high accuracy that require only small numbers of targeted mutations for verification. The PiRaNhA server uses a support vector machine (SVM), with position-specific scoring matrices, residue interface propensity, predicted residue accessibility and residue hydrophobicity as features. The server allows the submission of up to 10 protein sequences, and the predictions for each sequence are provided on a web page and via email. The prediction results are provided in sequence format with predicted RBRs highlighted, in text format with the SVM threshold score indicated and as a graph which enables users to quickly identify those residues above any specific SVM threshold. The graph effectively enables the increase or decrease of the false positive rate. When tested on a non-redundant data set of 42 protein sequences not used in training, the PiRaNhA server achieved an accuracy of 85%, specificity of 90% and a Matthews correlation coefficient of 0.41 and outperformed other publicly available servers. The PiRaNhA prediction server is freely available at http://www.bioinformatics.sussex.ac.uk/PIRANHA. PMID:20507911
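    The threshold graph described above reflects a simple mechanism: per-residue SVM decision scores are compared against a user-chosen cutoff, and raising the cutoff trades recall for a lower false-positive rate. A toy Python illustration with synthetic scores (not PiRaNhA's trained model):

    ```python
    import numpy as np

    def classify_residues(scores, threshold):
        """Predict RNA-binding residues where the per-residue SVM
        decision score meets or exceeds the threshold."""
        return scores >= threshold

    # Synthetic decision scores; the first five residues truly bind RNA
    scores = np.array([1.2, 0.9, 0.7, 0.4, 0.2, 0.1, -0.3, -0.8, -1.1, -1.5])
    truth = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)

    for thr in (0.0, 0.5):
        pred = classify_residues(scores, thr)
        tp = int((pred & truth).sum())   # correctly predicted RBRs
        fp = int((pred & ~truth).sum())  # false positives
        print(f"threshold={thr}: {tp} true positives, {fp} false positives")
    ```

    This mirrors how a user of the server picks a stricter threshold to obtain a short, high-confidence list of residues worth targeted mutation.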

  10. SSIC model: A multi-layer model for intervention of online rumors spreading

    NASA Astrophysics Data System (ADS)

    Tian, Ru-Ya; Zhang, Xue-Fu; Liu, Yi-Jun

    2015-06-01

    The SIR model is a classical model for simulating rumor spreading, while the supernetwork is an effective tool for modeling complex systems. Based on the Opinion SuperNetwork comprising a Social Sub-network, Environmental Sub-network, Psychological Sub-network, and Viewpoint Sub-network, and drawing on the modeling idea of the SIR model, this paper designs the super SIC model (SSIC model) and its evolution rules, and analyzes the intervention effects on public opinion of the four elements of the supernetwork: opinion agent, opinion environment, agent psychology, and viewpoint. Studies show that the supernetwork-based SSIC model intervenes effectively in rumor spreading. It is worth noting that (i) identifying rumor spreaders in the Social Sub-network and isolating them achieves the desired intervention results, (ii) improving environmental information transparency, so that the public knows as much information as possible, is a feasible way to reduce rumors, and (iii) persuading wavering neutrals has a stronger intervention effect than clarifying rumors already spread everywhere, so timely psychological counseling is an appropriate form of intervention.

  11. Tiled WMS/KML Server V2

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2012-01-01

    This software is a higher-performance implementation of tiled WMS, with integral support for KML and time-varying data. This software is compliant with the Open Geospatial WMS standard, and supports KML natively as a WMS return type, including support for the time attribute. Regionated KML wrappers are generated that match the existing tiled WMS dataset. PNG and JPG formats are supported, and the software is implemented as an Apache 2.0 module that supports a threading execution model capable of sustaining very high request rates. The module intercepts and responds to WMS requests that match certain patterns and returns the existing tiles. If a KML format that matches an existing pyramid and tile dataset is requested, regionated KML is generated and returned to the requesting application. In addition, KML requests that do not match the existing tile datasets generate a KML response that includes the corresponding JPG WMS request, effectively adding KML support to a backing WMS server.
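    A request the module intercepts is an ordinary WMS GetMap query, optionally carrying the TIME dimension for time-varying data. A minimal sketch of composing one in Python (the endpoint and layer name are hypothetical):

    ```python
    from urllib.parse import urlencode

    def getmap_url(base, layer, bbox, width, height,
                   time=None, fmt="image/jpeg"):
        """Compose a WMS 1.1.1 GetMap request URL; a tiled WMS server
        answers it from pre-rendered tiles when the bbox and size match
        its tile grid."""
        params = {
            "service": "WMS", "version": "1.1.1", "request": "GetMap",
            "layers": layer, "srs": "EPSG:4326",
            "bbox": ",".join(map(str, bbox)),
            "width": width, "height": height, "format": fmt,
        }
        if time:  # optional TIME dimension for time-varying datasets
            params["time"] = time
        return base + "?" + urlencode(params)

    url = getmap_url("http://example.org/wms", "modis_daily",
                     (-180, -90, 180, 90), 512, 512, time="2012-01-01")
    print(url)
    ```

    Requests whose parameters fall outside the pre-built tile pyramid would miss the tile cache, which is why clients are expected to stick to the published grid.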

  12. Towards an online-coupled chemistry-climate model: evaluation of COSMO-ART

    NASA Astrophysics Data System (ADS)

    Knote, C.; Brunner, D.; Vogel, H.; Allan, J.; Asmi, A.; Äijälä, M.; Carbone, S.; van der Gon, H. D.; Jimenez, J. L.; Kiendler-Scharr, A.; Mohr, C.; Poulain, L.; Prévôt, A. S. H.; Swietlicki, E.; Vogel, B.

    2011-08-01

    The online-coupled, regional chemistry transport model COSMO-ART is evaluated for periods in all seasons against several measurement datasets to assess its ability to represent gaseous pollutants and ambient aerosol characteristics over the European domain. Measurements used in the comparison include long-term station observations, satellite and ground-based remote sensing products, and complex datasets of aerosol chemical composition and number size distribution from recent field campaigns. This is the first time these comprehensive measurements of aerosol characteristics in Europe are used to evaluate a regional chemistry transport model. We show a detailed analysis of the simulated size-resolved chemical composition under different meteorological conditions. The model is able to represent trace gas concentrations with good accuracy and reproduces bulk aerosol properties rather well, though with a clear tendency to underestimate both total mass (PM10 and PM2.5) and aerosol optical depth. We find indications of an overestimation of shipping emissions. The time evolution of aerosol chemical composition is captured, although some biases are found in the relative composition. Nitrate aerosol components are on average overestimated, and sulfates underestimated. The accuracy of simulated organics depends strongly on season and location. While strongly underestimated during summer, organic mass is comparable in spring and autumn. We see indications of an overestimated fractional contribution of primary organic matter in urban areas and an underestimation of SOA at many locations. Aerosol number concentrations are simulated well, and size distributions are comparable. Our work sets the basis for subsequent studies of aerosol characteristics and climate impacts with COSMO-ART, and highlights areas where improvements are necessary for current regional modeling systems in general.

  13. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database and highlighting the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
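    The server-side bag-of-words model mentioned above quantizes each image's local feature descriptors against a visual vocabulary and compares the resulting histograms. A minimal NumPy sketch under simplifying assumptions (random toy descriptors, tf weighting only; production systems add idf weighting and inverted-file indexing):

    ```python
    import numpy as np

    def bow_histogram(descriptors, codebook):
        """Assign each local descriptor to its nearest visual word and
        return an L2-normalized bag-of-words histogram."""
        d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        words = d2.argmin(axis=1)                 # nearest codebook row
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / np.linalg.norm(hist)

    def similarity(h1, h2):
        """Cosine similarity of normalized histograms (1.0 = identical)."""
        return float(h1 @ h2)

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(16, 8))           # 16 visual words, 8-D
    # Query and a database image sharing visual words 0, 3, 7
    query = codebook[[0, 0, 3, 7]] + 0.01 * rng.normal(size=(4, 8))
    db_img = codebook[[0, 3, 7, 7]] + 0.01 * rng.normal(size=(4, 8))
    other = codebook[[5, 9, 11, 2]]               # unrelated image
    hq, hd, ho = (bow_histogram(x, codebook) for x in (query, db_img, other))
    print(similarity(hq, hd) > similarity(hq, ho))
    ```

    Ranking the database by this similarity is what lets the server answer a query without comparing raw descriptors pairwise against every stored image.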

  14. ACFIS: a web server for fragment-based drug discovery

    PubMed Central

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-01-01

    In order to foster innovation and improve the effectiveness of drug discovery, there is considerable interest in exploring unknown ‘chemical space’ to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) developed rapidly due to its expansive search of ‘chemical space’, which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server, Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate a core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments at the junctions of the core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. PMID:27150808

  15. (PS)2: protein structure prediction server version 3.0.

    PubMed

    Huang, Tsun-Tsao; Hwang, Jenn-Kang; Chen, Chu-Huang; Chu, Chih-Sheng; Lee, Chi-Wen; Chen, Chih-Chieh

    2015-07-01

    Protein complexes are involved in many biological processes. Examining the coupling between subunits of a complex is useful for understanding the molecular basis of protein function. Here, our updated (PS)2 web server predicts the three-dimensional structures of protein complexes based on comparative modeling; furthermore, the server examines the coupling between subunits of the predicted complex by combining structural and evolutionary considerations. The predicted complex structure can be viewed with Java-based 3D graphics viewers, and the structural and evolutionary profiles are shown and compared chain-by-chain. For each subunit, the similarity between the structural and evolutionary profiles is assessed both with and without the packing contribution of the other subunits; the difference between the two indicates which form, complex or monomeric, is preferred for that subunit under biological conditions. We believe that the (PS)2 server will be a useful tool for biologists who are interested not only in the structures of protein complexes but also in the coupling between their subunits. The (PS)2 server is freely available at http://ps2v3.life.nctu.edu.tw/. PMID:25943546

  18. SERVER DEVELOPMENT FOR NSLS-II PHYSICS APPLICATIONS AND PERFORMANCE ANALYSIS

    SciTech Connect

    Shen, G.; Kraimer, M.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. The server software under development is available via an open source SourceForge project named epics-pvdata, which consists of the modules pvData, pvAccess, pvIOC, and pvService. Two services that already exist in the pvService module are itemFinder and gather. Each service uses pvData to store in-memory transient data, pvAccess to transfer data over the network, and pvIOC as the service engine. Performance benchmarking for pvAccess and for both the gather and itemFinder services is presented in this paper, together with a performance comparison between pvAccess and Channel Access. For an ultra-low-emittance synchrotron radiation light source like NSLS-II, the control system requirements, especially for beam control, are tight. To control and manipulate the beam effectively, a use case study has been performed and a theoretical evaluation has been carried out. The analysis shows that model based control is indispensable for beam commissioning and routine operation. However, there are many challenges, such as how to re-use a design model for on-line model based control, and how to combine numerical methods for modeling a realistic lattice with analytical techniques for analysis of its properties. To meet these requirements and challenges, an adequate system architecture for the beam commissioning and operation software framework is critical. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation and plotting, and error handling. However, none of the existing approaches satisfies the requirements. A new design has been proposed by introducing service
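
    The gather service mentioned above illustrates the structured-data idea: instead of N separate channel reads, the client sends one structured request naming many process variables and receives one structured table in reply. A rough sketch of that request/response shape (plain Python dictionaries stand in for pvData structures; the PV names and the in-memory store are invented for illustration):

```python
# In-memory stand-in for live channels; all PV names here are hypothetical.
PV_STORE = {"SR:C01:BPM:X": 0.12, "SR:C01:BPM:Y": -0.03, "SR:C02:BPM:X": 0.08}

def gather(request):
    # One structured request in, one structured table out: parallel arrays
    # of labels, values, and per-channel success flags.
    labels = request["labels"]
    found = [name in PV_STORE for name in labels]
    values = [PV_STORE.get(name) for name in labels]
    return {"labels": labels, "values": values, "found": found}
```

    Because the reply is one self-describing structure, the transport (pvAccess in the real system) can move it in a single round trip, and any client that understands the structure can consume it.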

  19. Statistical properties of online avatar numbers in a massive multiplayer online role-playing game

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Ren, Fei; Gu, Gao-Feng; Tan, Qun-Zhao; Zhou, Wei-Xing

    2010-02-01

    Massive multiplayer online role-playing games (MMORPGs) have been very popular in the past few years. The profit of an MMORPG company is proportional to how many users register, and the instantaneous number of online avatars is a key factor in assessing how popular an MMORPG is. We use the online-offline logs on an MMORPG server to reconstruct the instantaneous number of online avatars per second and investigate its statistical properties. We find that the online avatar number exhibits one-day periodic behavior and a clear intraday pattern; that the fluctuation distribution of the online avatar numbers has a leptokurtic non-Gaussian shape with power-law tails; and that the increments of online avatar numbers, after removing the intraday pattern, are uncorrelated while their absolute values have long-term correlation. In addition, both time series exhibit a multifractal nature.
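
    The analysis steps described (removing the intraday pattern, then studying the increments) can be sketched as follows; the function names are ours, and the real study works on per-second counts over many days:

```python
def intraday_pattern(series, period):
    # Average the series at each phase of the daily period
    # (period = 86400 for per-second data; tiny here for illustration).
    sums, counts = [0.0] * period, [0] * period
    for t, x in enumerate(series):
        sums[t % period] += x
        counts[t % period] += 1
    return [s / c for s, c in zip(sums, counts)]

def deseasonalized_increments(series, period):
    # Subtract the intraday pattern, then take first differences.
    pat = intraday_pattern(series, period)
    detrended = [x - pat[t % period] for t, x in enumerate(series)]
    return [b - a for a, b in zip(detrended, detrended[1:])]

def autocorr(xs, lag):
    # Sample autocorrelation at a given lag; the paper finds it vanishes for
    # the increments but decays slowly for their absolute values.
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs)
    cov = sum((xs[t] - m) * (xs[t + lag] - m) for t in range(n - lag))
    return cov / var if var else 0.0
```

    A perfectly periodic series yields zero increments after deseasonalizing, so whatever remains in real log data is genuine fluctuation.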

  20. An online mineral dust model within the global/regional NMMB: current progress and plans

    NASA Astrophysics Data System (ADS)

    Perez, C.; Haustein, K.; Janjic, Z.; Jorba, O.; Baldasano, J. M.; Black, T.; Nickovic, S.

    2008-12-01

    While mineral dust distribution and effects are important on global scales, they strongly depend on dust emissions occurring on small spatial and temporal scales. Indeed, the accuracy of the surface wind speed used in dust models is crucial: due to the high-order power dependency of emission on wind friction velocity and the threshold behaviour of dust emissions, small errors in surface wind speed lead to large dust emission errors. Most global dust models use prescribed wind fields provided by major meteorological centres (e.g., NCEP and ECMWF), whose spatial resolution is currently about 1° × 1°. Such wind speeds tend to be strongly underestimated over arid and semi-arid areas and do not account for mesoscale systems responsible for a significant fraction of dust emissions regionally and globally. Other significant uncertainties in dust emissions resulting from such approaches are related to the misrepresentation of high subgrid-scale spatial heterogeneity in soil and vegetation boundary conditions, mainly in semi-arid areas. In order to significantly reduce these uncertainties, the Barcelona Supercomputing Center is currently implementing a mineral dust model coupled on-line with the new global/regional NMMB atmospheric model, using the ESMF framework under development at NOAA/NCEP/EMC. The NMMB is an evolution of the operational WRF-NMM extending from meso to global scales, including a non-hydrostatic option and improved tracer advection. This model is planned to become the next-generation NCEP mesoscale model for operational weather forecasting in North America. The current implementation is based on the well established regional dust model and forecast system Eta/DREAM (http://www.bsc.es/projects/earthscience/DREAM/). First successful global simulations show the potential of such an approach and compare well with DREAM regionally. Ongoing developments include improvements in dust size distribution representation, sedimentation, dry deposition, wet
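
    The threshold and power-law behaviour that makes emissions so sensitive to wind errors can be illustrated with a White (1979)-style saltation flux; the constants below are placeholders for illustration, not the values used in the NMMB implementation:

```python
def saltation_flux(u_star, u_star_t=0.25, c=1.0):
    # Zero below the threshold friction velocity u_star_t; above it the flux
    # grows roughly as the cube of u_star, so small wind errors are strongly
    # amplified. c lumps the dimensional prefactor (placeholder value).
    if u_star <= u_star_t:
        return 0.0
    ratio = u_star_t / u_star
    return c * u_star ** 3 * (1.0 - ratio) * (1.0 + ratio) ** 2

# Near the threshold, a 10% error in friction velocity changes the
# emitted flux by far more than 10%.
amplification = saltation_flux(0.33) / saltation_flux(0.30)
```

    With these placeholder numbers, the 10% wind increase raises the flux by roughly 80%, which is precisely why coarse 1° winds that miss mesoscale peaks underestimate emissions so badly.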

  1. A dual model of entertainment-based and community-based mechanisms to explore continued participation in online entertainment communities.

    PubMed

    Deng, Yun; Hou, Jinghui; Ma, Xiao; Cai, Shuqin

    2013-05-01

    Online entertainment communities have exploded in popularity and drawn attention from researchers. However, few studies have investigated what leads people to remain active in such communities at the postadoption stage. We proposed and tested a dual model of entertainment-based and community-based mechanisms to examine the factors that affect individuals' continued participation in online entertainment communities. Survival analysis was employed on a longitudinal dataset of 2,302 users collected over 2 years from an online game community. Our results were highly consistent with the theoretical model. Specifically, under the entertainment-based mechanism, our findings showed that the intensities of initial use and frequent use were positive predictors of players' activity lifespan. Under the community-based mechanism, the results demonstrated that the number of guilds a player was affiliated with and the average number of days of guild membership positively predicted players' lifespan in the game. Overall, our study suggests that the entertainment-based and community-based mechanisms are two key drivers that determine individuals' continued participation in online entertainment communities.
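
    Survival analysis of activity lifespans of this kind can be sketched with a Kaplan-Meier estimator. This toy version (our own sketch, not the authors' code) handles the right-censoring that arises when a player is still active at the end of the observation window:

```python
def kaplan_meier(durations, churned):
    # durations: days of observed activity per player; churned: True if the
    # player actually quit (event), False if still active (right-censored).
    event_times = sorted({d for d, e in zip(durations, churned) if e})
    curve, s = [], 1.0
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, e in zip(durations, churned) if d == t and e)
        s *= 1.0 - events / at_risk          # product-limit update
        curve.append((t, s))                 # P(lifespan > t)
    return curve
```

    Covariates such as guild count can then be compared by estimating one curve per group, or, in a full analysis, by a Cox regression.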

  2. MISTIC: Mutual information server to infer coevolution.

    PubMed

    Simonetti, Franco L; Teppa, Elin; Chernomoretz, Ariel; Nielsen, Morten; Marino Buslje, Cristina

    2013-07-01

    MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within a multiple sequence alignment (MSA) and a complete analysis tool for mutual information networks in protein families. The server outputs a graphical visualization of several information-related quantities using a Circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface to explore and characterize the MI network is provided. Several tools are offered for selecting subsets of nodes from the network for visualization. Node coloring can be set to match different attributes, such as conservation, cumulative MI, proximity MI and secondary structure. Finally, a zip file containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner unique among publicly available web servers. In particular, the use of the Circos representation of MI networks and the visualization of the cumulative MI and proximity MI concepts are novel.
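
    The core quantity MISTIC visualizes, the mutual information between two alignment columns, reduces to a few lines. This is an illustrative raw-count version; the server additionally applies corrections such as sequence weighting and background frequency adjustments:

```python
import math
from collections import Counter

def column_mi(col_a, col_b):
    # Mutual information (natural log) between two MSA columns, computed
    # from the joint and marginal residue frequencies.
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in pab.items():
        # (c/n) * log( p(a,b) / (p(a) p(b)) ), simplified to integer counts.
        mi += (c / n) * math.log(c * n / (pa[a] * pb[b]))
    return mi
```

    Perfectly covarying columns give MI = ln 2 for a two-letter alphabet, while independent columns give zero, which is exactly the contrast the MI network highlights.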

  3. dnaMATE: a consensus melting temperature prediction server for short DNA sequences.

    PubMed

    Panjkovich, Alejandro; Norambuena, Tomás; Melo, Francisco

    2005-07-01

    An accurate and robust large-scale melting temperature prediction server for short DNA sequences is presented. The server calculates a consensus melting temperature value using the nearest-neighbor model with three independent thermodynamic data tables. The consensus method gives an accurate prediction of melting temperature, as recently demonstrated in a benchmark performed using all available experimental data for DNA sequences within the length range of 16-30 nt. This is the first web server implemented to perform large-scale calculation of melting temperatures in real time (up to 5000 DNA sequences can be submitted in a single run). The expected accuracy of calculations carried out by this server in the range of 50-600 mM monovalent salt concentration is that 89% of the melting temperature predictions will have an error or deviation of <5 degrees C from experimental data. The server can be freely accessed at http://dna.bio.puc.cl/tm.html. Standalone executable versions of this software for Linux, Macintosh and Windows platforms, along with detailed further information supporting the server, are also freely available at the same web site.
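
    The consensus idea, averaging independent estimators rather than trusting one, can be illustrated with two classic closed-form Tm formulas (the Wallace rule and a GC-content formula). Note this is only an illustration of consensus averaging: the server itself uses the nearest-neighbor model with three thermodynamic tables, not these shortcuts:

```python
def tm_wallace(seq):
    # Wallace rule: 2 degrees C per A/T, 4 per G/C; sensible only for very
    # short oligos.
    s = seq.upper()
    return 2.0 * (s.count("A") + s.count("T")) + 4.0 * (s.count("G") + s.count("C"))

def tm_gc(seq):
    # GC-content formula: Tm = 64.9 + 41 * (GC - 16.4) / length.
    s = seq.upper()
    gc = s.count("G") + s.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(s)

def tm_consensus(seq, estimators=(tm_wallace, tm_gc)):
    # Consensus = average of the independent estimates, mirroring how the
    # server averages over three nearest-neighbor parameter sets.
    vals = [f(seq) for f in estimators]
    return sum(vals) / len(vals)
```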

  4. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR.

    PubMed

    van der Schot, Gijs; Bonvin, Alexandre M J J

    2015-08-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions, made with a previous version of the server based on Rosetta version 2.6, with recalculated targets using the new R3FP fragment picker for fragment selection and a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server demonstrated much better performance than in the first round, since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality gives reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore be used with caution. PMID:25982706

  5. Online Sellers’ Website Quality Influencing Online Buyers’ Purchase Intention

    NASA Astrophysics Data System (ADS)

    Shea Lee, Tan; Ariff, Mohd Shoki Md; Zakuan, Norhayati; Sulaiman, Zuraidah; Zameri Mat Saman, Muhamad

    2016-05-01

    The increasing adoption of the Internet among young users in Malaysia provides high prospects for online sellers. Young users aged between 18 and 25 years old are important to online sellers because they are actively involved in online purchasing, and this group of online buyers is expected to dominate the future online market. Therefore, examining online sellers’ website quality and online buyers’ purchase intention is crucial. Based on the Theory of Planned Behavior (TPB), a conceptual model of online sellers’ website quality and purchase intention of online buyers was developed. The E-tailQ instrument was adapted in this study, comprising website design, reliability/fulfillment, security, privacy & trust, and customer service. Using an online questionnaire and a convenience sampling procedure, primary data were obtained from 240 online buyers aged between 18 and 25 years old. It was discovered that website design, website reliability/fulfillment, website security, privacy & trust, and website customer service positively and significantly influence the intention of online buyers to continue purchasing via online channels. This study concludes that online sellers’ website quality is important in predicting online buyers’ purchase intention. Recommendations and implications of this study are discussed, focusing on how online sellers should improve their website quality to stay competitive in online business.

  6. Evidence implementation: Development of an online methodology from the knowledge-to-action model of knowledge translation.

    PubMed

    Lockwood, Craig; Stephenson, Matthew; Lizarondo, Lucylynn; van Den Hoek, Joan; Harrison, Margaret

    2016-08-01

    This paper describes an online facilitation for operationalizing the knowledge-to-action (KTA) model. The KTA model incorporates implementation planning that is optimally suited to the information needs of clinicians. The can-implement© is an evidence implementation process informed by the KTA model. An online counterpart, the can-implement.pro©, was developed to enable greater dissemination and utilization of the can-implement© process. The driver for this work was health professionals' need for facilitation that is iterative, informed by context and localized to the specific needs of users. The literature supporting this paper includes evaluation studies and theoretical concepts relevant to the KTA model, evidence implementation and facilitation. Nursing and other health disciplines require a skill set and resources to successfully navigate the complexity of organizational requirements, inter-professional leadership and day-to-day practical management to implement evidence into clinical practice. The can-implement.pro© provides an accessible, inclusive system for evidence implementation projects. There is empirical support for evidence implementation informed by the KTA model, which in this phase of work has been developed for online uptake. Nurses and other clinicians seeking to implement evidence could benefit from the directed actions, planning advice and information embedded in the phases and steps of can-implement.pro©. PMID:27562662

  7. SEGEL: A Web Server for Visualization of Smoking Effects on Human Lung Gene Expression.

    PubMed

    Xu, Yan; Hu, Brian; Alnajm, Sammy S; Lu, Yin; Huang, Yangxin; Allen-Gipson, Diane; Cheng, Feng

    2015-01-01

    Cigarette smoking is a major cause of death worldwide, resulting in over six million deaths per year. Cigarette smoke contains complex mixtures of chemicals that are harmful to nearly all organs of the human body, especially the lungs. Cigarette smoking is considered the major risk factor for many lung diseases, particularly chronic obstructive pulmonary disease (COPD) and lung cancer. However, the underlying molecular mechanisms of smoking-induced lung injury associated with these lung diseases remain largely unknown. Expression microarray techniques have been widely applied to detect the effects of smoking on gene expression in different human cells in the lungs, and these projects have provided a wealth of useful information for researchers seeking to understand the potential molecular mechanisms of smoke-induced pathogenesis. However, a user-friendly web server that would allow scientists to quickly query these data sets and compare the smoking effects on gene expression across different cells had not yet been established. For that reason, we have integrated eight public expression microarray data sets from trachea epithelial cells, large airway epithelial cells, small airway epithelial cells, and alveolar macrophages into an online web server called SEGEL (Smoking Effects on Gene Expression of Lung). Users can query gene expression patterns across these cells from smokers and nonsmokers by gene symbols, and find the effects of smoking on the gene expression of lungs from this web server. Sex differences in response to smoking are also shown. The relationships between gene expression and cigarette smoking consumption were calculated and are shown in the server. The current version of the SEGEL web server contains 42,400 annotated gene probe sets represented on the Affymetrix Human Genome U133 Plus 2.0 platform. SEGEL will be an invaluable resource for researchers interested in the effects of smoking on gene expression in the lungs. The server also provides useful information

  8. An expert system to perform on-line controller restructuring for abrupt model changes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    1990-01-01

    Work in progress on an expert system used to reconfigure and tune airframe/engine control systems on-line in real time in response to battle damage or structural failures is presented. The closed-loop system is monitored constantly for changes in structure and performance, the detection of which prompts the expert system to choose and apply a particular control restructuring algorithm based on the type and severity of the damage. Each algorithm is designed to handle specific types of failures, and each is applicable only in certain situations. The expert system uses information about the system model to identify the failure and to select the technique best suited to compensate for it. A depth-first search is used to find a solution. Once a new controller is designed and implemented, it must be tuned to recover the original closed-loop handling qualities and responsiveness from the degraded system. Ideally, the pilot should not be able to tell the difference between the original and redesigned systems. The key is that the system must have inherent redundancy so that degraded or missing capabilities can be restored by creative use of alternate functionalities. With enough redundancy in the control system, minor battle damage affecting individual control surfaces or actuators, compressor efficiency, etc., can be compensated for such that the closed-loop performance is not noticeably altered. The work is applied to a Black Hawk/T700 system.
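
    A depth-first search over restructuring options can be sketched as goal decomposition with backtracking. Everything below (the goal names, the methods table, the idea that damaged components are simply marked failed) is a hypothetical illustration of the search strategy, not the actual rule base:

```python
def dfs_plan(goal, methods, failed):
    # Depth-first search: try each alternative method for the goal in order,
    # recurse into its subgoals, and backtrack when a branch is unusable.
    for name, subgoals in methods.get(goal, []):
        if name in failed:                 # component lost to damage
            continue
        plan, ok = [name], True
        for sub in subgoals:
            subplan = dfs_plan(sub, methods, failed)
            if subplan is None:
                ok = False                 # dead branch: backtrack
                break
            plan += subplan
        if ok:
            return plan
    return None                            # no redundancy left for this goal

# Toy redundancy model: pitch control via the elevator, or via stabilator
# trim (which needs hydraulic power), or via thrust vectoring.
METHODS = {
    "pitch_control": [("elevator", []),
                      ("stabilator_trim", ["hydraulic_power"]),
                      ("thrust_vectoring", [])],
    "hydraulic_power": [("backup_pump", [])],
}
```

    With the elevator marked failed, the search falls through to stabilator trim plus the backup pump; lose the pump as well and it backtracks to thrust vectoring, mirroring how degraded capabilities are restored through alternate functionalities.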

  9. Development of intelligent instruments with embedded HTTP servers for control and data acquisition in a cryogenic setup—The hardware, firmware, and software implementation

    SciTech Connect

    Antony, Joby; Mathuria, D. S.; Datta, T. S.; Maity, Tanmoy

    2015-12-15

    The power of Ethernet for control and automation technology has come to be widely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today, best known for enabling control over the Internet from anywhere in the world. An Ethernet interface with built-in on-chip embedded servers ensures global connections for a crate-less model of control and data acquisition systems, which has several advantages over traditional crate-based control architectures for slow applications. This architecture completely eliminates the use of any extra PLC (Programmable Logic Controller) or similar control hardware in the automation network, as the control functions are firmware coded inside the intelligent meters themselves. Here, we describe an indigenously built cryogenic control system for the linear accelerator at the Inter University Accelerator Centre, known as “CADS,” which stands for “Complete Automation of Distribution System.” CADS covers the complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called a device-server, which has the control functions and control loops built into the firmware itself. Dedicated meters with built-in servers were designed around ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Device) components, with an analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments, which together run 58 embedded servers, each specific to a particular type of sensor-actuator combination for closed loop operations, are deployed and distributed across control LAN
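
    The device-server pattern, a meter exposing readout and set-point procedures over plain HTTP while its control loops keep running in firmware, can be sketched with the Python standard library. The sensor names, URL paths, and values below are invented for illustration; the real instruments implement this in ARM/ATMEL firmware:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical register map of one intelligent meter.
SENSORS = {"temperature_K": lambda: 4.3, "pressure_mbar": lambda: 1020.0}
SETPOINTS = {"valve_percent": 0.0}

def dispatch(path, query):
    # Remote procedure call over HTTP: /read/<sensor> or /set/<name>?value=x.
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "read" and parts[1] in SENSORS:
        return {"ok": True, parts[1]: SENSORS[parts[1]]()}
    if len(parts) == 2 and parts[0] == "set" and parts[1] in SETPOINTS:
        SETPOINTS[parts[1]] = float(query.get("value", "0"))
        return {"ok": True, parts[1]: SETPOINTS[parts[1]]}
    return {"ok": False, "error": "unknown procedure"}

class MeterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        u = urlparse(self.path)
        q = {k: v[0] for k, v in parse_qs(u.query).items()}
        body = json.dumps(dispatch(u.path, q)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# One such server runs per meter, e.g.:
# HTTPServer(("", 8080), MeterHandler).serve_forever()
```

    Because every procedure is a plain GET returning JSON, any browser or script on the control LAN can read a sensor or move a set-point without extra client software.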

  13. FastML: a web server for probabilistic reconstruction of ancestral sequences.

    PubMed

    Ashkenazy, Haim; Penn, Osnat; Doron-Faigenboim, Adi; Cohen, Ofir; Cannarozzi, Gina; Zomer, Oren; Pupko, Tal

    2012-07-01

    Ancestral sequence reconstruction is essential to a variety of evolutionary studies. Here, we present the FastML web server, a user-friendly tool for the reconstruction of ancestral sequences. FastML implements various novel features that differentiate it from existing tools: (i) FastML uses an indel-coding method, in which each gap, possibly spanning multiple sites, is coded as binary data. FastML then reconstructs ancestral indel states assuming a continuous-time Markov process. FastML provides the most likely ancestral sequences, integrating both indels and characters; (ii) FastML accounts for uncertainty in ancestral states: it provides not only the posterior probabilities for each character and indel at each sequence position, but also a sample of ancestral sequences from this posterior distribution, and a list of the k most likely ancestral sequences; (iii) FastML implements a large array of evolutionary models, which makes it generic and applicable for nucleotide, protein and codon sequences; and (iv) a graphical representation of the results is provided, including, for example, a graphical logo of the inferred ancestral sequences. The utility of FastML is demonstrated by reconstructing ancestral sequences of the Env protein from various HIV-1 subtypes. FastML is freely available for all academic users and is available online at http://fastml.tau.ac.il/.
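
    The gap-coding step in (i) can be illustrated with a small sketch: every maximal gap run observed anywhere in the alignment becomes one binary character per sequence (1 = gap present), giving the kind of presence/absence matrix an ancestral-state method can then model. This is a simplified illustration of indel coding, not FastML's exact implementation.

```python
def gap_runs(seq):
    """Return the set of maximal (start, end) gap runs in one aligned sequence."""
    runs, start = set(), None
    for i, c in enumerate(seq):
        if c == "-" and start is None:
            start = i                      # a gap run opens here
        elif c != "-" and start is not None:
            runs.add((start, i))           # the run closes just before i
            start = None
    if start is not None:
        runs.add((start, len(seq)))        # run extends to the end
    return runs

def indel_binary_matrix(alignment):
    """Code each distinct gap run as one binary character per sequence."""
    all_runs = sorted(set().union(*(gap_runs(s) for s in alignment)))
    matrix = [[int(r in gap_runs(s)) for r in all_runs] for s in alignment]
    return matrix, all_runs
```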

  14. Applying the Context, Input, Process, Product Evaluation Model for Evaluation, Research, and Redesign of an Online Master's Program

    ERIC Educational Resources Information Center

    Sancar Tokmak, Hatice; Meltem Baturay, H.; Fadde, Peter

    2013-01-01

    This study aimed to evaluate and redesign an online master's degree program consisting of 12 courses from the informatics field using a context, input, process, product (CIPP) evaluation model. Research conducted during the redesign of the online program followed a mixed methodology in which data was collected through a CIPP survey,…

  15. NMMB/BSC-DUST: an online mineral dust atmospheric model from meso to global scales

    NASA Astrophysics Data System (ADS)

    Haustein, K.; Pérez, C.; Jorba, O.; Baldasano, J. M.; Janjic, Z.; Black, T.; Nickovic, S.

    2009-04-01

    While mineral dust distribution and effects are important at global scales, they strongly depend on dust emissions that are controlled on small spatial and temporal scales. Most global dust models use prescribed wind fields provided by meteorological centers (e.g., NCEP and ECMWF), and their spatial resolution is currently no better than about 1°×1°. Regional dust models offer substantially higher resolution (10-20 km) and are typically coupled with weather forecast models that simulate processes that GCMs either cannot resolve or can resolve only poorly. These include internal circulation features such as the low-level nocturnal jet, which is a crucial feature for dust emission in several dust 'hot spot' sources in North Africa. Based on our modeling experience with the BSC-DREAM regional forecast model (http://www.bsc.es/projects/earthscience/DREAM/) we are currently implementing an improved mineral dust model [Pérez et al., 2008] coupled online with the new global/regional NMMB atmospheric model under development in NOAA/NCEP/EMC [Janjic, 2005]. The NMMB is an evolution of the operational WRF-NMM, extending from meso to global scales, and will become the next-generation NCEP model for operational weather forecasting in 2010. The corresponding unified dynamical core ranges from meso to global scales, allowing both regional and global simulations. It has an add-on non-hydrostatic module and is based on the Arakawa B-grid and hybrid pressure-sigma vertical coordinates. NMMB is fully embedded into the Earth System Modeling Framework (ESMF), treating dynamics and physics separately and coupling them easily within the ESMF structure. Our main goal is to provide global dust forecasts up to 7 days at mesoscale resolutions. New features of the model include a physically-based dust emission scheme after White [1979], Iversen and White [1982] and Marticorena and Bergametti [1995] that takes the effects of saltation and sandblasting into account

  16. Using a Global Climate Model in an On-line Climate Change Course

    NASA Astrophysics Data System (ADS)

    Randle, D. E.; Chandler, M. A.; Sohl, L. E.

    2012-12-01

    Seminars on Science: Climate Change is an on-line, graduate-level teacher professional development course offered by the American Museum of Natural History. It is an intensive 6-week course covering a broad range of global climate topics, from the fundamentals of the climate system, to the causes of climate change, the role of paleoclimate investigations, and a discussion of potential consequences and risks. The instructional method blends essays, videos, textbooks, and linked websites, with required participation in electronic discussion forums that are moderated by an experienced educator and a course scientist. Most weeks include additional assignments. Three of these assignments employ computer models, including two weeks spent working with a full-fledged 3D global climate model (GCM). The global climate modeling environment is supplied through a partnership with Columbia University's Educational Global Climate Modeling Project (EdGCM). The objective is to have participants gain hands-on experience with one of the most important, yet misunderstood, aspects of climate change research. Participants in the course are supplied with a USB drive that includes installers for the software and sample data. The EdGCM software includes a version of NASA's global climate model fitted with a graphical user interface and pre-loaded with several climate change simulations. Step-by-step assignments and video tutorials help walk people through these challenging exercises and the course incorporates a special assignment discussion forum to help with technical problems and questions about the NASA GCM. There are several takeaways from our first year and a half of offering this course, which has become one of the most popular out of the twelve courses offered by the Museum. Participants report a high level of satisfaction in using EdGCM. Some report frustration at the initial steps, but overwhelmingly claim that the assignments are worth the effort. 
Many of the difficulties that

  17. Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping

    NASA Astrophysics Data System (ADS)

    Ignakov, Dmitri

    A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method

  18. A Server-Based Mobile Coaching System

    PubMed Central

    Baca, Arnold; Kornfeind, Philipp; Preuschl, Emanuel; Bichler, Sebastian; Tampier, Martin; Novatchkov, Hristo

    2010-01-01

    A prototype system for monitoring, transmitting and processing performance data in sports for the purpose of providing feedback has been developed. During training, athletes are equipped with a mobile device and wireless sensors using the ANT protocol in order to acquire biomechanical, physiological and other sports specific parameters. The measured data is buffered locally and forwarded via the Internet to a server. The server provides experts (coaches, biomechanists, sports medicine specialists etc.) with remote data access, analysis and (partly automated) feedback routines. In this way, experts are able to analyze the athlete’s performance and return individual feedback messages from remote locations. PMID:22163490

  19. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web service for the distribution of NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.
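
    The parallel-query change can be sketched as follows: each report database is searched on its own worker thread, so total wall time approaches that of the slowest database instead of the sum of all of them. `query_db` here is a hypothetical stand-in for a real per-database search call.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_db(name, delay):
    """Hypothetical per-database search; `delay` simulates network latency."""
    time.sleep(delay)
    return name, "hits from %s" % name

def parallel_search(dbs):
    """Query every (name, delay) database concurrently; collect all results."""
    with ThreadPoolExecutor(max_workers=len(dbs)) as pool:
        return dict(pool.map(lambda db: query_db(*db), dbs))
```

    Three databases at 0.2 s each finish together in roughly 0.2 s rather than 0.6 s, the same effect behind the reported average speedup.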

  20. 4-Stage Online Presence Model: Model for Module Design and Delivery Using Web 2.0 Technologies to Facilitate Critical Thinking Skills

    ERIC Educational Resources Information Center

    Goh, WeiWei; Dexter, Barbara; Self, Richard

    2014-01-01

    The main purpose of this paper is to present a conceptual model for the use of web 2.0 online technologies in order to develop and enhance students' critical thinking skills at higher education level. Wiki is chosen as the main focus in this paper. The model integrates Salmon's 5-stage model (Salmon, 2002) with Garrison's Community…

  1. An online model correction method based on an inverse problem: Part II—systematic model error correction

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-11-01

    An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) in past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be iteratively obtained by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction is given, based on a least-squares approach that uses the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS in the Northern Hemisphere were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and higher anomaly correlation coefficient.
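
    A least-squares correction of this kind can be sketched as follows: from an archive of past 6-h model errors, fit ME(t) ≈ a + b·t at each grid point and subtract the predicted ME from a new raw forecast. The linear-in-time form and all variable names are illustrative assumptions, far simpler than the GRAPES-GFS implementation.

```python
import numpy as np

def fit_me_trend(times, past_mes):
    """Least-squares fit ME(t) = a + b*t per grid point.
    times: (n,); past_mes: (n, npoints). Returns coef of shape (2, npoints)."""
    A = np.column_stack([np.ones_like(times), times])
    coef, *_ = np.linalg.lstsq(A, past_mes, rcond=None)
    return coef

def corrected_forecast(raw_forecast, coef, t):
    """Subtract the predicted systematic model error at forecast time t."""
    a, b = coef
    return raw_forecast - (a + b * t)
```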

  2. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, LOM and ESP-LOM are introduced.
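
    The JIT idea can be sketched in a few lines: nothing is fitted in advance; at query time the k nearest stored samples are retrieved and a local linear model is fitted on the spot. The stepwise selection and quantization that LOM adds to make this retrieval efficient on a large database are omitted here.

```python
import numpy as np

def jit_predict(X_db, y_db, x_query, k=5):
    """Just-In-Time prediction: fit a local linear model to the k nearest
    database samples at query time and evaluate it at the query point."""
    d = np.linalg.norm(X_db - x_query, axis=1)    # distance to every sample
    idx = np.argsort(d)[:k]                       # retrieve neighboring data
    A = np.column_stack([np.ones(k), X_db[idx]])  # local design matrix
    w, *_ = np.linalg.lstsq(A, y_db[idx], rcond=None)
    return w[0] + x_query @ w[1:]
```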

  3. Applying the Practical Inquiry Model to Investigate the Quality of Students' Online Discourse in an Information Ethics Course Based on Bloom's Teaching Goal and Bird's 3C Model

    ERIC Educational Resources Information Center

    Liu, Chien-Jen; Yang, Shu Ching

    2012-01-01

    The goal of this study is to better understand how the study participants' cognitive discourse is displayed in their learning transaction in an asynchronous, text-based conferencing environment based on Garrison's Practical Inquiry Model (2001). The authors designed an online information ethics course based on Bloom's taxonomy of educational…

  4. Modelling the Factors that Affect Individuals' Utilisation of Online Learning Systems: An Empirical Study Combining the Task Technology Fit Model with the Theory of Planned Behaviour

    ERIC Educational Resources Information Center

    Yu, Tai-Kuei; Yu, Tai-Yi

    2010-01-01

    Understanding learners' behaviour, perceptions and influence in terms of learner performance is crucial to predict the use of electronic learning systems. By integrating the task-technology fit (TTF) model and the theory of planned behaviour (TPB), this paper investigates the online learning utilisation of Taiwanese students. This paper provides a…

  5. Improved wet weather wastewater influent modelling at Viikinmäki WWTP by on-line weather radar information.

    PubMed

    Heinonen, M; Jokelainen, M; Fred, T; Koistinen, J; Hohti, H

    2013-01-01

    Municipal wastewater treatment plant (WWTP) influent is typically dependent on diurnal variation of urban production of liquid waste, infiltration of stormwater runoff and groundwater infiltration. During wet weather conditions the infiltration phenomenon typically increases the risk of overflows in the sewer system as well as the risk of having to bypass the WWTP. Combined sewer infrastructure multiplies the role of rainwater runoff in the total influent. Due to climate change, rain intensity and magnitude are tending to rise as well, as can already be observed in the normal operation of WWTPs. Bypass control can be improved if the WWTP is prepared for the increase of influent, especially if there is some storage capacity prior to the treatment plant. One option for this bypass control is utilisation of on-line weather-radar-based forecast data of rainfall as an input for the on-line influent model. This paper reports the Viikinmäki WWTP wet weather influent modelling project results, where gridded exceedance probabilities of hourly rainfall accumulations for the next 3 h from the Finnish Meteorological Institute are utilised as on-line input data for the influent model.

  6. SARA-Coffee web server, a tool for the computation of RNA sequence and structure multiple alignments

    PubMed Central

    Di Tommaso, Paolo; Bussotti, Giovanni; Kemena, Carsten; Capriotti, Emidio; Chatzou, Maria; Prieto, Pablo; Notredame, Cedric

    2014-01-01

    This article introduces the SARA-Coffee web server; a service allowing the online computation of 3D structure based multiple RNA sequence alignments. The server makes it possible to combine sequences with and without known 3D structures. Given a set of sequences SARA-Coffee outputs a multiple sequence alignment along with a reliability index for every sequence, column and aligned residue. SARA-Coffee combines SARA, a pairwise structural RNA aligner with the R-Coffee multiple RNA aligner in a way that has been shown to improve alignment accuracy over most sequence aligners when enough structural data is available. The server can be accessed from http://tcoffee.crg.cat/apps/tcoffee/do:saracoffee. PMID:24972831

  7. SARA-Coffee web server, a tool for the computation of RNA sequence and structure multiple alignments.

    PubMed

    Di Tommaso, Paolo; Bussotti, Giovanni; Kemena, Carsten; Capriotti, Emidio; Chatzou, Maria; Prieto, Pablo; Notredame, Cedric

    2014-07-01

    This article introduces the SARA-Coffee web server; a service allowing the online computation of 3D structure based multiple RNA sequence alignments. The server makes it possible to combine sequences with and without known 3D structures. Given a set of sequences SARA-Coffee outputs a multiple sequence alignment along with a reliability index for every sequence, column and aligned residue. SARA-Coffee combines SARA, a pairwise structural RNA aligner with the R-Coffee multiple RNA aligner in a way that has been shown to improve alignment accuracy over most sequence aligners when enough structural data is available. The server can be accessed from http://tcoffee.crg.cat/apps/tcoffee/do:saracoffee.

  8. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser- based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. 
A pilot application of this distributed

  9. Automated data evaluation and modelling of simultaneous ¹⁹F-¹H medium-resolution NMR spectra for online reaction monitoring.

    PubMed

    Zientek, Nicolai; Laurain, Clément; Meyer, Klas; Paul, Andrea; Engel, Dirk; Guthausen, Gisela; Kraume, Matthias; Maiwald, Michael

    2016-06-01

    Medium-resolution nuclear magnetic resonance spectroscopy (MR-NMR) is currently developing into an important analytical tool for both quality control and process monitoring. In contrast to high-resolution online NMR (HR-NMR), MR-NMR can be operated under rough environmental conditions. A continuous re-circulating stream of reaction mixture from the reaction vessel to the NMR spectrometer enables a non-invasive, volume-integrating online analysis of reactants and products. Here, we investigate the esterification of 2,2,2-trifluoroethanol with acetic acid to 2,2,2-trifluoroethyl acetate both by ¹H HR-NMR (500 MHz) and by ¹H and ¹⁹F MR-NMR (43 MHz) as a model system. The parallel online measurement is realised by splitting the flow, which allows the adjustment of quantitative and independent flow rates in the HR-NMR probe as well as in the MR-NMR probe, in addition to a fast bypass line back to the reactor. One of the fundamental acceptance criteria for online MR-NMR spectroscopy is a robust data treatment and evaluation strategy with the potential for automation. The MR-NMR spectra are treated by an automated baseline and phase correction using the minimum entropy method. The evaluation strategies comprise (i) direct integration, (ii) automated line fitting, (iii) indirect hard modelling (IHM) and (iv) partial least squares regression (PLS-R). To assess the potential of these evaluation strategies for MR-NMR, prediction results are compared with the line fitting data derived from quantitative HR-NMR spectroscopy. Although superior results are obtained from both IHM and PLS-R for ¹H MR-NMR, the latter in particular demands elaborate data pretreatment, whereas IHM models needed no previous alignment. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Interfaces for Distributed Systems of Information Servers.

    ERIC Educational Resources Information Center

    Kahle, Brewster M.; And Others

    1993-01-01

    Describes five interfaces to remote, full-text databases accessed through distributed systems of servers. These are WAIStation for the Macintosh, XWAIS for X-Windows, GWAIS for Gnu-Emacs; SWAIS for dumb terminals, and Rosebud for the Macintosh. Sixteen illustrations provide examples of display screens. Problems and needed improvements are…

  11. Implementing bioinformatic workflows within the bioextract server

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  12. Managing heterogeneous wireless environments via Hotspot servers

    NASA Astrophysics Data System (ADS)

    Simunic, Tajana; Qadeer, Wajahat; De Micheli, Giovanni

    2005-01-01

    Wireless communication today supports heterogeneous wireless devices with a number of different wireless network interfaces (WNICs). A large fraction of communication is infrastructure-based, so wireless access points and hotspot servers have become more ubiquitous. Battery lifetime is still a critical issue, with WNICs typically consuming a large fraction of the overall power budget in a mobile device. In this work we present a new technique for managing power consumption and QoS in diverse wireless environments using Hotspot servers. We introduce a resource manager module at both the Hotspot server and the client. The resource manager schedules communication bursts between the server and each client. The schedulers decide what WNIC to employ for communication, when to communicate data and how to minimize power dissipation while maintaining an acceptable QoS based on the application needs. We present two new scheduling policies derived from the well-known earliest deadline first (EDF) and rate monotonic (RM) [26] algorithms. The resource manager and the schedulers have been implemented in HP's Hotspot server [14]. Our measurement and simulation results show a significant improvement in power dissipation and QoS of Bluetooth and 802.11b for applications such as MP3, MPEG4, WWW, and email.
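
    The EDF policy mentioned above can be sketched with a min-heap: each pending communication burst carries a deadline derived from its application's QoS requirement, and the resource manager always serves the burst whose deadline expires first. The burst fields here are illustrative.

```python
import heapq

def edf_schedule(bursts):
    """bursts: list of (deadline_ms, client, n_bytes) tuples.
    Returns the clients in earliest-deadline-first service order."""
    heap = list(bursts)   # copy so the caller's list is untouched
    heapq.heapify(heap)   # min-heap ordered by deadline
    order = []
    while heap:
        deadline, client, n_bytes = heapq.heappop(heap)
        order.append(client)
    return order
```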

  13. Managing heterogeneous wireless environments via Hotspot servers

    NASA Astrophysics Data System (ADS)

    Simunic, Tajana; Qadeer, Wajahat; De Micheli, Giovanni

    2004-12-01

    Wireless communication today supports heterogeneous wireless devices with a number of different wireless network interfaces (WNICs). A large fraction of communication is infrastructure-based, so wireless access points and hotspot servers have become more ubiquitous. Battery lifetime is still a critical issue, with WNICs typically consuming a large fraction of the overall power budget in a mobile device. In this work we present a new technique for managing power consumption and QoS in diverse wireless environments using Hotspot servers. We introduce a resource manager module at both the Hotspot server and the client. The resource manager schedules communication bursts between the server and each client. The schedulers decide what WNIC to employ for communication, when to communicate data and how to minimize power dissipation while maintaining an acceptable QoS based on the application needs. We present two new scheduling policies derived from the well-known earliest deadline first (EDF) and rate monotonic (RM) [26] algorithms. The resource manager and the schedulers have been implemented in HP's Hotspot server [14]. Our measurement and simulation results show a significant improvement in power dissipation and QoS of Bluetooth and 802.11b for applications such as MP3, MPEG4, WWW, and email.

  14. An online model correction method based on an inverse problem: Part I—Model error estimation by iteration

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-10-01

    Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using the past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that the 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
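
    The iteration can be sketched on a toy scalar model: the prediction equation is augmented with an unknown constant tendency q, dx/dt = f(x) + q, and q is repeatedly updated from the mismatch between the 6-h forecast and the verifying analysis. The toy f, the Euler integrator, and the update rule are illustrative assumptions, not the GRAPES-GFS formulation.

```python
def forecast(x0, q, f, dt=0.01, T=6.0):
    """Integrate dx/dt = f(x) + q over [0, T] with forward Euler."""
    x = x0
    for _ in range(int(round(T / dt))):
        x = x + dt * (f(x) + q)
    return x

def estimate_me(x0, x_analysis, f, T=6.0, n_iter=20):
    """Iteratively solve for the unknown tendency q (the model error)."""
    q = 0.0
    for _ in range(n_iter):
        q += (x_analysis - forecast(x0, q, f, T=T)) / T  # mismatch update
    return q
```

    With a "truth" generated using q = 0.5 and an imperfect model f(x) = -0.1x, this toy iteration recovers q within about 20 steps, illustrating the kind of convergence the idealized experiments test.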

  15. Cole-Cole, linear and multivariate modeling of capacitance data for on-line monitoring of biomass.

    PubMed

    Dabros, Michal; Dennewald, Danielle; Currie, David J; Lee, Mark H; Todd, Robert W; Marison, Ian W; von Stockar, Urs

    2009-02-01

    This work evaluates three techniques of calibrating capacitance (dielectric) spectrometers used for on-line monitoring of biomass: modeling of cell properties using the theoretical Cole-Cole equation, linear regression of dual-frequency capacitance measurements on biomass concentration, and multivariate (PLS) modeling of scanning dielectric spectra. The performance and robustness of each technique is assessed during a sequence of validation batches in two experimental settings of differing signal noise. In more noisy conditions, the Cole-Cole model had significantly higher biomass concentration prediction errors than the linear and multivariate models. The PLS model was the most robust in handling signal noise. In less noisy conditions, the three models performed similarly. Estimates of the mean cell size were done additionally using the Cole-Cole and PLS models, the latter technique giving more satisfactory results.
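
    The first technique can be sketched as a nonlinear fit of the Cole-Cole relaxation, C(ω) = C∞ + ΔC / (1 + (jωτ)^(1-α)), to the measured capacitance spectrum, with ΔC the dispersion magnitude that tracks biomass. The parameter scales and starting values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def cole_cole(w, c_inf, dc, tau_us, alpha):
    """Real part of the Cole-Cole relaxation; tau is given in microseconds
    so that all fit parameters have comparable magnitudes."""
    tau = tau_us * 1e-6
    return (c_inf + dc / (1.0 + (1j * w * tau) ** (1.0 - alpha))).real

def fit_cole_cole(w, c_meas, p0=(1.0, 10.0, 1.0, 0.1)):
    """Least-squares fit of (c_inf, dc, tau_us, alpha) to a spectrum."""
    popt, _ = curve_fit(cole_cole, w, c_meas, p0=p0)
    return popt
```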

  16. Deploying Server-side File System Monitoring at NERSC

    SciTech Connect

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.

  17. An online model-based method for state of energy estimation of lithium-ion batteries using dual filters

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Chen, Zonghai; Wei, Jingwen; Zhang, Chenbin; Wang, Peng

    2016-01-01

    The state-of-energy of lithium-ion batteries is an important evaluation index for energy storage systems in electric vehicles and smart grids. To improve the battery state-of-energy estimation accuracy and reliability, an online model-based estimation approach is proposed against uncertain dynamic load currents and environment temperatures. Firstly, a three-dimensional response surface open-circuit-voltage model is built up to improve the battery state-of-energy estimation accuracy, taking various temperatures into account. Secondly, a total-available-energy-capacity model that involves temperatures and discharge rates is reconstructed to improve the accuracy of the battery model. An extended-Kalman-filter and particle-filter based dual filters algorithm is then developed to establish an online model-based estimator for the battery state-of-energy. The extended-Kalman-filter is employed to update parameters of the battery model using real-time battery current and voltage at each sampling interval, while the particle-filter is applied to estimate the battery state-of-energy. Finally, the proposed approach is verified by experiments conducted on a LiFePO4 lithium-ion battery under different operating currents and temperatures. Experimental results indicate that the battery model simulates battery dynamics robustly with high accuracy, and the estimates of the dual filters converge to the real state-of-energy within an error of ±4%.

  18. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back- end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. 
Other back-end services may include generation of Google
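    The XML-over-HTTP-GET protocol between the LAS UI and the Product Server described above can be sketched in Python. The `xml` query-parameter name, the request body, and the URL below are illustrative assumptions, not the actual LAS API:

```python
from urllib.parse import urlencode

def build_las_request(base_url: str, xml_request: str) -> str:
    """Embed an XML product request in an HTTP "get" URL, in the style
    the LAS UI uses to call the Product Server. The parameter name
    'xml' and the request syntax are illustrative assumptions."""
    return base_url + "?" + urlencode({"xml": xml_request})

# A hypothetical request for a shaded plot over a longitude range.
xml = ('<lasRequest><link match="/lasdata/operations/shade"/>'
       '<region><range type="x" low="-180" high="180"/></region>'
       '</lasRequest>')
url = build_las_request("http://example.org/las/ProductServer.do", xml)
```

    Any client able to build such a URL (Java, JavaScript, or perl, as the abstract notes) can drive the Product Server, which is what enables user interfaces of radically different character.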

  19. Developing the online survey.

    PubMed

    Gordon, Jeffry S; McNew, Ryan

    2008-12-01

    Institutions of higher education are now using Internet-based technology tools to conduct surveys for data collection. Research shows that the type and quality of responses one receives with online surveys are comparable with what one receives in paper-based surveys. Data collection can take place on Web-based surveys, e-mail-based surveys, and personal digital assistant/Smartphone devices. Web surveys can be built from subscription templates, from software packages installed on one's own server, or from scratch using Web programming development tools. All of these approaches have their advantages and disadvantages. The survey owner must make informed decisions as to the right technology to implement. The correct choice can save hours of work in sorting, organizing, and analyzing data.

  20. Sex-Related Online Behaviors, Perceived Peer Norms and Adolescents’ Experience with Sexual Behavior: Testing an Integrative Model

    PubMed Central

    Doornwaard, Suzan M.; ter Bogt, Tom F. M.; Reitz, Ellen; van den Eijnden, Regina J. J. M.

    2015-01-01

    Research on the role of sex-related Internet use in adolescents’ sexual development has often isolated the Internet and online behaviors from other, offline influencing factors in adolescents’ lives, such as processes in the peer domain. The aim of this study was to test an integrative model explaining how receptive (i.e., use of sexually explicit Internet material [SEIM]) and interactive (i.e., use of social networking sites [SNS]) sex-related online behaviors interrelate with perceived peer norms in predicting adolescents’ experience with sexual behavior. Structural equation modeling on longitudinal data from 1,132 Dutch adolescents (M(age) T1 = 13.95; range 11-17; 52.7% boys) demonstrated concurrent, direct, and indirect effects between sex-related online behaviors, perceived peer norms, and experience with sexual behavior. SEIM use (among boys) and SNS use (among boys and girls) predicted increases in adolescents’ perceptions of peer approval of sexual behavior and/or in their estimates of the numbers of sexually active peers. These perceptions, in turn, predicted increases in adolescents’ level of experience with sexual behavior at the end of the study. Boys’ SNS use also directly predicted increased levels of experience with sexual behavior. These findings highlight the need for multisystemic research and intervention development to promote adolescents’ sexual health. PMID:26086606

  1. Sex-Related Online Behaviors, Perceived Peer Norms and Adolescents' Experience with Sexual Behavior: Testing an Integrative Model.

    PubMed

    Doornwaard, Suzan M; ter Bogt, Tom F M; Reitz, Ellen; van den Eijnden, Regina J J M

    2015-01-01

    Research on the role of sex-related Internet use in adolescents' sexual development has often isolated the Internet and online behaviors from other, offline influencing factors in adolescents' lives, such as processes in the peer domain. The aim of this study was to test an integrative model explaining how receptive (i.e., use of sexually explicit Internet material [SEIM]) and interactive (i.e., use of social networking sites [SNS]) sex-related online behaviors interrelate with perceived peer norms in predicting adolescents' experience with sexual behavior. Structural equation modeling on longitudinal data from 1,132 Dutch adolescents (M(age) T1 = 13.95; range 11-17; 52.7% boys) demonstrated concurrent, direct, and indirect effects between sex-related online behaviors, perceived peer norms, and experience with sexual behavior. SEIM use (among boys) and SNS use (among boys and girls) predicted increases in adolescents' perceptions of peer approval of sexual behavior and/or in their estimates of the numbers of sexually active peers. These perceptions, in turn, predicted increases in adolescents' level of experience with sexual behavior at the end of the study. Boys' SNS use also directly predicted increased levels of experience with sexual behavior. These findings highlight the need for multisystemic research and intervention development to promote adolescents' sexual health.

  2. Care Models of eHealth Services: A Case Study on the Design of a Business Model for an Online Precare Service

    PubMed Central

    2015-01-01

    Background With a growing population of health care clients in the future, the organization of high-quality and cost-effective service provision becomes an increasing challenge. New online eHealth services are proposed as innovative options for the future. Yet, a major barrier to these services appears to be the lack of new business model designs. Although design efforts generally result in visual models, no such artifacts have been found in the literature on business model design. This paper investigates business model design in eHealth service practices from a design perspective. It adopts a research-by-design approach and seeks to unravel what characteristics of business models determine an online service and what the important value exchanges between health professionals and clients are. Objective The objective of the study was to analyze the construction of care models in depth, framing the essential elements of a business model, and to design a new care model that structures these elements for the particular context of an online precare service in practice. Methods This research employs the qualitative method of an in-depth case study in which different perspectives on constructing a care model are investigated. Data are collected by using the visual business modeling toolkit, designed to cocreate and visualize the business model. The cocreated models are transcribed and analyzed per actor perspective, transactions, and value attributes. Results We revealed eight new actors in the business model for providing the service. Essential actors are: the intermediary network coordinator connecting companies, the service-dedicated information technology specialists, and the service-dedicated health specialist. In the transactions for each service provided we found a certain type of contract, such as a license contract and service contracts for precare services and software products. In addition to efficiency, quality, and convenience, important value attributes

  3. Model-based evaluation of an on-line control strategy for SBRs based on OUR and ORP measurements.

    PubMed

    Corominas, Ll; Sin, G; Puig, S; Traore, A; Balaguer, M; Colprim, J; Vanrolleghem, P A

    2006-01-01

    Application of control strategies for existing wastewater treatment technologies becomes necessary to meet ever-stricter effluent legislation and reduce the associated treatment costs. In the case of SBR technology, controlling the phase scheduling is one of the key aspects of SBR operation. In this study a calibrated mechanistic model based on the ASM1 was used to evaluate an on-line control strategy for SBR phase-scheduling and compare it with the SBR's performance using no control strategy. To evaluate the performance, reference indices relating to the effluent quality, the required energy for aeration and the treated wastewater volume were used. The results showed that it is possible to maintain optimal SBR performance in the studied system at minimal costs by on-line control of the length of the aerobic and anoxic phases.
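    The phase-scheduling rules such a strategy relies on can be sketched as simple endpoint tests on OUR and ORP signals. The threshold values below are invented for illustration and are not taken from the paper:

```python
def aerobic_phase_done(our, our_endpoint=5.0):
    """End the aerated phase once the oxygen uptake rate (OUR, mg O2/L/h)
    falls to near-endogenous levels, signalling that the aerobic
    reactions are complete. Threshold value is illustrative."""
    return our < our_endpoint

def anoxic_phase_done(orp_slope, knee_slope=-1.5):
    """End the anoxic phase when the ORP profile shows the characteristic
    'nitrate knee': a sudden drop in slope (mV/min). Threshold is
    illustrative."""
    return orp_slope < knee_slope
```

    Shortening each phase to exactly the time the reactions need is what lets such a controller hold effluent quality while cutting aeration energy.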

  4. NMMB/BSC-DUST: an online mineral dust atmospheric model from meso to global scales

    NASA Astrophysics Data System (ADS)

    Haustein, K.; Pérez, C.; Jorba, O.; Baldasano, J. M.; Janjic, Z.; Black, T.; Nickovic, S.

    2009-04-01

    While mineral dust distribution and effects are important at global scales, they strongly depend on dust emissions that are controlled on small spatial and temporal scales. Most global dust models use prescribed wind fields provided by meteorological centers (e.g., NCEP and ECMWF) and their spatial resolution is currently never better than about 1°×1°. Regional dust models offer substantially higher resolution (10-20 km) and are typically coupled with weather forecast models that simulate processes that GCMs either cannot resolve or can resolve only poorly. These include internal circulation features such as the low-level nocturnal jet, which is a crucial feature for dust emission in several dust 'hot spot' sources in North Africa. Based on our modeling experience with the BSC-DREAM regional forecast model (http://www.bsc.es/projects/earthscience/DREAM/) we are currently implementing an improved mineral dust model [Pérez et al., 2008] coupled online with the new global/regional NMMB atmospheric model under development at NOAA/NCEP/EMC [Janjic, 2005]. The NMMB is an evolution of the operational WRF-NMM, extending from meso to global scales. The NMMB will become the next-generation NCEP model for operational weather forecast in 2010. The corresponding unified dynamical core ranges from meso to global scale, allowing regional and global simulations. It has an add-on non-hydrostatic module and is based on the Arakawa B-grid and hybrid pressure-sigma vertical coordinates. NMMB is fully embedded into the Earth System Modeling Framework (ESMF), treating dynamics and physics separately and coupling them easily within the ESMF structure. Our main goal is to provide global dust forecasts up to 7 days at mesoscale resolutions. New features of the model include a physically-based dust emission scheme after White [1979], Iversen and White [1982] and Marticorena and Bergametti [1995] that takes the effects of saltation and sandblasting into account

  5. MESSA: MEta-Server for protein Sequence Analysis

    PubMed Central

    2012-01-01

    Background Computational sequence analysis, that is, prediction of local sequence properties, homologs, spatial structure and function from the sequence of a protein, offers an efficient way to obtain needed information about proteins under study. Since reliable prediction is usually based on the consensus of many computer programs, meta-servers have been developed to fit such needs. Most meta-servers focus on one aspect of sequence analysis, while others incorporate more information, such as PredictProtein for local sequence feature predictions, SMART for domain architecture and sequence motif annotation, and GeneSilico for secondary and spatial structure prediction. However, as predictions of local sequence properties, three-dimensional structure and function are usually intertwined, it is beneficial to address them together. Results We developed a MEta-Server for protein Sequence Analysis (MESSA) to facilitate comprehensive protein sequence analysis and gather structural and functional predictions for a protein of interest. For an input sequence, the server exploits a number of select tools to predict local sequence properties, such as secondary structure, structurally disordered regions, coiled coils, signal peptides and transmembrane helices; detect homologous proteins and assign the query to a protein family; identify three-dimensional structure templates and generate structure models; and provide predictive statements about the protein's function, including functional annotations, Gene Ontology terms, enzyme classification and possible functionally associated proteins. We tested MESSA on the proteome of Candidatus Liberibacter asiaticus. Manual curation shows that three-dimensional structure models generated by MESSA covered around 75% of all the residues in this proteome and the function of 80% of all proteins could be predicted. Availability MESSA is free for non-commercial use at http://prodata.swmed.edu/MESSA/ PMID:23031578
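    The consensus idea underlying a meta-server can be illustrated with a per-residue majority vote over several predictors' secondary-structure strings (H/E/C). This is a simplified stand-in, not MESSA's actual combination logic:

```python
from collections import Counter

def consensus(predictions):
    """Per-residue majority vote over equal-length secondary-structure
    strings (H = helix, E = strand, C = coil) from several predictors.
    Ties fall to the first-encountered state."""
    return "".join(Counter(column).most_common(1)[0][0]
                   for column in zip(*predictions))

ss = consensus(["HHHCC", "HHECC", "HHHCE"])   # -> "HHHCC"
```

    Real meta-servers weight tools by benchmarked accuracy rather than voting uniformly, but the aggregation pattern is the same.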

  6. San Mateo County's Server Information Program (S.I.P.): A Community-Based Alcohol Server Training Program.

    ERIC Educational Resources Information Center

    de Miranda, John

    The field of alcohol server awareness and training has grown dramatically in the past several years and the idea of training servers to reduce alcohol problems has become a central fixture in the current alcohol policy debate. The San Mateo County, California Server Information Program (SIP) is a community-based prevention strategy designed to…

  7. The PhyloPythiaS Web Server for Taxonomic Assignment of Metagenome Sequences

    PubMed Central

    Patil, Kaustubh Raosaheb; Roune, Linus; McHardy, Alice Carolyn

    2012-01-01

    Metagenome sequencing is becoming common and there is an increasing need for easily accessible tools for data analysis. An essential step is the taxonomic classification of sequence fragments. We describe a web server for the taxonomic assignment of metagenome sequences with PhyloPythiaS. PhyloPythiaS is a fast and accurate sequence composition-based classifier that utilizes the hierarchical relationships between clades. Taxonomic assignments with the web server can be made with a generic model, or with sample-specific models that users can specify and create. Several interactive visualization modes and multiple download formats allow quick and convenient analysis and downstream processing of taxonomic assignments. Here, we demonstrate usage of our web server by taxonomic assignment of metagenome samples from an acidophilic biofilm community of an acid mine and of a microbial community from cow rumen. PMID:22745671

  8. Data decomposition of Monte Carlo particle transport simulations via tally servers

    SciTech Connect

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
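    The tracker/tally-server split can be sketched in a single process, with plain queues standing in for network messages and a modulo ownership map standing in for the real decomposition (OpenMC's implementation uses MPI across compute nodes):

```python
from queue import Queue
from collections import defaultdict

def tracker(events, server_queues):
    """Tracking processors produce (tally_index, score) events while
    following particles, and route each to the server owning that index."""
    n_servers = len(server_queues)
    for tally_index, score in events:
        server_queues[tally_index % n_servers].put((tally_index, score))

def tally_server(q):
    """Each tally server owns a disjoint slice of the global tally data
    and continuously accumulates the events routed to it."""
    tallies = defaultdict(float)
    while not q.empty():
        idx, score = q.get()
        tallies[idx] += score
    return dict(tallies)

queues = [Queue(), Queue()]
tracker([(0, 1.0), (1, 0.5), (2, 2.0), (0, 1.0)], queues)
results = [tally_server(q) for q in queues]
```

    The point of the decomposition is that no single node ever holds the full tally array, which is what circumvents the on-node memory constraint the abstract mentions.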

  9. The PhyloPythiaS web server for taxonomic assignment of metagenome sequences.

    PubMed

    Patil, Kaustubh Raosaheb; Roune, Linus; McHardy, Alice Carolyn

    2012-01-01

    Metagenome sequencing is becoming common and there is an increasing need for easily accessible tools for data analysis. An essential step is the taxonomic classification of sequence fragments. We describe a web server for the taxonomic assignment of metagenome sequences with PhyloPythiaS. PhyloPythiaS is a fast and accurate sequence composition-based classifier that utilizes the hierarchical relationships between clades. Taxonomic assignments with the web server can be made with a generic model, or with sample-specific models that users can specify and create. Several interactive visualization modes and multiple download formats allow quick and convenient analysis and downstream processing of taxonomic assignments. Here, we demonstrate usage of our web server by taxonomic assignment of metagenome samples from an acidophilic biofilm community of an acid mine and of a microbial community from cow rumen.

  10. Factors Affecting Perceived Learning, Satisfaction, and Quality in the Online MBA: A Structural Equation Modeling Approach

    ERIC Educational Resources Information Center

    Sebastianelli, Rose; Swift, Caroline; Tamimi, Nabil

    2015-01-01

    The authors examined how six factors related to content and interaction affect students' perceptions of learning, satisfaction, and quality in online master of business administration (MBA) courses. They developed three scale items to measure each factor. Using survey data from MBA students at a private university, the authors estimated structural…

  11. Enhancing Online Distance Education in Small Rural US Schools: A Hybrid, Learner-Centred Model

    ERIC Educational Resources Information Center

    de la Varre, Claire; Keane, Julie; Irvin, Matthew J.

    2011-01-01

    Online distance education (ODE) has become pervasive and can potentially transform pedagogical practices across primary, secondary and university-based educational systems. ODE is considered a flexible option for non-traditional students such as adult learners and home-schoolers, and a convenient way to deliver remedial courses. ODE is also a…

  12. Enhancing Online Distance Education in Small Rural US Schools: A Hybrid, Learner-Centred Model

    ERIC Educational Resources Information Center

    de la Varre, Claire; Keane, Julie; Irvin, Matthew J.

    2010-01-01

    Online distance education (ODE) has become pervasive and can potentially transform pedagogical practices across primary, secondary and university-based educational systems. ODE is considered a flexible option for non-traditional students such as adult learners and home-schoolers, and a convenient way to deliver remedial courses. ODE is also a…

  13. A Web-Based Model for Online Collaboration between Distance Learning and Campus Students.

    ERIC Educational Resources Information Center

    Mouza, Chrystalla; Kaplan, Danielle; Espinet, Ivana

    This paper presents a hybrid course framework at the Teachers College at Columbia University (New York) that seamlessly integrates a traditional course plan designed for on-campus students with an online course plan designed for distance learning students. Based on innovative teaching and learning principles, the course encourages active…

  14. Activity-Based Costing Models for Alternative Modes of Delivering On-Line Courses

    ERIC Educational Resources Information Center

    Garbett, Chris

    2011-01-01

    In recent years there has been growth in online distance learning courses. This has been prompted by; new technology such as the Internet, mobile learning, video and audio conferencing: the explosion in student numbers in Higher Education, and the need for outreach to a world wide market. Web-based distance learning is seen as a solution to…

  15. Attitudes towards Online Feedback on Writing: Why Students Mistrust the Learning Potential of Models

    ERIC Educational Resources Information Center

    Strobl, Carola

    2015-01-01

    This exploratory study sheds new light on students' perceptions of online feedback types for a complex writing task, summary writing from spoken input in a foreign language (L2), and investigates how these correlate with their actual learning to write. Students tend to favour clear-cut, instructivist rather than constructivist feedback, and guided…

  16. A Preliminary Evaluation of Short Blended Online Training Workshop for TPACK Development Using Technology Acceptance Model

    ERIC Educational Resources Information Center

    Alsofyani, Mohammed Modeef; Aris, Baharuddin bin; Eynon, Rebecca; Majid, Norazman Abdul

    2012-01-01

    The use of Short Blended Online Training (SBOT) for the development of Technological Pedagogical and Content Knowledge (TPACK) is a promising approach to facilitate the use of e-learning by academics. Adult learners prefer the blend of pedagogies such as the presentation, demonstration, practice and feedback if they are structured and…

  17. Social Work Online Education: A Model for Getting Started and Staying Connected

    ERIC Educational Resources Information Center

    Moore, Sharon E.; Golder, Seana; Sterrett, Emma; Faul, Anna C.; Yankeelov, Pam; Weathers Mathis, Lynetta; Barbee, Anita P.

    2015-01-01

    Social work education has been greatly affected by ongoing technological advances in society at large and in the academy. Options for instructional delivery have been broadened tremendously. The University of Louisville is the first in Kentucky to put its master's of social work degree fully online, with a first cohort admitted in 2012. The…

  18. Online Help-Seeking in Communities of Practice: Modeling the Acceptance of Conceptual Artifacts

    ERIC Educational Resources Information Center

    Nistor, Nicolae; Schworm, Silke; Werner, Matthias

    2012-01-01

    Interactive online help systems are considered to be a fruitful supplement to traditional IT helpdesks, which are often overloaded. They often comprise user-generated FAQ collections playing the role of technology-based conceptual artifacts. Two main questions arise: how the conceptual artifacts should be used, and which factors influence their…

  19. Quality Models in Online and Open Education around the Globe: State of the Art and Recommendations

    ERIC Educational Resources Information Center

    Ossiannilsson, Ebba; Williams, Keith; Camilleri, Anthony F.; Brown, Mark

    2015-01-01

    This report is written for: (1) institutional leaders responsible for quality in online, open and flexible higher education; (2) faculty wanting to have an overview of the field; (3) newcomers that want to develop quality schemes; (4) policy makers in governments, agencies and organisations; and (5) major educational stakeholders in the…

  20. The REEAL Model: A Framework for Faculty Training in Online Discussion Facilitation

    ERIC Educational Resources Information Center

    Bedford, Laurie

    2014-01-01

    Discussion forums are a primary tool for interactions in the online classroom. Discussions are a critical part of the learning process for students, and instructor facilitation should reflect this importance. Effective instructor discussion facilitation encourages students, provides evidence and analysis and links the discussion to subsequent…

  1. Evaluation of Online, On-Demand Science Professional Development Material Involving Two Different Implementation Models

    ERIC Educational Resources Information Center

    Sherman, Greg; Byers, Al; Rapp, Steve

    2008-01-01

    This report presents pilot-test results for a science professional development program featuring online, on-demand materials developed by the National Science Teachers Association. During the spring 2006 semester, 45 middle school teachers from three different school districts across the United States participated in a professional development…

  2. Applying Fuzzy Logic for Learner Modeling and Decision Support in Online Learning Systems

    ERIC Educational Resources Information Center

    Al-Aubidy, Kasim M.

    2005-01-01

    Advances in computers and multimedia technology have changed traditional methods for learning and skills training. Online learning continues to play a major role in the success of any academic program. Such learning can meet students' personalized learning needs, and it can provide an environment where virtual reality techniques are used to create interactive…

  3. Team Models in Online Course Development: A Unit-Specific Approach

    ERIC Educational Resources Information Center

    Alvarez, Deborah M.; Blair, Kristine; Monske, Elizabeth; Wolf, Amie

    2005-01-01

    This article profiles an educational technology assistance program titled Digital Language and Literacy, linking technologically literate graduate students in English with faculty developing online courses for the first time. Our reporting and assessment process includes the narrative evidence of two faculty and two graduate student instructional…

  4. Providing Role Models Online: Telementoring Gives Students Real-Life Connections in Science and Beyond.

    ERIC Educational Resources Information Center

    Bennett, Dorothy T.

    1997-01-01

    "Telementoring" programs, formal and informal online exchanges between students and working professionals, have flourished using e-mail. This article discusses telementoring and issues to consider (finding mentors, familiarity, frequency of exchange, preparation and facilitation, and closure) before creating a program and the Telementoring Young…

  5. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain, and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers. PMID:26090324
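    Since the DC coefficient of an orthonormal 2-D DCT is proportional to the block mean, the matching step can be sketched as follows. This is a simplified stand-in for the paper's DC-plus-moment-invariant matching; the feature database is invented:

```python
def dct_dc(block):
    """DC coefficient of an orthonormal N x N 2-D DCT-II: equal to
    (1/N) * (sum of all pixels), i.e. proportional to the block mean.
    This survives visual protection schemes that leave the DC intact."""
    n = len(block)
    return sum(sum(row) for row in block) / n

def match_by_dc(query_block, stored_dc):
    """Return the id of the stored protected image whose DC feature is
    closest to the query thumbnail's. 'stored_dc' (id -> DC value) is
    a hypothetical stand-in for the server's feature index."""
    q = dct_dc(query_block)
    return min(stored_dc, key=lambda image_id: abs(stored_dc[image_id] - q))
```

    Matching on features rather than pixel content is what lets the server pair a public thumbnail with its visually protected counterpart without ever seeing the true image.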

  6. RNAssess--a web server for quality assessment of RNA 3D structures.

    PubMed

    Lukasiak, Piotr; Antczak, Maciej; Ratajczak, Tomasz; Szachniuk, Marta; Popenda, Mariusz; Adamiak, Ryszard W; Blazewicz, Jacek

    2015-07-01

    Nowadays, various methodologies can be applied to model RNA 3D structure. Thus, the plausible quality assessment of 3D models has a fundamental impact on the progress of structural bioinformatics. Here, we present the RNAssess server, a novel tool dedicated to visual evaluation of RNA 3D models in the context of the known reference structure for a wide range of accuracy levels (from atomic to the whole-molecule perspective). The proposed server is based on the concept of local neighborhood, defined as a set of atoms observed within a sphere localized around a central atom of a particular residue. A distinctive feature of our server is the ability to perform simultaneous visual analysis of the model-reference structure coherence. RNAssess supports the quality assessment by delivering both static and interactive visualizations that allow easy identification of native-like models and/or chosen structural regions of the analyzed molecule. A combination of results provided by RNAssess allows us to rank analyzed models. RNAssess offers a new route to fast and efficient 3D model evaluation suitable for the RNA-Puzzles challenge. The proposed automated tool is implemented as a free, open-to-all-users web server with a user-friendly interface and can be accessed at: http://rnassess.cs.put.poznan.pl/. PMID:26068469
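    The local-neighborhood concept, a set of atoms within a sphere around a central atom, can be sketched directly; the radius and the atom records below are illustrative, not RNAssess's actual parameters:

```python
import math

def local_neighborhood(atoms, center, radius=5.0):
    """Return the names of atoms lying within a sphere of the given
    radius (angstroms; value is illustrative) around a central atom --
    the per-residue 'local neighborhood' the abstract describes.
    'atoms' maps atom name -> (x, y, z) coordinates."""
    return [name for name, xyz in atoms.items()
            if math.dist(xyz, center) <= radius]
```

    Comparing the neighborhoods of corresponding atoms in the model and the reference structure gives a locality-aware measure of coherence, from single atoms up to the whole molecule.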

  7. Volcanic ash modeling with the online NMMB/BSC-ASH-v1.0: A novel multiscale meteorological model for operational forecast

    NASA Astrophysics Data System (ADS)

    Marti, Alejandro; Folch, Arnau; Jorba, Oriol; Janjic, Zavisa

    2016-04-01

    Volcanic ash forecast became a research priority and a social concern as a consequence of the severe air-traffic disruptions caused by the eruptions of the Eyjafjallajökull (Iceland, 2010) and Cordón Caulle (Chile, 2011) volcanoes. Significant progress has taken place in the aftermath of these dramatic events to improve the accuracy of Volcanic Ash Transport and Dispersal (VATD) models and lessen their associated uncertainties. Various levels of uncertainty affect both the quantification of the source term and the driving meteorological inputs. Substantial research is being performed to reduce and quantify epistemic and aleatoric uncertainties affecting the source term. However, uncertainties arising from the driving NWPMs and their offline coupling with the VATDMs have received little attention, even though the experience from other communities (e.g. air quality) highlights the importance of coupling dispersal and meteorological modeling online. Consequently, the need for integrated predictions to represent the two-way feedback effects of volcanic pollutants on local-scale meteorology is timely. The aim of this talk is to present the NMMB/BSC-ASH, a new on-line multi-scale meteorological model to simulate the emission, transport and deposition of tephra particles released from volcanic eruptions. The model builds on the NMMB/BSC Chemical Transport Model (NMMB/BSC-CTM), which we have modified to account for the specifics of volcanic particles. The final objective in developing the NMMB/BSC-ASH model is two-fold. On one hand, at a research level, we aim at studying the differences between the online/offline approaches and quantifying the two-way feedback effect of dense volcanic ash clouds on the radiative budget and regional meteorology. On the other hand, at an operational level, the low computational cost of the NMMB dynamic core suggests that NMMB/BSC-ASH could be applied in the future for more accurate online operational forecasting of volcanic ash clouds.

  8. Reputation mechanism: From resolution for truthful online auctions to the model of optimal one-gambler problem

    SciTech Connect

    Bradonjic, Milan

    2009-01-01

    In this paper we study reputation mechanisms and show how the notion of reputation can help us in building truthful online auction mechanisms. From the mechanism design perspective, we derive the conditions for and design a truthful online auction mechanism. Moreover, in the case when some agents may lie or lack real knowledge about the other agents' reputations, we derive the resolution of the auction such that the mechanism remains truthful. Consequently, we move forward to the optimal one-gambler/one-seller problem and explain how that problem is a refinement of the previously discussed online auction design in the presence of a reputation mechanism. In the setting of the optimal one-gambler problem, we naturally raise and solve the specific question: what is an agent's optimal strategy to maximize his revenue? We would like to stress that our analysis goes beyond the scope of what game theory usually discusses under the notion of reputation. We model one-player games by introducing a new parameter (reputation), which helps us in predicting the agent's behavior in real-world situations, such as the behavior of a gambler, real-estate dealer, etc.

  9. Multiple-server Flexible Blind Quantum Computation in Networks

    NASA Astrophysics Data System (ADS)

    Kong, Xiaoqin; Li, Qin; Wu, Chunhui; Yu, Fang; He, Jinjun; Sun, Zhiyuan

    2016-06-01

    Blind quantum computation (BQC) can allow a client with limited quantum power to delegate his quantum computation to a powerful server and still keep his own data private. In this paper, we present a multiple-server flexible BQC protocol, where a client who only needs the ability to access quantum channels can delegate the computational task to a number of servers. In particular, the client's quantum computation can still be achieved even when one or more delegated quantum servers break down in networks. In other words, when connections to certain quantum servers are lost, clients can adjust flexibly and delegate their quantum computation to other servers. Obviously, the computation will be unsuccessful if all servers are interrupted.
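    The client-side failover behavior described, re-delegating when a server connection is lost, can be sketched classically; the server objects and their call API below are hypothetical, and the quantum-channel details are out of scope:

```python
def delegate_with_failover(servers, task):
    """Sketch of the 'flexible' property: try delegated servers in order
    and re-delegate the task when a connection is lost. Server call
    API is hypothetical."""
    for server in servers:
        try:
            return server(task)
        except ConnectionError:
            continue  # connection lost: adjust and delegate elsewhere
    raise RuntimeError("all delegated servers are interrupted")

def unreachable(task):
    """Stand-in for a quantum server whose connection has broken down."""
    raise ConnectionError("connection to quantum server lost")

result = delegate_with_failover([unreachable, lambda t: t * 2], 21)
```

    As the abstract notes, the scheme degrades gracefully until the last server: only when every delegated server is interrupted does the computation fail.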

  10. MAP(2.0)3D: a sequence/structure based server for protein engineering.

    PubMed

    Verma, Rajni; Schwaneberg, Ulrich; Roccatano, Danilo

    2012-04-20

    The Mutagenesis Assistant Program (MAP) is a web-based tool to provide statistical analyses of the mutational biases of directed evolution experiments on amino acid substitution patterns. MAP analysis assists protein engineers in the benchmarking of random mutagenesis methods that generate single nucleotide mutations in a codon. Herein, we describe a completely renewed and improved version of the MAP server, the MAP(2.0)3D server, which correlates the generated amino acid substitution patterns to the structural information of the target protein. This correlation aids in the selection of a more suitable random mutagenesis method with specific biases on amino acid substitution patterns. In particular, the new server represents MAP indicators on secondary and tertiary structure and correlates them to specific structural components such as hydrogen bonds, hydrophobic contacts, salt bridges, solvent accessibility, and crystallographic B-factors. Three model proteins (D-amino acid oxidase, phytase, and N-acetylneuraminic acid aldolase) are used to illustrate the novel capability of the server. MAP(2.0)3D server is available publicly at http://map.jacobs-university.de/map3d.html.
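    MAP's statistics are far richer than this, but the mutation class it benchmarks, single nucleotide substitutions within a codon, can be enumerated directly from the standard genetic code. The sketch below (helper names invented for illustration) lists the amino acid substitutions reachable from one codon by a single nucleotide change.

    ```python
    # Standard genetic code, packed in the conventional T/C/A/G ordering:
    # codon index = 16*first base + 4*second base + third base.
    BASES = "TCAG"
    AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

    def translate(codon):
        i = (BASES.index(codon[0]) * 16
             + BASES.index(codon[1]) * 4
             + BASES.index(codon[2]))
        return AA[i]

    def single_nt_substitutions(codon):
        """Amino acids reachable from `codon` by one nucleotide change."""
        wt = translate(codon)
        reachable = set()
        for pos in range(3):
            for b in BASES:
                if b != codon[pos]:
                    aa = translate(codon[:pos] + b + codon[pos + 1:])
                    if aa != wt:
                        reachable.add(aa)
        return reachable

    subs = single_nt_substitutions("GAT")  # Asp (D)
    ```

    For the aspartate codon GAT, only seven amino acid substitutions are reachable by a single nucleotide mutation; comparing such reachable sets against the substitutions a method actually produces is the kind of bias analysis MAP automates.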

  11. Online Degrees.

    ERIC Educational Resources Information Center

    Dolezalek, Holly

    2003-01-01

    Discusses the trend of trainers who are getting degrees through online courses delivered via the Internet. Addresses accreditation issues and what to ask before enrolling in online degree programs. (JOW)

  12. A web-server of cell type discrimination system.

    PubMed

    Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, no user-friendly system has been available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication on cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models to analyze the data, and present results to users. This framework is flexible and easy to extend to other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types, such as cancer cells. PMID:24578634
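    The underlying idea, classifying a sample's cell type from DNA methylation levels at marker loci, can be illustrated with a toy nearest-centroid rule. WCTDS's real models are more sophisticated; the loci, beta values, and centroids below are entirely invented.

    ```python
    # Toy sketch only: nearest-centroid classification of a sample's
    # methylation profile (beta values in [0, 1]) at three marker loci.
    # All numbers are invented for illustration.

    CENTROIDS = {            # mean methylation per marker locus (made up)
        "ESC":  [0.10, 0.20, 0.90],
        "iPSC": [0.15, 0.25, 0.85],
        "SC":   [0.80, 0.70, 0.20],
    }

    def classify(sample):
        """Return the cell type whose centroid is closest (squared
        Euclidean distance) to the sample's methylation profile."""
        def dist(cell_type):
            return sum((s - m) ** 2
                       for s, m in zip(sample, CENTROIDS[cell_type]))
        return min(CENTROIDS, key=dist)

    label = classify([0.12, 0.22, 0.88])
    ```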

  14. Implementation of an online chemical mechanism within a global-regional atmospheric model: design and initial steps

    NASA Astrophysics Data System (ADS)

    Jorba, O.; Pérez, C.; Baldasano, J. M.

    2009-04-01

    Chemical processes in air quality modelling systems are usually treated independently from the meteorological models. This approach is computationally attractive, since off-line chemical transport simulations only require a single meteorological dataset to produce many chemical simulations. However, this separation of chemistry and meteorology loses important information about atmospheric processes and does not allow for feedbacks between chemistry and meteorology. To take such processes into account, current models are evolving towards an online coupling of chemistry and meteorology to produce consistent chemical weather predictions. The Earth Sciences Department of the Barcelona Supercomputing Center (BSC) develops NMMB/BSC-DUST (Pérez et al., 2008), an online dust model within the global-regional NCEP/NMMB numerical weather prediction model (Janjic and Black, 2007) under development at the National Centers for Environmental Prediction (NCEP). The current implementation is based on the well-established regional dust model and forecast system DREAM (Nickovic et al., 2001). The most relevant characteristics of NMMB/BSC-DUST are its online coupling of the dust scheme with the meteorological driver, its wide range of applications from meso to global scales, and its inclusion of dust radiative effects, allowing feedbacks between aerosols and meteorology. To complement this development, BSC is also working on the implementation of a fully coupled online chemical mechanism within NMMB/BSC-DUST. The final objective is to develop a fully chemical weather prediction system able to resolve gas-aerosol-meteorology interactions from global to local scales. In this contribution we will present the design of the chemistry coupling and the current progress of its implementation. Following the NCEP/NMMB approach, the chemistry part will be coupled through the Earth System Modeling Framework (ESMF) as a pluggable component. The chemical mechanism and chemistry solver is
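    The "pluggable component" pattern mentioned above (ESMF components expose initialize/run/finalize phases driven by a parent model) can be mimicked with a toy driver. This is not the real ESMF API; the class, the trivial first-order "solver", and every field name are invented purely to illustrate the contract an online-coupled chemistry component satisfies.

    ```python
    # Toy sketch of an initialize/run/finalize component contract,
    # loosely mimicking how an online chemistry component receives the
    # meteorological state from its driver at every time step.
    # Hypothetical names and physics throughout.

    class ChemistryComponent:
        def initialize(self, config):
            self.species = config["species"]
            self.tendencies = {s: 0.0 for s in self.species}

        def run(self, met_state, dt):
            # Toy "solver": first-order relative loss per tracer, with a
            # rate that depends on the meteorology passed in online.
            k = 1e-4 * met_state["temperature"] / 300.0
            for s in self.species:
                self.tendencies[s] = -k * dt

        def finalize(self):
            self.tendencies.clear()

    comp = ChemistryComponent()
    comp.initialize({"species": ["O3", "NO2"]})
    comp.run({"temperature": 300.0}, dt=60.0)
    ```

    The point of the pattern is that the driver owns the time loop and hands the component a fresh meteorological state each step, which is what makes chemistry-meteorology feedbacks possible in an online model.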

  15. Using Social Media to Expand Peer-to-Peer Discussion in an Online Course about Regional Climate Modeling

    NASA Astrophysics Data System (ADS)

    Yarker, M. B.; Mesquita, M. D. S.

    2015-12-01

    The goal of this project is to make knowledge about regional climate modeling accessible to anyone in any location, regardless of their resources. We accomplish this through the development of a free online course, which introduces novice model users to an educational version of the Weather Research and Forecasting model (e-WRF). These courses are grounded in education theory and have been described in detail at prior AGU meetings (Kelsey et al. 2014, Walton et al. 2014, Yarker & Mesquita 2013). Research indicates that effective dialogue is an important component of successful learning and displays the following elements: asking complex questions, deep discussion, and use of evidence to construct arguments (Benus et al. 2013). These can happen between the student and tutor, but peer-to-peer interaction is especially important, as well as the most difficult aspect of social constructivism to achieve, especially in an online setting. In our online courses, the standard course forums were underutilized, generally only used to ask the tutor clarifying questions or to troubleshoot error messages. To rectify this problem, we began using social media to facilitate conversation and have noticed a vast improvement in peer-to-peer communication. Moreover, we created a community of over 700 regional climate modelers from around the world who share information, ask questions, and create research projects relating to climate change. Data were gathered by qualitatively analyzing forum and Facebook posts and quantitatively analyzing survey data from participants in both courses. Facebook participants posted to the group more often, and about a wider variety of topics, than forum participants. Additionally, there was a statistically significant increase (Student's t test and Mann-Whitney test) in the elements of effective dialogue. We conclude that social media can serve as a possible tool in the development of online learning, especially for difficult concepts like regional climate
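    The Mann-Whitney comparison mentioned above can be sketched from first principles: the U statistic counts, over all cross-group pairs, how often one group's value exceeds the other's. The per-participant counts below are invented for illustration; the study's actual data are not reproduced here.

    ```python
    # Hand-rolled Mann-Whitney U on per-participant counts of "effective
    # dialogue" elements (numbers invented), forum vs. Facebook groups.

    def mann_whitney_u(x, y):
        """U statistic for sample x against sample y; ties count 1/2."""
        u = 0.0
        for xi in x:
            for yj in y:
                if xi > yj:
                    u += 1.0
                elif xi == yj:
                    u += 0.5
        return u

    forum = [1, 0, 2, 1]       # hypothetical counts per forum participant
    facebook = [3, 4, 2, 5]    # hypothetical counts per Facebook participant
    u = mann_whitney_u(facebook, forum)
    ```

    Here U is close to its maximum of len(facebook) * len(forum) = 16, i.e. the Facebook counts almost uniformly dominate; in practice one would obtain a p-value from the U distribution rather than eyeball the statistic.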

  16. PSSweb: protein structural statistics web server.

    PubMed

    Gaillard, Thomas; Stote, Roland H; Dejaegere, Annick

    2016-07-01

    With the increasing number of protein structures available, there is a need for tools capable of automating the comparison of ensembles of structures, a common requirement in structural biology and bioinformatics. PSSweb is a web server for protein structural statistics. It takes as input an ensemble of PDB files of protein structures, performs a multiple sequence alignment and computes structural statistics for each position of the alignment. Different optional functionalities are proposed: structure superposition, Cartesian coordinate statistics, dihedral angle calculation and statistics, and a cluster analysis based on dihedral angles. An interactive report is generated, containing a summary of the results, tables, figures and 3D visualization of superposed structures. The server is available at http://pssweb.org.
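    One per-position statistic of the kind PSSweb reports can be sketched directly: given an ensemble aligned column by column, compute a mean dihedral angle at each alignment position. Because dihedrals are circular (179 degrees and -179 degrees should average near 180, not 0), the mean is taken on the unit circle. The input angles are invented; PSSweb itself parses PDB files.

    ```python
    # Sketch: circular mean of a dihedral angle per alignment position,
    # across an ensemble of structures. Input values invented.
    import math

    def circular_mean_deg(angles):
        """Mean of angles (degrees) computed on the unit circle, so that
        values near the +/-180 wrap-around average correctly."""
        s = sum(math.sin(math.radians(a)) for a in angles)
        c = sum(math.cos(math.radians(a)) for a in angles)
        return math.degrees(math.atan2(s, c))

    # One list of phi angles per alignment position, across 3 structures.
    ensemble_phi = [[-60.0, -58.0, -62.0], [179.0, -179.0, 178.0]]
    means = [round(circular_mean_deg(col), 1) for col in ensemble_phi]
    ```

    A naive arithmetic mean of the second column would give about 59 degrees, which is why circular statistics matter for dihedral-angle summaries like these.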

  17. The Uppsala Electron-Density Server.

    PubMed

    Kleywegt, Gerard J; Harris, Mark R; Zou, Jin Yu; Taylor, Thomas C; Wählby, Anders; Jones, T Alwyn

    2004-12-01

    The Uppsala Electron Density Server (EDS; http://eds.bmc.uu.se/) is a web-based facility that provides access to electron-density maps and statistics concerning the fit of crystal structures and their maps. Maps are available for approximately 87% of the crystallographic Protein Data Bank (PDB) entries for which structure factors have been deposited and for which straightforward map calculations succeed in reproducing the published R value to within five percentage points. Here, an account is provided of the methods that are used to generate the information contained in the server. Some of the problems that are encountered in the map-generation process as well as some spin-offs of the project are also discussed.
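    The inclusion rule stated above (structure factors deposited, and the recalculated R value within five percentage points of the published one) amounts to a simple predicate. The field names below are invented; EDS's actual pipeline involves full map calculations.

    ```python
    # The EDS eligibility rule as a tiny predicate. R values are
    # expressed as fractions, so 5 percentage points = 0.05.
    # Field names are hypothetical.

    def eds_eligible(entry):
        if not entry["has_structure_factors"]:
            return False
        return abs(entry["recalculated_r"] - entry["published_r"]) <= 0.05

    ok = eds_eligible({"has_structure_factors": True,
                       "published_r": 0.19, "recalculated_r": 0.22})
    ```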

  18. PSSweb: protein structural statistics web server

    PubMed Central

    Gaillard, Thomas; Stote, Roland H.; Dejaegere, Annick

    2016-01-01

    With the increasing number of protein structures available, there is a need for tools capable of automating the comparison of ensembles of structures, a common requirement in structural biology and bioinformatics. PSSweb is a web server for protein structural statistics. It takes as input an ensemble of PDB files of protein structures, performs a multiple sequence alignment and computes structural statistics for each position of the alignment. Different optional functionalities are proposed: structure superposition, Cartesian coordinate statistics, dihedral angle calculation and statistics, and a cluster analysis based on dihedral angles. An interactive report is generated, containing a summary of the results, tables, figures and 3D visualization of superposed structures. The server is available at http://pssweb.org. PMID:27174930

  19. Energy Servers Deliver Clean, Affordable Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    K.R. Sridhar developed a fuel cell device for Ames Research Center that could use solar power to split water into oxygen for breathing and hydrogen for fuel on Mars. Sridhar saw the potential of the technology, when reversed, to create clean energy on Earth. He founded Bloom Energy, of Sunnyvale, California, to advance the technology. Today, the Bloom Energy Server is providing cost-effective, environmentally friendly energy to a host of companies such as eBay, Google, and The Coca-Cola Company. Bloom's NASA-derived Energy Servers generate energy that is about 67-percent cleaner than a typical coal-fired power plant when using fossil fuels and 100-percent cleaner with renewable fuels.

  20. A Service Value Model for Continued Use of Online Services: Conceptual Development and Empirical Examination

    ERIC Educational Resources Information Center

    Hu, Tao

    2009-01-01

    Online services (OLS) provide billions of Internet users with a variety of opportunities to exchange goods, share information, and develop or maintain relationships. Popular examples of OLS web sites include eBay.com, Amazon.com, Dell.com, Craigslist.com, MSN.com, Yahoo.com, LinkedIn.com, Zillow.com, Facebook.com, Wikipedia.com, and Twitter.com.…