Sample records for high performance shared

  1. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    Shared Storage Usage Policy Shared Storage Usage Policy To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. /projects NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project

  2. Team Development for High Performance Management.

    ERIC Educational Resources Information Center

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  3. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    ERIC Educational Resources Information Center

    Patrick, Christina M.

    2011-01-01

    This thesis presents a uniquely designed, end-to-end interference-minimizing, high-performance I/O stack that spans multi-level shared buffer cache hierarchies accessing shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  4. Shared Features of High-Performing After-School Programs: A Follow-Up to the TASC Evaluation

    ERIC Educational Resources Information Center

    Birmingham, Jennifer; Pechman, Ellen M.; Russell, Christina A.; Mielke, Monica

    2005-01-01

    This study examined high-performing after-school projects funded by The After-School Corporation (TASC), to determine what characteristics, if any, these projects shared. Evaluators reanalyzed student performance data collected during the multi-year evaluation of the TASC initiative to identify projects where the after-school program was…

  5. The effect of coworker knowledge sharing on performance and its boundary conditions: an interactional perspective.

    PubMed

    Kim, Seckyoung Loretta; Yun, Seokhwa

    2015-03-01

    Considering the importance of coworkers and knowledge sharing in current business environment, this study intends to advance understanding by investigating the effect of coworker knowledge sharing on focal employees' task performance. Furthermore, by taking an interactional perspective, this study examines the boundary conditions of coworker knowledge sharing on task performance. Data from 149 samples indicate that there is a positive relationship between coworker knowledge sharing and task performance, and this relationship is strengthened when general self-efficacy or abusive supervision is low rather than high. Our findings suggest that the recipients' characteristics and leaders' behaviors could be important contingent factors that limit the effect of coworker knowledge sharing on task performance. Implications for theory and practice are discussed. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  6. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
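    The distinction this abstract draws between relaxed and sequentially consistent shared memory can be made concrete with a small interleaving enumerator (an illustrative Python sketch, not the authors' formalism): under sequential consistency, the classic store-buffering litmus test, where each thread writes its own flag and then reads the other's, can never end with both reads returning 0.

```python
# Enumerate all sequentially consistent executions of the
# store-buffering litmus test and collect the possible outcomes.
T0 = [("w", "x"), ("r", "y")]  # thread 0: write x = 1, then read y
T1 = [("w", "y"), ("r", "x")]  # thread 1: write y = 1, then read x

def interleavings(a, b):
    """All merges of a and b that preserve each thread's program order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    mem = {"x": 0, "y": 0}
    reads = {}
    for kind, var in schedule:
        if kind == "w":
            mem[var] = 1
        else:
            reads[var] = mem[var]
    return (reads["x"], reads["y"])  # (thread 1's read, thread 0's read)

outcomes = {run(s) for s in interleavings(T0, T1)}
# (0, 0) would mean each read missed the other thread's earlier write,
# which no single total order allows; relaxed models do permit it.
assert (0, 0) not in outcomes
```

A machine with store buffers and relaxed semantics can produce exactly the (0, 0) outcome this enumeration rules out, which is why the programmer must either reason in the relaxed model or adopt a constrained programming style.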

  7. Effects of Peer-Tutor Competences on Learner Cognitive Load and Learning Performance during Knowledge Sharing

    ERIC Educational Resources Information Center

    Hsiao, Ya-Ping; Brouns, Francis; van Bruggen, Jan; Sloep, Peter B.

    2012-01-01

    In Learning Networks, learners need to share knowledge with others to build knowledge. In particular, when working on complex tasks, they often need to acquire extra cognitive resources from others to process a high task load. However, without support, the high task load and organizing knowledge sharing themselves might easily overload learners'…

  8. One Big Happy Family? Unraveling the Relationship between Shared Perceptions of Team Psychological Contracts, Person-Team Fit and Team Performance.

    PubMed

    Gibbard, Katherine; Griep, Yannick; De Cooman, Rein; Hoffart, Genevieve; Onen, Denis; Zareipour, Hamidreza

    2017-01-01

    With the knowledge that team work is not always associated with high(er) performance, we draw from the Multi-Level Theory of Psychological Contracts, Person-Environment Fit Theory, and Optimal Distinctiveness Theory to study shared perceptions of psychological contract (PC) breach in relation to shared perceptions of complementary and supplementary fit to explain why some teams perform better than other teams. We collected three repeated survey measures in a sample of 128 respondents across 46 teams. After having made sure that we met all statistical criteria, we aggregated our focal variables to the team-level and analyzed our data by means of a longitudinal three-wave autoregressive moderated-mediation model in which each relationship was one-time lag apart. We found that shared perceptions of PC breach were directly negatively related to team output and negatively related to perceived team member effectiveness through a decrease in shared perceptions of supplementary fit. However, we also demonstrated a beneficial process in that shared perceptions of PC breach were positively related to shared perceptions of complementary fit, which in turn were positively related to team output. Moreover, best team output appeared in teams that could combine high shared perceptions of complementary fit with modest to high shared perceptions of supplementary fit. Overall, our findings seem to indicate that in terms of team output there may be a bright side to perceptions of PC breach and that perceived person-team fit may play an important role in this process.

  9. One Big Happy Family? Unraveling the Relationship between Shared Perceptions of Team Psychological Contracts, Person-Team Fit and Team Performance

    PubMed Central

    Gibbard, Katherine; Griep, Yannick; De Cooman, Rein; Hoffart, Genevieve; Onen, Denis; Zareipour, Hamidreza

    2017-01-01

    With the knowledge that team work is not always associated with high(er) performance, we draw from the Multi-Level Theory of Psychological Contracts, Person-Environment Fit Theory, and Optimal Distinctiveness Theory to study shared perceptions of psychological contract (PC) breach in relation to shared perceptions of complementary and supplementary fit to explain why some teams perform better than other teams. We collected three repeated survey measures in a sample of 128 respondents across 46 teams. After having made sure that we met all statistical criteria, we aggregated our focal variables to the team-level and analyzed our data by means of a longitudinal three-wave autoregressive moderated-mediation model in which each relationship was one-time lag apart. We found that shared perceptions of PC breach were directly negatively related to team output and negatively related to perceived team member effectiveness through a decrease in shared perceptions of supplementary fit. However, we also demonstrated a beneficial process in that shared perceptions of PC breach were positively related to shared perceptions of complementary fit, which in turn were positively related to team output. Moreover, best team output appeared in teams that could combine high shared perceptions of complementary fit with modest to high shared perceptions of supplementary fit. Overall, our findings seem to indicate that in terms of team output there may be a bright side to perceptions of PC breach and that perceived person-team fit may play an important role in this process. PMID:29170648

  10. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM-SP1 is presented.

  11. Maintenance service contract model for heavy equipment in mining industry using principal agent theory

    NASA Astrophysics Data System (ADS)

    Pakpahan, Eka K. A.; Iskandar, Bermawi P.

    2015-12-01

    The mining industry is characterized by high operational revenue, so high availability of the heavy equipment used is critical to meeting revenue targets. To maintain high availability, the equipment's owner hires an agent to perform maintenance, and a contract is used to govern the relationship between the two parties. Traditional contracts such as fixed price, cost plus, or penalty-based contracts are unable to push the agent's performance beyond the target, which in turn leads to a sub-optimal result (revenue). This research deals with designing maintenance contract compensation schemes. The scheme should induce the agent to select the highest possible maintenance effort level, thereby raising the agent's performance and achieving maximum utility for both parties. Principal agent theory is used as the modeling approach because it can simultaneously model the owner's and the agent's decision-making processes. The compensation schemes considered include fixed price, cost sharing, and revenue sharing. The optimal decision is obtained using a numerical method. The results show that if both parties are risk neutral, there are infinitely many combinations of fixed price, cost sharing, and revenue sharing that produce the same optimal solution. The combination of fixed price and cost sharing contracts is optimal when the agent is risk averse, as is the combination of fixed price and revenue sharing. When both parties are risk averse, the optimal compensation scheme is a combination of fixed price, cost sharing, and revenue sharing.
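    The three compensation components the record combines (fixed price, cost sharing, revenue sharing) can be sketched as a single payment function; the numbers and parameter names below are hypothetical illustrations, not values or formulas from the paper.

```python
def agent_payment(fixed, maintenance_cost, revenue,
                  cost_share=0.5, revenue_share=0.1):
    """Agent's compensation under a combined scheme (illustrative only):
    a fixed fee, plus reimbursement of a fraction of maintenance cost,
    plus a fraction of the owner's operational revenue."""
    return fixed + cost_share * maintenance_cost + revenue_share * revenue

# E.g., a 100 fixed fee, half of a 200 maintenance cost reimbursed,
# and 10% of 5000 in revenue shared:
pay = agent_payment(100, 200, 5000)
assert pay == 700.0
```

Tuning the two share parameters against the agent's effort choice is what the paper's principal-agent model does numerically.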

  12. High-performing trauma teams: frequency of behavioral markers of a shared mental model displayed by team leaders and quality of medical performance.

    PubMed

    Johnsen, Bjørn Helge; Westli, Heidi Kristina; Espevik, Roar; Wisborg, Torben; Brattebø, Guttorm

    2017-11-10

    High-quality team leadership is important for the outcome of medical emergencies, yet the behavioral markers of leadership are not well defined. The present study investigated the relationship between the frequency of behavioral markers of shared mental models (SMM) and the quality of medical management. Video recordings of 27 trauma teams training on simulated emergencies were analyzed according to the team leader's frequency of shared mental model behavioral markers. The results showed a positive correlation between quality of medical management and leaders sharing information without an explicit demand for it ("push" of information), communicating their situational awareness (SA), and demonstrating implicit supporting behavior. When the sample was separated into higher- versus lower-performing teams, the higher-performing teams had leaders who displayed a greater frequency of "push" of information, communication of SA, and supportive behavior. No difference was found for the behavioral marker of team initiative, measured as bringing up suggestions to other team members. The results of this study emphasize the team leader's role in initiating and updating a team's shared mental model. Team leaders should also set expectations for acceptable interaction patterns (e.g., promoting information exchange) and create a team climate that encourages behaviors such as mutual performance monitoring, backup behavior, and adaptability to enhance SMM.

  13. Endogenous Groups and Dynamic Selection in Mechanism Design*

    PubMed Central

    Madeira, Gabriel A.; Townsend, Robert M.

    2010-01-01

    We create a dynamic theory of endogenous risk sharing groups, with good internal information, and their coexistence with relative performance, individualistic regimes, which are informationally more opaque. Inequality and organizational form are determined simultaneously. Numerical techniques and succinct re-formulations of mechanism design problems with suitable choice of promised utilities allow the computation of a stochastic steady state and its transitions. Regions of low inequality and moderate to high wealth (utility promises) produce the relative performance regime, while regions of high inequality and low wealth produce the risk sharing group regime. If there is a cost to prevent coalitions, risk sharing groups emerge at high wealth levels also. Transitions from the relative performance regime to the group regime tend to occur when rewards to observed outputs exacerbate inequality, while transitions from the group regime to the relative performance regime tend to come with a decrease in utility promises. Some regions of inequality and wealth deliver long term persistence of organization form and inequality, while other regions deliver high levels of volatility. JEL Classification Numbers: D23,D71,D85,O17. PMID:20107614

  14. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high-performance computing systems. BurstFS presents a shared file system space across the burst buffers so that applications that use shared files can access the highly scalable burst buffers without modification.

  15. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem is exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs are presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
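    The compute/bandwidth balance this record emphasizes is often captured today by a roofline-style estimate (a generic sketch, not taken from the paper): attainable throughput is the lesser of peak compute and arithmetic intensity times memory bandwidth, so adding processors without adding bandwidth stops helping.

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline estimate: a kernel is memory-bound whenever its
    arithmetic intensity (flops per byte moved) times the memory
    bandwidth falls below the machine's peak compute rate."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# A low-intensity imaging kernel (0.25 flop/byte; hypothetical numbers)
# on a node with 100 GFLOP/s peak and a 20 GB/s shared memory bus:
assert attainable_gflops(100, 20, 0.25) == 5.0   # bandwidth-bound
# Doubling peak compute without widening the shared bus changes nothing,
# which is exactly the failure-to-scale symptom described above.
assert attainable_gflops(200, 20, 0.25) == 5.0
```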

  16. Explaining efficient search for conjunctions of motion and form: evidence from negative color effects.

    PubMed

    Dent, Kevin

    2014-05-01

    Dent, Humphreys, and Braithwaite (2011) showed substantial costs to search when a moving target shared its color with a group of ignored static distractors. The present study further explored the conditions under which such costs to performance occur. Experiment 1 tested whether the negative color-sharing effect was specific to cases in which search showed a highly serial pattern. The results showed that the negative color-sharing effect persisted in the case of a target defined as a conjunction of movement and form, even when search was highly efficient. In Experiment 2, the ease with which participants could find an odd-colored target amongst a moving group was examined. Participants searched for a moving target amongst moving and stationary distractors. In Experiment 2A, participants performed a highly serial search through a group of similarly shaped moving letters. Performance was much slower when the target shared its color with a set of ignored static distractors. The exact same displays were used in Experiment 2B; however, participants now responded "present" for targets that shared the color of the static distractors. The same targets that had previously been difficult to find were now found efficiently. The results are interpreted in a flexible framework for attentional control. Targets that are linked with irrelevant distractors by color tend to be ignored. However, this cost can be overridden by top-down control settings.

  17. Message Passing and Shared Address Space Parallelism on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
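    The two paradigms compared in this record can be caricatured with Python threads (a toy stand-in for MPI and SAS, not the authors' benchmark codes): the shared address space style updates common state under a lock, while the message passing style sends partial results to a collector.

```python
import threading
import queue

# Partition 0..99 into four strided chunks, one per worker.
chunks = [list(range(i, 100, 4)) for i in range(4)]

# Shared address space style: workers update one shared accumulator,
# guarded by a lock (brief critical section after private computation).
total_sas = 0
lock = threading.Lock()

def sas_worker(chunk):
    global total_sas
    local = sum(chunk)          # compute privately first
    with lock:                  # then update shared state
        total_sas += local

# Message passing style: workers send partial sums to a queue and a
# single consumer combines them (analogous to an MPI reduce).
results = queue.Queue()

def mp_worker(chunk):
    results.put(sum(chunk))

threads = [threading.Thread(target=sas_worker, args=(c,)) for c in chunks]
threads += [threading.Thread(target=mp_worker, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

total_mp = sum(results.get() for _ in range(4))
assert total_sas == total_mp == sum(range(100))  # both styles agree
```

The trade-off the paper measures shows up even here: the SAS version is shorter to write, but its correctness hinges on disciplined locking, while the message-passing version makes all communication explicit.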

  18. Message Passing vs. Shared Address Space on a Cluster of SMPs

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak

    2000-01-01

    The convergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has made them an attractive platform for high-end scientific computing. Currently, message passing and shared address space (SAS) are the two leading programming paradigms for these systems. Message passing has been standardized with MPI and is the most common and mature programming approach. However, message-passing code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit high efficiency under shared-memory programming due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems SAS performance is competitive with MPI. We also present new algorithms for improving the PC cluster performance of MPI collective operations.

  19. Improvement of multiprocessing performance by using optical centralized shared bus

    NASA Astrophysics Data System (ADS)

    Han, Xuliang; Chen, Ray T.

    2004-06-01

    With the ever-increasing need to solve larger and more complex problems, multiprocessing is attracting more and more research efforts. One of the challenges facing the multiprocessor designers is to fulfill in an effective manner the communications among the processes running in parallel on multiple multiprocessors. The conventional electrical backplane bus provides narrow bandwidth as restricted by the physical limitations of electrical interconnects. In the electrical domain, in order to operate at high frequency, the backplane topology has been changed from the simple shared bus to the complicated switched medium. However, the switched medium is an indirect network. It cannot support multicast/broadcast as effectively as the shared bus. Besides the additional latency of going through the intermediate switching nodes, signal routing introduces substantial delay and considerable system complexity. Alternatively, optics has been well known for its interconnect capability. Therefore, it has become imperative to investigate how to improve multiprocessing performance by utilizing optical interconnects. From the implementation standpoint, the existing optical technologies still cannot fulfill the intelligent functions that a switch fabric should provide as effectively as their electronic counterparts. Thus, an innovative optical technology that can provide sufficient bandwidth capacity, while at the same time retaining the essential merits of the shared bus topology, is highly desirable for multiprocessing performance improvement. In this paper, the optical centralized shared bus is proposed for use in multiprocessing systems. This novel optical interconnect architecture not only utilizes the beneficial characteristics of optics, but also retains the desirable properties of the shared bus topology. Meanwhile, from the architecture standpoint, it fits well in the centralized shared-memory multiprocessing scheme. Therefore, a smooth migration with substantial multiprocessing performance improvement is expected. To prove the technical feasibility from the architecture standpoint, a conceptual emulation of the centralized shared-memory multiprocessing scheme is demonstrated on a generic PCI subsystem with an optical centralized shared bus.

  20. Modification of Existing Prestressed Girder Cross-Sections for the Optimal Structural Use of Ultra-High Performance Concrete

    DOT National Transportation Integrated Search

    2008-10-22

    Ultra High Performance Concrete (UHPC) is a class of cementitious materials that share similar characteristics, including very high compressive strength, tensile strength greater than that of conventional concrete, and high durability. The material consists ...

  1. LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzak, Jakub; Luszczek, Piotr; Faverge, Mathieu

    2012-03-01

    LU factorization with partial pivoting is a canonical numerical procedure and the main component of the High Performance LINPACK benchmark. This article presents an implementation of the algorithm for a hybrid shared-memory system with standard CPU cores and GPU accelerators. Performance in excess of one TeraFLOPS is achieved using four AMD Magny-Cours CPUs and four NVIDIA Fermi GPUs.
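    For reference, the textbook algorithm behind this record, LU factorization with partial pivoting, can be sketched in pure Python; this is an illustrative scalar version, nothing like the tuned multi-CPU/multi-GPU implementation the article describes.

```python
def lu_partial_pivot(A):
    """Factor a square matrix in place so that P A = L U.
    Returns (perm, A), where perm records the row permutation and A
    holds the multipliers of L below the diagonal and U on and above it."""
    n = len(A)
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: move the largest |entry| in column k up.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                  # multiplier (entry of L)
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]    # update trailing block
    return perm, A

perm, F = lu_partial_pivot([[2.0, 1.0], [4.0, 3.0]])
assert perm == [1, 0]                  # rows were swapped by pivoting
assert F == [[4.0, 3.0], [0.5, -0.5]]  # U = [[4, 3], [0, -0.5]], L21 = 0.5
```

The pivot search and row swap are exactly the serialization point that hybrid CPU/GPU implementations such as the one in this article work hard to overlap with the trailing-submatrix update.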

  2. Initiating and utilizing shared leadership in teams: The role of leader humility, team proactive personality, and team performance capability.

    PubMed

    Chiu, Chia-Yen Chad; Owens, Bradley P; Tesluk, Paul E

    2016-12-01

    The present study was designed to produce novel theoretical insight regarding how leader humility and team member characteristics foster the conditions that promote shared leadership and when shared leadership relates to team effectiveness. Drawing on social information processing theory and adaptive leadership theory, we propose that leader humility facilitates shared leadership by promoting leadership-claiming and leadership-granting interactions among team members. We also apply dominance complementary theory to propose that team proactive personality strengthens the impact of leader humility on shared leadership. Finally, we predict that shared leadership will be most strongly related to team performance when team members have high levels of task-related competence. Using a sample composed of 62 Taiwanese professional work teams, we find support for our proposed hypothesized model. The theoretical and practical implications of these results for team leadership, humility, team composition, and shared leadership are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Structural and psychological empowerment climates, performance, and the moderating role of shared felt accountability: a managerial perspective.

    PubMed

    Wallace, J Craig; Johnson, Paul D; Mathe, Kimberly; Paul, Jeff

    2011-07-01

    The authors proposed and tested a model in which data were collected from managers (n = 539) at 116 corporate-owned quick-service restaurants to assess the structural and psychological empowerment process, as moderated by shared felt accountability, on indices of performance from a managerial perspective. The authors found that an empowering leadership climate positively relates to psychological empowerment climate. In turn, psychological empowerment climate relates to performance only under conditions of high felt accountability; it does not relate to performance under conditions of low felt accountability. Overall, the present results indicate that quick-service restaurant managers who feel more empowered operate restaurants that perform better than those of managers who feel less empowered, but only when those empowered managers also feel a high sense of accountability.

  4. HydroShare: An online, collaborative environment for the sharing of hydrologic data and models (Invited)

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Arrigo, J.; Hooper, R. P.; Valentine, D. W.; Maidment, D. R.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. HydroShare will use the integrated Rule-Oriented Data System (iRODS) to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.

  5. Sharing without caring? Respect for moral others compensates for low sympathy in children's sharing.

    PubMed

    Zuffianò, Antonio; Colasante, Tyler; Peplak, Joanna; Malti, Tina

    2015-06-01

    We examined links between sharing, respect for moral others, and sympathy in an ethnically diverse sample of 7- and 15-year-olds (N = 146). Sharing was assessed through children's allocation of resources in the dictator game. Children reported their respect towards hypothetical characters performing moral acts. Sympathy was evaluated via caregiver and child reports. Respect and caregiver-reported sympathy interacted in predicting sharing: Higher levels of respect were associated with higher levels of sharing for children with low, but not medium or high, levels of sympathy. The motivational components of other-oriented respect may compensate for low levels of sympathetic concern in the promotion of sharing. © 2015 The British Psychological Society.

  6. Performance evaluation of a six-axis generalized force-reflecting teleoperator

    NASA Technical Reports Server (NTRS)

    Hannaford, B.; Wood, L.; Guggisberg, B.; Mcaffee, D.; Zak, H.

    1989-01-01

    Work in real-time distributed computation and control has culminated in a prototype force-reflecting telemanipulation system having a dissimilar master (cable-driven, force-reflecting hand controller) and slave (PUMA 560 robot with custom controller), an extremely high sampling rate (1000 Hz), and a low loop computation delay (5 msec). In a series of experiments with this system and five trained test operators covering over 100 hours of teleoperation, performance was measured in a series of generic and application-driven tasks with and without force feedback, and with control shared between teleoperation and local sensor-referenced control. Measurements defining task performance included 100-Hz recording of six-axis force/torque information from the slave manipulator wrist, task completion time, and visual observation of predefined task errors. The tasks consisted of high-precision peg-in-hole insertion, electrical connector mating, Velcro attach/detach, and a twist-lock multi-pin connector. Each task was repeated three times under several operating conditions: normal bilateral telemanipulation, forward position control without force feedback, and shared control. In shared control, orientation was locally servo-controlled to comply with applied torques, while translation was under operator control. All performance measures improved as capability was added along a spectrum ranging from pure position control through force-reflecting teleoperation and shared control. Performance was optimal for the bare-handed operator.

  7. Job-sharing a clinical teacher's position: an evaluation.

    PubMed

    Williams, S; Murphy, L

    1994-01-01

    The aim of this study was to evaluate the effects on staff of having two teachers share one clinical teaching position in their intensive care unit (ICU). Three, six and 12 months after the job-sharing arrangement was initiated, an 11 item questionnaire was distributed to 26 students in post-registration critical care courses, 41 clinical staff in ICU and 9 RN-managers with responsibilities for the unit. The overall response rate to the three questionnaires was 58%. All groups agreed that job-sharing was a viable alternative to full-time work. Three months after the shared position was initiated, there was uncertainty about the consistency of the teachers' performance and the adequacy of communication between them. Nine months later, there was a high level of positive responses to all areas of the teachers' performance. Most respondents felt they could approach either teacher and that more diverse ideas were generated by having two people in the teaching position.

  8. Evaluation of a shared-work program for reducing assistance provided to supported workers with severe multiple disabilities.

    PubMed

    Parsons, Marsha B; Reid, Dennis H; Green, Carolyn W; Browning, Leah B; Hensley, Mary B

    2002-01-01

    Concern has been expressed recently regarding the need to enhance the performance of individuals with highly significant disabilities in community-based, supported jobs. We evaluated a shared-work program for reducing job coach assistance provided to three workers with severe multiple disabilities in a publishing company. Following systematic observations of the assistance provided as each worker worked on entire job tasks, steps comprising the tasks were then re-assigned across workers. The re-assignment involved assigning each worker only those task steps for which the respective worker received the least amount of assistance (e.g., re-assigning steps that a worker could not complete due to physical disabilities), and ensuring the entire tasks were still completed by combining steps performed by all three workers. The shared-work program was accompanied by reductions in job coach assistance provided to each worker. Work productivity of the supported workers initially decreased but then increased to a level equivalent to the higher ranges of baseline productivity. These results suggested that the shared-work program appears to represent a viable means of enhancing supported work performance of people with severe multiple disabilities in some types of community jobs. Future research needs discussed focus on evaluating shared-work approaches with other jobs, and developing additional community work models specifically for people with highly significant disabilities.

  9. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of the NAS benchmarks. Results reported in this paper indicate that, with minimal effort, the performance gain is comparable with that of the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
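
    The loop-level parallelism that such directives express can be illustrated with a rough Python analogue (the directive syntax itself is compiler-specific and not reproduced here): iterations of a loop with no cross-iteration dependences are split into chunks, executed by a pool of workers, and reduced.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative analogue of loop-level parallelism: a reduction loop is
# partitioned across workers, and the partial results are combined.
# (A thread pool stands in for the processors a directive would target.)

def partial_sum(chunk):
    lo, hi = chunk
    return sum(1.0 / (i * i) for i in range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(1 + k * step,
               1 + (k + 1) * step if k < workers - 1 else n + 1)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(partial_sum, chunks))

# Serial and parallel versions compute the same reduction.
serial = sum(1.0 / (i * i) for i in range(1, 1001))
assert abs(parallel_sum(1000) - serial) < 1e-9
print(round(parallel_sum(1000), 4))  # -> 1.6439 (partial sum of 1/i^2)
```

    The key property a parallelizing compiler must verify, independence of iterations, is exactly what makes this chunked decomposition legal.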

  10. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site includes not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools that extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files in users' space or saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) sharing of any file type, any size, from anywhere; b) creation of projects and groups for controlled sharing; c) a module for sharing files on HPC (High Performance Computing) sites; d) universal accessibility of staged files as embedded links on other sites (e.g., Facebook) and tools (e.g., e-mail); e) drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) enterprise-level data and messaging encryption; and g) an easy-to-use, intuitive workflow.

  11. Learning to Share

    ERIC Educational Resources Information Center

    Raths, David

    2010-01-01

    In the tug-of-war between researchers and IT for supercomputing resources, a centralized approach can help both sides get more bang for their buck. As 2010 began, the University of Washington was preparing to launch its first shared high-performance computing cluster, a 1,500-node system called Hyak, dedicated to research activities. Like other…

  12. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of distributed operating systems, the distributed file system has become an important part of the operating system. P2P (peer-to-peer) is a reliable approach to file sharing in distributed operating systems. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and other related tasks. A P2P network can also be a short-lived connection, such as a group of computers connected by USB (Universal Serial Bus) ports to transfer files or enable disk sharing. Currently, P2P requires a special network designed in a P2P way. Nowadays, browsers have a large influence on our lives. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, where we attempt to identify performance bottlenecks so that our research can improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  13. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
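
    As a toy illustration of why contention-aware modeling matters (a generic queueing-style sketch, not the XMT simulator's actual model), compare a contention-free latency estimate with one that inflates latency as offered load approaches link capacity:

```python
# A contention-free model charges a fixed hop latency regardless of
# traffic; a simple M/M/1-style model inflates latency as utilization
# rises, capturing the hot-spot behavior measured on real machines.
# Units and parameters are illustrative.

def contention_free_latency(base_latency, load):
    return base_latency                        # ignores traffic entirely

def contention_aware_latency(base_latency, load, capacity=1.0):
    utilization = min(load / capacity, 0.99)   # clamp to keep model finite
    return base_latency / (1.0 - utilization)  # queueing-style inflation

for load in (0.1, 0.5, 0.9):
    free = contention_free_latency(100, load)
    aware = contention_aware_latency(100, load)
    print(load, free, round(aware, 1))
```

    At 90% utilization the two estimates differ by an order of magnitude, which is the kind of gap a contention-aware simulator must capture to predict performance accurately.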

  14. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  15. Quantifying Scheduling Challenges for Exascale System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R

    2015-01-01

    The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.

  16. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  17. A 300MHz Embedded Flash Memory with Pipeline Architecture and Offset-Free Sense Amplifiers for Dual-Core Automotive Microcontrollers

    NASA Astrophysics Data System (ADS)

    Kajiyama, Shinya; Fujito, Masamichi; Kasai, Hideo; Mizuno, Makoto; Yamaguchi, Takanori; Shinagawa, Yutaka

    A novel 300MHz embedded flash memory for dual-core microcontrollers with a shared ROM architecture is proposed. One of its features is a three-stage pipeline read operation, which enables a reduced access pitch and therefore reduces the performance penalty due to conflicting shared-ROM accesses. Another feature is a highly sensitive sense amplifier that achieves efficient pipeline operation, with a two-cycle latency and one-cycle pitch, as a result of a shortened sense time of 0.63ns. The combination of the pipeline architecture and the proposed sense amplifiers significantly reduces shared-ROM access-conflict penalties and enhances the performance of 32-bit RISC dual-core microcontrollers by 30%.

  18. Foreign-born Peers and Academic Performance.

    PubMed

    Conger, Dylan

    2015-04-01

    The academic performance of foreign-born youth in the United States is well studied, yet little is known about whether and how foreign-born students influence their classmates. In this article, I develop a set of expectations regarding the potential consequences of immigrant integration across schools, with a distinction between the effects of sharing schools with immigrants who are designated as English language learners (ELL) and those who are not. I then use administrative data on multiple cohorts of Florida public high school students to estimate the effect of immigrant shares on immigrant and native-born students' academic performance. The identification strategy pays careful attention to the selection problem by estimating the effect of foreign-born peers from deviations in the share foreign-born across cohorts of students attending the same school in different years. The assumption underlying this approach is that students choose schools based on the composition of the entire school, not on the composition of each entering cohort. The results of the analysis, which hold under several robustness checks, indicate that foreign-born peers (both those who are ELL and those who are non-ELL) have no effect on their high school classmates' academic performance.

  19. iDASH: integrating data for analysis, anonymization, and sharing

    PubMed Central

    Bafna, Vineet; Boxwala, Aziz A; Chapman, Brian E; Chapman, Wendy W; Chaudhuri, Kamalika; Day, Michele E; Farcas, Claudiu; Heintzman, Nathaniel D; Jiang, Xiaoqian; Kim, Hyeoneui; Kim, Jihoon; Matheny, Michael E; Resnic, Frederic S; Vinterbo, Staal A

    2011-01-01

    iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses. PMID:22081224

  20. iDASH: integrating data for analysis, anonymization, and sharing.

    PubMed

    Ohno-Machado, Lucila; Bafna, Vineet; Boxwala, Aziz A; Chapman, Brian E; Chapman, Wendy W; Chaudhuri, Kamalika; Day, Michele E; Farcas, Claudiu; Heintzman, Nathaniel D; Jiang, Xiaoqian; Kim, Hyeoneui; Kim, Jihoon; Matheny, Michael E; Resnic, Frederic S; Vinterbo, Staal A

    2012-01-01

    iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses.

  1. Love, Happiness, and America's Schools: The Role of Educational Leadership in the 21st Century.

    ERIC Educational Resources Information Center

    Hoyle, John R.; Slater, Robert O.

    2001-01-01

    Some want schools reformed to produce high-performing future leaders. Others desire schools that teach students how to live, share, and serve others. Competition and high performance need not substitute for happiness, love, and service-values that counter America's culture of hyper-individualism, isolationism, and declining social/political…

  2. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change over time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to a meaningful performance guarantee such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
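
    The core idea of translating an execution-time goal into a throughput bound can be conveyed with a much simpler model than the paper's machine learning technique; the sketch below assumes, for illustration only, that I/O and compute do not overlap.

```python
# Simplified illustration (not the paper's learned model): given the
# total I/O volume and an estimate of compute time, the I/O throughput
# bound is whatever rate fits the I/O into the remaining time budget.

def required_throughput(total_io_bytes, target_runtime_s, compute_time_s):
    """I/O throughput (bytes/s) needed so the I/O fits in the time left
    after computation. Assumes non-overlapping I/O and compute."""
    io_budget = target_runtime_s - compute_time_s
    if io_budget <= 0:
        raise ValueError("execution-time goal is infeasible")
    return total_io_bytes / io_budget

# A program that moves 80 GB and computes for 100 s, with a 300 s goal,
# needs at least 0.4 GB/s of sustained I/O throughput.
gb = 1e9
print(required_throughput(80 * gb, 300, 100) / gb)  # -> 0.4
```

    The storage system can then enforce this derived bound instead of asking the user to reason directly about request latency or throughput.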

  3. Interdisciplinary shared governance: a partnership model for high performance in a managed care environment.

    PubMed

    Anderson, D A; Bankston, K; Stindt, J L; Weybright, D W

    2000-09-01

    Today's managed care environment is forcing hospitals to seek new and innovative ways to deliver a seamless continuum of high-quality care and services to defined populations at lower costs. Many are striving to achieve this goal through the implementation of shared governance models that support point-of-service decision making, interdisciplinary partnerships, and the integration of work across clinical settings and along the service delivery continuum. The authors describe the key processes and strategies used to facilitate the design and successful implementation of an interdisciplinary shared governance model at The University Hospital, Cincinnati, Ohio. Implementation costs and initial benefits obtained over a 2-year period also are identified.

  4. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
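
    For reference, a compact single-threaded Python version of the Aho-Corasick automaton the paper parallelizes: build a trie over the dictionary, add failure links by breadth-first traversal, then scan the text once, reporting every (start_position, pattern) match.

```python
from collections import deque

# Compact Aho-Corasick: one pass over the text finds all occurrences of
# all dictionary patterns simultaneously.

def build_automaton(patterns):
    trie = [{}]    # node -> {char: next_node}
    out = [[]]     # node -> patterns ending at this node
    fail = [0]     # node -> failure link
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in trie[node]:
                trie.append({}); out.append([]); fail.append(0)
                trie[node][ch] = len(trie) - 1
            node = trie[node][ch]
        out[node].append(pat)
    queue = deque(trie[0].values())       # depth-1 nodes fail to the root
    while queue:
        node = queue.popleft()
        for ch, child in trie[node].items():
            queue.append(child)
            f = fail[node]
            while f and ch not in trie[f]:
                f = fail[f]
            fail[child] = trie[f].get(ch, 0)
            out[child] += out[fail[child]]
    return trie, out, fail

def search(text, patterns):
    trie, out, fail = build_automaton(patterns)
    node, matches = 0, []
    for i, ch in enumerate(text):
        while node and ch not in trie[node]:
            node = fail[node]               # follow failure links
        node = trie[node].get(ch, 0)
        for pat in out[node]:
            matches.append((i - len(pat) + 1, pat))
    return matches

print(search("ushers", ["he", "she", "his", "hers"]))
# -> [(1, 'she'), (2, 'he'), (2, 'hers')]
```

    Because the scan advances one character at a time with only failure-link backtracking, its running time is linear in the text length plus the number of matches, which is what makes the algorithm attractive to parallelize by partitioning the input text.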

  5. Health Care, Heal Thyself! An Exploration of What Drives (and Sustains) High Performance in Organizations Today

    ERIC Educational Resources Information Center

    Wolf, Jason A.

    2008-01-01

    What happens when researching the radical unveils the simplest of solutions? This article tells the story of the 2007 ISPI Annual Conference Encore Presentation, Healthcare, Heal Thyself, sharing the findings of an exploration into high-performance health care facilities and their relevance to all organizations today. It shows how to overcome…

  6. Student-Led Project Teams: Significance of Regulation Strategies in High- and Low-Performing Teams

    ERIC Educational Resources Information Center

    Ainsworth, Judith

    2016-01-01

    We studied group and individual co-regulatory and self-regulatory strategies of self-managed student project teams using data from intragroup peer evaluations and a postproject survey. We found that high team performers shared their research and knowledge with others, collaborated to advise and give constructive criticism, and demonstrated moral…

  7. Design of shared instruments to utilize simulated gravities generated by a large-gradient, high-field superconducting magnet.

    PubMed

    Wang, Y; Yin, D C; Liu, Y M; Shi, J Z; Lu, H M; Shi, Z H; Qian, A R; Shang, P

    2011-03-01

    A high-field superconducting magnet can provide both high-magnetic fields and large-field gradients, which can be used as a special environment for research or practical applications in materials processing, life science studies, physical and chemical reactions, etc. To make full use of a superconducting magnet, shared instruments (the operating platform, sample holders, temperature controller, and observation system) must be prepared as prerequisites. This paper introduces the design of a set of sample holders and a temperature controller in detail with an emphasis on validating the performance of the force and temperature sensors in the high-magnetic field.

  8. Design of shared instruments to utilize simulated gravities generated by a large-gradient, high-field superconducting magnet

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Yin, D. C.; Liu, Y. M.; Shi, J. Z.; Lu, H. M.; Shi, Z. H.; Qian, A. R.; Shang, P.

    2011-03-01

    A high-field superconducting magnet can provide both high-magnetic fields and large-field gradients, which can be used as a special environment for research or practical applications in materials processing, life science studies, physical and chemical reactions, etc. To make full use of a superconducting magnet, shared instruments (the operating platform, sample holders, temperature controller, and observation system) must be prepared as prerequisites. This paper introduces the design of a set of sample holders and a temperature controller in detail with an emphasis on validating the performance of the force and temperature sensors in the high-magnetic field.

  9. Implicit Coordination Strategies for Effective Team Communication.

    PubMed

    Butchibabu, Abhizna; Sparano-Huiban, Christopher; Sonenberg, Liz; Shah, Julie

    2016-06-01

    We investigated implicit communication strategies for anticipatory information sharing during team performance of tasks with varying degrees of complexity. We compared the strategies used by teams with the highest level of performance to those used by the lowest-performing teams to evaluate the frequency and methods of communications used as a function of task structure. High-performing teams share information by anticipating the needs of their teammates rather than explicitly requesting the exchange of information. As the complexity of a task increases to involve more interdependence among teammates, the impact of coordination on team performance also increases. This observation motivated us to conduct a study of anticipatory information sharing as a function of task complexity. We conducted an experiment in which 13 teams of four people performed collaborative search-and-deliver tasks with varying degrees of complexity in a simulation environment. We elaborated upon prior characterizations of communication as implicit versus explicit by dividing implicit communication into two subtypes: (a) deliberative/goal information and (b) reactive status updates. We then characterized relationships between task structure, implicit communication, and team performance. We found that the five teams with the fastest task completion times and lowest idle times exhibited higher rates of deliberative communication versus reactive communication during high-complexity tasks compared with the five teams with the slowest completion times and longest idle times (p = .039). Teams in which members proactively communicated information about their next goal to teammates exhibited improved team performance. The findings from our work can inform the design of communication strategies for team training to improve performance of complex tasks. © 2016, Human Factors and Ergonomics Society.

  10. Distributed deep learning networks among institutions for medical imaging.

    PubMed

    Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree

    2018-03-29

    Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance that was comparable to that of centrally hosted patient data. We also found that there is an improvement in the performance of cyclical weight transfer heuristic with a high frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
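
    A minimal sketch of the cyclical weight transfer heuristic described above: a single model's weights visit each institution in turn, train on that institution's private data, and move on, so no patient data is ever pooled. The one-parameter linear model and SGD step below are illustrative stand-ins for the paper's deep networks.

```python
import random

# Cyclical weight transfer, sketched: weights cycle through institutions,
# each of which trains locally on its own (private, never-shared) data.

random.seed(0)

def make_institution_data(n=50, true_w=3.0):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, true_w * x) for x in xs]   # y = 3x, noise-free for clarity

institutions = [make_institution_data() for _ in range(4)]

def local_train(w, data, lr=0.1, epochs=5):
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # SGD step on squared error
    return w

w = 0.0
for cycle in range(3):          # the weights cycle through all 4 sites
    for data in institutions:
        w = local_train(w, data)

print(round(w, 3))  # converges toward the shared true weight 3.0
```

    The frequency of transfer (here, after every local epoch budget) is the knob the study found mattered: more frequent weight movement brought performance closer to centralized training.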

  11. Using Cryptography to Improve Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    Hemenway, B.; Welser, B.; Baiocchi, D.

    2012-09-01

    Coordination of operations between satellite operators is becoming increasingly important to prevent collisions. Unfortunately, this coordination is often handicapped by a lack of trust. Coordination and cooperation between satellite operators can take many forms; however, one specific area where cooperation between operators would yield significant benefits is in the computation of conjunction analyses. Passively collected orbital data are generally of too low fidelity to be of use in conjunction analyses. Each operator, however, maintains high fidelity data about their own satellites. These high fidelity data are significantly more valuable in calculating conjunction analyses than the lower-fidelity data. If operators were to share their high fidelity data, overall space situational awareness could be improved. At present, many operators do not share data, and as a consequence space situational awareness suffers. Restrictive data sharing policies are primarily motivated by privacy concerns on the part of the satellite operators, as each operator is reluctant or unwilling to share data that might compromise its political or commercial interests. In order to perform the necessary conjunction analyses while still maintaining the privacy of their own data, a few operators have entered data sharing agreements. These operators provide their private data to a trusted outside party, who then performs the conjunction analyses and reports the results to the operators. These types of agreements are not an ideal solution, as they require a degree of trust between the parties, and the cost of employing the trusted party can be large. In this work, we present and analyze cryptographic tools that would allow satellite operators to securely calculate conjunction analyses without the help of a trusted outside party, while provably maintaining the privacy of their own orbital information.
For example, recent advances in cryptographic protocols, specifically in the area of secure Multiparty Computation (MPC) have the potential to allow satellite operators to perform the necessary conjunction analyses without the need to reveal their orbital information to anyone. This talk will describe how MPC works, and how we propose to use it to facilitate secure information sharing between satellite operators.
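
    As a toy illustration of the secret-sharing primitive underlying MPC (the actual conjunction-analysis protocols are far more involved; the values and names below are illustrative): each operator splits a private value into random additive shares modulo a prime, so sums of shares reveal only the aggregate result, while no single share leaks anything about the underlying value.

```python
import random

P = 2**61 - 1  # prime modulus (an illustrative choice)

def make_shares(secret, n_parties):
    """Split `secret` into n random additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two operators each hold a private value (say, an orbital parameter).
a, b = 1234, 5678
shares_a = make_shares(a, 3)
shares_b = make_shares(b, 3)

# Each of three compute parties adds the shares it holds, locally...
local_sums = [(sa + sb) % P for sa, sb in zip(shares_a, shares_b)]

# ...and only the recombined result reveals a + b; no single share
# discloses a or b on its own.
print(reconstruct(local_sums))  # 6912
```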

  12. Force-reflection and shared compliant control in operating telemanipulators with time delay

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Hannaford, Blake; Bejczy, Antal K.

    1992-01-01

    The performance of an advanced telemanipulation system in the presence of a wide range of time delays between a master control station and a slave robot is quantified. The contemplated applications include multiple satellite links to LEO, geosynchronous operation, spacecraft local area networks, and general-purpose computer-based short-distance designs. The results of high-precision peg-in-hole tasks performed by six test operators indicate that task performance decreased linearly with introduced time delays for both kinesthetic force feedback (KFF) and shared compliant control (SCC). The rate of this decrease was substantially improved with SCC compared to KFF. Task performance at delays above 1 s was not possible using KFF. SCC enabled task performance for such delays, which are realistic values for ground-controlled remote manipulation of telerobots in space.

  13. Building High-Performing and Improving Education Systems. Systems and Structures: Powers, Duties and Funding. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    This Review looks at the way high-performing and improving education systems share out power and responsibility. Resources--in the form of funding, capital investment or payment of salaries and other ongoing costs--are some of the main levers used to make policy happen, but are not a substitute for well thought-through and appropriate policy…

  14. Poverty, Performance, and Frog Ponds: What Best-Practice Research Tells Us about Their Connections

    ERIC Educational Resources Information Center

    Angelis, Janet I.; Wilcox, Kristen C.

    2011-01-01

    Having studied schools over the past eight years that have high concentrations of students living in poverty but consistently exceed the performance of similarly impoverished schools, the authors conclude that such higher-performing schools share common characteristics setting them apart. The three most essential are: Teachers, administrators,…

  15. Market Earnings and Household Work: New Tests of Gender Performance Theory

    ERIC Educational Resources Information Center

    Schneider, Daniel

    2011-01-01

    I examine the contested finding that men and women engage in gender performance through housework. Prior scholarship has found a curvilinear association between earnings share and housework that has been interpreted as evidence of gender performance. I reexamine these findings by conducting the first such analysis to use high-quality time diary…

  16. Color extended visual cryptography using error diffusion.

    PubMed

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or grayscale VC schemes; however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
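
    For background, the classic error diffusion technique the abstract builds on is Floyd-Steinberg halftoning, sketched below for a single grayscale channel (the paper itself works per color channel with VIP synchronization; this shows only the underlying primitive): the quantization error at each pixel is distributed onto unprocessed neighbors, which is what makes halftone shares pleasing to the eye.

```python
import numpy as np

def error_diffuse(gray):
    """Binarize a grayscale image in [0, 1] via Floyd-Steinberg."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Push the error onto the standard four neighbors (7/16,
            # 3/16, 5/16, 1/16), skipping neighbors outside the image.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A flat 30% gray patch halftones to roughly 30% white pixels,
# preserving average intensity while using only black and white.
halftone = error_diffuse(np.full((64, 64), 0.3))
print(round(halftone.mean(), 2))  # ≈ 0.3
```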

  17. Multiparty Quantum Direct Secret Sharing of Classical Information with Bell States and Bell Measurements

    NASA Astrophysics Data System (ADS)

    Song, Yun; Li, Yongming; Wang, Wenhua

    2018-02-01

    This paper proposes a new and efficient multiparty quantum direct secret sharing (QDSS) scheme using entanglement swapping of Bell states. In the proposed scheme, the quantum correlation between the possible measurement results of the members (except the dealer) and the original local unitary operation encoded by the dealer is presented. All agents only need to perform Bell measurements to share the dealer's secret by recovering the dealer's operation, without performing any unitary operation. Our scheme has several advantages. The dealer is not required to retain any photons, and can further share a predetermined key, instead of a random key, with the agents. It has high capacity, as two bits of secret messages can be transmitted by an EPR pair, and the intrinsic efficiency approaches 100%, because no classical bit needs to be transmitted except those for detection. Without inserting any checking sets for detecting eavesdropping, the scheme can resist not only the existing attacks, but also the cheating attack from a dishonest agent.

  18. Parallel k-means++ for Multiple Shared-Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackey, Patrick S.; Lewis, Robert R.

    2016-09-22

    In recent years k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
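
    The exact (serial) k-means++ seeding that the paper parallelizes can be sketched as follows; the parallel implementations distribute the distance updates across cores, but the selection rule is the same. Variable names and the toy data are illustrative.

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """Exact k-means++ seeding: each new center is drawn with
    probability proportional to the squared distance from a point
    to its nearest already-chosen center (the D^2 weighting)."""
    n = len(X)
    centers = [X[rng.integers(n)]]   # first center: uniform choice
    d2 = np.full(n, np.inf)
    for _ in range(k - 1):
        # Update each point's distance to its nearest chosen center.
        d2 = np.minimum(d2, ((X - centers[-1]) ** 2).sum(axis=1))
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)

rng = np.random.default_rng(1)
# Three well-separated blobs; D^2 seeding should land one center in each.
X = np.concatenate([rng.normal(loc=c, scale=0.1, size=(100, 2))
                    for c in ([0, 0], [5, 5], [0, 5])])
centers = kmeans_pp_init(X, 3, rng)
print(centers.round(1))
```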

  19. Comparative Investigation of Shared Filesystems for the LHCb Online Cluster

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Neufeld, Niko

    2012-12-01

    This paper describes the investigative study undertaken to evaluate shared filesystem performance and suitability in the LHCb Online environment. Particular focus is given to the measurements and field tests designed and performed on an in-house OpenAFS setup; related comparisons with NFSv4 and GPFS (a clustered filesystem from IBM) are presented. The motivation for the investigation and the test setup arises from the need to serve common user-space like home directories, experiment software and control areas, and clustered log areas. Since the operational requirements on such user-space are stringent in terms of read-write operations (in frequency and access speed) and unobtrusive data relocation, test results are presented with emphasis on file-level performance, stability and “high-availability” of the shared filesystems. Use cases specific to the experiment operation in LHCb, including the specific handling of shared filesystems served to a cluster of 1500 diskless nodes, are described. Issues of prematurely expiring authenticated sessions are explicitly addressed, keeping in mind long-running analysis jobs on the Online cluster. In addition, quantitative test results are presented for alternatives including NFSv4. Comparative measurements of filesystem performance benchmarks are presented, which serve as a reference for decisions on potential migration of the current storage solution deployed in the LHCb online cluster.

  20. Respecting High-Schoolers as Partners, Not Inferiors

    ERIC Educational Resources Information Center

    Cushman, Kathleen

    2006-01-01

    In the current climate of school improvement, no one feels more pressure than a high school principal. A principal bears responsibility not just for the organization's daily functioning but for the performance of its students in high school and beyond. However, high school students perceive principals as individuals who hold the lion's share of…

  1. Perpetual Motion

    ERIC Educational Resources Information Center

    McKibben, Sarah

    2010-01-01

    This article presents an interview with Lucy Beckham, the 2010 MetLife/NASSP National High School Principal of the Year and principal of Wando High School, one of South Carolina's largest and highest-performing schools. Wando High School in Mount Pleasant, South Carolina, enrolls 3,250 students and has 209 staff members. Beckham shares her story…

  2. Parallel performance investigations of an unstructured mesh Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    2000-01-01

    A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.

  3. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.

  4. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J; Dozsa, Gabor; Ratterman, Joseph D; Smith, Brian E

    2014-06-10

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
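
    A toy analogue of the scheme in these patents, with Python threads standing in for processing cores (names and the work-unit granularity are illustrative, not the patents'): the reduction is split into shared-memory work units listed in a shared job structure, and any available core claims and performs the next unit.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def shared_memory_allreduce(arrays, n_cores=4, chunk=64):
    """Element-wise sum of equal-length arrays via claimable work units."""
    length = len(arrays[0])
    result = [0] * length
    jobs = Queue()  # plays the role of the shared job status object
    for start in range(0, length, chunk):
        jobs.put((start, min(start + chunk, length)))

    def worker():
        while True:
            try:
                start, stop = jobs.get_nowait()  # claim next work unit
            except Empty:
                return
            # Each work unit reduces one disjoint slice, so the writes
            # below never overlap and need no locking.
            result[start:stop] = [sum(a[i] for a in arrays)
                                  for i in range(start, stop)]

    with ThreadPoolExecutor(n_cores) as pool:
        for _ in range(n_cores):
            pool.submit(worker)  # any available "core" takes work units
    return result

data = [[i + r for i in range(200)] for r in range(4)]  # 4 cores' inputs
print(shared_memory_allreduce(data)[:3])  # [6, 10, 14]
```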

  5. The Effect of Share 35 on Biliary Complications: an Interrupted Time Series Analysis.

    PubMed

    Fleming, J N; Taber, D J; Axelrod, D; Chavin, K D

    2018-05-16

    The purpose of the Share 35 allocation policy was to improve liver transplant waitlist mortality, targeting high-MELD waitlisted patients. However, policy changes may also have unintended consequences that must be balanced with the primary desired outcome. We performed an interrupted time series analysis assessing the impact of Share 35 on biliary complications in a select national liver transplant population using the Vizient CDB/RM™ database. Liver transplants that occurred between October 2012 and September 2015 were included. There was a significant change in the incident-rate of biliary complications between the Pre-Share 35 (n=3,018) and Post-Share 35 (n=9,984) cohorts over time (p=0.023, r2=0.44). As a control, a subanalysis was performed throughout the same time period in Region 9 transplant centers, where a broad sharing agreement had previously been implemented. In the subanalysis, there was no change in the incident-rate of biliary complications between the two time periods. Length of stay and mean direct cost demonstrated a change after implementation of Share 35, although the differences did not reach statistical significance. While the target of improved waitlist mortality is of utmost importance for the equitable allocation of organs, unintended consequences of policy changes should be studied for a full assessment of a policy's impact. This article is protected by copyright. All rights reserved.

  6. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    DOE PAGES

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.; ...

    2016-04-05

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and sink, and its interplay with the wide-area network, is increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating-system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across high-speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve performance by up to 14 percent relative to LADS without meta-scheduling.

  7. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and sink, and its interplay with the wide-area network, is increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating-system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across high-speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve performance by up to 14 percent relative to LADS without meta-scheduling.

  8. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.

  9. MaMR: High-performance MapReduce programming model for material cloud applications

    NASA Astrophysics Data System (ADS)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined MaMR, a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP backend. An optimized data sharing strategy to supply shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements compared to previous work.
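
    A minimal in-memory sketch of MapReduce extended with a merge phase like the one the abstract describes (function names and the example data are illustrative; MaMR itself targets a hybrid shared-memory BSP backend): two jobs with different Map and Reduce functions run over related datasets, and their reduced outputs are merged on shared keys at the end.

```python
from collections import defaultdict

def run_job(records, map_fn, reduce_fn):
    """One MapReduce job: map each record, group by key, reduce groups."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in map_fn(rec):   # map phase
            groups[key].append(value)    # shuffle/group by key
    return {k: reduce_fn(vs) for k, vs in groups.items()}  # reduce phase

# Two related "material" datasets processed by different map/reduce pairs.
masses = [("Fe", 55.8), ("Cu", 63.5), ("Fe", 55.9)]
counts = [("Fe", 3), ("Cu", 1), ("Fe", 2)]

avg_mass = run_job(masses, lambda r: [r], lambda vs: sum(vs) / len(vs))
totals   = run_job(counts, lambda r: [r], sum)

# Merge phase: join the two reduced outputs on their shared keys.
merged = {k: (avg_mass[k], totals[k])
          for k in avg_mass.keys() & totals.keys()}
print(merged["Fe"])
```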

  10. Blended shared control utilizing online identification : Regulating grasping forces of a surrogate surgical grasper.

    PubMed

    Stephens, Trevor K; Kong, Nathan J; Dockter, Rodney L; O'Neill, John J; Sweet, Robert M; Kowalewski, Timothy M

    2018-06-01

    Surgical robots are increasingly common, yet routine tasks such as tissue grasping remain potentially harmful with high occurrences of tissue crush injury due to the lack of force feedback from the grasper. This work aims to investigate whether a blended shared control framework which utilizes real-time identification of the object being grasped as part of the feedback may help address the prevalence of tissue crush injury in robotic surgeries. This work tests the proposed shared control framework and tissue identification algorithm on a custom surrogate surgical robotic grasping setup. This scheme utilizes identification of the object being grasped as part of the feedback to regulate to a desired force. The blended shared control is arbitrated between human and an implicit force controller based on a computed confidence in the identification of the grasped object. The online identification is performed using least squares based on a nonlinear tissue model. Testing was performed on five silicone tissue surrogates. Twenty grasps were conducted, with half of the grasps performed under manual control and half of the grasps performed with the proposed blended shared control, to test the efficacy of the control scheme. The identification method resulted in an average of 95% accuracy across all time samples of all tissue grasps using a full leave-grasp-out cross-validation. There was an average convergence time of [Formula: see text] ms across all training grasps for all tissue surrogates. Additionally, there was a reduction in peak forces induced during grasping for all tissue surrogates when applying blended shared control online. The blended shared control using online identification more successfully regulated grasping forces to the desired target force when compared with manual control. The preliminary work on this surrogate setup for surgical grasping merits further investigation on real surgical tools and with real human tissues.

  11. Characterizing Teamwork in Cardiovascular Care Outcomes: A Network Analytics Approach.

    PubMed

    Carson, Matthew B; Scholtens, Denise M; Frailey, Conor N; Gravenor, Stephanie J; Powell, Emilie S; Wang, Amy Y; Kricke, Gayle Shier; Ahmad, Faraz S; Mutharasan, R Kannan; Soulakis, Nicholas D

    2016-11-01

    The nature of teamwork in healthcare is complex and interdisciplinary, and provider collaboration based on shared patient encounters is crucial to its success. Characterizing the intensity of working relationships with risk-adjusted patient outcomes supplies insight into provider interactions in a hospital environment. We extracted 4 years of patient, provider, and activity data for encounters in an inpatient cardiology unit from Northwestern Medicine's Enterprise Data Warehouse. We then created a provider-patient network to identify healthcare providers who jointly participated in patient encounters and calculated satisfaction rates for provider-provider pairs. We demonstrated the application of a novel parameter, the shared positive outcome ratio, a measure that assesses the strength of a patient-sharing relationship between 2 providers based on risk-adjusted encounter outcomes. We compared an observed collaboration network of 334 providers and 3453 relationships to 1000 networks with shared positive outcome ratio scores based on randomized outcomes and found 188 collaborative relationships between pairs of providers that showed significantly higher than expected patient satisfaction ratings. A group of 22 providers performed exceptionally in terms of patient satisfaction. Our results indicate high variability in collaboration scores across the network and highlight our ability to identify relationships with both higher and lower than expected scores across a set of shared patient encounters. Satisfaction rates seem to vary across different teams of providers. Team collaboration can be quantified using a composite measure of collaboration across provider pairs. Tracking provider pair outcomes over a sufficient set of shared encounters may inform quality improvement strategies such as optimizing team staffing, identifying characteristics and practices of high-performing teams, developing evidence-based team guidelines, and redesigning inpatient care processes. 
© 2016 American Heart Association, Inc.

  12. Cooperative Data Sharing: Simple Support for Clusters of SMP Nodes

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    Libraries like PVM and MPI send typed messages to allow for heterogeneous cluster computing. Lower-level libraries, such as GAM, provide more efficient access to communication by removing the need to copy messages between the interface and user space in some cases. Still lower-level interfaces, such as UNET, get right down to the hardware level to provide maximum performance. However, these are all still interfaces for passing messages from one process to another, and have limited utility in a shared-memory environment, due primarily to the fact that message passing is just another term for copying. This drawback is made more pertinent by today's hybrid architectures (e.g. clusters of SMPs), where it is difficult to know beforehand whether two communicating processes will share memory. As a result, even portable language tools (like HPF compilers) must either map all interprocess communication into message passing, with the accompanying performance degradation in shared memory environments, or they must check each communication at run-time and implement the shared-memory case separately for efficiency. Cooperative Data Sharing (CDS) is a single user-level API which abstracts all communication between processes into the sharing and access coordination of memory regions, in a model which might be described as "distributed shared messages" or "large-grain distributed shared memory". As a result, the user programs to a simple latency-tolerant abstract communication specification which can be mapped efficiently to either a shared-memory or message-passing based run-time system, depending upon the available architecture. Unlike some distributed shared memory interfaces, the user still has complete control over the assignment of data to processors, the forwarding of data to its next likely destination, and the queuing of data until it is needed, so even the relatively high latency present in clusters can be accommodated.
CDS does not require special use of an MMU, which can add overhead to some DSM systems, and does not require an SPMD programming model. Unlike some message-passing interfaces, CDS allows the user to implement efficient demand-driven applications where processes must "fight" over data, and does not perform copying if processes share memory and do not attempt concurrent writes. CDS also supports heterogeneous computing, dynamic process creation, handlers, and a very simple thread-arbitration mechanism. Additional support for array subsections is currently being considered. The CDS1 API, which forms the kernel of CDS, is built primarily upon only 2 communication primitives, one process initiation primitive, and some data translation (and marshalling) routines, memory allocation routines, and priority control routines. The entire current collection of 28 routines provides enough functionality to implement most (or all) of MPI 1 and 2, which has a much larger interface consisting of hundreds of routines. Still, the API is small enough to consider integrating into standard OS interfaces for handling inter-process communication in a network-independent way. This approach would also help to solve many of the problems plaguing other higher-level standards such as MPI and PVM which must, in some cases, "play OS" to adequately address progress and process control issues. The CDS2 API, a higher level of interface roughly equivalent in functionality to MPI and to be built entirely upon CDS1, is still being designed. It is intended to add support for the equivalent of communicators, reduction and other collective operations, process topologies, additional support for process creation, and some automatic memory management. CDS2 will not exactly match MPI, because the copy-free semantics of communication from CDS1 will be supported. CDS2 application programs will also be free to carefully use CDS1.
CDS1 has been implemented on networks of workstations running unmodified Unix-based operating systems, using UDP/IP and vendor-supplied high-performance locks. Although its inter-node performance is currently unimpressive due to the rudimentary implementation technique, it even now outperforms highly optimized MPI implementations on intra-node communication due to its support for non-copy communication. The similarity of the CDS1 architecture to that of other projects such as UNET and TRAP suggests that its inter-node performance can be increased significantly to surpass MPI or PVM, and it may be possible to migrate some of its functionality to communication controllers.

  13. High-functioning autism patients share similar but more severe impairments in verbal theory of mind than schizophrenia patients.

    PubMed

    Tin, L N W; Lui, S S Y; Ho, K K Y; Hung, K S Y; Wang, Y; Yeung, H K H; Wong, T Y; Lam, S M; Chan, R C K; Cheung, E F C

    2018-06-01

    Evidence suggests that autism and schizophrenia share similarities in genetic, neuropsychological and behavioural aspects. Although both disorders are associated with theory of mind (ToM) impairments, few studies have directly compared ToM between autism patients and schizophrenia patients. This study aimed to investigate to what extent high-functioning autism patients and schizophrenia patients share and differ in ToM performance. Thirty high-functioning autism patients, 30 schizophrenia patients and 30 healthy individuals were recruited. Participants were matched in age, gender and estimated intelligence quotient. The verbal-based Faux Pas Task and the visual-based Yoni Task were utilised to examine first- and higher-order, affective and cognitive ToM. The task/item difficulty of the two paradigms was examined using mixed model analyses of variance (ANOVAs). Multiple ANOVAs and mixed model ANOVAs were used to examine group differences in ToM. The Faux Pas Task was more difficult than the Yoni Task. High-functioning autism patients showed more severely impaired verbal-based ToM in the Faux Pas Task, but shared similar visual-based ToM impairments in the Yoni Task with schizophrenia patients. The findings that individuals with high-functioning autism shared similar but more severe impairments in verbal ToM than individuals with schizophrenia support the autism-schizophrenia continuum. The finding that verbal-based but not visual-based ToM was more impaired in high-functioning autism patients than schizophrenia patients could be attributable to the varied task/item difficulty between the two paradigms.

  14. ERDC MSRC (Major Shared Resource Center) Resource. High Performance Computing for the Warfighter. Fall 2008

    DTIC Science & Technology

    2008-01-01

    “Solving the Hard Problems” at UGC 2008 in Seattle, by Rose J. Dykes, ERDC MSRC...two fields to remain competitive in the global market. The ERDC MSRC attempts to take every available opportunity to encourage students to enter these...Attendees of the 18th annual DoD High Performance Computing Modernization Program (HPCMP) Users Group Conference (UGC

  15. The dynamics of shared leadership: building trust and enhancing performance.

    PubMed

    Drescher, Marcus A; Korsgaard, M Audrey; Welpe, Isabell M; Picot, Arnold; Wigand, Rolf T

    2014-09-01

    In this study, we examined how the dynamics of shared leadership are related to group performance. We propose that, over time, the expansion of shared leadership within groups is related to growth in group trust. In turn, growth in group trust is related to performance improvement. Longitudinal data from 142 groups engaged in a strategic simulation game over a 4-month period provide support for positive changes in trust mediating the relationship between positive changes in shared leadership and positive changes in performance. Our findings contribute to the literature on shared leadership and group dynamics by demonstrating how the growth in shared leadership contributes to the emergence of trust and a positive performance trend over time. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. Experimental evaluation of multiprocessor cache-based error recovery

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. K.

    1991-01-01

    Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, which differ in how they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes into the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollably high variability in the checkpoint interval.
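The core idea of cache-based checkpointing can be sketched abstractly: a checkpoint flushes dirty cache state to memory so that memory always holds a consistent rollback point. The sketch below is a deliberately simplified illustration under that assumption; it omits the three schemes' specific rollback-propagation-avoidance mechanisms and all class and method names are invented.

```python
# Highly simplified model of cache-based checkpointing for rollback
# recovery: memory holds the last committed (checkpointed) state, while
# the cache holds uncommitted writes that can still be discarded.
class CheckpointingCache:
    def __init__(self):
        self.cache = {}    # addr -> value, dirty (uncommitted) lines
        self.memory = {}   # committed state as of the last checkpoint

    def write(self, addr, value):
        self.cache[addr] = value          # dirty until next checkpoint

    def checkpoint(self):
        """Flush dirty lines: memory now holds a consistent recovery point."""
        self.memory.update(self.cache)
        self.cache.clear()

    def rollback(self):
        """On error, discard uncommitted writes; state reverts to checkpoint."""
        self.cache.clear()

c = CheckpointingCache()
c.write(0x10, 1)
c.checkpoint()          # value 1 committed to the recovery point
c.write(0x10, 2)        # speculative write after the checkpoint
c.rollback()            # error detected: speculative state discarded
assert c.memory[0x10] == 1
```

The variability in checkpoint interval noted in the abstract arises because, in the real schemes, checkpoints are triggered by cache events (such as replacement of a dirty line) rather than on a fixed schedule.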

  17. Dispensing behaviour of pharmacies in prescription drug markets.

    PubMed

    Guhl, Dennis; Stargardt, Tom; Schneider, Udo; Fischer, Katharina E

    2016-02-01

    We aim to investigate pharmacies' dispensing behaviour under the existing dispensing regulations in Germany. Using administrative data, we performed a cross-sectional retrospective study to analyse whether the competitive environment and pharmacy characteristics (i.e., organisation) lead to the dispensing choices aimed at by third-party payers. We specified generalised linear models with the share of imported pharmaceuticals, generic share, and share of preferred brands as dependent variables. The final dataset contained 49,260,902 prescriptions from 16,797 pharmacies. The average share of imported pharmaceuticals across the pharmacies was 18.4% (standard deviation (SD) 8.8), the average generic share was 92.8% (SD 2.1), and compliance with preferred brands was 81.3% (SD 5.9). Pharmacies with little competition used fewer imported pharmaceuticals (p<0.001), generics (p<0.001) and preferred brands (p<0.001); less organised pharmacies yielded similar results. The difference in outcomes between pharmacies in the first and fourth quartiles of the pharmacy organisation variable is 17.4% vs. 17.0% for share of imported pharmaceuticals, 92.8% vs. 92.7% for generic share and 81.9% vs. 81.1% for compliance with preferred brands. We show that pharmacies' dispensing choices meet the aims of payers at high levels. However, dispensing behaviour varies between pharmacies. Increasing competition among pharmacies and targeting pharmacies with high shares of bill auditing seem viable options for improving dispensing behaviour as defined by payers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing measurement results is valuable to give the user more facets for understanding performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid duplication of development and maintenance effort.
The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS), established by the Helmholtz Association of German Research Centres as a center of excellence focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

  20. Data Storage and sharing for the long tail of science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Pouchard, L.; Smith, P. M.

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service at Purdue University that provides capabilities for shared space within a group, shared applications, flexible access patterns and ease of transfer. We evaluate Depot as a solution for storing and sharing multi-terabytes of data produced in the long tail of science with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, as well as integrate their workflows into a High Performance Computing environment.

  1. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGES

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
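The shape of such a genetic-algorithm task reordering can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the fitness model (absolute slot distance as a stand-in for interconnect hops), the swap-mutation operator, and the function names are all assumptions made for the sketch.

```python
import random

def comm_cost(order, pairs):
    """Total inter-task distance for a placement (lower is better).
    order[slot] is the task placed on node slot 'slot'; hop count is
    modeled as absolute slot distance, a deliberate simplification of
    a real interconnect topology."""
    pos = {task: slot for slot, task in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in pairs)

def ga_reorder(n_tasks, pairs, pop=30, gens=200, seed=0):
    rng = random.Random(seed)
    # Seed the population with the scheduler's original (identity) order,
    # so the result can never be worse than the starting placement.
    population = [list(range(n_tasks))]
    population += [rng.sample(range(n_tasks), n_tasks) for _ in range(pop - 1)]
    for _ in range(gens):
        population.sort(key=lambda o: comm_cost(o, pairs))
        survivors = population[: pop // 2]           # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n_tasks), rng.randrange(n_tasks)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: comm_cost(o, pairs))

# Ring communication among 8 tasks: each task talks to its neighbour.
pairs = [(i, (i + 1) % 8) for i in range(8)]
best = ga_reorder(8, pairs)
assert comm_cost(best, pairs) <= comm_cost(list(range(8)), pairs)
```

The real optimization additionally folds the system's measured interconnect topology into the cost function, which is what makes the reordering effective on a machine like Titan.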

  2. CaLRS: A Critical-Aware Shared LLC Request Scheduling Algorithm on GPGPU

    PubMed Central

    Ma, Jianliang; Meng, Jinglei; Chen, Tianzhou; Wu, Minghui

    2015-01-01

    Ultra-high thread-level parallelism in modern GPUs usually generates numerous memory requests simultaneously, so there are always plenty of memory requests waiting at each bank of the shared LLC (L2 in this paper) and global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but we find that little work has focused on the service sequence at the shared LLC. We measured that requests from a large number of GPU applications queue at the LLC banks for service, which provides an opportunity to optimize the service order at the LLC. By adjusting the GPU memory request service order, we can improve the schedulability of the SMs. We therefore propose a critical-aware shared LLC request scheduling algorithm (CaLRS) in this paper. The priority representation of a memory request is critical for CaLRS. We use the number of memory requests that originate from the same warp but have not been serviced when they arrive at the shared LLC bank to represent the criticality of each warp. Experiments show that the proposed scheme can boost SM schedulability effectively by promoting the scheduling priority of memory requests with high criticality, and thereby indirectly improves the performance of the GPU. PMID:25729772
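A criticality-ordered service queue of this general flavour can be sketched briefly. The sketch is an assumption-laden illustration, not CaLRS itself: in particular, it assumes that warps with fewer outstanding requests are the more critical ones (serving them unblocks a warp soonest), which is one plausible reading of the criticality measure rather than the paper's stated rule.

```python
from collections import Counter

def criticality_order(pending):
    """Sketch of a criticality-aware service order for requests queued
    at an LLC bank. 'pending' is a list of (warp_id, req_id) tuples.
    Criticality is derived from the number of outstanding same-warp
    requests; we ASSUME fewer outstanding siblings => more critical,
    since serving that warp's requests unblocks it soonest."""
    outstanding = Counter(warp for warp, _ in pending)
    return sorted(pending, key=lambda wr: (outstanding[wr[0]], wr[1]))

# Warp w0 has three requests outstanding, warp w1 only two:
reqs = [("w0", 0), ("w1", 1), ("w0", 2), ("w0", 3), ("w1", 4)]
order = criticality_order(reqs)
assert order[0][0] == "w1"   # w1's requests are served first
```

The point of any such ordering is that a warp cannot be rescheduled on its SM until all of its outstanding memory requests complete, so the service order at the LLC bank directly shapes SM schedulability.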

  3. Combination of Sharing Matrix and Image Encryption for Lossless (k,n)-Secret Image Sharing.

    PubMed

    Bao, Long; Yi, Shuang; Zhou, Yicong

    2017-12-01

    This paper first introduces a (k,n)-sharing matrix S(k,n) and its generation algorithm. Mathematical analysis is provided to show its potential for secret image sharing. Combining the sharing matrix with image encryption, we further propose a lossless (k,n)-secret image sharing scheme (SMIE-SIS). With no fewer than k shares, all the ciphertext information and the security key can be reconstructed, resulting in a lossless recovery of the original information; this is proved by the correctness and security analysis. Performance evaluation and security analysis demonstrate that the proposed SMIE-SIS with arbitrary settings of k and n has at least five advantages: 1) it is able to fully recover the original image without any distortion; 2) it has much lower pixel expansion than many existing methods; 3) its computation cost is much lower than that of polynomial-based secret image sharing methods; 4) it is able to verify and detect a fake share; and 5) even using the same original image with the same initial parameter settings, every execution of SMIE-SIS generates completely different secret shares that are unpredictable and non-repetitive. This property offers SMIE-SIS a high level of security to withstand many different attacks.
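For contrast with the sharing-matrix approach, the polynomial-based (k,n) scheme the abstract compares against (classic Shamir-style threshold sharing) can be shown compactly: any k of the n shares reconstruct the secret via Lagrange interpolation, while fewer reveal nothing. This is the baseline method, not SMIE-SIS, and it illustrates the interpolation cost that the sharing-matrix scheme avoids.

```python
import random

P = 257  # prime field large enough to hold byte values

def make_shares(secret, k, n, rng=random.Random(42)):
    """Classic polynomial-based (k,n) threshold sharing: the secret is
    the constant term of a random degree-(k-1) polynomial over GF(P)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    k shares (the polynomial-evaluation cost SMIE-SIS avoids)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(123, k=3, n=5)
assert reconstruct(shares[:3]) == 123   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 123
```

Applied pixel-by-pixel to an image, this baseline is lossless but computationally heavier than matrix-based sharing, which is the trade-off the paper targets.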

  4. Mission Possible: Measuring Critical Thinking and Problem Solving

    ERIC Educational Resources Information Center

    Wren, Doug; Cashwell, Amy

    2018-01-01

    The author describes how Virginia Beach City Public Schools developed a performance assessment that they administer to all 4th graders, 7th graders, and high school students in the district. He describes lessons learned about creating good performance tasks and developing a successful scoring process, as well as sharing tools connected to this…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  6. Relationships between core factors of knowledge management in hospital nursing organisations and outcomes of nursing performance.

    PubMed

    Lee, Eun Ju; Kim, Hong Soon; Kim, Hye Young

    2014-12-01

    The study was conducted to investigate the levels of implementation of knowledge management and outcomes of nursing performance, to examine the relationships between core knowledge management factors and nursing performance outcomes and to identify core knowledge management factors affecting these outcomes. Effective knowledge management is very important to achieve strong organisational performance. The success or failure of knowledge management depends on how effectively an organisation's members share and use their knowledge. Because knowledge management plays a key role in enhancing nursing performance, identifying the core factors and investigating the level of knowledge management in a given hospital are priorities to ensure a high quality of nursing for patients. The study employed a descriptive research procedure. The study sample consisted of 192 nurses registered in three large healthcare organisations in South Korea. The variables demographic characteristics, implementation of core knowledge management factors and outcomes of nursing performance were examined and analysed in this study. The relationships between the core knowledge management factors and outcomes of nursing performance as well as the factors affecting the performance outcomes were investigated. A knowledge-sharing culture and organisational learning were found to be core factors affecting nursing performance. The study results provide basic data that can be used to formulate effective knowledge management strategies for enhancing nursing performance in hospital nursing organisations. In particular, prioritising the adoption of a knowledge-sharing culture and organisational learning in knowledge management systems might be one method for organisations to more effectively manage their knowledge resources and thus to enhance the outcomes of nursing performance and achieve greater business competitiveness. 
The study results can contribute to the development of effective and efficient knowledge management systems and strategies for enhancing knowledge-sharing culture and organisational learning that can improve both the productivity and competitiveness of healthcare organisations. © 2014 John Wiley & Sons Ltd.

  7. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared-memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, capabilities of the toolkit, and discusses its evolution.
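The get/compute/put pattern at the heart of this model can be illustrated with a toy stand-in: a logically shared array is physically distributed in chunks, and the programmer explicitly copies blocks between the global address space and local storage. The class and method names below are invented for illustration and are not the real Global Arrays API.

```python
# Toy model of the Global Arrays get/compute/put pattern: data locality
# is managed explicitly by moving blocks between global and local storage.
class ToyGlobalArray:
    def __init__(self, size, nprocs):
        self.chunk = size // nprocs
        # Each "process" owns one physical chunk of the logical array.
        self.chunks = [[0.0] * self.chunk for _ in range(nprocs)]

    def get(self, lo, hi):
        """Copy the global range [lo, hi) into local storage."""
        return [self.chunks[i // self.chunk][i % self.chunk]
                for i in range(lo, hi)]

    def put(self, lo, block):
        """Write a local block back into the global address space."""
        for off, v in enumerate(block):
            i = lo + off
            self.chunks[i // self.chunk][i % self.chunk] = v

ga = ToyGlobalArray(size=8, nprocs=2)
local = ga.get(2, 6)                 # get: global -> local (spans 2 chunks)
local = [v + 1.0 for v in local]     # compute on local data only
ga.put(2, local)                     # put: local -> global
assert ga.get(0, 8) == [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
```

The explicit `get`/`put` boundary is what lets the programmer see, and therefore minimize, remote data transfer, in contrast with transparent DSM systems.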

  8. A Metric to Quantify Shared Visual Attention in Two-Person Teams

    NASA Technical Reports Server (NTRS)

    Gontar, Patrick; Mulligan, Jeffrey B.

    2015-01-01

    Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking.
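One simple way to turn gaze tracking into a shared-attention number is the fraction of time samples in which the two team members' gaze points fall within some radius of each other. The metric below is an illustrative assumption, not necessarily the measure the authors use; the function name and the radius parameter are invented for the sketch.

```python
def shared_attention(gaze_a, gaze_b, radius=1.0):
    """Fraction of simultaneous gaze samples within 'radius' of each
    other -- a simple proxy for shared visual attention. High values
    suggest joint problem-solving; low values suggest divided work."""
    hits = sum(1 for (xa, ya), (xb, yb) in zip(gaze_a, gaze_b)
               if (xa - xb) ** 2 + (ya - yb) ** 2 <= radius ** 2)
    return hits / len(gaze_a)

# Two short gaze traces (x, y), sampled at the same instants:
a = [(0, 0), (1, 1), (5, 5), (2, 2)]
b = [(0.5, 0), (1, 1), (0, 0), (2, 2.5)]
assert shared_attention(a, b) == 0.75   # 3 of 4 samples overlap
```

Under this kind of metric, the two team strategies would separate as high overlap (working the same problem together) versus low overlap (dividing tasks into parallel work).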

  9. Trends in life science grid: from computing grid to knowledge grid.

    PubMed

    Konagaya, Akihiko

    2006-12-18

    Grid computing has great potential to become a standard cyberinfrastructure for life sciences which often require high-performance computing and large data handling which exceeds the computing capacity of a single institution. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have been matured enough to solve high-throughput real-world life scientific problems. Data grid technologies are strong candidates for realizing "resourceome" for bioinformatics. Knowledge grids should be designed not only from sharing explicit knowledge on computers but also from community formulation for sharing tacit knowledge among a community. Extending the concept of grid from computing grid to knowledge grid, it is possible to make use of a grid as not only sharable computing resources, but also as time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  10. The highs and lows of theoretical interpretation in animal-metacognition research

    PubMed Central

    Smith, J. David; Couchman, Justin J.; Beran, Michael J.

    2012-01-01

    Humans feel uncertain. They know when they do not know. These feelings and the responses to them ground the research literature on metacognition. It is a natural question whether animals share this cognitive capacity, and thus animal metacognition has become an influential research area within comparative psychology. Researchers have explored this question by testing many species using perception and memory paradigms. There is an emerging consensus that animals share functional parallels with humans’ conscious metacognition. Of course, this research area poses difficult issues of scientific inference. How firmly should we hold the line in insisting that animals’ performances are low-level and associative? How high should we set the bar for concluding that animals share metacognitive capacities with humans? This area offers a constructive case study for considering theoretical problems that often confront comparative psychologists. The authors present this case study and address diverse issues of scientific judgement and interpretation within comparative psychology. PMID:22492748

  11. Trends in life science grid: from computing grid to knowledge grid

    PubMed Central

    Konagaya, Akihiko

    2006-01-01

    Background Grid computing has great potential to become a standard cyberinfrastructure for life sciences which often require high-performance computing and large data handling which exceeds the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have been matured enough to solve high-throughput real-world life scientific problems. Data grid technologies are strong candidates for realizing "resourceome" for bioinformatics. Knowledge grids should be designed not only from sharing explicit knowledge on computers but also from community formulation for sharing tacit knowledge among a community. Conclusion Extending the concept of grid from computing grid to knowledge grid, it is possible to make use of a grid as not only sharable computing resources, but also as time and place in which people work together, create knowledge, and share knowledge and experiences in a community. PMID:17254294

  12. Enhancing Application Performance Using Mini-Apps: Comparison of Hybrid Parallel Programming Paradigms

    NASA Technical Reports Server (NTRS)

    Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana

    2017-01-01

    In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
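The two-level decomposition these mini-apps explore can be sketched conceptually: coarse-grained ranks at the distributed-memory (MPI) level, each spawning fine-grained shared-memory workers at the SMPI/OpenMP level. The sketch below uses pure-Python stand-ins (no real MPI) and invented names; it shows the structure of the hybrid split, not VULCAN's code.

```python
# Conceptual sketch of hybrid parallelism: each "rank" owns a partition
# of the global domain (distributed-memory level) and processes it with
# shared-memory threads (the OpenMP/SMPI level).
from concurrent.futures import ThreadPoolExecutor

def rank_work(data, n_threads=2):
    """One rank's share of the computation, split across threads."""
    def thread_work(chunk):
        return sum(x * x for x in chunk)   # stand-in for real flow analysis
    half = len(data) // 2
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = pool.map(thread_work, [data[:half], data[half:]])
    return sum(parts)

# Two "ranks", each owning a contiguous partition of the global domain.
domain = list(range(8))
results = [rank_work(domain[r * 4:(r + 1) * 4]) for r in range(2)]
assert sum(results) == sum(x * x for x in domain)
```

The reported trade-off, MPI+SMPI fastest but most invasive, reflects that sharing memory directly between MPI processes removes copies but requires restructuring how each rank's data is laid out.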

  13. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Peregrine's compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless. Directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory and are connected to the high-speed InfiniBand network.

  14. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next-generation military aircraft require high performance from the airborne management system. General modules, data integration, a high-speed data bus and so on are needed to share and manage the information of the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system and so on. The unattached or mixed architecture is replaced by an integrated architecture, meaning the whole airborne system is managed as one system. The physical devices are thus distributed, but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and the signal processing functions are shared. This also lays a foundation for shared power. We establish a distributed vehicle management system using a 1553B bus and distributed processors, which provides a validation platform for research on integrated management of airborne systems. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyses the communication and fault-tolerance methods.

  15. Global Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, creating a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes, which was being developed concurrently.

  16. Future Challenges in Managing Human Health and Performance Risks for Space Flight

    NASA Technical Reports Server (NTRS)

    Corbin, Barbara J.; Barratt, Michael

    2013-01-01

    The global economy forces many nations to consider their national investments and make difficult decisions regarding their investment in future exploration. To enable safe, reliable, and productive human space exploration, we must pool global resources to understand and mitigate human health & performance risks prior to embarking on human exploration of deep space destinations. Consensus on the largest risks to humans during exploration is required to develop an integrated approach to mitigating risks. International collaboration in human space flight research will focus research on characterizing the effects of spaceflight on humans and the development of countermeasures or systems. Sharing existing data internationally will facilitate high quality research and sufficient power to make sound recommendations. Efficient utilization of ISS and unique ground-based analog facilities allows greater progress. Finally, a means to share results of human research in time to influence decisions for follow-on research, system design, new countermeasures and medical practices should be developed. Although these barriers are formidable, international working groups are working to define the risks, establish international research opportunities, share data among partners, share flight hardware and unique analog facilities, and establish forums for the timely exchange of results. Representatives from the ISS partnership research and medical communities developed a list of the top ten human health & performance risks and their impact on exploration missions. They also drafted a multilateral data sharing plan to establish guidelines and principles for sharing human spaceflight data. Other working groups are also developing methods to promote international research solicitations. Collaborative use of analog facilities and shared development of space flight research and medical hardware continues.
Establishing a forum for exchange of results between researchers, aerospace physicians and program managers takes careful consideration of researcher concerns and decision maker needs. Active participation by researchers in the development of this forum is essential, and the benefit can be tremendous. The ability to rapidly respond to research results without compromising publication rights and intellectual property will facilitate timely reduction in human health and performance risks in support of international exploration missions.

  17. Applications considerations in the system design of highly concurrent multiprocessors

    NASA Technical Reports Server (NTRS)

    Lundstrom, Stephen F.

    1987-01-01

    A flow model processor approach to parallel processing is described, using very-high-performance individual processors, high-speed circuit switched interconnection networks, and a high-speed synchronization capability to minimize the effect of the inherently serial portions of applications on performance. Design studies related to the determination of the number of processors, the memory organization, and the structure of the networks used to interconnect the processor and memory resources are discussed. Simulations indicate that applications centered on the large shared data memory should be able to sustain over 500 million floating point operations per second.
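
    The abstract's concern with "the inherently serial portions of applications" is conventionally quantified by Amdahl's law (a standard result, not taken from this paper): with serial fraction s and p processors, the attainable speedup is 1 / (s + (1 - s)/p). A short sketch:

```python
# Amdahl's law: speedup attainable with p processors when a fraction s of
# the work is inherently serial. Standard formula, included to quantify the
# effect of serial portions that the abstract refers to.

def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even a 1% serial fraction caps the speedup of a large machine:
print(round(amdahl_speedup(0.01, 512), 1))   # -> 83.8, far below 512
print(round(amdahl_speedup(0.0, 512), 1))    # -> 512.0 for perfectly parallel work
```

    This is why the design described above pairs fast interconnects with a high-speed synchronization capability: shrinking the effective serial fraction matters more than adding processors.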

  18. Implementing High-Performance Geometric Multigrid Solver with Naturally Grained Messages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; Zheng, Yili

    2015-10-26

    Structured-grid linear solvers often require manual packing and unpacking of communication data to achieve high performance. Orchestrating this process efficiently is challenging, labor-intensive, and potentially error-prone. In this paper, we explore an alternative approach that communicates the data with naturally grained message sizes, without manual packing and unpacking. This approach is the distributed analogue of shared-memory programming, taking advantage of the global address space in PGAS languages to provide substantial programming ease. However, its performance may suffer from the large number of small messages. We investigate the runtime support required in the UPC++ library for this naturally grained version to close the performance gap between the two approaches and attain comparable performance at scale, using the High-Performance Geometric Multigrid (HPGMG-FV) benchmark as a driver.
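
    The packed-versus-natural trade-off the abstract describes can be illustrated without any PGAS machinery. This pure-Python toy is not UPC++; the `send` callback below merely stands in for a one-sided communication operation, to contrast one packed message with per-element naturally grained messages:

```python
# Contrast sketched in the abstract: manually packing ghost-zone data into one
# message vs. sending each element at its natural granularity.

def exchange_packed(halo, send):
    # Manual pack: copy the boundary into one buffer, "send" it once.
    buf = list(halo)                # pack
    send(buf)                       # one large message
    return buf                      # unpack on the receiver

def exchange_natural(halo, send):
    # Naturally grained: one small message per element, no pack/unpack code.
    out = []
    for x in halo:
        send([x])                   # many small messages
        out.append(x)
    return out

messages = []
halo = [1.0, 2.0, 3.0, 4.0]
exchange_packed(halo, messages.append)
n_packed = len(messages)
messages.clear()
exchange_natural(halo, messages.append)
print(n_packed, len(messages))      # -> 1 4: same data, different message counts
```

    The paper's contribution is runtime support that makes the second style competitive despite its higher message count; the toy only shows why the gap exists in the first place.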

  19. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. 
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324

  20. Neuroinformatics Database (NiDB) – A Modular, Portable Database for the Storage, Analysis, and Sharing of Neuroimaging Data

    PubMed Central

    Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.

    2013-01-01

    We present a modular, high performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing, and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use, providing a cost effective simple locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized definition of a data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507

  1. Study of parameters of the nearest neighbour shared algorithm on clustering documents

    NASA Astrophysics Data System (ADS)

    Mustika Rukmi, Alvida; Budi Utomo, Daryono; Imro’atus Sholikhah, Neni

    2018-03-01

    Document clustering is one way of automatically managing documents, extracting document topics, and quickly filtering information. Preprocessing for document clustering is performed with text mining and consists of keyword extraction using Rapid Automatic Keyphrase Extraction (RAKE) and representation of each document as a concept vector using Latent Semantic Analysis (LSA). The clustering process then groups documents with similar topics into the same cluster, based on this preprocessing. The Shared Nearest Neighbour (SNN) algorithm is a clustering method based on the number of "nearest neighbors" that documents share. Its parameters are: k, the number of nearest-neighbor documents; ε, the required number of shared nearest-neighbor documents; and MinT, the minimum number of similar documents that can form a cluster. The SNN algorithm is characterized by these shared-neighbor properties: each cluster is formed around keywords shared by its documents, and a cluster may be built on more than one keyword if those keywords appear frequently in the documents. The parameter values affect the clustering results. A higher k increases the number of neighbor documents for each document, so the similarity among neighboring documents is lower, and the accuracy of each cluster is also lower. A higher ε causes each document to retain only neighbors with high similarity when building a cluster, which also leaves more documents unclassified (noise). A higher MinT decreases the number of clusters, since fewer than MinT similar documents cannot form a cluster. The parameters thus determine clustering performance and the amount of noise (unclustered documents). The Silhouette coefficient is above 0.9 and nearly identical across many experiments with different parameter values, indicating that the SNN algorithm works well.
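
    A minimal sketch of the SNN procedure with the three parameters named above (k; ε, here `eps`; MinT, here `min_t`) may help. This is an illustrative implementation on toy 2-D points, not the authors' code; a document application would cluster LSA concept vectors instead of coordinates:

```python
# Minimal Shared Nearest Neighbour (SNN) sketch: points are linked when they
# are mutual k-nearest neighbors sharing at least eps neighbors; groups of
# fewer than min_t points are left as noise (label -1).

import math

def knn(points, k):
    nbrs = []
    for i, p in enumerate(points):
        order = sorted(range(len(points)),
                       key=lambda j: math.dist(p, points[j]))
        nbrs.append(set(order[1:k + 1]))   # skip the point itself
    return nbrs

def snn_clusters(points, k=3, eps=1, min_t=2):
    nbrs = knn(points, k)
    n = len(points)
    labels = [-1] * n                      # -1 = noise / unclustered
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        group = [i]
        for j in range(n):
            if (j != i and labels[j] == -1
                    and j in nbrs[i] and i in nbrs[j]
                    and len(nbrs[i] & nbrs[j]) >= eps):
                group.append(j)
        if len(group) >= min_t:            # MinT: too-small groups stay noise
            for m in group:
                labels[m] = cluster
            cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(snn_clusters(pts))                   # -> [0, 0, 0, 1, 1, 1]: two clusters
```

    Raising `eps` or `min_t` in this sketch reproduces the trends the abstract reports: stricter shared-neighbor requirements leave more points labeled -1 (noise), and larger `min_t` yields fewer clusters.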

  2. NASA Human Health and Performance Center (NHHPC)

    NASA Technical Reports Server (NTRS)

    Davis, J. R.; Richard, E. E.

    2010-01-01

    The NASA Human Health and Performance Center (NHHPC) will provide a collaborative and virtual forum to integrate all disciplines of the human system to address spaceflight, aviation, and terrestrial human health and performance topics and issues. The NHHPC will serve a vital role as integrator, convening members to share information and capture a diverse knowledge base, while allowing the parties to collaborate on the most important human health and performance topics of interest to members. The Center and its member organizations will address high-priority risk reduction strategies, including research and technology development, improved medical and environmental health diagnostics and therapeutics, and state-of-the-art design approaches for human factors and habitability. Once fully established in 2011, the NHHPC will undertake a number of collaborative projects in human health and performance, including workshops, education and outreach, information sharing and knowledge management, and research and technology development projects, to advance the study of the human system for spaceflight and other national and international priorities.

  3. Evaluation of 2 cognitive abilities tests in a dual-task environment

    NASA Technical Reports Server (NTRS)

    Vidulich, M. A.; Tsang, P. S.

    1986-01-01

    Most real-world operators are required to perform multiple tasks simultaneously. In some cases, such as flying a high-performance aircraft or troubleshooting a failing nuclear power plant, the operator's ability to time-share or "process in parallel" can be driven to extremes. This has created interest in selection tests of cognitive abilities. Two tests that have been suggested are the Dichotic Listening Task and the Cognitive Failures Questionnaire. Correlations between these test results and time-sharing performance were obtained, and the validity of these tests was examined. The primary task was a tracking task with dynamically varying bandwidth, performed either alone or concurrently with either another tracking task or a spatial transformation task. The results were: (1) an unexpected negative correlation was detected between the two tests; (2) the lack of correlation between either test and task performance made the predictive utility of the test scores appear questionable; (3) pilots made more errors on the Dichotic Listening Task than college students.

  4. Corporate Characteristics and Internal Control Information Disclosure: Evidence from Annual Reports in 2009 of Listed Companies in Shenzhen Stock Exchange

    NASA Astrophysics Data System (ADS)

    Xiaowen, Song

    Under the research framework of internal control disclosure, and in light of the current economic situation, this paper empirically analyzes the relationship between corporate characteristics and internal control information disclosure. The sample comprises 647 A-share companies listed on the Shenzhen Stock Exchange in 2009. The results show: (1) companies with excellent performance and high liquidity tend to disclose more internal control information; (2) companies with high leverage, and those that have also issued B shares, are less willing to disclose internal control information; (3) company size and the engagement of Big Four accounting firms have no significant effect on internal control disclosure.

  5. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G; Salapura, Valentina

    2014-12-02

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device generating signals representing occurrences of events in that device, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.
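
    The shared-counter idea can be sketched abstractly. The following toy uses invented names throughout (the patent concerns hardware counters, not Python objects); it shows one counter pool recording named events from several cores and from a non-processor device, which is the sharing pattern the abstract describes:

```python
# Toy model of a shared performance-monitoring unit (PMU): one pool of
# counters shared by several "cores", each of which signals named events.
# Illustration only; the patented design is hardware.

from collections import Counter

class SharedPMU:
    def __init__(self):
        self.counts = Counter()            # the single shared counter resource

    def record(self, source, event):
        # Events may come from processor cores or non-processor devices.
        self.counts[(source, event)] += 1

    def total(self, event):
        # Aggregate an event across all sources sharing the PMU.
        return sum(v for (src, ev), v in self.counts.items() if ev == event)

pmu = SharedPMU()
for core in ("core0", "core1", "core2"):
    pmu.record(core, "cache_miss")
pmu.record("network_dma", "packet_drop")   # non-processor device event

print(pmu.total("cache_miss"))             # -> 3
print(pmu.counts[("network_dma", "packet_drop")])  # -> 1
```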

  6. Development of a Performance Assessment Task and Rubric to Measure Prospective Secondary School Mathematics Teachers' Pedagogical Content Knowledge and Skills

    ERIC Educational Resources Information Center

    Koirala, Hari P.; Davis, Marsha; Johnson, Peter

    2008-01-01

    The purpose of this paper is to share a performance assessment task and rubric designed to assess secondary school mathematics preservice teachers' pedagogical content knowledge and skills. The assessment task and rubric were developed in collaboration with five education faculty, four arts and sciences faculty, and four high school teachers over…

  7. Using VirtualGL/TurboVNC Software on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Using VirtualGL/TurboVNC software on the Peregrine system allows users to access and share large-memory visualization nodes with high-end graphics processing units. This may work better than plain X11 forwarding when connecting from a remote site with low bandwidth.

  8. pFlogger: The Parallel Fortran Logging Utility

    NASA Technical Reports Server (NTRS)

    Clune, Tom; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or 'logger') similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger - a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.
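
    pFlogger itself is a Fortran framework, but the logger concept the abstract argues for can be illustrated with Python's standard logging module: severity levels let routine diagnostics be filtered out, and a per-process annotation (a plain variable below, standing in for an MPI rank) attributes each line in a parallel run:

```python
# Sketch of the logging-facility concept using Python's stdlib logging module.
# The "rank" is a plain variable standing in for an MPI rank, not real MPI.

import io
import logging

rank = 0                                   # stand-in for an MPI rank
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    f"rank {rank} | %(levelname)s | %(name)s | %(message)s"))

log = logging.getLogger("solver")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.debug("residual history: ...")         # suppressed: below the INFO threshold
log.info("iteration 10: residual 1.2e-6")  # emitted with rank and severity
log.warning("mass conservation drift detected")

print(stream.getvalue())
```

    Compared with ad hoc print statements, this buys exactly what the abstract lists: filterable severity, named subsystems, and per-process attribution, all at low overhead when a level is disabled.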

  9. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; A compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data-driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed-memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  10. Outperforming whom? A multilevel study of performance-prove goal orientation, performance, and the moderating role of shared team identification.

    PubMed

    Dietz, Bart; van Knippenberg, Daan; Hirst, Giles; Restubog, Simon Lloyd D

    2015-11-01

    Performance-prove goal orientation affects performance because it drives people to try to outperform others. A proper understanding of the performance-motivating potential of performance-prove goal orientation requires, however, that we consider the question of whom people desire to outperform. In a multilevel analysis of this issue, we propose that the shared team identification of a team plays an important moderating role here, directing the performance-motivating influence of performance-prove goal orientation to either the team level or the individual level of performance. A multilevel study of salespeople nested in teams supports this proposition, showing that performance-prove goal orientation motivates team performance more with higher shared team identification, whereas performance-prove goal orientation motivates individual performance more with lower shared team identification. Establishing the robustness of these findings, a second study replicates them with individual and team performance in an educational context. (c) 2015 APA, all rights reserved.

  11. Team Knowledge Sharing Intervention Effects on Team Shared Mental Models and Student Performance in an Undergraduate Science Course

    ERIC Educational Resources Information Center

    Sikorski, Eric G.; Johnson, Tristan E.; Ruscher, Paul H.

    2012-01-01

    The purpose of this study was to examine the effects of a shared mental model (SMM) based intervention on student team mental model similarity and ultimately team performance in an undergraduate meteorology course. The team knowledge sharing (TKS) intervention was designed to promote team reflection, communication, and improvement planning.…

  12. The Relationship between Shared Mental Models and Task Performance in an Online Team-Based Learning Environment

    ERIC Educational Resources Information Center

    Johnson, Tristan E.; Lee, Youngmin

    2008-01-01

    In an effort to better understand learning teams, this study examines the effects of shared mental models on team and individual performance. The results indicate that each team's shared mental model changed significantly over the time that subjects participated in team-based learning activities. The results also showed that the shared mental…

  13. Exploiting GPUs in Virtual Machine for BioCloud

    PubMed Central

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can move to the cloud to enhance their computational performance and tap effectively unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, and such designs cannot achieve sufficient performance for biological applications, for which computational throughput matters more than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel: by letting each VM access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM on demand, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465

  14. Exploiting GPUs in virtual machine for BioCloud.

    PubMed

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can move to the cloud to enhance their computational performance and tap effectively unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, and such designs cannot achieve sufficient performance for biological applications, for which computational throughput matters more than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel: by letting each VM access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM on demand, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.

  15. General Recommendations on Fatigue Risk Management for the Canadian Forces

    DTIC Science & Technology

    2010-04-01

    missions performed in aviation require an individual(s) to process large amount of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual’s ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share

  16. Fault tolerant onboard packet switch architecture for communication satellites: Shared memory per beam approach

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.

    1994-01-01

    The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.

  17. Pharmaceutical research in the Kingdom of Saudi Arabia: A scientometric analysis during 2001–2010

    PubMed Central

    Alhaider, Ibrahim; Mueen Ahmed, K.K.; Gupta, B.M.

    2013-01-01

    This study examines the performance of Saudi Arabia in pharmaceutical science research using quantitative and qualitative measures. It analyzes the productivity, global publication share, and rank of the top 15 countries, and studies Saudi Arabia’s publication output, growth, and citation quality; its international collaborative publication share and most important collaborating partners; the contribution and citation impact of its top 15 organizations and authors; the productivity patterns of its top publishing journals; and the characteristics of its highly cited papers. PMID:26106268

  18. Evaluating the effects of trophic complexity on a keystone predator by disassembling a partial intraguild predation food web.

    PubMed

    Davenport, Jon M; Chalcraft, David R

    2012-01-01

    1. Many taxa can be found in food webs that differ in trophic complexity, but it is unclear how trophic complexity affects the performance of particular taxa. In pond food webs, larvae of the salamander Ambystoma opacum occupy the intermediate predator trophic position in a partial intraguild predation (IGP) food web and can function as keystone predators. Larval A. opacum are also found in simpler food webs lacking either top predators or shared prey. 2. We conducted an experiment where a partial IGP food web was simplified, and we measured the growth and survival of larval A. opacum in each set of food webs. Partial IGP food webs that had either a low abundance or high abundance of total prey were also simplified by independently removing top predators and/or shared prey. 3. Removing top predators always increased A. opacum survival, but removal of shared prey had no effect on A. opacum survival, regardless of total prey abundance. 4. Surprisingly, food web simplification had no effect on the growth of A. opacum when present in food webs with a low abundance of prey but had important effects on A. opacum growth in food webs with a high abundance of prey. Simplifying a partial IGP food web with a high abundance of prey reduced A. opacum growth when either top predators or shared prey were removed from the food web and the loss of top predators and shared prey influenced A. opacum growth in a non-additive fashion. 5. The non-additive response in A. opacum growth appears to be the result of supplemental prey availability augmenting the beneficial effects of top predators. Top predators had a beneficial effect on A. opacum populations by reducing the abundance of A. opacum present and thereby reducing the intensity of intraspecific competition. 6. Our study indicates that the effects of food web simplification on the performance of A. opacum are complex and depend on both how a partial IGP food web is simplified and how abundant prey are in the food web. 
These findings are important because they demonstrate how trophic complexity can create variation in the performance of intermediate predators that play important roles in temporary pond food webs. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.

  19. Using Python on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Python was not designed for use in a shared computing environment. For example, an environment.yml file can be created on the developer's laptop and used on the Peregrine system.

  20. Teaching Dispositions: Shared Understanding for Teacher Preparation

    ERIC Educational Resources Information Center

    DeMuth, Lynn

    2012-01-01

    This qualitative phenomenological study explored the perceptions of 16 high-performing teachers related to teaching dispositions, effects of dispositions on teaching and learning, and recommendations for assessment of teaching dispositions during teacher preparation. Participants' perceptions were gathered using six guided interview questions…

  1. In Search of Joy in Practice: A Report of 23 High-Functioning Primary Care Practices

    PubMed Central

    Sinsky, Christine A.; Willard-Grace, Rachel; Schutzbank, Andrew M.; Sinsky, Thomas A.; Margolius, David; Bodenheimer, Thomas

    2013-01-01

    We highlight primary care innovations gathered from high-functioning primary care practices, innovations we believe can facilitate joy in practice and mitigate physician burnout. To do so, we made site visits to 23 high-performing primary care practices and focused on how these practices distribute functions among the team, use technology to their advantage, improve outcomes with data, and make the job of primary care feasible and enjoyable as a life’s vocation. Innovations identified include (1) proactive planned care, with previsit planning and previsit laboratory tests; (2) sharing clinical care among a team, with expanded rooming protocols, standing orders, and panel management; (3) sharing clerical tasks with collaborative documentation (scribing), nonphysician order entry, and streamlined prescription management; (4) improving communication by verbal messaging and in-box management; and (5) improving team functioning through co-location, team meetings, and work flow mapping. Our observations suggest that a shift from a physician-centric model of work distribution and responsibility to a shared-care model, with a higher level of clinical support staff per physician and frequent forums for communication, can result in high-functioning teams, improved professional satisfaction, and greater joy in practice. PMID:23690328

  2. Influence of Lift Offset on Rotorcraft Performance

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2009-01-01

    The influence of lift offset on the performance of several rotorcraft configurations is explored. A lift-offset rotor, or advancing blade concept, is a hingeless rotor that can attain good efficiency at high speed by operating with more lift on the advancing side than on the retreating side of the rotor disk. The calculated performance capability of modern-technology coaxial rotors utilizing a lift offset is examined, including rotor performance optimized for hover and high-speed cruise. The ideal induced power loss of coaxial rotors in hover and twin rotors in forward flight is presented. The aerodynamic modeling requirements for performance calculations are evaluated, including wake and drag models for the high-speed flight condition. The influence of configuration on the performance of rotorcraft with lift-offset rotors is explored, considering tandem and side-by-side rotorcraft as well as wing-rotor lift share.

  3. Influence of Lift Offset on Rotorcraft Performance

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2008-01-01

    The influence of lift offset on the performance of several rotorcraft configurations is explored. A lift-offset rotor, or advancing blade concept, is a hingeless rotor that can attain good efficiency at high speed by operating with more lift on the advancing side than on the retreating side of the rotor disk. The calculated performance capability of modern-technology coaxial rotors utilizing a lift offset is examined, including rotor performance optimized for hover and high-speed cruise. The ideal induced power loss of coaxial rotors in hover and twin rotors in forward flight is presented. The aerodynamic modeling requirements for performance calculations are evaluated, including wake and drag models for the high-speed flight condition. The influence of configuration on the performance of rotorcraft with lift-offset rotors is explored, considering tandem and side-by-side rotorcraft as well as wing-rotor lift share.

  4. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared-address-space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data-locality overhead.
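    The kind of model described can be illustrated with an Amdahl-style formula extended by an architecture-overhead term. This is a generic sketch, not the paper's actual model; the `overhead_per_proc` parameter is a hypothetical stand-in for ccNUMA data-locality and synchronization costs.

```python
def predicted_speedup(t_seq, parallel_fraction, p, overhead_per_proc=0.0):
    """Predicted speedup on p processors of code parallelized with
    compiler directives: serial remainder + ideally divided parallel
    work + an architecture-specific overhead that grows with p.
    """
    t_serial = t_seq * (1.0 - parallel_fraction)   # unparallelized part
    t_parallel = t_seq * parallel_fraction / p     # ideally divided work
    t_overhead = overhead_per_proc * p             # e.g. remote-memory traffic
    return t_seq / (t_serial + t_parallel + t_overhead)

# With zero overhead this reduces to Amdahl's law; a nonzero overhead
# term makes realized speedup fall short of the ideal prediction, the
# qualitative effect the paper attributes to data-locality costs.
```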

  5. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subbash; Ciotti, Robert

    2006-01-01

    We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnect of these systems as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.
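    Point-to-point measurements like these are commonly summarized with the two-parameter Hockney model, t(m) = α + m/β (startup latency plus message size over asymptotic bandwidth). The sketch below fits the model from two measurements; the timings in the usage note are invented for illustration, not the paper's data.

```python
def fit_latency_bandwidth(m1, t1, m2, t2):
    """Fit the Hockney model t(m) = alpha + m / beta to two
    (message size in bytes, transfer time in seconds) measurements.
    Returns (alpha, beta): startup latency and asymptotic bandwidth.
    Real benchmark suites fit many message sizes by least squares;
    two points keep the sketch minimal.
    """
    slope = (t2 - t1) / (m2 - m1)   # seconds per byte = 1 / bandwidth
    alpha = t1 - m1 * slope         # zero-byte intercept = startup latency
    return alpha, 1.0 / slope
```

    For example, hypothetical timings of 2.5 µs for a 1 KiB message and 330 µs for 1 MiB imply a latency near 2 µs and a bandwidth near 3.2 GB/s, the order of the Altix link speed quoted above.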

  6. CHARACTERIZATION OF EMISSIONS FROM HAND-HELD TWO-STROKE ENGINES

    EPA Science Inventory

    Despite their extremely high organic and particulate matter emission rates, two-stroke engines remain among the least studied of engine types. Such studies are rare because they are costly to perform. Results reported in this paper were obtained using a facility that shares e...

  7. Expanding Bicycle-Sharing Systems: Lessons Learnt from an Analysis of Usage

    PubMed Central

    Zhang, Ying; Thomas, Tom; Brussel, M. J. G.; van Maarseveen, M. F. A. M.

    2016-01-01

    Bike-sharing programs, with initiatives to increase bike use and improve the accessibility of urban transit, have received increasing attention in a growing number of cities across the world. The latest generation of bike-sharing systems employs smart card technology that produces station-based or trip-level data, which facilitates studies of the practical use of these systems. However, few studies have paid attention to changes in users and system usage over the years, or to the impact of system expansion on usage. Monitoring changes in system usage over the years enables the identification of system performance and can serve as an input for improving the location-allocation of stations. The objective of this study is to explore the impact of the expansion of a bicycle-sharing system on its usage. This was done for a bicycle-sharing system in Zhongshan (China), using operational usage data from different years following system expansion. To this end, we performed statistical and spatial analyses to examine the changes in both users and system usage before and after the expansion. The findings show a large variation in users and aggregate usage following the expansion. However, the spatial distribution of demand shows no substantial difference over the years, i.e., the same high-demand and low-demand areas appear. Demand decreased at some old stations over the years, which can be attributed either to the negative performance of the system or to competition from nearby new stations. Expanding the system not only extends the original users’ ability to reach new areas but also attracts new users to bike-sharing systems. In the conclusions, we present and discuss the findings, and offer recommendations for the further expansion of the system. PMID:27977794

  8. Expanding Bicycle-Sharing Systems: Lessons Learnt from an Analysis of Usage.

    PubMed

    Zhang, Ying; Thomas, Tom; Brussel, M J G; van Maarseveen, M F A M

    2016-01-01

    Bike-sharing programs, with initiatives to increase bike use and improve the accessibility of urban transit, have received increasing attention in a growing number of cities across the world. The latest generation of bike-sharing systems employs smart card technology that produces station-based or trip-level data, which facilitates studies of the practical use of these systems. However, few studies have paid attention to changes in users and system usage over the years, or to the impact of system expansion on usage. Monitoring changes in system usage over the years enables the identification of system performance and can serve as an input for improving the location-allocation of stations. The objective of this study is to explore the impact of the expansion of a bicycle-sharing system on its usage. This was done for a bicycle-sharing system in Zhongshan (China), using operational usage data from different years following system expansion. To this end, we performed statistical and spatial analyses to examine the changes in both users and system usage before and after the expansion. The findings show a large variation in users and aggregate usage following the expansion. However, the spatial distribution of demand shows no substantial difference over the years, i.e., the same high-demand and low-demand areas appear. Demand decreased at some old stations over the years, which can be attributed either to the negative performance of the system or to competition from nearby new stations. Expanding the system not only extends the original users' ability to reach new areas but also attracts new users to bike-sharing systems. In the conclusions, we present and discuss the findings, and offer recommendations for the further expansion of the system.

  9. Dynamic load-sharing characteristic analysis of face gear power-split gear system based on tooth contact characteristics

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Hu, Yahui

    2018-04-01

    A bending-torsional coupling dynamic load-sharing model of the helicopter face gear split-torque transmission system is established using a lumped-mass approach to analyze its dynamic load-sharing characteristics. The mathematical model includes nonlinear support stiffness, time-varying meshing stiffness, damping, and gear backlash. The results show that the errors collectively influence the load-sharing characteristics; reducing any single error never achieves perfect load sharing by itself. The system's load-sharing performance can be improved through a floating shaft support. This method provides a theoretical basis and data support for the dynamic performance optimization design of the system.

  10. Integrating multiple data sources in species distribution modeling: A framework for data fusion

    USGS Publications Warehouse

    Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.

    2017-01-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species’ occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently, several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data is of lesser quality. We describe these three new approaches (“Shared,” “Correlation,” “Covariates”) for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches which used the second data source improved out-of-sample predictions relative to a single data source (“Single”). When information in the second data source is of high quality, the Shared model performs the best, but the Correlation and Covariates models also perform well. When the information in the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow for both data types to be used will maximize the useful information available for estimating species distributions.
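    The simplest way to see why a second data source helps is precision weighting: two independent estimates of the same quantity, combined by inverse variance, always have lower variance than either alone. The toy sketch below only illustrates that intuition; the paper's Shared/Correlation/Covariates approaches are full spatial MVCAR models, not this formula.

```python
def precision_weighted_fusion(est_a, var_a, est_b, var_b):
    """Combine two independent estimates of the same quantity by
    inverse-variance weighting; returns (fused estimate, fused variance).
    The fused variance is always <= min(var_a, var_b), which is the
    basic payoff of integrating a second, even noisier, data source.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)
```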

  11. Effects of turning and through lane sharing on traffic performance at intersections

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Sun, Jian-Qiao

    2016-02-01

    Turning vehicles strongly influence traffic flows at intersections. Effective regulation of turning vehicles is important to achieve better traffic performance. This paper studies the impact of lane sharing and turning signals on traffic performance at intersections by using cellular automata. Both right-turn and left-turn lane sharing are studied. Interactions between vehicles and pedestrians are considered. The transportation efficiency, road safety and energy economy are the traffic performance metrics. Extensive simulations are carried out to study the traffic performance indices. It is observed that shared turning lanes and permissive left-turn signal improve the transportation efficiency and reduce the fuel consumption in most cases, but the safety is usually sacrificed. It is not always beneficial for the through vehicles when they are allowed to be in the turning lanes.
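    A minimal single-lane Nagel-Schreckenberg automaton, the standard building block of such traffic CA studies, can be sketched as below. This is a generic illustration of the CA family; the paper's intersection model adds shared turning lanes, turning signals, and pedestrian interactions on top of rules like these.

```python
import random

def nasch_step(road, length, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg traffic CA on a
    circular single-lane road of `length` cells. `road` maps occupied
    cell index -> current speed. Rules: accelerate by 1 up to vmax,
    brake to the gap ahead, then randomly slow down with prob. p_slow.
    """
    cells = sorted(road)
    updated = {}
    for i, pos in enumerate(cells):
        ahead = cells[(i + 1) % len(cells)]
        gap = (ahead - pos - 1) % length           # empty cells in front
        v = min(road[pos] + 1, vmax, gap)          # accelerate, then brake
        if v > 0 and rng.random() < p_slow:        # random slowdown
            v -= 1
        updated[(pos + v) % length] = v            # move the vehicle
    return updated
```

    Because each car's speed is capped by the gap to its leader, cars never overlap, so vehicle count is conserved at every step.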

  12. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. SSeCloud: Using secret sharing scheme to secure keys

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Huang, Yang; Yang, Disheng; Zhang, Yuzhen; Liu, Hengchang

    2017-08-01

    With the use of cloud storage services, one major concern is how to protect sensitive data securely and privately. While users enjoy the convenience of data storage provided by semi-trusted cloud storage providers, they are confronted with various risks at the same time. In this paper, we present SSeCloud, a secure cloud storage system that improves security and usability by applying a secret sharing scheme to encryption keys. The system encrypts uploaded files on the client side and splits each encryption key into three shares, held respectively by the user, the cloud storage provider, and an alternative trusted third party. Any two of the three parties can reconstruct a key. Evaluation results from a prototype system show that SSeCloud provides high security without significant performance penalty.
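    The key splitting described, three shares of which any two reconstruct the key, is a (2, 3) threshold scheme in the style of Shamir's secret sharing. The sketch below shows the field arithmetic; SSeCloud's actual parameters and share encoding are not specified in the abstract, so treat this as a generic illustration (requires Python 3.8+ for the modular-inverse form of `pow`).

```python
import secrets

_P = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

def split_2_of_3(secret):
    """Split `secret` into three shares (x, y) on a random line
    y = secret + a*x (mod _P). Any two shares determine the line and
    hence its intercept; a single share reveals nothing."""
    assert 0 <= secret < _P
    a = secrets.randbelow(_P)                      # random slope
    return [(x, (secret + a * x) % _P) for x in (1, 2, 3)]

def reconstruct(share1, share2):
    """Lagrange interpolation at x = 0 from any two distinct shares."""
    (x1, y1), (x2, y2) = share1, share2
    inv = pow((x2 - x1) % _P, -1, _P)              # modular inverse
    return (y1 * x2 - y2 * x1) * inv % _P
```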

  14. Labour time required for piglet castration with isoflurane-anaesthesia using shared and stationary inhaler devices.

    PubMed

    Weber, Sabrina; Das, Gürbüz; Waldmann, Karl-Heinz; Gauly, Matthias

    2014-01-01

    Isoflurane-anaesthesia combined with an analgesic represents a welfare-friendly method of pain mitigation for castration of piglets. However, it requires an inhaler device, which is uneconomic for small farms. Sharing a device among farms may be an economical option if the shared use does not increase labour time and the resulting costs. This study aimed to investigate the amount and components of labour time required for piglet castration with isoflurane anaesthesia performed with stationary and shared devices. Piglets (N = 1579) were anaesthetised with isoflurane (using either stationary or shared devices) and castrated. The stationary devices were used in a group (n = 5) of larger farms (84 sows/farm on average), whereas smaller farms (n = 7; 32 sows/farm on average) shared one device. Each farm was visited four times and labour time for each process-step was recorded. The complete process included machine set-up, anaesthesia and castration by a practitioner, and preparation, collection and transport of piglets by a farmer. Labour time of the complete process was increased (P = 0.012) on farms sharing a device (266 s/piglet) compared to farms using stationary devices (177 s/piglet), due to increased time for preparation (P = 0.055), castration (P = 0.026) and packing (P = 0.010) when sharing a device. However, components of the time budget of farms using stationary or shared devices did not differ significantly (P > 0.05). Costs arising from time spent by farmers did not differ considerably between the use of stationary (0.28 Euro per piglet) and shared (0.26 Euro) devices. It is concluded that costs arising from the increased labour time due to sharing a device can be considered marginal, since the high expenses originating from purchasing an inhaler device are shared among several farms.

  15. Introducing a Short Measure of Shared Servant Leadership Impacting Team Performance through Team Behavioral Integration.

    PubMed

    Sousa, Milton; Van Dierendonck, Dirk

    2015-01-01

    The research reported in this paper was designed to study the influence of shared servant leadership on team performance through the mediating effect of team behavioral integration, while validating a new short measure of shared servant leadership. A round-robin approach was used to collect data in two similar studies. Study 1 included 244 undergraduate students in 61 teams following an intense HRM business simulation of 2 weeks. The following year, study 2 included 288 students in 72 teams involved in the same simulation. The most important findings were that (1) shared servant leadership was a strong determinant of team behavioral integration, (2) information exchange worked as the main mediating process between shared servant leadership and team performance, and (3) the essence of servant leadership can be captured on the key dimensions of empowerment, humility, stewardship and accountability, allowing for a new promising shortened four-dimensional measure of shared servant leadership.

  16. Investigating pianists' individuality in the performance of five timbral nuances through patterns of articulation, touch, dynamics, and pedaling

    PubMed Central

    Bernays, Michel; Traube, Caroline

    2014-01-01

    Timbre is an essential expressive feature in piano performance. Concert pianists use a vast palette of timbral nuances to color their performances at the microstructural level. Although timbre is generally envisioned in the pianistic community as an abstract concept carried through an imaged vocabulary, performers may share some common strategies of timbral expression in piano performance. Yet there may remain further leeway for idiosyncratic processes in the production of piano timbre nuances. In this study, we examined the patterns of timbral expression in performances by four expert pianists. Each pianist performed four short pieces, each with five different timbral intentions (bright, dark, dry, round, and velvety). The performances were recorded with the high-accuracy Bösendorfer CEUS system. Fine-grained performance features of dynamics, touch, articulation and pedaling were extracted. Reduced PCA performance spaces and descriptive performance portraits confirmed that pianists exhibited unique, specific profiles for different timbral intentions, derived from underlying traits of general individuality, while sharing some broad commonalities of dynamics and articulation for each timbral intention. These results confirm that pianists' abstract notions of timbre correspond to reliable patterns of performance technique. Furthermore, these effects suggest that pianists can express individual styles while complying with specific timbral intentions. PMID:24624099

  17. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Tightening up the performance-pay linkage: roles of contingent reward leadership and profit-sharing in the cross-level influence of individual pay-for-performance.

    PubMed

    Han, Joo Hun; Bartol, Kathryn M; Kim, Seongsu

    2015-03-01

    Drawing upon line-of-sight (Lawler, 1990, 2000; Murphy, 1999) as a unifying concept, we examine the cross-level influence of organizational use of individual pay-for-performance (PFP), theorizing that its impact on individual employees' performance-reward expectancy is boosted by the moderating effects of immediate group managers' contingent reward leadership and organizational use of profit-sharing. Performance-reward expectancy is then expected to mediate the interactive effects of individual PFP with contingent reward leadership and profit-sharing on employee job performance. Analyses of cross-organizational and cross-level data from 912 employees in 194 workgroups from 45 companies reveal that organizations' individual PFP was positively related to employees' performance-reward expectancy, which was strengthened when it was accompanied by higher levels of contingent reward leadership and profit-sharing. Also, performance-reward expectancy significantly transmitted the effects of individual PFP onto job performance under higher levels of contingent reward leadership and profit-sharing, thus delineating cross-level mediating and moderating processes by which organizations' individual PFP is linked to important individual-level employee outcomes. Several theoretical and practical implications are discussed. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  19. Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups

    PubMed Central

    Korman, Amos; Greenwald, Efrat; Feinerman, Ofer

    2014-01-01

    Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidence for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performance. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649
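    The Cramér-Rao bound invoked in the proofs can be demonstrated numerically: for n Gaussian samples with known standard deviation σ, no unbiased estimator of the mean has variance below σ²/n, and the sample mean attains that bound. This generic simulation illustrates the bound itself, not the paper's confidence-sharing algorithm.

```python
import random
import statistics

def sample_mean_variance(n=100, sigma=1.0, trials=2000, seed=7):
    """Empirical variance of the sample-mean estimator over many
    simulated experiments, for comparison with the Cramer-Rao lower
    bound sigma**2 / n, which the sample mean attains."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n))
             for _ in range(trials)]
    return statistics.pvariance(means)
```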

  20. Confidence sharing: an economic strategy for efficient information flows in animal groups.

    PubMed

    Korman, Amos; Greenwald, Efrat; Feinerman, Ofer

    2014-10-01

    Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidence for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performance. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication.

  1. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.

  2. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  3. Adaptable, high recall, event extraction system with minimal configuration.

    PubMed

    Miwa, Makoto; Ananiadou, Sophia

    2015-01-01

    Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task specific configuration and tuning, EventMine achieved the 1st place in the PC task, and 2nd in the CG, achieving the highest recall for both tasks. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both covariate shift and weighting methods are useful in facilitating the production of high recall systems. 
These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration.
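    The weighting idea can be sketched as a toy importance-weighting scheme: training instances are reweighted by an estimated ratio of target-to-source feature frequencies, so a model fit on the source corpus better matches the target distribution. This is a minimal, generic illustration of covariate-shift correction, not EventMine's actual implementation; the event types and counts below are invented.

```python
from collections import Counter

def importance_weights(source_feats, target_feats, smoothing=1.0):
    """Estimate per-feature weights w(x) ~ p_target(x) / p_source(x)
    from categorical feature counts (add-one smoothed)."""
    src = Counter(source_feats)
    tgt = Counter(target_feats)
    vocab = set(src) | set(tgt)
    n_src = len(source_feats) + smoothing * len(vocab)
    n_tgt = len(target_feats) + smoothing * len(vocab)
    return {f: ((tgt[f] + smoothing) / n_tgt) / ((src[f] + smoothing) / n_src)
            for f in vocab}

# toy corpora: "binding" events are rarer in the source than in the target
source = ["phosphorylation"] * 8 + ["binding"] * 2
target = ["phosphorylation"] * 5 + ["binding"] * 5
w = importance_weights(source, target)
# instances of the under-represented event type get upweighted
assert w["binding"] > 1.0 > w["phosphorylation"]
```

    Each training instance would then contribute to the loss in proportion to the weight of its features, nudging the model toward the target data without any deep tuning.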

  4. Tracking performance of a single-crystal and a polycrystalline diamond pixel-detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menasce, D.; et al.

    2013-06-01

We present a comparative characterization of the performance of a single-crystal and a polycrystalline diamond pixel-detector employing the standard CMS pixel readout chips. Measurements were carried out at the Fermilab Test Beam Facility, FTBF, using protons of momentum 120 GeV/c tracked by a high-resolution pixel telescope. Particular attention was directed to the study of the charge collection, the charge sharing among adjacent pixels, and the achievable position resolution. The performance of the single-crystal detector was excellent and comparable to the best available silicon pixel-detectors. The measured average detection efficiency was near unity, ε = 0.99860 ± 0.00006, and the position resolution for shared hits was about 6 μm. On the other hand, the performance of the polycrystalline detector was hampered by its lower charge collection distance and the readout chip threshold. A new readout chip, capable of operating at a much lower threshold (around 1 ke⁻), would be required to fully exploit the potential performance of the polycrystalline diamond pixel-detector.

  5. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  6. Position Paper - pFLogger: The Parallel Fortran Logging framework for HPC Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or logger) similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.
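    As a rough illustration of what such a logger provides over ad hoc print statements (severity filtering plus parallel context stamped on every message), here is a hypothetical Python analogue built on the standard logging module; pFlogger itself is a Fortran framework and its actual API differs.

```python
import logging
import os

def make_rank_logger(rank, level=logging.INFO):
    """Build a logger whose output is tagged with the parallel rank,
    so interleaved diagnostics from many processes stay attributable."""
    logger = logging.getLogger(f"model.rank{rank}")
    logger.setLevel(level)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s rank=" + str(rank) + " %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# RANK is a hypothetical environment variable standing in for an MPI rank
log = make_rank_logger(rank=int(os.environ.get("RANK", "0")))
log.info("solver converged: residual=%.2e", 1.3e-9)  # routine progress metric
log.debug("per-cell diagnostics ...")                # suppressed at INFO level
```

    Raising or lowering the level per rank (e.g., verbose on rank 0 only) keeps the overhead of text diagnostics low on large runs.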

  7. POSITION PAPER - pFLogger: The Parallel Fortran Logging Framework for HPC Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or 'logger') similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger - a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.

  8. Bank ownership, lending, and local economic performance during the 2008–2009 financial crisis

    PubMed Central

    Coleman, Nicholas; Feler, Leo

    2017-01-01

    Although government banks are frequently associated with political capture and resource misallocation, they may be well-positioned during times of crisis to provide countercyclical support. Following the collapse of Lehman Brothers in September 2008, Brazil’s government banks substantially increased lending. Localities in Brazil with a high share of government banks received more loans and experienced better employment outcomes relative to localities with a low share of government banks. While increased government bank lending mitigated an economic downturn, we find that this lending was politically targeted, inefficiently allocated, and reduced productivity growth. PMID:28936027

  9. Bank ownership, lending, and local economic performance during the 2008-2009 financial crisis.

    PubMed

    Coleman, Nicholas; Feler, Leo

    2015-04-01

    Although government banks are frequently associated with political capture and resource misallocation, they may be well-positioned during times of crisis to provide countercyclical support. Following the collapse of Lehman Brothers in September 2008, Brazil's government banks substantially increased lending. Localities in Brazil with a high share of government banks received more loans and experienced better employment outcomes relative to localities with a low share of government banks. While increased government bank lending mitigated an economic downturn, we find that this lending was politically targeted, inefficiently allocated, and reduced productivity growth.

  10. Anti-jamming communication for body area network using chaotic frequency hopping.

    PubMed

    Gopalakrishnan, Balamurugan; Bhagyaveni, Marcharla Anjaneyulu

    2017-12-01

Research trends in the healthcare industry focus on reliable patient communication, and security is a paramount requirement of healthcare applications. Jamming has become a major research issue in wireless communication because of the ease of blocking communication in wireless networks and the resulting throughput degradation. The most commonly used technique to overcome jamming is frequency hopping (FH). However, traditional FH requires pre-sharing of a key for channel selection and incurs a high throughput overhead. To eliminate this key pre-sharing and to increase security, chaotic frequency hopping (CFH) has been proposed. The design of chaos-based hop selection is a new development that offers improved performance in transmitting information without a pre-shared key while also increasing security. The authors analysed the performance of the proposed CFH system under different reactive jamming durations. The percentage of error reduction under reactive jamming for jamming durations of 0.01 and 0.05 s is 55.03% for FH and 84.24% for CFH, respectively. The obtained results show that CFH is more secure and more difficult for a reactive jammer to jam.
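    The core idea of chaos-based hop selection can be sketched with a logistic map: two parties who share only an initial seed regenerate the same pseudo-random channel sequence, while an observer with even a slightly wrong seed diverges quickly. This is a generic illustration of the technique, not the authors' exact scheme; the channel count and seed values are invented.

```python
def logistic_hops(x0, n_channels, n_hops, r=3.99):
    """Derive a channel-hop sequence from the chaotic logistic map
    x <- r * x * (1 - x), mapping each state to a channel index."""
    x, hops = x0, []
    for _ in range(n_hops):
        x = r * x * (1.0 - x)
        hops.append(int(x * n_channels) % n_channels)
    return hops

tx = logistic_hops(x0=0.613, n_channels=16, n_hops=50)
rx = logistic_hops(x0=0.613, n_channels=16, n_hops=50)
assert tx == rx                             # shared seed -> identical hop plan
assert logistic_hops(0.620, 16, 50) != tx   # nearby seed diverges rapidly
```

    Sensitivity to the seed is what makes the sequence hard for a jammer to predict, while the sender and receiver need only agree on the initial state rather than exchanging a hop key.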

  11. Air Force Research Laboratory High Power Electric Propulsion Technology Development

    DTIC Science & Technology

    2009-10-27

…"Plasmas in a Coaxial Double Theta Pinch," Doctoral Dissertation, Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI, 2008. … surpasses the level of DARPA FAST goals. Several evolving propulsion concepts may enable a viable high-power plasma propulsion device suitable for … performance operation with multiple cathodes or in a single shared-cathode configuration [4]. However, the local plasma properties …

  12. Hydrogen storage container

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Feng, Zhili; Zhang, Wei

An apparatus and system are described for storing high-pressure fluids such as hydrogen. An inner tank and a pre-stressed concrete pressure vessel share the structural and/or pressure load on the inner tank. The system and apparatus provide a high-performance, low-cost container while mitigating hydrogen embrittlement of the metal tank. The system is useful for distributing hydrogen to a power grid or to a vehicle refueling station.

  13. Contrasting effects of feature-based statistics on the categorisation and identification of visual objects

    PubMed Central

    Taylor, Kirsten I.; Devereux, Barry J.; Acres, Kadia; Randall, Billi; Tyler, Lorraine K.

    2013-01-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. PMID:22137770

  14. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites.
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
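    On the client side, the alien-cache arrangement described above boils down to a short configuration fragment. The following is a hedged sketch: the parameter names come from the CVMFS client documentation, but the mount path is hypothetical and site-specific details (permissions, pre-created cache directory) are omitted.

```sh
# Illustrative CVMFS client settings (e.g., in /etc/cvmfs/default.local)
# pointing the cache at a shared, site-provided high-bandwidth filesystem.
CVMFS_ALIEN_CACHE=/mnt/hdfs/cvmfs-cache   # hypothetical shared data store
CVMFS_SHARED_CACHE=no                     # alien cache is managed externally
CVMFS_QUOTA_LIMIT=-1                      # client does no cache eviction
```

    With these settings, every worker node at the site reads through the same cache directory, so a file fetched once is immediately warm for all jobs.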

  15. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites.
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  16. A comparative study of 11 local health department organizational networks.

    PubMed

    Merrill, Jacqueline; Keeling, Jonathan W; Carley, Kathleen M

    2010-01-01

    Although the nation's local health departments (LHDs) share a common mission, variability in administrative structures is a barrier to identifying common, optimal management strategies. There is a gap in understanding what unifying features LHDs share as organizations that could be leveraged systematically for achieving high performance. To explore sources of commonality and variability in a range of LHDs by comparing intraorganizational networks. We used organizational network analysis to document relationships between employees, tasks, knowledge, and resources within LHDs, which may exist regardless of formal administrative structure. A national sample of 11 LHDs from seven states that differed in size, geographic location, and governance. Relational network data were collected via an on-line survey of all employees in 11 LHDs. A total of 1062 out of 1239 employees responded (84% response rate). Network measurements were compared using coefficient of variation. Measurements were correlated with scores from the National Public Health Performance Assessment and with LHD demographics. Rankings of tasks, knowledge, and resources were correlated across pairs of LHDs. We found that 11 LHDs exhibited compound organizational structures in which centralized hierarchies were coupled with distributed networks at the point of service. Local health departments were distinguished from random networks by a pattern of high centralization and clustering. Network measurements were positively associated with performance for 3 of 10 essential services (r > 0.65). Patterns in the measurements suggest how LHDs adapt to the population served. Shared network patterns across LHDs suggest where common organizational management strategies are feasible. This evidence supports national efforts to promote uniform standards for service delivery to diverse populations.

  17. Key concepts relevant to quality of complex and shared decision-making in health care: a literature review.

    PubMed

    Dy, Sydney M; Purnell, Tanjala S

    2012-02-01

    High-quality provider-patient decision-making is key to quality care for complex conditions. We performed an analysis of key elements relevant to quality and complex, shared medical decision-making. Based on a search of electronic databases, including Medline and the Cochrane Library, as well as relevant articles' reference lists, reviews of tools, and annotated bibliographies, we developed a list of key concepts and applied them to a decision-making example. Key concepts identified included provider competence, trustworthiness, and cultural competence; communication with patients and families; information quality; patient/surrogate competence; and roles and involvement. We applied this concept list to a case example, shared decision-making for live donor kidney transplantation, and identified the likely most important concepts as provider and cultural competence, information quality, and communication with patients and families. This concept list may be useful for conceptualizing the quality of complex shared decision-making and in guiding research in this area. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Center for the Built Environment

    Science.gov Websites

…wellbeing research, sharing insights on how to design, operate, and measure healthy and productive buildings. … Buildings that showcase sustainable design and provide high-quality spaces for work and study have been recognized by the annual Livable Buildings Award. … Reports reveal new insights into energy performance and design.

  19. Policies | High-Performance Computing | NREL

    Science.gov Websites

Use: learn about policy governing user accountability, resource use, and use by foreign nationals. Data Security: learn about the data security policy, including data protection. Data Retention: learn about the data retention policy, including project-centric and user-centric data. Shared Storage Usage: learn about the shared storage usage policy.

  20. Public-Private Consortium Aims to Cut Preclinical Cancer Drug Discovery from Six Years to Just One | Frederick National Laboratory for Cancer Research

    Cancer.gov

Scientists from two U.S. national laboratories, industry, and academia today launched an unprecedented effort to transform the way cancer drugs are discovered by creating an open and sharable platform that integrates high-performance computing…

  1. Management in Training

    ERIC Educational Resources Information Center

    Babick, Christine

    2009-01-01

    Advancement offices have their share of management issues. Do any of these situations sound familiar? An underachieving alumni director should have been let go long ago, but without a single bad performance review, he can't be fired. A development officer hires the wrong person and now spends too much time supervising her. A high-performing…

  2. Neural Mechanisms of Interference Control Underlie the Relationship Between Fluid Intelligence and Working Memory Span

    PubMed Central

    Burgess, Gregory C.; Gray, Jeremy R.; Conway, Andrew R. A.; Braver, Todd S.

    2014-01-01

    Fluid intelligence (gF) and working memory (WM) span predict success in demanding cognitive situations. Recent studies show that much of the variance in gF and WM span is shared, suggesting common neural mechanisms. This study provides a direct investigation of the degree to which shared variance in gF and WM span can be explained by neural mechanisms of interference control. We measured performance and fMRI activity in 102 participants during the n-back WM task, focusing on the selective activation effects associated with high-interference lure trials. Brain activity on these trials was correlated with gF, WM span, and task performance in core brain regions linked to WM and executive control, including bilateral dorsolateral PFC (middle frontal gyrus, BA9) and parietal cortex (inferior parietal cortex; BA 40/7). Interference-related performance and interference-related activity accounted for a significant proportion of the shared variance in gF and WM span. Path analyses indicate that interference control activity may affect gF through a common set of processes that also influence WM span. These results suggest that individual differences in interference control mechanisms are important for understanding the relationship between gF and WM span. PMID:21787103

  3. Sherpas share genetic variations with Tibetans for high-altitude adaptation.

    PubMed

    Bhandari, Sushil; Zhang, Xiaoming; Cui, Chaoying; Yangla; Liu, Lan; Ouzhuluobu; Baimakangzhuo; Gonggalanzi; Bai, Caijuan; Bianba; Peng, Yi; Zhang, Hui; Xiang, Kun; Shi, Hong; Liu, Shiming; Gengdeng; Wu, Tianyi; Qi, Xuebin; Su, Bing

    2017-01-01

Sherpas, a highlander population living in the Khumbu region of Nepal, are well known for their superior climbing ability in the Himalayas. However, the genetic basis of their adaptation to high-altitude environments remains elusive. We collected DNA samples of 582 Sherpas from Nepal and the Tibetan Autonomous Region of China, and we measured their hemoglobin levels and degrees of blood oxygen saturation. We genotyped 29 EPAS1 SNPs, two EGLN1 SNPs and the TED polymorphism (a 3.4 kb deletion) in Sherpas. We also performed genetic association analysis between these sequence variants and the phenotypic data. We found similar allele frequencies for the tested 32 variants of these genes in Sherpas and Tibetans. Sherpa individuals carrying the derived alleles of EPAS1 (rs113305133, rs116611511 and rs12467821), EGLN1 (rs186996510 and rs12097901) and TED have lower hemoglobin levels compared with wild-type allele carriers. Most of the EPAS1 variants showing significant association with hemoglobin levels in Tibetans were replicated in Sherpas. The shared sequence variants and hemoglobin trait between Sherpas and Tibetans indicate a shared genetic basis for high-altitude adaptation, consistent with the proposal that Sherpas are in fact a recently derived population from Tibetans and inherited adaptive variants for high-altitude adaptation from their Tibetan ancestors.

  4. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G.; Salapura, Valentina

    2012-07-24

A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each generating signals representing occurrences of events in that processor, and a single shared counter resource for performance monitoring. The performance monitoring unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each for counting signals representing occurrences of events from one or more of the plurality of processor units; and a plurality of input devices for receiving the event signals from one or more of the processor units, the input devices programmable to select event signals for receipt by one or more of the performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessor system. The PMU is further programmable to monitor event signals issued from non-processor devices.
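    The claimed arrangement, a single pool of counters with programmable input selection shared across cores, can be modeled in a few lines. This is a behavioral toy sketch, not the patented hardware; the counter pool size, core IDs, and event names are invented.

```python
class SharedPMU:
    """Toy model of a shared performance-monitoring unit: a small pool of
    counters, each programmable to count one (core, event) pair."""

    def __init__(self, n_counters):
        self.select = [None] * n_counters  # (core_id, event) routed per counter
        self.counts = [0] * n_counters

    def program(self, counter, core_id, event):
        """Route a core's event signal to a counter and reset it."""
        self.select[counter] = (core_id, event)
        self.counts[counter] = 0

    def signal(self, core_id, event):
        """All cores' event signals fan into the shared unit; only counters
        programmed for this (core, event) pair increment."""
        for i, sel in enumerate(self.select):
            if sel == (core_id, event):
                self.counts[i] += 1

pmu = SharedPMU(n_counters=2)
pmu.program(0, core_id=0, event="L2_miss")
pmu.program(1, core_id=3, event="branch_mispredict")
for _ in range(5):
    pmu.signal(core_id=0, event="L2_miss")
pmu.signal(core_id=1, event="L2_miss")   # unselected core: ignored
assert pmu.counts == [5, 0]
```

    Sharing one counter pool across cores trades per-core counter capacity for silicon area, which is the design point the patent describes.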

  5. How strong are passwords used to protect personal health information in clinical trials?

    PubMed

    El Emam, Khaled; Moreau, Katherine; Jonker, Elizabeth

    2011-02-11

    Findings and statements about how securely personal health information is managed in clinical research are mixed. The objective of our study was to evaluate the security of practices used to transfer and share sensitive files in clinical trials. Two studies were performed. First, 15 password-protected files that were transmitted by email during regulated Canadian clinical trials were obtained. Commercial password recovery tools were used on these files to try to crack their passwords. Second, interviews with 20 study coordinators were conducted to understand file-sharing practices in clinical trials for files containing personal health information. We were able to crack the passwords for 93% of the files (14/15). Among these, 13 files contained thousands of records with sensitive health information on trial participants. The passwords tended to be relatively weak, using common names of locations, animals, car brands, and obvious numeric sequences. Patient information is commonly shared by email in the context of query resolution. Files containing personal health information are shared by email and, by posting them on shared drives with common passwords, to facilitate collaboration. If files containing sensitive patient information must be transferred by email, mechanisms to encrypt them and to ensure that password strength is high are necessary. More sophisticated collaboration tools are required to allow file sharing without password sharing. We provide recommendations to implement these practices.
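    One way to operationalize the recommendation that "password strength is high" is a crude entropy estimate from length and character-class coverage; real checkers also reject dictionary words, which matters here since the cracked passwords used common names of places, animals, and car brands. A minimal sketch, with invented example passwords:

```python
import math
import string

def entropy_bits(password):
    """Crude upper-bound entropy estimate: length times log2 of the
    combined size of the character classes the password draws from.
    Ignores dictionary words, so it overestimates weak passwords."""
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, string.punctuation]
    space = sum(len(p) for p in pools if any(c in p for c in password))
    return len(password) * math.log2(space) if space else 0.0

# lowercase + digits only: 8 * log2(36) ~ 41 bits, weak for file encryption
assert entropy_bits("toronto1") < 48
# all four classes: 12 * log2(94) ~ 79 bits
assert entropy_bits("kV7#qPz9!mX2") > 75
```

    A policy could enforce a minimum estimated entropy before a file is encrypted and emailed, in addition to a dictionary check.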

  6. How Strong are Passwords Used to Protect Personal Health Information in Clinical Trials?

    PubMed Central

    Moreau, Katherine; Jonker, Elizabeth

    2011-01-01

    Background Findings and statements about how securely personal health information is managed in clinical research are mixed. Objective The objective of our study was to evaluate the security of practices used to transfer and share sensitive files in clinical trials. Methods Two studies were performed. First, 15 password-protected files that were transmitted by email during regulated Canadian clinical trials were obtained. Commercial password recovery tools were used on these files to try to crack their passwords. Second, interviews with 20 study coordinators were conducted to understand file-sharing practices in clinical trials for files containing personal health information. Results We were able to crack the passwords for 93% of the files (14/15). Among these, 13 files contained thousands of records with sensitive health information on trial participants. The passwords tended to be relatively weak, using common names of locations, animals, car brands, and obvious numeric sequences. Patient information is commonly shared by email in the context of query resolution. Files containing personal health information are shared by email and, by posting them on shared drives with common passwords, to facilitate collaboration. Conclusion If files containing sensitive patient information must be transferred by email, mechanisms to encrypt them and to ensure that password strength is high are necessary. More sophisticated collaboration tools are required to allow file sharing without password sharing. We provide recommendations to implement these practices. PMID:21317106

  7. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. The recent progress, however, is due to the emergence of multi-core high-performance computers, so parallel computing is key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol, and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
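    The contrast between the two modes can be illustrated in Python, which has analogous mechanisms: processes with separate memory whose results come back as messages (akin to MPI ranks), and threads sharing one address space where shared data needs explicit synchronization (akin to OpenMP). PHITS itself is Fortran; this is only an illustrative analogue, and the toy tally function stands in for tracking a batch of particle histories.

```python
import threading
from multiprocessing import Pool

def tally(seed):
    """Stand-in for a deterministic batch of Monte Carlo histories."""
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % 2**31
    return x % 100

def run_processes(seeds):
    """Distributed-memory style: each worker owns its memory; results
    return as messages (pickled values), like MPI reductions."""
    with Pool(2) as pool:
        return sum(pool.map(tally, seeds))

def run_threads(seeds):
    """Shared-memory style: threads append to one shared list, so
    access is guarded by a lock, like an OpenMP critical section."""
    results, lock = [], threading.Lock()
    def work(seed):
        r = tally(seed)
        with lock:
            results.append(r)
    ts = [threading.Thread(target=work, args=(s,)) for s in seeds]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return sum(results)

seeds = [1, 2, 3, 4]
assert run_processes(seeds) == run_threads(seeds)
```

    Both modes give the same totals; the choice between them, as with MPI versus OpenMP in PHITS, depends on whether workers span multiple nodes or share one machine's memory.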

  8. Flight Testing of the Capillary Pumped Loop 3 Experiment

    NASA Technical Reports Server (NTRS)

    Ottenstein, Laura; Butler, Dan; Ku, Jentung; Cheung, Kwok; Baldauff, Robert; Hoang, Triem

    2002-01-01

    The Capillary Pumped Loop 3 (CAPL 3) experiment was a multiple evaporator capillary pumped loop experiment that flew in the Space Shuttle payload bay in December 2001 (STS-108). The main objective of CAPL 3 was to demonstrate in micro-gravity a multiple evaporator capillary pumped loop system, capable of reliable start-up, reliable continuous operation, and heat load sharing, with hardware for a deployable radiator. Tests performed on orbit included start-ups, power cycles, low power tests (100 W total), high power tests (up to 1447 W total), heat load sharing, variable/fixed conductance transition tests, and saturation temperature change tests. The majority of the tests were completed successfully, although the experiment did exhibit an unexpected sensitivity to shuttle maneuvers. This paper describes the experiment, the tests performed during the mission, and the test results.

  9. Introducing a Short Measure of Shared Servant Leadership Impacting Team Performance through Team Behavioral Integration

    PubMed Central

    Sousa, Milton; Van Dierendonck, Dirk

    2016-01-01

    The research reported in this paper was designed to study the influence of shared servant leadership on team performance through the mediating effect of team behavioral integration, while validating a new short measure of shared servant leadership. A round-robin approach was used to collect data in two similar studies. Study 1 included 244 undergraduate students in 61 teams following an intense HRM business simulation of 2 weeks. The following year, study 2 included 288 students in 72 teams involved in the same simulation. The most important findings were that (1) shared servant leadership was a strong determinant of team behavioral integration, (2) information exchange worked as the main mediating process between shared servant leadership and team performance, and (3) the essence of servant leadership can be captured on the key dimensions of empowerment, humility, stewardship and accountability, allowing for a new promising shortened four-dimensional measure of shared servant leadership. PMID:26779104

  10. Leading virtual teams: hierarchical leadership, structural supports, and shared team leadership.

    PubMed

    Hoch, Julia E; Kozlowski, Steve W J

    2014-05-01

    Using a field sample of 101 virtual teams, this research empirically evaluates the impact of traditional hierarchical leadership, structural supports, and shared team leadership on team performance. Building on Bell and Kozlowski's (2002) work, we expected structural supports and shared team leadership to be more, and hierarchical leadership to be less, strongly related to team performance when teams were more virtual in nature. As predicted, results from moderation analyses indicated that the extent to which teams were more virtual attenuated relations between hierarchical leadership and team performance but strengthened relations for structural supports and team performance. However, shared team leadership was significantly related to team performance regardless of the degree of virtuality. Results are discussed in terms of needed research extensions for understanding leadership processes in virtual teams and practical implications for leading virtual teams. (c) 2014 APA, all rights reserved.

  11. Governance of global health research consortia: Sharing sovereignty and resources within Future Health Systems.

    PubMed

    Pratt, Bridget; Hyder, Adnan A

    2017-02-01

    Global health research partnerships are increasingly taking the form of consortia that conduct programs of research in low and middle-income countries (LMICs). An ethical framework has been developed that describes how the governance of consortia comprised of institutions from high-income countries and LMICs should be structured to promote health equity. It encompasses initial guidance for sharing sovereignty in consortia decision-making and sharing consortia resources. This paper describes a first effort to examine whether and how consortia can uphold that guidance. Case study research was undertaken with the Future Health Systems consortium, which performs research to improve health service delivery for the poor in Bangladesh, China, India, and Uganda. Data were thematically analysed and revealed that proposed ethical requirements for sharing sovereignty and sharing resources are largely upheld by Future Health Systems. Facilitating factors included having a decentralised governance model, LMIC partners with good research capacity, and firm budgets. Higher labour costs in the US and UK and the funder's policy of allocating funds to consortia on a reimbursement basis prevented full alignment with guidance on sharing resources. The lessons described in this paper can assist other consortia to more systematically link their governance policy and practice to the promotion of health equity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Bridging Hydroinformatics Services Between HydroShare and SWATShare

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Zhao, L.; Song, C. X.; Tarboton, D. G.; Goodall, J. L.; Stealey, M.; Rajib, A.; Morsy, M. M.; Dash, P. K.; Miles, B.; Kim, I. L.

    2016-12-01

    Many cyberinfrastructure systems in the hydrologic and related domains emerged in the past decade, with more being developed to address various data management and modeling needs. Although clearly beneficial to the broad user community, building interoperability across these systems is a challenging task due to various obstacles, including technological, organizational, semantic, and social issues. This work presents our experience in developing interoperability between two hydrologic cyberinfrastructure systems - SWATShare and HydroShare. HydroShare is a large-scale online system aimed at enabling the hydrologic user community to share their data, models, and analyses online for solving complex hydrologic research questions. SWATShare, in turn, is a focused effort to allow SWAT (Soil and Water Assessment Tool) modelers to share, execute, and analyze SWAT models using high performance computing resources. Making these two systems interoperable required common sign-in through OAuth, sharing of models through common metadata standards, and use of standard web services for implementing key import/export functionalities. As a result, users from either community can leverage the resources and services across these systems without having to manually import, export, or process their models. Overall, this use case can serve as a model for interoperability among other systems, as no one system can provide all the functionality needed to address large interdisciplinary problems.
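The metadata-based model sharing described above can be illustrated with a minimal sketch. The record fields and values below are hypothetical illustrations, not HydroShare's or SWATShare's actual resource schema; the point is only that a common, serializable metadata record lets one system export a model that the other can import without manual processing.

```python
import json

def make_model_resource(title, resource_type, creator, files):
    """Build a minimal, Dublin Core-style metadata record for a shared
    hydrologic model (hypothetical schema, for illustration only)."""
    return {
        "title": title,
        "resource_type": resource_type,   # e.g. a SWAT model instance
        "creator": creator,
        "content_files": files,
        "sharing_status": "public",
    }

# The exporting system serializes the record for transfer over a
# standard web service...
record = make_model_resource(
    "Example SWAT model", "SWATModelInstance", "jdoe", ["model.zip"])
payload = json.dumps(record)

# ...and the importing system parses the same standard record, so no
# manual import/export step is needed.
imported = json.loads(payload)
print(imported["resource_type"])  # SWATModelInstance
```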

  13. Reducing the Threat of Terrorism through Knowledge Sharing in a Virtual Environment Between Law Enforcement and the Private Security Industry

    DTIC Science & Technology

    2008-03-01

    Naval Postgraduate School, Monterey, California. Thesis; approved for public release, distribution is unlimited. …relationships. While leaders did not demonstrate a high level of concern regarding the threat of a local terrorist act occurring in the next five years…

  14. Optimized light sharing for high-resolution TOF PET detector based on digital silicon photomultipliers.

    PubMed

    Marcinkowski, R; España, S; Van Holen, R; Vandenberghe, S

    2014-12-07

    The majority of current whole-body PET scanners are based on pixelated scintillator arrays with a transverse pixel size of 4 mm. However, recent studies have shown that decreasing the pixel size to 2 mm can significantly improve image spatial resolution. In this study, the performance of the Digital Photon Counter (DPC) from Philips Digital Photon Counting (PDPC) was evaluated to determine its potential for high-resolution whole-body time of flight (TOF) PET scanners. Two detector configurations were evaluated. First, the DPC3200-44-22 DPC array was coupled to a LYSO block of 15 × 15 pixels of 2 × 2 × 22 mm³ through a 1 mm thick light guide. Due to light sharing among the dies, the neighbour logic of the DPC was used. In a second setup the same DPC was coupled directly to a scalable 4 × 4 LYSO matrix of 1.9 × 1.9 × 22 mm³ crystals with a dedicated reflector arrangement allowing for controlled light sharing patterns inside the matrix. With the first approach an average energy resolution of 14.5% and an average CRT of 376 ps were achieved. For the second configuration an average energy resolution of 11% and an average CRT of 295 ps were achieved. Our studies show that the DPC is a suitable photosensor for a high-resolution TOF-PET detector. The dedicated reflector arrangement allows one to achieve better performance than the light guide approach. The count loss, caused by dark counts, is overcome by fitting the matrix size to the size of the DPC single die.

  15. Revisiting the Development of Time Sharing Using a Dual Motor Task Performance

    ERIC Educational Resources Information Center

    Getchell, Nancy; Pabreja, Priya

    2006-01-01

    In this article, the authors discuss and examine how to develop time sharing using a dual motor task and its effects. They state that when one is required to perform two tasks at the same time (time sharing), an individual may experience difficulty in expressing one or both of the tasks. This phenomenon, known as interference, has been studied…

  16. Evaluation of the Display of Cognitive State Feedback to Drive Adaptive Task Sharing

    PubMed Central

    Dorneich, Michael C.; Passinger, Břetislav; Hamblin, Christopher; Keinrath, Claudia; Vašek, Jiři; Whitlow, Stephen D.; Beekhuyzen, Martijn

    2017-01-01

    This paper presents an adaptive system intended to address workload imbalances between pilots in future flight decks. Team performance can be maximized when task demands are balanced within crew capabilities and resources. Good communication skills enable teams to adapt to changes in workload, and include the balancing of workload between team members. This work addresses human factors priorities in the aviation domain with the goal to develop concepts that balance operator workload, support future operator roles and responsibilities, and support new task requirements, while allowing operators to focus on the most safety-critical tasks. A traditional closed-loop adaptive system includes the decision logic to turn automated adaptations on and off. This work takes a novel approach of replacing the decision logic, normally performed by the automation, with human decisions. The Crew Workload Manager (CWLM) was developed to objectively display the workload between pilots and recommend task sharing; it is then the pilots who “close the loop” by deciding how to best mitigate unbalanced workload. The workload was manipulated by the Shared Aviation Task Battery (SAT-B), which was developed to provide opportunities for pilots to mitigate imbalances in workload between crew members. Participants were put in situations of high and low workload (i.e., workload was manipulated as opposed to being measured), the workload was then displayed to pilots, and pilots were allowed to decide how to mitigate the situation. An evaluation was performed that utilized the SAT-B to manipulate workload and create workload imbalances. Overall, the CWLM reduced the time spent in unbalanced workload and improved the crew coordination in task sharing while not negatively impacting concurrent task performance. Balancing workload has the potential to improve crew resource management and task performance over time, and reduce errors and fatigue. 
Paired with a real-time workload measurement system, the CWLM could help teams manage their own task load distribution. PMID:28400716

  18. Skill sharing and delegation practice in two Queensland regional allied health cancer care services: a comparison of tasks.

    PubMed

    Passfield, Juanine; Nielsen, Ilsa; Brebner, Neil; Johnstone, Cara

    2017-07-24

    Objective Delegation and skill sharing are emerging service strategies for allied health (AH) professionals working in Queensland regional cancer care services. The aim of the present study was to describe the consistency between two services for the types and frequency of tasks provided and the agreement between teams in the decision to delegate or skill share clinical tasks, thereby determining the potential applicability to other services. Methods Datasets provided by two similar services were collated. Descriptive statistical analyses were used to assess the extent of agreement. Results In all, 214 tasks were identified as being undertaken by the services (92% agreement). Across the services, 70 tasks were identified as high frequency (equal to or more frequently than weekly) and 29 as not high frequency (46% agreement). Of the 68 tasks that were risk assessed, agreement was 66% for delegation and 60% for skill sharing, with high-frequency and intervention tasks more likely to be delegated. Conclusions Strong consistency was apparent for the clinical tasks undertaken by the two cancer care AH teams, with moderate agreement for the frequency of tasks performed. The proportion of tasks considered appropriate for skill sharing and/or delegation was similar, although variation at the task level was apparent. Further research is warranted to examine the range of factors that affect the decision to skill share or delegate. What is known about the topic? There is limited research evidence regarding the use of skill sharing and delegation service models for AH in cancer care services. In particular, the extent to which decisions about task safety and appropriateness for delegation or skill sharing can be generalised across services has not been investigated. What does this paper add? This study investigated the level of clinical task consistency between two similar AH cancer care teams in regional centres. 
It also examined the level of agreement with regard to delegation and skill sharing to provide an indication of the level of local service influence on workforce and service model decisions. What are the implications for practitioners? Local factors have a modest influence on delegation and skill sharing decisions of AH teams. Practitioners need to be actively engaged in decision making at the local level to ensure the clinical service model meets local needs. However, teams should also capitalise on commonalities between settings to limit duplication of training and resource development through collaborative networks.

  19. Shared effects of organic microcontaminants and environmental stressors on biofilms and invertebrates in impaired rivers.

    PubMed

    Sabater, S; Barceló, D; De Castro-Català, N; Ginebreda, A; Kuzmanovic, M; Petrovic, M; Picó, Y; Ponsatí, L; Tornés, E; Muñoz, I

    2016-03-01

    Land use type, physical and chemical stressors, and organic microcontaminants were investigated for their effects on the biological communities (biofilms and invertebrates) in several Mediterranean rivers. The diversity of invertebrates, and the scores of the first principal component of a PCA performed with the diatom communities, were the best descriptors of the distribution patterns of the biological communities against the river stressors. These two metrics decreased according to the progressive site impairment (associated with a larger area of agricultural and urban-industrial land use, high water conductivity, higher dissolved organic carbon and dissolved inorganic nitrogen concentrations, and higher concentration of organic microcontaminants, particularly pharmaceutical and industrial compounds). The variance partition analyses (RDAs) attributed the major share (10%) of the biological communities' response to the environmental stressors (nutrients, altered discharge, dissolved organic matter), followed by the land use occupation (6%) and the organic microcontaminants (2%). However, the variance shared by the three groups of descriptors was very high (41%), indicating that their simultaneous occurrence determined most of the variation in the biological communities. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Cloud Retrieval Intercomparisons Between SEVIRI, MODIS and VIIRS with CHIMAERA PGE06 Data Collection 6 Products

    NASA Technical Reports Server (NTRS)

    Wind, Galina; Riedi, Jerome; Platnick, Steven; Heidinger, Andrew

    2014-01-01

    The Cross-platform HIgh resolution Multi-instrument AtmosphEric Retrieval Algorithms (CHIMAERA) system allows us to perform MODIS-like cloud top, optical, and microphysical properties retrievals on any sensor that possesses a minimum set of common spectral channels. The CHIMAERA system uses a shared-core architecture that takes the retrieval method out of the equation when intercomparisons are made. Here we show an example of such a retrieval and a comparison of simultaneous retrievals done using the SEVIRI, MODIS and VIIRS sensors. All sensor retrievals are performed using the CLAVR-x (or CLAVR-x based) cloud top properties algorithm. SEVIRI uses the SAF_NWC cloud mask. MODIS and VIIRS use the IFF-based cloud mask that is a shared algorithm between MODIS and VIIRS. The MODIS and VIIRS retrievals are performed using a VIIRS branch of CHIMAERA that limits the available MODIS channel set. Even though in that mode certain MODIS products, such as the multilayer cloud map, are not available, the cloud retrieval remains fully equivalent to operational Data Collection 6.

  1. Performance of children with autism spectrum disorder on advanced theory of mind tasks.

    PubMed

    Brent, Ella; Rios, Patricia; Happé, Francesca; Charman, Tony

    2004-09-01

    Although a number of advanced theory of mind tasks have been developed, there is sparse information on whether performance on different tasks is associated. The study examined the performance of 20 high-functioning 6- to 12-year-old children with autism spectrum disorder and 20 controls on three high-level theory of mind tasks: Strange Stories, Cartoons and the children's version of the Eyes task. The pattern of findings suggests that the three tasks may share differing, non-specific, information-processing requirements in addition to tapping any putative mentalizing ability. They may also indicate a degree of dissociation between social-cognitive and social-perceptual or affective components of the mentalizing system.

  2. Supporting the Development and Adoption of Automatic Lameness Detection Systems in Dairy Cattle: Effect of System Cost and Performance on Potential Market Shares

    PubMed Central

    Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen

    2017-01-01

    Simple Summary Most prototypes of systems to automatically detect lameness in dairy cattle are still not available on the market. Estimating their potential adoption rate could support developers in defining development goals towards commercially viable and well-adopted systems. We simulated the potential market shares of such prototypes to assess the effect of altering the system cost and detection performance on the potential adoption rate. We found that system cost and lameness detection performance indeed substantially influence the potential adoption rate. In order for farmers to prefer automatic detection over current visual detection, the usefulness that farmers attach to a system with specific characteristics should be higher than that of visual detection. As such, we concluded that low system costs and high detection performances are required before automatic lameness detection systems become applicable in practice. Abstract Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage missed lame cows and percentage false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per % less missed lame cows, €1.65 per % less false alerts, and €12.7 for lame leg indication, respectively. 
The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system’s potential adoption rate. PMID:28991188

  3. The Role of the Conductor's Goal Orientation and Use of Shared Performance Cues on Collegiate Instrumentalists' Motivational Beliefs and Performance in Large Musical Ensembles

    ERIC Educational Resources Information Center

    Matthews, Wendy K.; Kitsantas, Anastasia

    2013-01-01

    This study examined the effects of the conductor's goal orientation (mastery vs. performance) and use of shared performance cues (basic vs. interpretive vs. expressive) on instrumentalists' self-efficacy, collective efficacy, attributions, and performance. Eighty-one college instrumentalists from two musical ensembles participated in the study. It…

  4. Direction of interaction between mountain pine beetle (Dendroctonus ponderosae) and resource-sharing wood-boring beetles depends on plant parasite infection.

    PubMed

    Klutsch, Jennifer G; Najar, Ahmed; Cale, Jonathan A; Erbilgin, Nadir

    2016-09-01

    Plant pathogens can have cascading consequences on insect herbivores, though whether they alter competition among resource-sharing insect herbivores is unknown. We experimentally tested whether the infection of a plant pathogen, the parasitic plant dwarf mistletoe (Arceuthobium americanum), on jack pine (Pinus banksiana) altered the competitive interactions among two groups of beetles sharing the same resources: wood-boring beetles (Coleoptera: Cerambycidae) and the invasive mountain pine beetle (Dendroctonus ponderosae) (Coleoptera: Curculionidae). We were particularly interested in identifying potential mechanisms governing the direction of interactions (from competition to facilitation) between the two beetle groups. At the lowest and highest disease severity, wood-boring beetles increased their consumption rate relative to feeding levels at moderate severity. The performance (brood production and feeding) of mountain pine beetle was negatively associated with wood-boring beetle feeding and disease severity when they were reared separately. However, when both wood-boring beetles and high severity of plant pathogen infection occurred together, mountain pine beetle escaped from competition and improved its performance (increased brood production and feeding). Species-specific responses to changes in tree defense compounds and quality of resources (available phloem) were likely mechanisms driving this change of interactions between the two beetle groups. This is the first study demonstrating that a parasitic plant can be an important force in mediating competition among resource-sharing subcortical insect herbivores.

  5. NAS Parallel Benchmark. Results 11-96: Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (message passing interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. 
    In addition, we present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin2000.
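The contrast between the two programming models compared above can be sketched with a toy analogy in Python (not actual HPF or MPI code): a single global expression stands in for HPF's implicit data parallelism, where the compiler decides how work is distributed, while a manual partition-and-reduce stands in for explicit message passing, where the programmer does.

```python
data = list(range(1_000_000))

# Data-parallel style (HPF analogy): one global expression; the
# compiler/runtime decides how the work is split across processors.
total_global = sum(data)

# Explicit message-passing style (MPI analogy): the programmer
# partitions the data, each "rank" computes a partial result, and the
# partials are combined in a manual reduction step.
def partition(seq, nranks):
    """Split seq into nranks contiguous blocks (last may be shorter)."""
    chunk = (len(seq) + nranks - 1) // nranks
    return [seq[i * chunk:(i + 1) * chunk] for i in range(nranks)]

partials = [sum(block) for block in partition(data, 4)]  # per-rank work
total_mpi = sum(partials)                                # the "reduce"

assert total_global == total_mpi  # both models compute the same answer
```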

  6. High-performance workplace practices in nursing homes: an economic perspective.

    PubMed

    Bishop, Christine E

    2014-02-01

    To develop implications for research, practice and policy, selected economics and human resources management research literature was reviewed to compare and contrast nursing home culture change work practices with high-performance human resource management systems in other industries. The organization of nursing home work under culture change has much in common with high-performance work systems, which are characterized by increased autonomy for front-line workers, self-managed teams, flattened supervisory hierarchy, and the aspiration that workers use specific knowledge gained on the job to enhance quality and customization. However, successful high-performance work systems also entail intensive recruitment, screening, and on-going training of workers, and compensation that supports selective hiring and worker commitment; these features are not usual in the nursing home sector. Thus despite many parallels with high-performance work systems, culture change work systems are missing essential elements: those that require higher compensation. If purchasers, including public payers, were willing to pay for customized, resident-centered care, productivity gains could be shared with workers, and the nursing home sector could move from a low-road to a high-road employment system.

  7. Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0) preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications, ordering significantly improves overall performance on both distributed and distributed shared-memory systems; that cache reuse may be more important than reducing communication; that it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and that a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread level parallelism.
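As a concrete illustration of the PCG iteration studied above, here is a minimal pure-Python sketch with a Jacobi (diagonal) preconditioner rather than the paper's ILU(0). Real implementations work on sparse storage formats, and, as the abstract notes, their parallel performance depends heavily on data ordering and distribution; none of that is modeled here.

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    """Preconditioned Conjugate Gradient for dense SPD A (list of rows),
    with a diagonal (Jacobi) preconditioner given as M^{-1}'s diagonal."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r0 = b - A*x0, x0 = 0
    z = [M_inv_diag[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break                              # residual small enough
        z = [M_inv_diag[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

# Small SPD example system; exact solution is [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b, [1.0 / A[i][i] for i in range(2)])
print(x)  # ≈ [0.0909..., 0.6363...]
```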

  8. Solar Assisted Ground Source Heat Pump Performance in Nearly Zero Energy Building in Baltic Countries

    NASA Astrophysics Data System (ADS)

    Januševičius, Karolis; Streckienė, Giedrė

    2013-12-01

    In near zero energy buildings (NZEB) built in Baltic countries, heat production systems meet the challenge of a large share of domestic hot water demand and high required heating capacity. Due to passive solar design, cooling demand in residential buildings also needs an assessment and solution. Heat pump systems are a widespread solution to reduce energy use. A combination of heat pump and solar thermal collectors helps to meet standard requirements and increases the share of renewable energy use in the country's total energy balance. The presented paper describes a simulation study of solar assisted heat pump systems carried out in TRNSYS. The purpose of this simulation was to investigate how the performance of a solar assisted heat pump combination varies in a near zero energy building. Results of three systems were compared to the simulated performance of autonomous (independent) systems. Different solar assisted heat pump design solutions with serial and parallel solar thermal collector connections to the heat pump loop were modelled and a passive cooling possibility was assessed. Simulations were performed for three Baltic countries: Lithuania, Latvia and Estonia.

  9. NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology

    Science.gov Websites

    …HPC and influence the modern data center designer towards adoption of liquid cooling. … Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation to…

  10. 77 FR 7124 - Information Sharing With Agency Stakeholders; Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... efforts to transform itself into a customer-focused, high-performing organization. In this context, USDA's... relationships. As part of a larger effort to enhance stakeholder communication, APHIS is hosting an open meeting... headquarters for support? Why? As we continue to look at ways to improve our processes and enhance customer...

  11. Representing Valued Bodies in PE: A Visual Inquiry with British Asian Girls

    ERIC Educational Resources Information Center

    Hill, Joanne; Azzarito, Laura

    2012-01-01

    Background: Status or value in sport and physical education (PE) contexts is often associated with performances of highly proficient sporting bodies, which produce hierarchies of privileged and marginalised gendered and racialised positions. This may be communicated through text and images shared within school, physical cultures and media that…

  12. Transforming Ontario's Apprenticeship Training System: Supplying the Tradespersons Needed for Sustained Growth--A Proposal from Ontario's Colleges

    ERIC Educational Resources Information Center

    Colleges Ontario, 2009

    2009-01-01

    Ontario's colleges share the provincial government's belief that apprenticeship must play a greater role in addressing skills shortages and contributing to innovative, high-performance workplaces that enhance Ontario's competitiveness. Given the severity of the economic downturn, Ontario faces an immediate, serious challenge as apprenticeship…

  13. A Personnel Model: Hiring, Developing and Promoting Community College Employees.

    ERIC Educational Resources Information Center

    Adams, Frank G.

    The high priority placed on staff development by business and industry has not been shared by the community college which has tended to seek talents outside the institution rather than to develop those within. Community college staff development programs are usually designed to improve job performance rather than to enhance employee growth and…

  14. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high-level activities before distributing them across multiple rovers, in order to coordinate the resulting use of shared resources regardless of how each rover decides to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.

  15. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 MB/s), and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.
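
    The shared-bus bottleneck claim above reduces to back-of-envelope arithmetic: the bus saturates once the aggregate throughput of the disk nodes exceeds its hardware limit. A minimal sketch of that check (the 25 MB/s per-node stream rate below is an illustrative assumption, not a figure from the paper):

```python
# Back-of-envelope check for a shared-bus bottleneck: the bus saturates
# when the aggregate throughput of the disk nodes exceeds its capacity.

BUS_CAPACITY_MBPS = 400.0  # shared-bus hardware throughput cited above

def bus_is_bottleneck(n_nodes: int, per_node_mbps: float) -> bool:
    """True when n disk nodes streaming at per_node_mbps saturate the bus."""
    return n_nodes * per_node_mbps > BUS_CAPACITY_MBPS

def max_nodes(per_node_mbps: float) -> int:
    """Largest node count the bus can sustain without saturating."""
    return int(BUS_CAPACITY_MBPS // per_node_mbps)

# Example with an assumed 25 MB/s sustained stream per disk node:
print(max_nodes(25.0))              # -> 16 nodes at most
print(bus_is_bottleneck(20, 25.0))  # -> True (20 x 25 = 500 MB/s > 400 MB/s)
```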

  16. Activities in a social networking-based discussion group by endoscopic retrograde cholangiopancreatography doctors.

    PubMed

    Kang, Xiaoyu; Zhao, Lina; Liu, Na; Wang, Xiangping; Zhang, Rongchun; Liu, Zhiguo; Liang, Shuhui; Yao, Shaowei; Tao, Qin; Jia, Hui; Pan, Yanglin; Guo, Xuegang

    2017-10-01

    Online social networking is increasingly being used among medical practitioners. However, few studies have evaluated its use in therapeutic endoscopy. Here, we aimed to analyze the shared topics and activities of a group of endoscopic retrograde cholangiopancreatography (ERCP) doctors in a social networking-based endoscopic retrograde cholangiopancreatography discussion group (EDG). Six ERCP trainers working in Xijing Hospital and 48 graduated endoscopists who had finished ERCP training in the same hospital were invited to join EDG. All group members were informed not to divulge any private information of patients when using EDG. The activities of group members on EDG were retrospectively extracted. The individual data of the graduated endoscopists were collected by a questionnaire. From June 2014 to May 2015, 6924 messages were posted on EDG, half of which were ERCP-related. In total, 214 ERCP-related topics were shared, which could be categorized into three types: sharing experience/cases (52.3%), asking questions (38.3%), and sharing literature/advances (9.3%). Among the 48 graduated endoscopists, 21 had a low case volume of less than 50 per year and 27 had a high case volume of 50 or more. High-volume graduated endoscopists posted more ERCP-related messages (P=0.008) and shared more discussion topics (P=0.003) compared with low-volume graduated endoscopists. A survey showed that EDG was useful for graduated endoscopists in ERCP performance and management of post-ERCP complications, etc. A wide range of ERCP-related topics were shared on the social networking-based EDG. The ERCP-related behaviors on EDG were more active among graduated endoscopists with an ERCP case volume of 50 or more per year.

  17. Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects.

    PubMed

    Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K

    2012-03-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. A meta-analysis of shared leadership and team effectiveness.

    PubMed

    Wang, Danni; Waldman, David A; Zhang, Zhen

    2014-03-01

    A growing number of studies have examined the "sharedness" of leadership processes in teams (i.e., shared leadership, collective leadership, and distributed leadership). We meta-analytically cumulated 42 independent samples of shared leadership and examined its relationship to team effectiveness. Our findings reveal an overall positive relationship (ρ = .34). But perhaps more important, what is actually shared among members appears to matter with regard to team effectiveness. That is, shared traditional forms of leadership (e.g., initiating structure and consideration) show a lower relationship (ρ = .18) than either shared new-genre leadership (e.g., charismatic and transformational leadership; ρ = .34) or cumulative, overall shared leadership (ρ = .35). In addition, shared leadership tends to be more strongly related to team attitudinal outcomes and behavioral processes and emergent team states, compared with team performance. Moreover, the effects of shared leadership are stronger when the work of team members is more complex. Our findings further suggest that the referent used in measuring shared leadership does not influence its relationship with team effectiveness and that compared with vertical leadership, shared leadership shows unique effects in relation to team performance. In total, our study not only cumulates extant research on shared leadership but also provides directions for future research to move forward in the study of plural forms of leadership.
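
    The cumulation step behind meta-analytic estimates like the ρ values above starts from a sample-size-weighted mean correlation across the independent samples. A minimal sketch of that first step (the study counts and correlations below are toy numbers, not this meta-analysis's data):

```python
def weighted_mean_r(samples):
    """Sample-size-weighted mean correlation: the bare-bones first step
    of a Hunter-Schmidt style meta-analytic cumulation.
    `samples` is a list of (n, r) pairs."""
    total_n = sum(n for n, _ in samples)
    return sum(n * r for n, r in samples) / total_n

# Toy (n, r) pairs for illustration only:
studies = [(120, 0.40), (80, 0.25), (200, 0.35)]
print(round(weighted_mean_r(studies), 3))  # -> 0.345
```

    Corrections for unreliability and range restriction, which turn the weighted mean r into an estimated ρ, are applied on top of this baseline.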

  19. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architectures (SMA), using the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.

  20. LADS: Optimizing Data Transfers using Layout-Aware Data Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink. Data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this paper, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework called LADS for terabit networks. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to use zero-copy, OS-bypass hardware when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving I/O bandwidth and data transfer rates across high speed networks.

  1. A Comparative Study of 11 Local Health Department Organizational Networks

    PubMed Central

    Merrill, Jacqueline; Keeling, Jonathan W.; Carley, Kathleen M.

    2013-01-01

    Context: Although the nation's local health departments (LHDs) share a common mission, variability in administrative structures is a barrier to identifying common, optimal management strategies. There is a gap in understanding what unifying features LHDs share as organizations that could be leveraged systematically for achieving high performance. Objective: To explore sources of commonality and variability in a range of LHDs by comparing intraorganizational networks. Intervention: We used organizational network analysis to document relationships between employees, tasks, knowledge, and resources within LHDs, which may exist regardless of formal administrative structure. Setting: A national sample of 11 LHDs from seven states that differed in size, geographic location, and governance. Participants: Relational network data were collected via an on-line survey of all employees in 11 LHDs. A total of 1,062 out of 1,239 employees responded (84% response rate). Outcome Measures: Network measurements were compared using coefficient of variation. Measurements were correlated with scores from the National Public Health Performance Assessment and with LHD demographics. Rankings of tasks, knowledge, and resources were correlated across pairs of LHDs. Results: We found that 11 LHDs exhibited compound organizational structures in which centralized hierarchies were coupled with distributed networks at the point of service. Local health departments were distinguished from random networks by a pattern of high centralization and clustering. Network measurements were positively associated with performance for 3 of 10 essential services (r > 0.65). Patterns in the measurements suggest how LHDs adapt to the population served. Conclusions: Shared network patterns across LHDs suggest where common organizational management strategies are feasible. This evidence supports national efforts to promote uniform standards for service delivery to diverse populations. PMID:20445462
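
    The coefficient of variation used above to compare network measurements is simply the standard deviation scaled by the mean, which makes measures on different scales comparable across the 11 LHDs. A minimal sketch (the centralization scores below are toy values, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """Coefficient of variation: sample standard deviation over the mean.
    Dimensionless, so measures on different scales can be compared."""
    return statistics.stdev(values) / statistics.mean(values)

# Toy centralization scores for illustration only:
scores = [0.62, 0.58, 0.71, 0.65]
print(round(coefficient_of_variation(scores), 3))  # -> 0.086
```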

  2. Model My Watershed and BiG CZ Data Portal: Interactive geospatial analysis and hydrological modeling web applications that leverage the Amazon cloud for scientists, resource managers and students

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.

    2016-12-01

    The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application for scientists for intuitive, high-performance map-based discovery, visualization, access and publication of diverse earth and environmental science data via a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a selected area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), high performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis) and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can use "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling the scientists, resource managers and students alike to share data, analyses and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen-scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that they can be reshared using Water One Flow web services.

  3. Teaching hospital performance: towards a community of shared values?

    PubMed

    Mauro, Marianna; Cardamone, Emma; Cavallaro, Giusy; Minvielle, Etienne; Rania, Francesco; Sicotte, Claude; Trotta, Annarita

    2014-01-01

    This paper explores the performance dimensions of Italian teaching hospitals (THs) by considering the multiple constituent model approach, using measures that are subjective and based on individual ideals and preferences. Our research replicates a study of a French TH and deepens it by adjusting it to the context of an Italian TH. The purposes of this research were as follows: to identify emerging views on the performance of teaching hospitals and to analyze how these views vary among hospital stakeholders. We conducted an in-depth case study of a TH using a quantitative survey method. The survey uses a questionnaire based on Parsons' social system action theory, which embraces the major models of organizational performance and covers three groups of internal stakeholders: physicians, caregivers and administrative staff. The questionnaires were distributed between April and September 2011. The results confirm that hospital performance is multifaceted and includes the dimensions of efficiency, effectiveness and quality of care, as well as organizational and human features. There is a high degree of consensus among all observed stakeholder groups about these values, and a shared view of performance is emerging. Our research provides useful information for defining management priorities to improve the performance of THs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Performance Assessment Institute-NV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment's intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by institutions of higher learning, the U.S. Government, regulatory agencies, and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.

  5. Social Consequences of Academic Teaming in Middle School: The Influence of Shared Course-Taking on Peer Victimization

    PubMed Central

    Echols, Leslie

    2014-01-01

    This study examined the influence of academic teaming (i.e., sharing academic classes with the same classmates) on the relationship between social preference and peer victimization among 6th grade students in middle school. Approximately 1,000 participants were drawn from 5 middle schools that varied in their practice of academic teaming. A novel methodology for measuring academic teaming at the individual level was employed, in which students received their own teaming score based on the unique set of classmates with whom they shared academic courses in their class schedule. Using both peer- and self-reports of victimization, the results of two path models indicated that students with low social preference in highly teamed classroom environments were more victimized than low preference students who experienced less teaming throughout the school day. This effect was exaggerated in higher performing classrooms. Implications for the practice of academic teaming were discussed. PMID:25937668

  6. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing.

    PubMed

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-07-17

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system.

  8. Cooperative outcome interdependence, task reflexivity, and team effectiveness: a motivated information processing perspective.

    PubMed

    De Dreu, Carsten K W

    2007-05-01

    A motivated information processing perspective (C. K. W. De Dreu & P. J. D. Carnevale, 2003; see also V. B. Hinsz, R. S. Tindale, & D. A. Vollrath, 1997) was used to predict that perceived cooperative outcome interdependence interacts with team-level reflexivity to predict information sharing, learning, and team effectiveness. A cross-sectional field study involving management and cross-functional teams (N = 46) performing nonroutine, complex tasks corroborated predictions: The more team members perceived cooperative outcome interdependence, the better they shared information, the more they learned and the more effective they were, especially when task reflexivity was high. When task reflexivity was low, no significant relationship was found between cooperative outcome interdependence and team processes and performance. The author concludes that the motivated information processing perspective is valid outside the confines of the laboratory and can be extended toward teamwork in organizations. 2007 APA, all rights reserved

  9. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with a single-processor dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-Processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance for running parallel dynamic simulation is compared and demonstrated.
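
    Whatever the paradigm (OpenMP threads or MPI ranks), the achievable speedup of such a simulation is bounded by its serial fraction, which Amdahl's law makes explicit. A quick sketch of that ceiling (the 5% serial fraction below is an assumed illustration, not a measurement from this work):

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Upper bound on parallel speedup when `serial_fraction` of the
    work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With an assumed 5% serial part, speedup saturates well below n_procs:
for p in (4, 16, 64):
    print(p, round(amdahl_speedup(0.05, p), 2))  # 3.48, 9.14, 15.42
```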

  10. Let’s Dance Together: Synchrony, Shared Intentionality and Cooperation

    PubMed Central

    Reddish, Paul; Fischer, Ronald; Bulbulia, Joseph

    2013-01-01

    Previous research has shown that the matching of rhythmic behaviour between individuals (synchrony) increases cooperation. Such synchrony is most noticeable in music, dance and collective rituals. As well as the matching of behaviour, such collective performances typically involve shared intentionality: performers actively collaborate to produce joint actions. Over three experiments we examined the importance of shared intentionality in promoting cooperation from group synchrony. Experiment 1 compared a condition in which group synchrony was produced through shared intentionality to conditions in which synchrony or asynchrony were created as a by-product of hearing the same or different rhythmic beats. We found that synchrony combined with shared intentionality produced the greatest level of cooperation. To examine the importance of synchrony when shared intentionality is present, Experiment 2 compared a condition in which participants deliberately worked together to produce synchrony with a condition in which participants deliberately worked together to produce asynchrony. We found that synchrony combined with shared intentionality produced the greatest level of cooperation. Experiment 3 manipulated both the presence of synchrony and shared intentionality and found significantly greater cooperation with synchrony and shared intentionality combined. Path analysis supported a reinforcement of cooperation model according to which perceiving synchrony when there is a shared goal to produce synchrony provides immediate feedback for successful cooperation, so reinforcing the group's cooperative tendencies. The reinforcement of cooperation model helps to explain the evolutionary conservation of traditional music and dance performances, and furthermore suggests that the collectivist values of such cultures may be an essential part of the mechanisms by which synchrony galvanises cooperative behaviours. PMID:23951106

  11. Does the market share of generic medicines influence the price level?: a European analysis.

    PubMed

    Dylst, Pieter; Simoens, Steven

    2011-10-01

    After the expiry of patents for originator medicines, generic medicines can enter the market, and price competition may occur. This process generates savings to the healthcare payer and to patients, but knowledge about the factors affecting price competition in the pharmaceutical market following patent expiry is still limited. This study aimed to investigate the relationship between the market share of generic medicines and the change of the medicine price level in European off-patent markets. Data on medicine volumes and values for 35 active substances were purchased from IMS Health. Ex-manufacturer prices were used, and the analysis was limited to medicines in immediate-release, oral, solid dosage forms. Countries included were Austria, Belgium, Denmark, Germany, France, Italy, the Netherlands, Spain, Sweden and the UK, which constitute a mix of countries with low and high generic medicines market shares. Data were available from June 2002 until March 2007. Market volume has risen in both high and low generic market share countries (+29.27% and +27.40%, respectively), but the cause of the rise is different for the two markets. In low generic market share countries, the rise was caused by the increased use of generic medicines, while in high market share countries, the rise was driven by the increased use of generic medicines and a shift of use from originator to generic medicines. Market value was substantially decreased in high generic market share countries (-26.6%), while the decrease in low generic market share countries was limited (-0.06%). In high generic market share countries, medicine prices dropped by -43.18% versus -21.56% in low market share countries. The extent to which price competition from generic medicines leads to price reductions appears to vary according to the market share of generic medicines. High generic market share countries have seen a larger decrease in medicine prices than low market share countries.

  12. Social Networks and Performance in Distributed Learning Communities

    ERIC Educational Resources Information Center

    Cadima, Rita; Ojeda, Jordi; Monguet, Josep M.

    2012-01-01

    Social networks play an essential role in learning environments as a key channel for knowledge sharing and students' support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; knowledge sharing depends even more on student informal connections. In this…

  13. The Conceptual Framework of Factors Affecting Shared Mental Model

    ERIC Educational Resources Information Center

    Lee, Miyoung; Johnson, Tristan; Lee, Youngmin; O'Connor, Debra; Khalil, Mohammed

    2004-01-01

    Many researchers have paid attention to the potentiality and possibility of the shared mental model because it enables teammates to perform their jobs better by sharing team knowledge, skills, attitudes, dynamics and environments. Even though theoretical and experimental evidence provides a close relationship between the shared mental model and…

  14. Association of the Shared Epitope, Smoking and the Interaction Between the Two With the Presence of Autoantibodies (Anti-CCP and FR) in Patients With Rheumatoid Arthritis in a Hospital in Seville, Spain.

    PubMed

    García de Veas Silva, José Luis; González Rodríguez, Concepción; Hernández Cruz, Blanca

    2017-11-01

    To evaluate the association of shared epitope, smoking and their interaction on the presence of autoantibodies (anti-cyclic citrullinated peptide [CCP] antibodies and rheumatoid factor) in patients with rheumatoid arthritis in our geographical area. A descriptive and cross-sectional study was carried out in a cohort of 106 patients diagnosed with RA. Odds ratios (OR) for antibody development were calculated for shared epitope, tobacco exposure and smoking dose. Statistical analysis was performed with univariate and multivariate statistics using ordinal logistic regression. Odds ratios were calculated with 95% confidence intervals (95% CI) and a value of P<.05 was considered significant. In univariate analysis, shared epitope (OR=2.68; 95% CI: 1.11-6.46), tobacco exposure (OR=2.79; 95% CI: 1.12-6.97) and heavy smoking (>20 packs/year) (OR=8.93; 95% CI: 1.95-40.82) were associated with the presence of anti-CCP antibodies. For rheumatoid factor, the association was only significant for tobacco exposure (OR=3.89; 95% CI: 1.06-14.28) and smoking dose (OR=8.33; 95% CI: 1.05-66.22). By ordinal logistic regression analysis, an association with high titers of anti-CCP (>200U/mL) was identified with South American mestizos, patients with homozygous shared epitope, positive rheumatoid factor and heavy smokers. Being a South American mestizo, having a shared epitope, rheumatoid factor positivity and a smoking dose >20 packs/year are independent risk factors for the development of rheumatoid arthritis with a high titer of anti-CCP (>200U/mL). In shared epitope-positive rheumatoid arthritis patients, the intensity of smoking is more strongly associated than tobacco exposure with an increased risk of positive anti-CCP. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
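
    Odds ratios with 95% confidence intervals of the kind reported above follow the standard 2×2-table calculation with Woolf's logit method. A hedged sketch of that calculation (the counts below are made up for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
        a = exposed cases,   b = exposed controls,
        c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Made-up counts for illustration only:
or_, lo, hi = odds_ratio_ci(30, 20, 25, 45)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # -> 2.7 1.28 5.7
```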

  15. Multisource feedback, human capital, and the financial performance of organizations.

    PubMed

    Kim, Kyoung Yong; Atwater, Leanne; Patel, Pankaj C; Smither, James W

    2016-11-01

    We investigated the relationship between organizations' use of multisource feedback (MSF) programs and their financial performance. We proposed a moderated mediation framework in which the employees' ability and knowledge sharing mediate the relationship between MSF and organizational performance and the purpose for which MSF is used moderates the relationship of MSF with employees' ability and knowledge sharing. With a sample of 253 organizations representing 8,879 employees from 2005 to 2007 in South Korea, we found that MSF had a positive effect on organizational financial performance via employees' ability and knowledge sharing. We also found that when MSF was used for a dual purpose (both administrative and developmental), the relationship between MSF and knowledge sharing was stronger, and this interaction carried through to organizational financial performance. However, the purpose of MSF did not moderate the relationship between MSF and employees' ability. The theoretical relevance and practical implications of the findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Cancer-testis antigen expression is shared between epithelial ovarian cancer tumors.

    PubMed

    Garcia-Soto, Arlene E; Schreiber, Taylor; Strbo, Natasa; Ganjei-Azar, Parvin; Miao, Feng; Koru-Sengul, Tulay; Simpkins, Fiona; Nieves-Neira, Wilberto; Lucci, Joseph; Podack, Eckhard R

    2017-06-01

    Cancer-testis (CT) antigens have been proposed as potential targets for cancer immunotherapy. Our objective was to evaluate the expression of a panel of CT antigens in epithelial ovarian cancer (EOC) tumor specimens, and to determine if antigen sharing occurs between tumors. RNA was isolated from EOC tumor specimens, EOC cell lines and benign ovarian tissue specimens. Real-time PCR analysis was performed to determine the expression level of 20 CT antigens. A total of 62 EOC specimens, 8 ovarian cancer cell lines and 3 benign ovarian tissues were evaluated for CT antigen expression. The majority of the specimens were high grade (62%), serous (68%) and advanced stage (74%). Fifty-eight (95%) of the EOC tumors analyzed expressed at least one of the CT antigens evaluated. The mean number of CT antigens expressed was 4.5 (range, 0-17). The most frequently expressed CT antigen was MAGE A4 (65%). Antigen-sharing analysis showed that 9 tumors shared only one antigen with 62% of the evaluated specimens, while 37 tumors shared 4 or more antigens with 82%. Five tumors expressed over 10 CT antigens, which were shared with 90% of the tumor panel. CT antigens are expressed in 95% of EOC tumor specimens. However, no single antigen was universally expressed across all samples. The degree of antigen sharing between tumors increased with the total number of antigens expressed. These data suggest a multi-epitope approach for the development of immunotherapy for ovarian cancer treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
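    The antigen-sharing analysis described above reduces to set intersections between tumor expression profiles. A toy sketch (the profiles and antigen names below are invented for illustration, not the study's panel):

    ```python
    # For each tumor, compute the fraction of other tumors that share
    # at least one expressed CT antigen with it.
    profiles = {
        "T1": {"MAGEA4", "NYESO1"},
        "T2": {"MAGEA4"},
        "T3": {"NYESO1", "MAGEA3"},
        "T4": {"MAGEA3"},
    }

    def sharing_fraction(tumor, profiles):
        others = [t for t in profiles if t != tumor]
        shared = [t for t in others if profiles[tumor] & profiles[t]]
        return len(shared) / len(others)

    for t in profiles:
        print(t, round(sharing_fraction(t, profiles), 2))
    ```

    T1, which expresses two antigens, shares with 2 of 3 other tumors, while single-antigen T2 shares with only 1 of 3, mirroring the study's observation that sharing increases with the number of antigens expressed.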

  17. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    PubMed

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high-quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Genetic influences on heart rate variability

    PubMed Central

    Golosheykin, Simon; Grant, Julia D.; Novak, Olga V.; Heath, Andrew C.; Anokhin, Andrey P.

    2016-01-01

    Heart rate variability (HRV) is the variation of cardiac inter-beat intervals over time resulting largely from the interplay between the sympathetic and parasympathetic branches of the autonomic nervous system. Individual differences in HRV are associated with emotion regulation, personality, psychopathology, cardiovascular health, and mortality. Previous studies have shown significant heritability of HRV measures. Here we extend genetic research on HRV by investigating sex differences in genetic underpinnings of HRV, the degree of genetic overlap among different measurement domains of HRV, and phenotypic and genetic relationships between HRV and the resting heart rate (HR). We performed electrocardiogram (ECG) recordings in a large population-representative sample of young adult twins (n = 1060 individuals) and computed HRV measures from three domains: time, frequency, and nonlinear dynamics. Genetic and environmental influences on HRV measures were estimated using linear structural equation modeling of twin data. The results showed that variability of HRV and HR measures can be accounted for by additive genetic and non-shared environmental influences (AE model), with no evidence for significant shared environmental effects. Heritability estimates ranged from 47 to 64%, with little difference across HRV measurement domains. Genetic influences did not differ between genders for most variables except the square root of the mean squared differences between successive R-R intervals (RMSSD, higher heritability in males) and the ratio of low to high frequency power (LF/HF, distinct genetic factors operating in males and females). The results indicate high phenotypic and especially genetic correlations between HRV measures from different domains, suggesting that >90% of genetic influences are shared across measures. Finally, about 40% of genetic variance in HRV was shared with HR. 
In conclusion, both HR and HRV measures are highly heritable traits in the general population of young adults, with high degree of genetic overlap across different measurement domains. PMID:27114045
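    The AE decomposition reported here was estimated with structural equation modeling of twin data; the classical Falconer equations give a much simpler back-of-the-envelope version of the same idea. A sketch with invented twin correlations (not the study's estimates):

    ```python
    # Falconer's decomposition of trait variance from monozygotic (MZ)
    # and dizygotic (DZ) twin correlations. Values are illustrative only.
    r_mz, r_dz = 0.55, 0.27

    h2 = 2 * (r_mz - r_dz)   # A: additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # C: shared environment; ~0 supports an AE model
    e2 = 1 - r_mz            # E: non-shared environment (plus measurement error)

    print(f"h2={h2:.2f}  c2={c2:.2f}  e2={e2:.2f}")
    ```

    With these inputs, h² comes out near the 47-64% range reported above, and c² is essentially zero, which is the pattern that leads a model-fitting analysis to prefer an AE model over a full ACE model.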

  19. Connecting Music Education and Virtual Performance Practices from YouTube

    ERIC Educational Resources Information Center

    Cayari, Christopher

    2018-01-01

    The Internet has inspired musicians to explore technologies to produce recorded music performances. Social media sites like YouTube provide spaces for musicians to share their work, and advances in technology afford venues and opportunities for performers to share their craft. As amateur Internet musicians develop practices to create…

  20. Economic benefits of sharing and redistributing influenza vaccines when shortages occurred.

    PubMed

    Chen, Sheng-I

    2017-01-01

    Recurrent influenza outbreaks have been a concern for government health institutions in Taiwan. Over 10% of the population is infected by influenza viruses every year, and the infection has caused losses to both health and the economy. Approximately three million free vaccine doses are ordered and administered to high-risk populations at the beginning of flu season to control the disease. The government recommends sharing and redistributing vaccine inventories when shortages occur. While this policy is intended to increase inventory flexibility and has proven widely valuable, its impact on vaccine availability has not been previously reported. This study developed an inventory model adapted to vaccination protocols to evaluate government-recommended policies under different levels of vaccine production. Demands were uncertain and stratified by age and location according to the demographic data of Taiwan. When vaccine supply was sufficient, sharing pediatric vaccine reduced vaccine unavailability by 43% and overstock by 54%, and sharing adult vaccine reduced vaccine unavailability by 9% and overstock by 15%. Redistributing vaccines obtained greater gains for both pediatrics and adults (by 75%). When vaccine supply was short, only sharing pediatric vaccine yielded a 48% reduction of unused inventory, while the other policies did not improve performance. When implementing vaccination activities for seasonal influenza intervention, it is important to consider mismatches between demand and vaccine inventory. Our model confirmed that sharing and redistributing vaccines can substantially increase availability and reduce unused vaccines.
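    The core intuition of the sharing policy, that pooling stock lets a surplus at one site cover a shortfall at another, can be shown with a deterministic toy example (this is not the paper's stochastic inventory model; clinics and numbers are invented):

    ```python
    # Two clinics with identical stock but mismatched demand.
    stock  = {"A": 100, "B": 100}
    demand = {"A": 130, "B": 60}

    # Without sharing: each clinic absorbs its own shortage.
    shortage_no_share = sum(max(demand[c] - stock[c], 0) for c in stock)

    # With sharing: clinic B's surplus covers clinic A's shortfall,
    # so only an aggregate deficit produces a shortage.
    shortage_share = max(sum(demand.values()) - sum(stock.values()), 0)

    print(shortage_no_share, shortage_share)  # 30 0
    ```

    The same pooling simultaneously cuts overstock (B's unused 40 doses shrink to 10), which is why the study reports reductions in both unavailability and overstock.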

  1. Using a shared governance structure to evaluate the implementation of a new model of care: the shared experience of a performance improvement committee.

    PubMed

    Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare

    2013-10-01

    Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability, and ownership. Few recent exemplars in the literature link shared governance, change management, and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence-based practice in determining the best methods to evaluate the implementation of a new model of care.

  2. Using a Shared Governance Structure to Evaluate the Implementation of a New Model of Care: The Shared Experience of a Performance Improvement Committee

    PubMed Central

    Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare

    2013-01-01

    Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability, and ownership. Few recent exemplars in the literature link shared governance, change management, and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence-based practice in determining the best methods to evaluate the implementation of a new model of care. PMID:24061583

  3. Novel high-resolution VGA QWIP detector

    NASA Astrophysics Data System (ADS)

    Kataria, H.; Asplund, C.; Lindberg, A.; Smuk, S.; Alverbro, J.; Evans, D.; Sehlin, S.; Becanovic, S.; Tinghag, P.; Höglund, L.; Sjöström, F.; Costard, E.

    2017-02-01

    Continuing its legacy of producing high-performance infrared detectors, IRnova introduces its high-resolution LWIR IDDCA (Integrated Detector Dewar Cooler Assembly) based on QWIP (quantum well infrared photodetector) technology. The Focal Plane Array (FPA) has 640×512 pixels with a small (15 μm) pixel pitch, and is based on the FLIR Indigo ISC0403 Readout Integrated Circuit (ROIC). The QWIP epitaxial structures are grown by metal-organic vapor phase epitaxy (MOVPE) at IRnova. The detector stability and response uniformity inherent to III/V-based materials are demonstrated in terms of high-performing detectors. Results showing low NETD at high frame rates will be presented. This makes it one of the first 15 μm-pitch QWIP-based LWIR IDDCAs commercially available on the market. The high operability and stability of our other QWIP-based products will also be shared.

  4. Design, Implementation, and Evaluation of a Virtual Shared Memory System in a Multi-Transputer Network.

    DTIC Science & Technology

    1987-12-01

    …high performance, fault tolerance, and extensibility. These features are attained by synchronizing and coordinating the distributed multicomputer… synchronizing all processors in the network. In a multitransputer network, processes that communicate with each other do so synchronously. This makes…

  5. High Performance Active Database Management on a Shared-Nothing Parallel Processor

    DTIC Science & Technology

    1998-05-01

    …either stored or virtual. A stored node is like a materialized view: it actually contains the specified tuples. A virtual node is like a real view…

  6. Monozygotic twin differences in school performance are stable and systematic.

    PubMed

    von Stumm, Sophie; Plomin, Robert

    2018-06-19

    School performance is one of the most stable and heritable psychological characteristics. Notwithstanding, monozygotic twins (MZ), who have identical genotypes, differ in school performance. These MZ differences result from non-shared environments that do not contribute to the similarity within twin pairs. Because to date few non-shared environmental factors have been reliably associated with MZ differences in school performance, they are thought to be idiosyncratic and due to chance, suggesting that the effects of non-shared environments on MZ differences are age- and trait-specific. In a sample of 2768 MZ twin pairs, we found first that MZ differences in school performance were moderately stable from age 12 through 16, with differences at the ages 12 and 14 accounting for 20% of the variance in MZ differences at age 16. Second, MZ differences in school performance correlated positively with MZ differences across 16 learning-related variables, including measures of intelligence, personality and school attitudes, with the twin who scored higher on one also scoring higher on the other measures. Finally, MZ differences in the 16 learning-related variables accounted for 22% of the variance in MZ differences in school performance at age 16. These findings suggest that, unlike for other psychological domains, non-shared environmental factors affect school performance in systematic ways that have long-term and generalist influence. Our findings should motivate the search for non-shared environmental factors responsible for the stable and systematic effects on children's differences in school performance. A video abstract of this article can be viewed at: https://youtu.be/0bw2Fl_HGq0. © 2018 John Wiley & Sons Ltd.

  7. Adaptable, high recall, event extraction system with minimal configuration

    PubMed Central

    2015-01-01

    Background Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task-specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. Results We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of the BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task-specific configuration and tuning, EventMine achieved 1st place in the PC task and 2nd place in the CG task, with the highest recall in both. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. Conclusions We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. 
Both covariate shift and weighting methods are useful in facilitating the production of high recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration. PMID:26201408

  8. Individual differences in airline captains' personalities, communication strategies, and crew performance

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith

    1991-01-01

    Aircrew effectiveness in coping with emergencies has been linked to captain's personality profile. The present study analyzed cockpit communication during simulated flight to examine the relation between captains' discourse strategies, personality profiles, and crew performance. Positive Instrumental/Expressive captains and Instrumental-Negative captains used very similar communication strategies and their crews made few errors. Their talk was distinguished by high levels of planning and strategizing, gathering information, predicting/alerting, and explaining, especially during the emergency flight phase. Negative-Expressive captains talked less overall, and engaged in little problem solving talk, even during emergencies. Their crews made many errors. Findings support the theory that high crew performance results when captains use language to build shared mental models for problem situations.

  9. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscope, barometers and most importantly cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  10. Implementation and evaluation of shared-memory communication and synchronization operations in MPICH2 using the Nemesis communication subsystem.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntinas, D.; Mercier, G.; Gropp, W.

    2007-09-01

    This paper presents the implementation of MPICH2 over the Nemesis communication subsystem and the evaluation of its shared-memory performance. We describe design issues as well as some of the optimization techniques we employed. We conducted a performance evaluation over shared memory using microbenchmarks. The evaluation shows that MPICH2 Nemesis has very low communication overhead, making it suitable for smaller-grained applications.
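    A shared-memory communication subsystem like Nemesis is typically evaluated with ping-pong microbenchmarks that measure round-trip latency. A minimal Python sketch of the idea, with threads and queues standing in for the MPI processes and shared-memory channels the paper actually measures:

    ```python
    import threading, queue, time

    ping, pong = queue.Queue(), queue.Queue()

    def echo(n):
        # The "remote" side: bounce every message straight back.
        for _ in range(n):
            pong.put(ping.get())

    N = 1000
    t = threading.Thread(target=echo, args=(N,))
    t.start()

    start = time.perf_counter()
    for _ in range(N):
        ping.put(b"x")   # send
        pong.get()       # wait for the echo
    elapsed = time.perf_counter() - start
    t.join()

    print(f"avg round-trip: {elapsed / N * 1e6:.1f} microseconds")
    ```

    Real MPI microbenchmarks time the same send/receive loop over increasing message sizes; halving the round-trip time gives the one-way latency usually reported.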

  11. Job Management and Task Bundling

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André

    2018-03-01

    High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
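    The bundling idea can be sketched as a greedy packing of small tasks into node-sized jobs. This is an illustrative toy, not the actual METAQ/mpi_jm scheduling logic:

    ```python
    def bundle(tasks, nodes_per_job):
        """Greedily pack (name, node_count) tasks into bundles that each
        fit within nodes_per_job nodes, placing largest tasks first."""
        bundles, current, used = [], [], 0
        for name, size in sorted(tasks, key=lambda t: -t[1]):
            if current and used + size > nodes_per_job:
                bundles.append(current)   # close the full bundle
                current, used = [], 0
            current.append(name)
            used += size
        if current:
            bundles.append(current)
        return bundles

    tasks = [("a", 3), ("b", 2), ("c", 2), ("d", 1)]
    print(bundle(tasks, 4))  # [['a'], ['b', 'c'], ['d']]
    ```

    Each resulting bundle is submitted as one large job; idle-time backfilling, as in the tools above, would additionally slot short tasks into nodes that finish early.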

  12. Client/Server data serving for high performance computing

    NASA Technical Reports Server (NTRS)

    Wood, Chris

    1994-01-01

    This paper will examine the industry requirements for shared network data storage and sustained high-speed (tens to hundreds to thousands of megabytes per second) network data serving via the NFS and FTP protocol suites. It will discuss the current structural and architectural impediments to achieving these data rates cost-effectively today on many general-purpose servers, and will describe an architecture and resulting product family that addresses these problems. The sustained performance levels that were achieved in the lab will be shown, together with a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.

  13. An efficient 3-dim FFT for plane wave electronic structure calculations on massively parallel machines composed of multiprocessor nodes

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry

    2003-08-01

    Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane wave electronic structure calculations. Obtaining high performance on large numbers of processors is non-trivial on the latest generation of parallel computers, which consist of nodes made up of shared-memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dim FFTs in a combined MPI/OpenMP programming paradigm will be presented. Exploiting the peculiarities of plane wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.
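    The parallel decomposition exploits the fact that a 3-D FFT factors into 1-D FFTs applied along each axis in turn; each stage can then be split across MPI ranks and OpenMP threads. The factorization itself can be verified with NumPy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8, 8))

    # One-shot 3-D transform...
    full = np.fft.fftn(x)

    # ...equals three passes of 1-D transforms, one per axis. In the
    # parallel version, the data is redistributed (transposed) between
    # passes so each processor owns complete lines along the active axis.
    staged = np.fft.fft(np.fft.fft(np.fft.fft(x, axis=0), axis=1), axis=2)

    print(np.allclose(full, staged))  # True
    ```

    The inter-pass transposes are the communication-heavy step; the hybrid MPI/OpenMP approach described above limits them to fewer, larger messages between nodes while threads handle the on-node work.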

  14. Shared knowledge or shared affordances? Insights from an ecological dynamics approach to team coordination in sports.

    PubMed

    Silva, Pedro; Garganta, Júlio; Araújo, Duarte; Davids, Keith; Aguiar, Paulo

    2013-09-01

    Previous research has proposed that team coordination is based on shared knowledge of the performance context, responsible for linking teammates' mental representations for collective, internalized action solutions. However, this representational approach raises many questions including: how do individual schemata of team members become reformulated together? How much time does it take for this collective cognitive process to occur? How do different cues perceived by different individuals sustain a general shared mental representation? This representational approach is challenged by an ecological dynamics perspective of shared knowledge in team coordination. We argue that the traditional shared knowledge assumption is predicated on 'knowledge about' the environment, which can be used to share knowledge and influence intentions of others prior to competition. Rather, during competitive performance, the control of action by perceiving surrounding informational constraints is expressed in 'knowledge of' the environment. This crucial distinction emphasizes perception of shared affordances (for others and of others) as the main communication channel between team members during team coordination tasks. From this perspective, the emergence of coordinated behaviours in sports teams is based on the formation of interpersonal synergies between players resulting from collective actions predicated on shared affordances.

  15. Speech recognition in advanced rotorcraft - Using speech controls to reduce manual control overload

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.; Bortolussi, Michael R.

    1988-01-01

    An experiment has been conducted to ascertain the usefulness of helicopter pilot speech controls and their effect on time-sharing performance, under the impetus of multiple-resource theories of attention which predict that time-sharing should be more efficient with mixed manual and speech controls than with all-manual ones. The test simulation involved an advanced, single-pilot scout/attack helicopter. Performance and subjective workload levels obtained supported the claimed utility of speech recognition-based controls; specifically, time-sharing performance was improved while preparing a data-burst transmission of information during helicopter hover.

  16. Digital Photograph Security: What Plastic Surgeons Need to Know.

    PubMed

    Thomas, Virginia A; Rugeley, Patricia B; Lau, Frank H

    2015-11-01

    Sharing and storing digital patient photographs occur daily in plastic surgery. Two major risks associated with the practice, data theft and Health Insurance Portability and Accountability Act (HIPAA) violations, have been dramatically amplified by high-speed data connections and digital camera ubiquity. The authors review what plastic surgeons need to know to mitigate those risks and provide recommendations for implementing an ideal, HIPAA-compliant solution for plastic surgeons' digital photography needs: smartphones and cloud storage. Through informal discussions with plastic surgeons, the authors identified the most common photograph sharing and storage methods. For each method, a literature search was performed to identify the risks of data theft and HIPAA violations. HIPAA violation risks were confirmed by the second author (P.B.R.), a compliance liaison and privacy officer. A comprehensive review of HIPAA-compliant cloud storage services was performed. When possible, informal interviews with cloud storage services representatives were conducted. The most common sharing and storage methods are not HIPAA compliant, and several are prone to data theft. The authors' review of cloud storage services identified six HIPAA-compliant vendors that have strong to excellent security protocols and policies. These options are reasonably priced. Digital photography and technological advances offer major benefits to plastic surgeons but are not without risks. A proper understanding of data security and HIPAA regulations needs to be applied to these technologies to safely capture their benefits. Cloud storage services offer efficient photograph sharing and storage with layers of security to ensure HIPAA compliance and mitigate data theft risk.

  17. Developments of new force reflecting control schemes and an application to a teleoperation training simulator

    NASA Technical Reports Server (NTRS)

    Kim, Won S.

    1992-01-01

    Two schemes of force-reflecting control, position-error-based force reflection and low-pass-filtered force reflection, both combined with shared compliance control, were developed for dissimilar master-slave arms. These schemes enabled high force-reflection gains that were not possible with a conventional scheme when the slave arm was much stiffer than the master arm. The experimental results with a peg-in-hole task indicated that the new force-reflecting control schemes combined with compliance control produced the best task performance. As a related application, a simulated force reflection/shared compliance control teleoperation trainer was developed that provided the operator with the feel of kinesthetic force virtual reality.

  18. Cooperation stimulation strategies for peer-to-peer wireless live video-sharing social networks.

    PubMed

    Lin, W Sabrina; Zhao, H Vicky; Liu, K J Ray

    2010-07-01

    Human behavior analysis in video sharing social networks is an emerging research area, which analyzes the behavior of users who share multimedia content and investigates the impact of human dynamics on video sharing systems. Users watching live streaming in the same wireless network share the same limited bandwidth of the backbone connection to the Internet; thus, they might want to cooperate with each other to obtain better video quality. These users form a wireless live-streaming social network. Every user wishes to watch video with high quality while paying as little cost as possible to help others. This paper focuses on providing incentives for user cooperation. We propose a game-theoretic framework to model user behavior and to analyze the optimal strategies for user cooperation stimulation in wireless live streaming. We first analyze the Pareto optimality and the time-sensitive bargaining equilibrium of the two-person game. We then extend the solution to the multiuser scenario. We also consider potential selfish users' cheating behavior and malicious users' attacking behavior and analyze the performance of the proposed strategies in the presence of cheating users and malicious attackers. Both our analytical and simulation results show that the proposed strategies can effectively stimulate user cooperation, achieve cheat-free and attack-resistant operation, and help provide reliable services for wireless live streaming applications.

  19. Performance management excellence among the Malcolm Baldrige National Quality Award Winners in Health Care.

    PubMed

    Duarte, Neville T; Goodson, Jane R; Arnold, Edwin W

    2013-01-01

    When carefully constructed, performance management systems can help health care organizations direct their efforts toward strategic goals, high performance, and continuous improvement needed to ensure high-quality patient care and cost control. The effective management of performance is an integral component in hospital and health care systems that are recognized for excellence by the Malcolm Baldrige National Quality Award in Health Care. Using the framework in the 2011-2012 Health Care Criteria for Performance Excellence, this article identifies the best practices in performance management demonstrated by 15 Baldrige recipients. The results show that all of the recipients base their performance management systems on strategic goals, outcomes, or competencies that cascade from the organizational to the individual level. At the individual level, each hospital or health system reinforces the strategic direction with performance evaluations of leaders and employees, including the governing board, based on key outcomes and competencies. Leader evaluations consistently include feedback from internal and external stakeholders, creating a culture of information sharing and performance improvement. The hospitals or health care systems also align their reward systems to promote high performance by emphasizing merit and recognition for contributions. Best practices can provide a guide for leaders in other health systems in developing high-performance work systems.

  20. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  1. Hybrid Vehicle Technologies and their potential for reducing oil use

    NASA Astrophysics Data System (ADS)

    German, John

    2006-04-01

    Vehicles with hybrid gasoline-electric powertrains are starting to gain market share. Current hybrid vehicles add an electric motor, battery pack, and power electronics to the conventional powertrain. A variety of engine/motor configurations are possible, each with advantages and disadvantages. In general, efficiency is improved due to engine shut-off at idle, capture of energy during deceleration that is normally lost as heat in the brakes, downsizing of the conventional engine, and, in some cases, propulsion on the electric motor alone. Ongoing increases in hybrid market share depend on cost reduction, especially of the battery pack; efficiency synergies with other vehicle technologies; use of the high electric power to provide features desired by customers; and future fuel price and availability. Potential barriers include historically low fuel prices, heavy discounting of the fuel savings by new-vehicle purchasers, competing technologies, and tradeoffs with other factors desired by customers, such as performance, utility, safety, and luxury features.

  2. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high-performance rendering of complex three-dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software that effectively use the hardware parallelism. A rendering algorithm targeted to distributed-memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared-memory architectures as well.
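    The communication-bound scaling behavior reported above can be captured by a toy cost model (an illustrative assumption, not the paper's analysis): per-processor compute shrinks as 1/p while communication overhead grows with p, so speedup rises, peaks, and then falls.

```python
def model_speedup(p, t_compute, t_comm):
    """Toy speedup model for a communication-limited parallel renderer.

    t_compute: total sequential rendering work.
    t_comm: per-processor communication cost, assumed to grow
            linearly with processor count p.
    """
    t_parallel = t_compute / p + t_comm * p
    return t_compute / t_parallel
```

    With t_compute = 100 and t_comm = 0.01, speedup peaks near p = sqrt(t_compute / t_comm) = 100 processors, after which communication dominates.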

  3. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  4. Visual scanning with or without spatial uncertainty and time-sharing performance

    NASA Technical Reports Server (NTRS)

    Liu, Yili; Wickens, Christopher D.

    1989-01-01

    An experiment is reported that examines the pattern of task interference between visual scanning as a sequential and selective attention process and other concurrent spatial or verbal processing tasks. A distinction is proposed between visual scanning with or without spatial uncertainty regarding the possible differential effects of these two types of scanning on interference with other concurrent processes. The experiment required the subject to perform a simulated primary tracking task, which was time-shared with a secondary spatial or verbal decision task. The relevant information that was needed to perform the decision tasks was displayed with or without spatial uncertainty. The experiment employed a 2 x 2 x 2 design with type of scanning (with or without spatial uncertainty), expected scanning distance (low/high), and code of concurrent processing (spatial/verbal) as the three experimental factors. The results provide strong evidence that visual scanning as a spatial exploratory activity produces greater task interference with concurrent spatial tasks than with concurrent verbal tasks. Furthermore, spatial uncertainty in visual scanning is identified as the crucial factor in producing this differential effect.

  5. Factors associated with the practice of nursing staff sharing information about patients' nutritional status with their colleagues in hospitals.

    PubMed

    Kawasaki, Y; Tamaura, Y; Akamatsu, R; Sakai, M; Fujiwara, K

    2018-01-01

    Nursing staff have an important role in patients' nutritional care. The aim of this study was to demonstrate how the practice of sharing a patient's nutritional status with colleagues was affected by the nursing staff's attitude, knowledge and priority to provide nutritional care. The participants were 492 nursing staff. We obtained participants' demographic data, their practice of sharing patients' nutritional information, and information about their knowledge, attitude and priority of providing nutritional care via a questionnaire. We performed partial correlation analyses and linear regression analyses to describe the relationship between the total scores for the practice of sharing patients' nutritional information and their knowledge, attitude and priority to provide nutritional care. Among the 492 participants, 396 nursing staff (80.5%) completed the questionnaire and were included in the analyses. The mean±s.d. total score of the 396 participants was 8.4±3.1. Nursing staff shared information when they had high nutritional knowledge (r=0.36, P<0.01) and attitude (r=0.13, P<0.05); however, these correlation coefficients were low. In the linear regression analyses, job category (β=-0.28, P<0.01), knowledge (β=0.33, P<0.01) and attitude (β=0.10, P<0.05) were independently associated with the practice of sharing information. Nursing staff's priority to provide nutritional care was not significantly associated with the practice of sharing information. Knowledge and attitude were independently associated with the practice of sharing patients' nutrition information with colleagues, regardless of their priority to provide nutritional care. An effective approach should be taken to improve the practice of providing nutritional care.
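    The standardized coefficients (β) reported above come from regressing z-scored variables; a minimal sketch of that computation follows, on illustrative data rather than the study's.

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized (beta) regression coefficients: z-score each
    predictor column and the outcome, then fit ordinary least squares."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas
```

    With a single predictor and a noise-free linear outcome, the standardized coefficient equals the Pearson correlation between predictor and outcome.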

  6. 2006 Net Centric Operations Conference - Facilitating Net Centric Operations and Warfare

    DTIC Science & Technology

    2006-03-16

    22, 2005 • White Paper, “Facilitating Shared Services in the DoD,” Feb 12, 2006 • White Paper, “Shared Services: Performance Accountability and Risk...who demand a culture of information sharing and improved organizational effectiveness.” 12 Facilitating Shared Services: Task “What should be the...distinct programs.” 13 Facilitating Shared Services: Focus Areas • Governance and Control Policy • Common Information Standards and Technical

  7. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  8. Shared Decision Making for Better Schools.

    ERIC Educational Resources Information Center

    Brost, Paul

    2000-01-01

    Delegating decision making to those closest to implementation can result in better decisions, more support for improvement initiatives, and increased student performance. Shared decision making depends on capable school leadership, a professional community, instructional guidance mechanisms, knowledge and skills, information sharing, power, and…

  9. 76 FR 805 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... Trading Shares of the SPDR Nuveen S&P High Yield Municipal Bond ETF December 30, 2010. Pursuant to Section... Change The Exchange proposes to list and trade shares of the SPDR Nuveen S&P High Yield Municipal Bond... for, the Proposed Rule Change 1. Purpose The Exchange proposes to list and trade shares (``Shares...

  10. Tracking performance under time sharing conditions with a digit processing task: A feedback control theory analysis. [attention sharing effect on operator performance

    NASA Technical Reports Server (NTRS)

    Gopher, D.; Wickens, C. D.

    1975-01-01

    A one-dimensional compensatory tracking task and a digit-processing reaction-time task were combined in a three-phase experiment designed to investigate tracking performance under time sharing. Adaptive techniques, elaborate feedback devices, and on-line standardization procedures were used to adjust task difficulty to the ability of each individual subject and to manipulate time-sharing demands. Feedback control analysis techniques were employed in the description of tracking performance. The experimental results show that when the dynamics of a system are constrained in such a manner that man-machine system stability is no longer a major concern of the operator, he tends to adopt a first-order control describing function, even with tracking systems of higher order. Attention diversion to a concurrent task leads to an increase in remnant level, or nonlinear power. This decrease in linearity is reflected both in the output magnitude spectra of the subjects and in the linear fit of the amplitude-ratio functions.

  11. Food-Sharing Networks in Lamalera, Indonesia: Status, Sharing, and Signaling

    PubMed Central

    Nolin, David A.

    2012-01-01

    Costly signaling has been proposed as a possible mechanism to explain food sharing in foraging populations. This sharing-as-signaling hypothesis predicts an association between sharing and status. Using exponential random graph modeling (ERGM), this prediction is tested on a social network of between-household food-sharing relationships in the fishing and sea-hunting village of Lamalera, Indonesia. Previous analyses (Nolin 2010) have shown that most sharing in Lamalera is consistent with reciprocal altruism. The question addressed here is whether any additional variation may be explained as sharing-as-signaling by high-status households. The results show that high-status households both give and receive more than other households, a pattern more consistent with reciprocal altruism than costly signaling. However, once the propensity to reciprocate and household productivity are controlled, households of men holding leadership positions show greater odds of unreciprocated giving when compared to households of non-leaders. This pattern of excessive giving by leaders is consistent with the sharing-as-signaling hypothesis. Wealthy households show the opposite pattern, giving less and receiving more than other households. These households may reciprocate in a currency other than food or their wealth may attract favor-seeking behavior from others. Overall, status covariates explain little variation in the sharing network as a whole, and much of the sharing observed by high-status households is best explained by the same factors that explain sharing by other households. This pattern suggests that multiple mechanisms may operate simultaneously to promote sharing in Lamalera and that signaling may motivate some sharing by some individuals even within sharing regimes primarily maintained by other mechanisms. PMID:22822299
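    In an exponential random graph model like the one above, each coefficient is a conditional log-odds contribution: the probability of a sharing tie is a logistic function of the coefficient-weighted change statistics. A generic sketch (the coefficients here are illustrative, not Nolin's estimates):

```python
import math

def tie_probability(theta, change_stats):
    """ERGM conditional tie probability: logistic function of the
    dot product of coefficients with the tie's change statistics."""
    logit = sum(t * s for t, s in zip(theta, change_stats))
    return 1.0 / (1.0 + math.exp(-logit))
```

    A zero logit gives probability 0.5; a coefficient of ln(3) on a unit change statistic multiplies the odds of a tie by 3, i.e. probability 0.75.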

  12. The uncompromising leader.

    PubMed

    Eisenstat, Russell A; Beer, Michael; Foote, Nathaniel; Fredberg, Tobias; Norrgren, Flemming

    2008-01-01

    Managing the tension between performance and people is at the heart of the CEO's job. But CEOs under fierce pressure from capital markets often focus solely on the shareholder, which can lead to employee disenchantment. Others put so much stock in their firms' heritage that they don't notice as their organizations slide into complacency. Some leaders, though, manage to avoid those traps and create high-commitment, high-performance (HCHP) companies. The authors' in-depth research of HCHP CEOs reveals several shared traits: These CEOs earn the trust of their organizations through their openness to the unvarnished truth. They are deeply engaged with their people, and their exchanges are direct and personal. They mobilize employees around a focused agenda, concentrating on only one or two initiatives. And they work to build collective leadership capabilities. These leaders also forge an emotionally resonant shared purpose across their companies. That consists of a three-part promise: The company will help employees build a better world and deliver performance they can be proud of, and will provide an environment in which they can grow. HCHP CEOs approach finding a firm's moral and strategic center in a competitive market as a calling, not an engineering problem. They drive their firms to be strongly market focused while at the same time reinforcing their firms' core values. They are committed to short-term performance while also investing in long-term leadership and organizational capabilities. By refusing to compromise on any of these terms, they build great companies.

  13. Quantification and characterization of alkaloids from roots of Rauwolfia serpentina using ultra high performance liquid chromatography-photo diode array-mass spectrometry (UHPLC-PDA-MS)

    USDA-ARS?s Scientific Manuscript database

    The roots of Rauwolfia serpentina (L.) Benth. ex Kurz have been used in native Indian medicine for the treatment of various illnesses, mainly hypertension. Reserpine is a potent constituent with both central nervous system depressant and hypotensive actions. An UHPLC-UV meth...

  14. 21st Century Community Learning Centers: Providing Afterschool and Summer Learning Support to Communities Nationwide

    ERIC Educational Resources Information Center

    Afterschool Alliance, 2014

    2014-01-01

    The 21st Century Community Learning Centers (21st CCLC) initiative is the only federal funding source dedicated exclusively to before-school, afterschool, and summer learning programs. Each state education agency receives funds based on its share of Title I funding for low-income students at high-poverty, low performing schools. Funds are also…

  15. A Limit to Reflexivity: The Challenge for Working Women of Negotiating Sharing of Household Labor

    ERIC Educational Resources Information Center

    Walters, Peter; Whitehouse, Gillian

    2012-01-01

    Unpaid household labor is still predominantly performed by women, despite dramatic increases in female labor force participation over the past 50 years. For this article, interviews with 76 highly skilled women who had returned to the workforce following the birth of children were analyzed to capture reflexive understandings of the balance of paid…

  16. Teacher Perceptions and Principal Leadership Behaviors on Teacher Morale in High and Low-Performing Elementary Schools in South Carolina

    ERIC Educational Resources Information Center

    Hughes, Brenda C.

    2013-01-01

    This quantitative research study employed a correlational research method. Two survey instruments were used in this study. The Leadership Practices Inventory (LPI), which is a 30-item Likert scale questionnaire which measures 5 areas of leadership behaviors: (1) Model the Way; (2) Inspire a Shared Vision; (3) Challenge the Process; (4) Enable…

  17. How the World's Best Schools Stay on Top: Study's Key Findings Pinpoint Practices That Align with Learning Forward

    ERIC Educational Resources Information Center

    Killion, Joellen

    2016-01-01

    Key findings from a new study highlight how Learning Forward's long-standing position on professional learning correlates with practices in high-performing systems in Singapore, Shanghai, Hong Kong, and British Columbia. The purpose of this article is to share key findings from the study so that educators might apply them to strengthening…

  18. Primary Health Care as a Foundation for Strengthening Health Systems in Low- and Middle-Income Countries.

    PubMed

    Bitton, Asaf; Ratcliffe, Hannah L; Veillard, Jeremy H; Kress, Daniel H; Barkley, Shannon; Kimball, Meredith; Secci, Federica; Wong, Ethan; Basu, Lopa; Taylor, Chelsea; Bayona, Jaime; Wang, Hong; Lagomarsino, Gina; Hirschhorn, Lisa R

    2017-05-01

    Primary health care (PHC) has been recognized as a core component of effective health systems since the early part of the twentieth century. However, despite notable progress, there remains a large gap between what individuals and communities need, and the quality and effectiveness of care delivered. The Primary Health Care Performance Initiative (PHCPI) was established by an international consortium to catalyze improvements in PHC delivery and outcomes in low- and middle-income countries through better measurement and sharing of effective models and practices. PHCPI has developed a framework to illustrate the relationship between key financing, workforce, and supply inputs, and core primary health care functions of first-contact accessibility, comprehensiveness, coordination, continuity, and person-centeredness. The framework provides guidance for more effective assessment of current strengths and gaps in PHC delivery through a core set of 25 key indicators ("Vital Signs"). Emerging best practices that foster high-performing PHC system development are being codified and shared across low- and high-income countries. These measurement and improvement approaches provide countries and implementers with tools to assess the current state of their PHC delivery system and to identify where cross-country learning can accelerate improvements in PHC quality and effectiveness.

  19. Do reading and spelling share a lexicon?

    PubMed

    Jones, Angela C; Rawson, Katherine A

    2016-05-01

    In the reading and spelling literature, an ongoing debate concerns whether reading and spelling share a single orthographic lexicon or rely upon independent lexica. Available evidence tends to support a single lexicon account over an independent lexica account, but evidence is mixed and open to alternative explanation. In the current work, we propose another, largely ignored account--separate-but-shared lexica--according to which reading and spelling have separate orthographic lexica, but information can be shared between them. We report three experiments designed to competitively evaluate these three theoretical accounts. In each experiment, participants learned new words via reading training and/or spelling training. The key manipulation concerned the amount of reading versus spelling practice a given item received. Following training, we assessed both response time and accuracy on final outcome measures of reading and spelling. According to the independent lexica account, final performance in one modality will not be influenced by the level of practice in the other modality. According to the single lexicon account, final performance will depend on the overall amount of practice regardless of modality. According to the separate-but-shared account, final performance will be influenced by the level of practice in both modalities but will benefit more from same-modality practice. Results support the separate-but-shared account, indicating that reading and spelling rely upon separate lexica, but information can be shared between them. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. The Climate-G testbed: towards a large scale data sharing environment for climate change

    NASA Astrophysics Data System (ADS)

    Aloisio, G.; Fiore, S.; Denvil, S.; Petitdidier, M.; Fox, P.; Schwichtenberg, H.; Blower, J.; Barbera, R.

    2009-04-01

    The Climate-G testbed provides an experimental large scale data environment for climate change addressing challenging data and metadata management issues. The main scope of Climate-G is to allow scientists to carry out geographical and cross-institutional climate data discovery, access, visualization and sharing. Climate-G is a multidisciplinary collaboration involving both climate and computer scientists and it currently involves several partners such as: Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Institut Pierre-Simon Laplace (IPSL), Fraunhofer Institut für Algorithmen und Wissenschaftliches Rechnen (SCAI), National Center for Atmospheric Research (NCAR), University of Reading, University of Catania and University of Salento. To perform distributed metadata search and discovery, we adopted a CMCC metadata solution (which provides a high level of scalability, transparency, fault tolerance and autonomy) leveraging both P2P and grid technologies (GRelC Data Access and Integration Service). Moreover, data are available through OPeNDAP/THREDDS services, Live Access Server as well as the OGC-compliant Web Map Service, and they can be downloaded, visualized, and accessed through the Climate-G Data Distribution Centre (DDC), the web gateway to the Climate-G digital library. The DDC is a data-grid portal allowing users to easily, securely and transparently perform search/discovery, metadata management, data access, data visualization, etc. Godiva2 (integrated into the DDC) displays 2D maps (and animations) and also exports maps for display on the Google Earth virtual globe. Presently, Climate-G publishes (through the DDC) about 2TB of data related to the ENSEMBLES project (also including distributed replicas of data) as well as to the IPCC AR4.
The main results of the proposed work are: wide data access/sharing environment for climate change; P2P/grid metadata approach; production-level Climate-G DDC; high quality tools for data visualization; metadata search/discovery across several countries/institutions; open environment for climate change data sharing.

  1. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
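    An LDPC code is defined by a sparse parity-check matrix H; a received word c is a valid codeword exactly when every parity check is satisfied, i.e. H·c = 0 (mod 2). A tiny illustrative check follows (a real LDPC matrix is far larger and sparser than this toy H):

```python
import numpy as np

# Each row is one parity check over the 6 code bits (toy example).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(c):
    """Parity-check results (mod 2); all zero for a valid codeword."""
    return H @ np.asarray(c) % 2

def is_codeword(c):
    return not syndrome(c).any()
```

    The belief propagation decoder used in the paper iteratively updates per-bit beliefs until the syndrome is all-zero (or an iteration limit is reached).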

  2. Cooperative Learning for Distributed In-Network Traffic Classification

    NASA Astrophysics Data System (ADS)

    Joseph, S. B.; Loo, H. R.; Ismail, I.; Andromeda, T.; Marsono, M. N.

    2017-04-01

    Inspired by the concept of autonomic distributed/decentralized network management schemes, we consider the issue of information exchange among distributed network nodes to improve network performance and promote scalability of in-network monitoring. In this paper, we propose a cooperative learning algorithm for propagation and synchronization of network information among autonomic distributed network nodes for online traffic classification. The results show that network nodes with sharing capability perform better, with a higher average accuracy of 89.21% (sharing data) and 88.37% (sharing clusters) compared to 88.06% for nodes without cooperative learning capability. The overall performance indicates that cooperative learning is promising for distributed in-network traffic classification.

  3. What can Johnson & Johnson do to remain a giant in the health care industry?

    PubMed

    Carter, Tony

    2002-01-01

    As a major Fortune 500 corporation and manufacturer of significant drug products for the pharmaceutical industry, Johnson & Johnson has also had its share of marketing crises, including the classic case of the Tylenol scare in fall 1982, so the company can appreciate the need for effective marketing performance and customer responsiveness. This article examines how Johnson & Johnson has adapted to a highly volatile business environment and how it can be benchmarked for highly competitive marketing strategies and practices.

  4. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    DTIC Science & Technology

    2017-10-04

    Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures The views, opinions and/or findings contained in this...Chapel Hill Title: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures Report Term: 0-Other Email: dm...algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures. These

  5. Effects of an educational programme on shared decision-making among Korean nurses.

    PubMed

    Jo, Kae-Hwa; An, Gyeong-Ju

    2015-12-01

    This study was conducted to examine the effects of an educational programme on shared decision-making on end-of-life care performance, moral sensitivity and attitude towards shared decision-making among Korean nurses. A quasi-experimental study with a non-equivalent control group pretest-posttest design was used. Forty-one clinical nurses were recruited as participants from two different university hospitals located in Daegu, Korea. Twenty nurses in the control group received no intervention, and 21 nurses in the experimental group received the educational programme on shared decision-making. Data were collected with a questionnaire covering end-of-life care performance, moral sensitivity and attitude towards shared decision-making. Analysis of the data was done with the chi-square test, t-test and Fisher's exact test using SPSS/Win 17.0 (SPSS, Inc., Chicago, IL, USA). The experimental group showed significantly higher scores in moral sensitivity and attitude towards shared decision-making after the intervention compared with the control group. This study suggests that the educational programme on shared decision-making was effective in increasing the moral sensitivity and attitude towards shared decision-making among Korean nurses. © 2014 Wiley Publishing Asia Pty Ltd.

  6. Share your sweets: Chimpanzee (Pan troglodytes) and bonobo (Pan paniscus) willingness to share highly attractive, monopolizable food sources.

    PubMed

    Byrnit, Jill T; Høgh-Olesen, Henrik; Makransky, Guido

    2015-08-01

    All over the world, humans (Homo sapiens) display resource-sharing behavior, and common patterns of sharing seem to exist across cultures. Humans are not the only primates to share, and observations from the wild have long documented food sharing behavior in our closest phylogenetic relatives, chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). However, few controlled studies have been made in which groups of Pan are introduced to food items that may be shared or monopolized by a first food possessor, and very few studies have examined what happens to these sharing patterns if the food in question is a highly attractive, monopolizable food source. The one study to date to include food quality as the independent variable used different types of food as high- and low-value items, making differences in food divisibility and size potentially confounding factors. It was the aim of the present study to examine the sharing behavior of groups of captive chimpanzees and bonobos when introducing the same type of food (branches) manipulated to be of 2 different degrees of desirability (with or without syrup). Results showed that the large majority of food transfers in both species came about as sharing in which group members were allowed to cofeed or remove food from the stock of the food possessor, and the introduction of high-value food resulted in more sharing, not less. Food sharing behavior differed between species in that chimpanzees displayed significantly more begging behavior than bonobos. Bonobos, instead, engaged in sexual invitations, which the chimpanzees never did. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  7. Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.

In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphics Processing Units (GPUs), evaluating the alternatives as a function of the size of the systems to solve. We discuss three solutions that operate at different levels of parallelization and exploit different GPU features. The first, built on the CUBLAS library, handles matrices of size up to 32x32 and employs Warp-level parallelism (one matrix, one Warp) and shared memory. The second works at Thread-block-level parallelism (one matrix, one Thread-block), still exploiting shared memory, and handles matrices up to 76x76. The third is Thread-level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and relies only on the high memory bandwidth of the GPU. The first and second solutions support only partial pivoting; the third easily supports both partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
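The thread-level approach (one matrix, one thread) amounts to each thread independently factoring one small matrix with partial pivoting. A minimal host-side sketch in Python/NumPy, illustrating the batched factorization itself rather than the paper's CUDA kernels:

```python
import numpy as np

def lu_partial_pivot(A):
    """Factor A into P, L, U with partial pivoting, so that P @ A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Partial pivoting: bring the row with the largest |entry| in
        # column k up to the pivot position.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]
            L[[k, p], :k] = L[[p, k], :k]
            P[[k, p]] = P[[p, k]]
        # Eliminate below the pivot, recording multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

def batched_lu(batch):
    """'One matrix, one thread': factor each small system independently."""
    return [lu_partial_pivot(A) for A in batch]
```

In the actual GPU implementations, the per-matrix work would run inside a Warp, a Thread-block, or a single thread, depending on the variant; only the independence of the matrices in the batch is essential.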

  8. Not of one mind: mental models of clinical practice guidelines in the Veterans Health Administration.

    PubMed

    Hysong, Sylvia J; Best, Richard G; Pugh, Jacqueline A; Moore, Frank I

    2005-06-01

    The purpose of this paper is to present differences in mental models of clinical practice guidelines (CPGs) among 15 Veterans Health Administration (VHA) facilities throughout the United States. Two hundred and forty-four employees from 15 different VHA facilities across four service networks around the country were invited to participate. Participants were selected from different levels throughout each service setting from primary care personnel to facility leadership. This qualitative study used purposive sampling, a semistructured interview process for data collection, and grounded theory techniques for analysis. A semistructured interview was used to collect information on participants' mental models of CPGs, as well as implementation strategies and barriers in their facility. Analysis of these interviews using grounded theory techniques indicated that there was wide variability in employees' mental models of CPGs. Findings also indicated that high-performing facilities exhibited both (a) a clear, focused shared mental model of guidelines and (b) a tendency to use performance feedback as a learning opportunity, thus suggesting that a shared mental model is a necessary but not sufficient step toward successful guideline implementation. We conclude that a clear shared mental model of guidelines, in combination with a learning orientation toward feedback are important components for successful guideline implementation and improved quality of care.

  9. Computer simulation of a single pilot flying a modern high-performance helicopter

    NASA Technical Reports Server (NTRS)

    Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.

    1988-01-01

    Presented is a computer simulation of a human response pilot model able to execute operational flight maneuvers and vehicle stabilization of a modern high-performance helicopter. Low-order, single-variable, human response mechanisms, integrated to form a multivariable pilot structure, provide a comprehensive operational control over the vehicle. Evaluations of the integrated pilot were performed by direct insertion into a nonlinear, total-force simulation environment provided by NASA Lewis. Comparisons between the integrated pilot structure and single-variable pilot mechanisms are presented. Static and dynamically alterable configurations of the pilot structure are introduced to simulate pilot activities during vehicle maneuvers. These configurations, in conjunction with higher level, decision-making processes, are considered for use where guidance and navigational procedures, operational mode transfers, and resource sharing are required.

  10. Shared prefetching to reduce execution skew in multi-threaded systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichenberger, Alexandre E; Gunnels, John A

Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.
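The core idea, splitting one stream's prefetches into per-thread sub-portions, can be sketched abstractly. A hypothetical Python illustration of the partitioning step (the record describes a compiler transformation, not this runtime code):

```python
def distribute_prefetches(stream, num_threads):
    """Partition a shared memory stream's prefetch targets across threads.

    Each thread issues prefetches only for its own sub-portion; because the
    cache being filled is shared, every thread still benefits from all of
    the prefetched lines, and no line is prefetched twice.
    """
    sub_portions = [[] for _ in range(num_threads)]
    for i, address in enumerate(stream):
        # Round-robin assignment keeps the prefetch work balanced,
        # reducing execution skew between the threads.
        sub_portions[i % num_threads].append(address)
    return sub_portions
```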

  11. Share capitalism and worker wellbeing.

    PubMed

    Bryson, Alex; Clark, Andrew E; Freeman, Richard B; Green, Colin P

    2016-10-01

    We show that worker wellbeing is determined not only by the amount of compensation workers receive but also by how compensation is determined. While previous theoretical and empirical work has often been preoccupied with individual performance-related pay, we find that the receipt of a range of group-performance schemes (profit shares, group bonuses and share ownership) is associated with higher job satisfaction. This holds conditional on wage levels, so that pay methods are associated with greater job satisfaction in addition to that coming from higher wages. We use a variety of methods to control for unobserved individual and job-specific characteristics. We suggest that half of the share-capitalism effect is accounted for by employees reciprocating for the "gift"; we also show that share capitalism helps dampen the negative wellbeing effects of what we typically think of as "bad" aspects of job quality.

  12. Economic benefits of sharing and redistributing influenza vaccines when shortages occurred

    PubMed Central

    2017-01-01

Background Recurrent influenza outbreaks have been a concern for government health institutions in Taiwan. Over 10% of the population is infected by influenza viruses every year, and the infection has caused losses to both health and the economy. Approximately three million free vaccine doses are ordered and administered to high-risk populations at the beginning of flu season to control the disease. The government recommends sharing and redistributing vaccine inventories when shortages occur. While this policy is intended to increase inventory flexibility, which has proven widely valuable, its impact on vaccine availability has not been previously reported. Material and methods This study developed an inventory model adapted to vaccination protocols to evaluate the government-recommended policies under different levels of vaccine production. Demands were uncertain and stratified by age and location according to demographic data for Taiwan. Results When the vaccine supply was sufficient, sharing pediatric vaccine reduced vaccine unavailability by 43% and overstock by 54%, and sharing adult vaccine reduced vaccine unavailability by 9% and overstock by 15%. Redistributing vaccines yielded greater gains for both pediatric and adult vaccines (by 75%). When the vaccine supply was short, only sharing pediatric vaccine yielded a 48% reduction in unused inventory, while the other policies did not improve performance. Conclusions When implementing vaccination activities for seasonal influenza intervention, it is important to consider mismatches between demand and vaccine inventory. Our model confirmed that sharing and redistributing vaccines can substantially increase availability and reduce unused vaccines. PMID:29040317

  13. Multi-dimensional quantum state sharing based on quantum Fourier transform

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Tso, Raylin; Dai, Yuewei

    2018-03-01

A scheme of multi-dimensional quantum state sharing is proposed. The dealer performs the quantum SUM gate and the quantum Fourier transform to encode a multi-dimensional quantum state into an entangled state. The dealer then distributes one particle of the entangled state to each participant, sharing the quantum state among the n participants. In the recovery, n-1 participants measure their particles and announce their measurement results; the last participant performs a unitary operation on his particle according to these results and can reconstruct the initial quantum state. The proposed scheme has two merits: it can share a multi-dimensional quantum state, and it does not require entanglement measurement.
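The two primitives named in this record, the qudit SUM gate and the quantum Fourier transform, can be written out concretely as matrices. A small NumPy sketch of the operators for dimension d (an illustration of the building blocks only, not the full sharing protocol):

```python
import numpy as np

def qft_matrix(d):
    """d-dimensional QFT: F[j, k] = omega**(j*k) / sqrt(d), omega = exp(2*pi*1j/d)."""
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

def sum_gate(d):
    """Two-qudit SUM gate: |x, y> -> |x, (x + y) mod d>, as a d^2 x d^2 permutation."""
    U = np.zeros((d * d, d * d))
    for x in range(d):
        for y in range(d):
            U[x * d + (x + y) % d, x * d + y] = 1.0
    return U
```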

  14. Developmental Climate: A Cross-level Analysis of Voluntary Turnover and Job Performance

    PubMed Central

    Spell, Hannah B.; Eby, Lillian T.; Vandenberg, Robert J.

    2014-01-01

    This research investigates the influence of shared perceptions of developmental climate on individual-level perceptions of organizational commitment, engagement, and perceived competence, and whether these attitudes mediate the relationship between developmental climate and both individual voluntary turnover and supervisor-rated job performance. Survey data were collected from 361 intact employee-supervisory mentoring dyads and matched with employee turnover data collected one year later to test the proposed framework using multilevel modeling techniques. As expected, shared perceptions of developmental climate were significantly and positively related to all three individual work attitudes. In addition, both organizational commitment and perceived competence were significant mediators of the positive relationship between shared perceptions of developmental climate and voluntary turnover, as well as shared perceptions of developmental climate and supervisor-rated job performance. By contrast, no significant mediating effects were found for engagement. Theoretical implications, limitations, and future research are discussed. PMID:24748681

  15. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As hardware and software technologies have progressed, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  16. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation was measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop for communication between the processors and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  17. Computing on quantum shared secrets

    NASA Astrophysics Data System (ADS)

    Ouyang, Yingkai; Tan, Si-Hui; Zhao, Liming; Fitzsimons, Joseph F.

    2017-11-01

A (k, n)-threshold secret-sharing scheme allows for a string to be split into n shares in such a way that any subset of at least k shares suffices to recover the secret string, but any subset of at most k-1 shares contains no information about the secret. Quantum secret-sharing schemes extend this idea to the sharing of quantum states. Here we propose a method of performing computation securely on quantum shared secrets. We introduce an (n, n)-quantum secret-sharing scheme together with a set of algorithms that allow quantum circuits to be evaluated securely on the shared secret without the need to decode the secret. We consider a multipartite setting, with each participant holding a share of the secret. We show that if there exists at least one honest participant, no group of dishonest participants can recover any information about the shared secret, independent of their deviations from the algorithm.
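For contrast with the quantum construction, the classical (k, n)-threshold idea is commonly realized with Shamir's polynomial scheme over a prime field. A brief Python sketch of that classical baseline (not the authors' quantum scheme):

```python
import random

P = 2_147_483_647  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # A random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the polynomial evaluated at x = i (i = 1..n).
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

The modular inverse via `pow(den, -1, P)` requires Python 3.8 or later.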

  18. Comparison of two paradigms for distributed shared memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.

    1990-08-01

The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe the two paradigms and their implementations, then compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication and the all-pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.

  19. An Intervention to Improve School and Student Performance

    ERIC Educational Resources Information Center

    Shaver, Becky

    2008-01-01

    Georgia Leadership Institute for School Improvement (GLISI) used ISPI's 10 Standards of Performance Technology to share the design, development, and implementation of an intervention striving to help Georgia districts and schools share their success stories in a clear and concise format. This intervention took the form of a PowerPoint…

  20. Share (And Not) Share Alike: Improving Virtual Team Climate and Decision Performance

    ERIC Educational Resources Information Center

    Cordes, Sean

    2017-01-01

    Virtual teams face unique communication and collaboration challenges that impact climate development and performance. First, virtual teams rely on technology mediated communication which can constrain communication. Second, team members lack skill for adapting process to the virtual setting. A collaboration process structure was designed to…

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolotnikov, A. E.; Camarda, G. S.; Cui, Y.

Following our successful demonstration of the position-sensitive virtual Frisch-grid detectors, we investigated the feasibility of using high-granularity position sensing to correct response non-uniformities caused by crystal defects in CdZnTe (CZT) pixelated detectors. The development of high-granularity detectors able to correct response non-uniformities on a scale comparable to the size of electron clouds opens the opportunity of using unselected off-the-shelf CZT material, whilst still assuring high spectral resolution for the majority of the detectors fabricated from an ingot. Here, we present the results from testing 3D position-sensitive 15×15×10 mm³ pixelated detectors, fabricated with conventional pixel patterns with progressively smaller pixel sizes: 1.4, 0.8, and 0.5 mm. We employed a readout system based on the H3D front-end multi-channel ASIC developed by BNL's Instrumentation Division in collaboration with the University of Michigan. We use the sharing of electron clouds among several adjacent pixels to measure the locations of interaction points with sub-pixel resolution. By using the detectors with small pixel sizes and a high probability of charge-sharing events, we were able to improve their spectral resolutions in comparison to the baseline levels measured for the 1.4-mm pixel size detectors with small fractions of charge-sharing events. These results demonstrate that further enhancement of the performance of CZT pixelated detectors and reduction of costs are possible by using high spatial-resolution position information of interaction points to correct the small-scale response non-uniformities caused by crystal defects present in most devices.
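One common way to turn charge sharing into a sub-pixel position (assumed here purely for illustration; the record does not spell out the estimator the authors used) is a charge-weighted centroid over the adjacent pixels that collected parts of the electron cloud:

```python
import numpy as np

def subpixel_position(pixel_centers, charges):
    """Estimate an interaction point as the charge-weighted centroid.

    pixel_centers: (n, 2) array of pixel-center coordinates in mm
    charges: (n,) array of signal amplitudes collected on those pixels
    """
    charges = np.asarray(charges, dtype=float)
    centers = np.asarray(pixel_centers, dtype=float)
    # Weighted average of the centers, weights proportional to charge.
    return charges @ centers / charges.sum()
```

An event confined to a single pixel carries no sub-pixel information; an estimator like this pays off precisely for the charge-sharing events the record describes, which become more frequent as the pixel pitch shrinks toward the electron-cloud size.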

  2. Shared mission operations concept

    NASA Technical Reports Server (NTRS)

    Spradlin, Gary L.; Rudd, Richard P.; Linick, Susan H.

    1994-01-01

    Historically, new JPL flight projects have developed a Mission Operations System (MOS) as unique as their spacecraft, and have utilized a mission-dedicated staff to monitor and control the spacecraft through the MOS. NASA budgetary pressures to reduce mission operations costs have led to the development and reliance on multimission ground system capabilities. The use of these multimission capabilities has not eliminated an ongoing requirement for a nucleus of personnel familiar with a given spacecraft and its mission to perform mission-dedicated operations. The high cost of skilled personnel required to support projects with diverse mission objectives has the potential for significant reduction through shared mission operations among mission-compatible projects. Shared mission operations are feasible if: (1) the missions do not conflict with one another in terms of peak activity periods, (2) a unique MOS is not required, and (3) there is sufficient similarity in the mission profiles so that greatly different skills would not be required to support each mission. This paper will further develop this shared mission operations concept. We will illustrate how a Discovery-class mission would enter a 'partner' relationship with the Voyager Project, and can minimize MOS development and operations costs by early and careful consideration of mission operations requirements.

  3. Opus: A Coordination Language for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John

    1997-01-01

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.

  4. Intelligent Energy Management System for PV-Battery-based Microgrids in Future DC Homes

    NASA Astrophysics Data System (ADS)

    Chauhan, R. K.; Rajpurohit, B. S.; Gonzalez-Longatt, F. M.; Singh, S. N.

    2016-06-01

This paper presents a novel intelligent energy management system (IEMS) for a DC microgrid connected to the public utility (PU), photovoltaics (PV) and a multi-battery bank (BB). The control objectives of the proposed IEMS are: (i) to ensure load sharing among sources according to source capacity, (ii) to reduce power loss in the system (high efficiency), and (iii) to enhance system reliability and power quality. The proposed IEMS is novel because it follows the ideal characteristics of the battery (with some assumptions) for power sharing and selects the closest source to minimize power losses. The IEMS allows continuous and accurate monitoring with intelligent control of distribution system operations such as the battery bank energy storage (BBES) system, the PV system and customer utilization of electric power. The proposed IEMS delivers better operational performance in terms of load sharing, loss minimization, and reliability enhancement of the DC microgrid.

  5. Genomic microsatellites identify shared Jewish ancestry intermediate between Middle Eastern and European populations.

    PubMed

    Kopelman, Naama M; Stone, Lewi; Wang, Chaolong; Gefel, Dov; Feldman, Marcus W; Hillel, Jossi; Rosenberg, Noah A

    2009-12-08

    Genetic studies have often produced conflicting results on the question of whether distant Jewish populations in different geographic locations share greater genetic similarity to each other or instead, to nearby non-Jewish populations. We perform a genome-wide population-genetic study of Jewish populations, analyzing 678 autosomal microsatellite loci in 78 individuals from four Jewish groups together with similar data on 321 individuals from 12 non-Jewish Middle Eastern and European populations. We find that the Jewish populations show a high level of genetic similarity to each other, clustering together in several types of analysis of population structure. Further, Bayesian clustering, neighbor-joining trees, and multidimensional scaling place the Jewish populations as intermediate between the non-Jewish Middle Eastern and European populations. These results support the view that the Jewish populations largely share a common Middle Eastern ancestry and that over their history they have undergone varying degrees of admixture with non-Jewish populations of European descent.

  6. Genomic microsatellites identify shared Jewish ancestry intermediate between Middle Eastern and European populations

    PubMed Central

    2009-01-01

    Background Genetic studies have often produced conflicting results on the question of whether distant Jewish populations in different geographic locations share greater genetic similarity to each other or instead, to nearby non-Jewish populations. We perform a genome-wide population-genetic study of Jewish populations, analyzing 678 autosomal microsatellite loci in 78 individuals from four Jewish groups together with similar data on 321 individuals from 12 non-Jewish Middle Eastern and European populations. Results We find that the Jewish populations show a high level of genetic similarity to each other, clustering together in several types of analysis of population structure. Further, Bayesian clustering, neighbor-joining trees, and multidimensional scaling place the Jewish populations as intermediate between the non-Jewish Middle Eastern and European populations. Conclusion These results support the view that the Jewish populations largely share a common Middle Eastern ancestry and that over their history they have undergone varying degrees of admixture with non-Jewish populations of European descent. PMID:19995433

  7. Can't get no satisfaction? Will pay for performance help?: toward an economic framework for understanding performance-based risk-sharing agreements for innovative medical products.

    PubMed

    Towse, Adrian; Garrison, Louis P

    2010-01-01

    This article examines performance-based risk-sharing agreements for pharmaceuticals from a theoretical economic perspective. We position these agreements as a form of coverage with evidence development. New performance-based risk sharing could produce a more efficient market equilibrium, achieved by adjustment of the price post-launch to reflect outcomes combined with a new approach to the post-launch costs of evidence collection. For this to happen, the party best able to manage or to bear specific risks must do so. Willingness to bear risk will depend not only on ability to manage it, but on the degree of risk aversion. We identify three related frameworks that provide relevant insights: value of information, real option theory and money-back guarantees. We identify four categories of risk sharing: budget impact, price discounting, outcomes uncertainty and subgroup uncertainty. We conclude that a value of information/real option framework is likely to be the most helpful approach for understanding the costs and benefits of risk sharing. There are a number of factors that are likely to be crucial in determining if performance-based or risk-sharing agreements are efficient and likely to become more important in the future: (i) the cost and practicality of post-launch evidence collection relative to pre-launch; (ii) the feasibility of coverage with evidence development without a pre-agreed contract as to how the evidence will be used to adjust price, revenues or use, in which uncertainty around the pay-off to additional research will reduce the incentive for the manufacturer to collect the information; (iii) the difficulty of writing and policing risk-sharing agreements; (iv) the degree of risk aversion (and therefore opportunity to trade) on the part of payers and manufacturers; and (v) the extent of transferability of data from one country setting to another to support coverage with evidence development in a risk-sharing framework. 
There is no doubt that, in principle, risk sharing can provide manufacturers and payers additional real options that increase overall efficiency. Given the lack of empirical evidence on the success of schemes already agreed, and on the issues we set out above, it is too early to tell whether the recent surge of interest in these arrangements is a trend or only a fad.

  8. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

The utility of web browsers for general purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals; organizes and presents information in a modern feed-like interface; provides access to a growing library of plugins that process these data, typically on a connected High Performance Compute cluster; allows for easy data sharing between users and instances of ChRIS; and provides powerful 3D visualization and real-time collaboration.

  9. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  10. Exploring community resilience in workforce communities of first responders serving Katrina survivors.

    PubMed

    Wyche, Karen Fraser; Pfefferbaum, Rose L; Pfefferbaum, Betty; Norris, Fran H; Wisnieski, Deborah; Younger, Hayden

    2011-01-01

    Community resilience activities were assessed in workplace teams that became first responders for Hurricane Katrina survivors. Community resilience was assessed by a survey, focus groups, and key informant interviews. On the survey, 90 first responders ranked their team's disaster response performance as high on community resilience activities. The same participants, interviewed in 11 focus groups and 3 key informant interviews, discussed how their teams engaged in community resilience activities to strengthen their ability to deliver services. Specifically, their resilient behaviors were characterized by: shared organizational identity, purpose, and values; mutual support and trust; role flexibility; active problem solving; self-reflection; shared leadership; and skill building. The implications for research, policy, practice, and education of professionals are discussed. © 2011 American Orthopsychiatric Association.

  11. Design and evaluation of Nemesis, a scalable, low-latency, message-passing communication subsystem.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntinas, D.; Mercier, G.; Gropp, W.

    2005-12-02

    This paper presents a new low-level communication subsystem called Nemesis. Nemesis has been designed and implemented to be scalable and efficient both in the intranode communication context using shared-memory and in the internode communication case using high-performance networks and is natively multimethod-enabled. Nemesis has been integrated in MPICH2 as a CH3 channel and delivers better performance than other dedicated communication channels in MPICH2. Furthermore, the resulting MPICH2 architecture outperforms other MPI implementations in point-to-point benchmarks.

  12. Maintain workplace civility by sharing the vow of personal responsibility.

    PubMed

    Chism, Marlene

    2012-01-01

    Office gossip, power struggles, employee burnout, and short fuses are becoming more the rule than the exception in running a medical practice. The difficult conversation avoided today can turn into the lawsuit 15 years later. Managers often find it hard to confront high performers and authority figures in the workplace. In order to deal with disruptive behavior and incivility before it ruins the medical practice, practice managers should institute the four steps outlined in this article plus the Vow of Personal Responsibility to improve clarity, teamwork, and personal performance.

  13. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  14. Women Share in Science and Technology Education and Their Job Performance in Nigeria

    NASA Astrophysics Data System (ADS)

    Osezuah, Simon; Nwadiani, C. O.

    2012-10-01

    This investigation focused on women's share in Science and Technology education and their job performance in Nigeria, guided by two research questions. A sample of 4,886 respondents was drawn using questionnaires, and the data were analyzed by frequency count. The findings indicated a disparity between males and females in access to Science and Technology education in Nigeria, but no difference between women and men scientists and technologists in job performance. It was therefore concluded that women do not have an equal share with men in Science and Technology education, even though male and female scientists and technologists perform equally on the job in Nigeria. Recommendations were made accordingly.

  15. The effect of processing code, response modality and task difficulty on dual task performance and subjective workload in a manual system

    NASA Technical Reports Server (NTRS)

    Liu, Yili; Wickens, Christopher D.

    1987-01-01

    This paper reports on the first experiment in a series studying the effect of task structure and difficulty demand on time-sharing performance and workload in both automated and corresponding manual systems. The experimental task involves manual control time-shared with spatial and verbal decision tasks at two levels of difficulty and two modes of response (voice or manual). The results provide strong evidence that tasks and processes competing for common processing resources are time-shared less effectively and incur higher workload than tasks competing for separate resources. Subjective measures and the structure of multiple resources are used in conjunction to predict dual-task performance. The evidence comes from both single-task and dual-task performance.

  16. Efficient Access to Massive Amounts of Tape-Resident Data

    NASA Astrophysics Data System (ADS)

    Yu, David; Lauret, Jérôme

    2017-10-01

    Randomly restoring files from tape degrades read performance, primarily due to frequent tape mounts. The high latency of time-consuming tape mounts and dismounts is a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of a scheduler, called ERADAT. This scheduler was originally based on code from Oak Ridge National Laboratory, developed in the early 2000s. After major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities, and advanced real-time statistics and graphs. ERADAT is integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. It is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.

  17. A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction

    DOE PAGES

    Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...

    1995-01-01

    In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated into high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
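
    The memory trade-off above follows from the structure of Strassen's recursion. As a minimal illustration (a plain recursive Python sketch of the classic algorithm, not the paper's nonrecursive tensor-product formulation), Strassen replaces the eight block multiplications of the standard recursion with seven products M1..M7:

    ```python
    # Recursive Strassen multiplication for 2^n x 2^n integer/float matrices,
    # given as lists of lists. Illustrative only; a production version would
    # block, vectorize, and fall back to ordinary multiplication at a cutoff.
    def strassen(A, B):
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        h = n // 2
        def quad(M, r, c):  # extract an h x h quadrant starting at (r, c)
            return [row[c:c + h] for row in M[r:r + h]]
        def add(X, Y):
            return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
        def sub(X, Y):
            return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
        A11, A12, A21, A22 = quad(A,0,0), quad(A,0,h), quad(A,h,0), quad(A,h,h)
        B11, B12, B21, B22 = quad(B,0,0), quad(B,0,h), quad(B,h,0), quad(B,h,h)
        # The seven Strassen products (instead of eight block multiplies).
        M1 = strassen(add(A11, A22), add(B11, B22))
        M2 = strassen(add(A21, A22), B11)
        M3 = strassen(A11, sub(B12, B22))
        M4 = strassen(A22, sub(B21, B11))
        M5 = strassen(add(A11, A12), B22)
        M6 = strassen(sub(A21, A11), add(B11, B12))
        M7 = strassen(sub(A12, A22), add(B21, B22))
        C11 = add(sub(add(M1, M4), M5), M7)
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(sub(add(M1, M3), M2), M6)
        # Reassemble the four result quadrants.
        top = [r1 + r2 for r1, r2 in zip(C11, C12)]
        bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
        return top + bot
    ```

    A naive implementation keeps all seven M-products live at every recursion level, which is the source of the O(7^n) working storage the abstract mentions; reordering the computation so products are consumed as they are formed is what brings the requirement down.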

  18. Performance of children and adolescents with Asperger syndrome or high-functioning autism on advanced theory of mind tasks.

    PubMed

    Kaland, Nils; Callesen, Kirsten; Møller-Nielsen, Annette; Mortensen, Erik Lykke; Smith, Lars

    2008-07-01

    Although a number of advanced theory of mind tasks have been developed, there is a dearth of information on whether performances on different tasks are associated. The present study examined the performance of 21 children and adolescents with diagnoses of Asperger syndrome (AS) and 20 typically developing controls on three advanced theory of mind tasks: The Eyes Task, the Strange Stories, and the Stories from Everyday Life. The participants in the clinical group demonstrated lower performance than the controls on all the three tasks. The pattern of findings, however, indicates that these tasks may share different information-processing requirements in addition to tapping different mentalizing abilities.

  19. Physician Willingness and Resources to Serve More Medicaid Patients: Perspectives from Primary Care Physicians

    PubMed Central

    Sommers, Anna S.; Paradise, Julia; Miller, Carolyn

    2011-01-01

    Objective Sixteen million people will gain Medicaid under health reform. This study compares primary care physicians (PCPs) on reported acceptance of new Medicaid patients and practice characteristics. Data and Methods Sample of 1,460 PCPs in outpatient settings was drawn from a 2008 nationally representative survey of physicians. PCPs were classified into four categories based on distribution of practice revenue from Medicaid and Medicare and acceptance of new Medicaid patients. Fifteen in-depth telephone interviews supplemented analysis. Findings Most high- and moderate-share Medicaid PCPs report accepting “all” or “most” new Medicaid patients. High-share Medicaid PCPs were more likely than others to work in hospital-based practices (20%) and health centers (18%). About 30% of high- and moderate-share Medicaid PCPs worked in practices with a hospital ownership interest. Health IT use was similar between these two groups and high-share Medicare PCPs, but more high- and moderate-share Medicaid PCPs provided interpreters and non-physician staff for patient education. Over 40% of high- and moderate-share Medicaid PCPs reported inadequate patient time as a major problem. Low- and no-share Medicaid PCPs practiced in higher-income areas than high-share Medicaid PCPs. In interviews, difficulty arranging specialist care, reimbursement, and administrative hassles emerged as reasons for limiting Medicaid patients. Policy Implications PCPs already serving Medicaid are positioned to expand capacity but also face constraints. Targeted efforts to increase their capacity could help. Acceptance of new Medicaid patients under health reform will hinge on multiple factors, not payment alone. Trends toward hospital ownership could increase practices' capacity and willingness to serve Medicaid. PMID:22340772

  20. Collaborate and share: an experimental study of the effects of task and reward interdependencies in online games.

    PubMed

    Choi, Boreum; Lee, Inseong; Choi, Dongseong; Kim, Jinwoo

    2007-08-01

    Today millions of players interact with one another in online games, especially massively multiplayer online role-playing games (MMORPGs). These games promote interaction among players by offering interdependency features, but to date few studies have asked what interdependency design factors of MMORPGs make them fun for players, produce experiences of flow, or enhance player performance. In this study, we focused on two game design features: task and reward interdependency. We conducted a controlled experiment that compared the interaction effects of low and high task-interdependency conditions and low and high reward-interdependency conditions on three dependent variables: fun, flow, and performance. We found that in a low task-interdependency condition, players had more fun, experienced higher levels of flow, and perceived better performance when a low reward-interdependency condition also obtained. In contrast, in a high task-interdependency condition, all of these measures were higher when a high reward-interdependency condition also obtained.

  1. Can Confidence Come Too Soon? Collective Efficacy, Conflict and Group Performance over Time

    ERIC Educational Resources Information Center

    Goncalo, Jack A.; Polman, Evan; Maslach, Christina

    2010-01-01

    Groups with a strong sense of collective efficacy set more challenging goals, persist in the face of difficulty, and are ultimately more likely to succeed than groups who do not share this belief. Given the many advantages that may accrue to groups who are confident, it would be logical to advise groups to build a high level of collective efficacy…

  2. Please Move Inactive Files Off the /projects File System | High-Performance

    Science.gov Websites

    Computing | NREL Please Move Inactive Files Off the /projects File System January 11, 2018 The /projects file system is a shared resource. This year this has created a space crunch: the file system is now about 90% full, and we need your help

  3. West Europe Report, Science and Technology

    DTIC Science & Technology

    1986-03-27

    exemption of employee shares), the French system does not sufficiently encourage creation and development of companies which depend on the often...pharmaceutical industry because with good reason the multina- tional chemical companies are investing millions in this area. The effects upon the...build upon knowledge gained from the high- performance wings of the Airbus A310 and Airbus A320. The com- pany considers that here systems of

  4. Single-transistor-clocked flip-flop

    DOEpatents

    Zhao, Peiyi; Darwish, Tarek; Bayoumi, Magdy

    2005-08-30

    The invention provides a low power, high performance flip-flop. The flip-flop uses only one clocked transistor. The single clocked transistor is shared by the first and second branches of the device. A pulse generator produces a clock pulse to trigger the flip-flop. In one preferred embodiment the device can be made as a static explicit pulsed flip-flop which employs only two clocked transistors.

  5. Grins and Giggles: The Launch Pad to High Performance

    NASA Technical Reports Server (NTRS)

    Patnode, Norman H.

    2003-01-01

    Long ago I observed that people get more things done when they're having fun. At the time, I had no idea why. Now I think I have an answer. When children play, look at the energy that's put into it, that's shared with everyone else. This sort of energy brings people together, unleashes their creativity, and indeed inspires them to do amazing things.

  6. HPC on Competitive Cloud Resources

    NASA Astrophysics Data System (ADS)

    Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff

    Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on-demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and that computation time varies accordingly. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall-clock time). We also show that the smallest, cheapest (64-bit) instance on the studied environment offers the best price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency should be introduced for commercial cloud environments where strong performance guarantees do not exist. Concepts like average and expected performance, expected execution time, expected cost to completion, and variance measures, traditionally ignored in the high-performance computing context, should now complement or even substitute the standard definitions of efficiency.
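
    The alternative efficiency measures proposed above are simple to compute from repeated benchmark runs. A minimal sketch, using hypothetical run times and a hypothetical hourly price rather than the chapter's data:

    ```python
    import statistics

    def cloud_run_metrics(run_times_s, price_per_hour):
        """Summarize repeated benchmark runs on a contended cloud instance.

        Returns expected execution time, its spread (sample standard
        deviation), and expected cost to completion, assuming billing is
        proportional to wall-clock time.
        """
        mean_t = statistics.mean(run_times_s)
        stdev_t = statistics.stdev(run_times_s)
        expected_cost = mean_t / 3600.0 * price_per_hour
        return {"expected_time_s": mean_t,
                "stdev_s": stdev_t,
                "expected_cost": expected_cost}
    ```

    Under high contention the standard deviation can rival the mean, which is why the authors argue variance belongs alongside, or in place of, classical efficiency numbers.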

  7. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
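
    The run-time analysis step can be illustrated with a simplified sketch: flag cache lines that multiple threads touch at different offsets, which is the signature of potential false sharing. This is a hypothetical illustration of the general idea, not the patented tool's implementation:

    ```python
    from collections import defaultdict

    def false_sharing_suspects(accesses, line_size=64):
        """accesses: iterable of (thread_id, address) pairs from a trace.

        Returns cache-line base addresses touched by more than one thread
        at more than one distinct offset -- candidates for false sharing.
        (A line touched by many threads at the same offset is true sharing,
        not false sharing.)
        """
        lines = defaultdict(lambda: (set(), set()))  # base -> (threads, offsets)
        for tid, addr in accesses:
            base = addr - (addr % line_size)
            threads, offsets = lines[base]
            threads.add(tid)
            offsets.add(addr % line_size)
        return sorted(base for base, (threads, offsets) in lines.items()
                      if len(threads) > 1 and len(offsets) > 1)
    ```

    In the patented workflow this kind of check runs only on the instructions flagged as "at risk," so the full trace-everything cost sketched here is avoided.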

  8. Share capitalism and worker wellbeing⋆, ⋆⋆

    PubMed Central

    Clark, Andrew E.; Freeman, Richard B.; Green, Colin P.

    2017-01-01

    We show that worker wellbeing is determined not only by the amount of compensation workers receive but also by how compensation is determined. While previous theoretical and empirical work has often been preoccupied with individual performance-related pay, we find that the receipt of a range of group-performance schemes (profit shares, group bonuses and share ownership) is associated with higher job satisfaction. This holds conditional on wage levels, so that pay methods are associated with greater job satisfaction in addition to that coming from higher wages. We use a variety of methods to control for unobserved individual and job-specific characteristics. We suggest that half of the share-capitalism effect is accounted for by employees reciprocating for the “gift”; we also show that share capitalism helps dampen the negative wellbeing effects of what we typically think of as “bad” aspects of job quality. PMID:28725118

  9. Sharing control with haptics: seamless driver support from manual to automatic control.

    PubMed

    Mulder, Mark; Abbink, David A; Boer, Erwin R

    2012-10-01

    Haptic shared control was investigated as a human-machine interface that can intuitively share control between drivers and an automatic controller for curve negotiation. As long as automation systems are not fully reliable, a role remains for the driver to be vigilant to the system and the environment to catch any automation errors. The conventional binary switch between supervisory and manual control has many known issues, and haptic shared control is a promising alternative. A total of 42 respondents of varying age and driving experience participated in a driving experiment in a fixed-base simulator, in which curve negotiation behavior during shared control was compared to that during manual control, as well as to three haptic tunings of an automatic controller without driver intervention. Under the experimental conditions studied, the main benefit of haptic shared control over manual control was that less control activity (16% in steering wheel reversal rate, 15% in standard deviation of steering wheel angle) was needed to realize improved safety performance (e.g., 11% in peak lateral error). Full automation removed the need for any human control activity and improved safety performance (e.g., 35% in peak lateral error) but put the human in a supervisory position. Haptic shared control kept the driver in the loop, with enhanced performance at reduced control activity, mitigating the known issues that plague full automation. Haptic support for vehicular control ultimately seeks to intuitively combine human intelligence and creativity with the benefits of automation systems.

  10. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools.

    PubMed

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A; Calhoun, Vince D

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data.

  11. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools

    PubMed Central

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A.; Calhoun, Vince D.

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data. PMID:25206330

  12. Use of high-granularity CdZnTe pixelated detectors to correct response non-uniformities caused by defects in crystals

    DOE PAGES

    Bolotnikov, A. E.; Camarda, G. S.; Cui, Y.; ...

    2015-09-06

    Following our successful demonstration of the position-sensitive virtual Frisch-grid detectors, we investigated the feasibility of using high-granularity position sensing to correct response non-uniformities caused by crystal defects in CdZnTe (CZT) pixelated detectors. The development of high-granularity detectors able to correct response non-uniformities on a scale comparable to the size of electron clouds opens the opportunity of using unselected off-the-shelf CZT material, whilst still assuring high spectral resolution for the majority of the detectors fabricated from an ingot. Here, we present the results from testing 3D position-sensitive 15×15×10 mm³ pixelated detectors, fabricated with conventional pixel patterns with progressively smaller pixel sizes: 1.4, 0.8, and 0.5 mm. We employed a readout system based on the H3D front-end multi-channel ASIC developed by BNL's Instrumentation Division in collaboration with the University of Michigan. We use the sharing of electron clouds among several adjacent pixels to measure the locations of interaction points with sub-pixel resolution. By using the detectors with small pixel sizes and a high probability of charge-sharing events, we were able to improve their spectral resolutions in comparison to the baseline levels measured for the 1.4-mm pixel size detectors with small fractions of charge-sharing events. These results demonstrate that further enhancement of the performance of CZT pixelated detectors and reduction of costs are possible by using high spatial-resolution position information of interaction points to correct the small-scale response non-uniformities caused by crystal defects present in most devices.
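
    Sub-pixel position measurement from charge sharing can be illustrated with a simple charge-weighted centroid along one axis. This is a hedged sketch of the general center-of-gravity idea, not the group's actual reconstruction code:

    ```python
    def subpixel_centroid(pixel_centers_mm, charges):
        """Estimate an interaction coordinate (mm) along one axis as the
        charge-weighted centroid of the signals induced on adjacent pixels.

        pixel_centers_mm: center coordinate of each pixel in the cluster.
        charges: the corresponding collected charge (arbitrary units).
        """
        total = sum(charges)
        if total <= 0:
            raise ValueError("no collected charge in cluster")
        return sum(x * q for x, q in zip(pixel_centers_mm, charges)) / total
    ```

    With a 0.5 mm pitch and clusters spanning two or three pixels, this kind of estimate localizes the interaction well below the pixel pitch, which is what makes small-scale response-map corrections possible.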

  13. Relativistic (2,3)-threshold quantum secret sharing

    NASA Astrophysics Data System (ADS)

    Ahmadi, Mehdi; Wu, Ya-Dong; Sanders, Barry C.

    2017-09-01

    In quantum secret sharing protocols, the usual presumption is that the distribution of quantum shares and players' collaboration are both performed inertially. Here we develop a quantum secret sharing protocol that relaxes these assumptions wherein we consider the effects due to the accelerating motion of the shares. Specifically, we solve the (2,3)-threshold continuous-variable quantum secret sharing in noninertial frames. To this aim, we formulate the effect of relativistic motion on the quantum field inside a cavity as a bosonic quantum Gaussian channel. We investigate how the fidelity of quantum secret sharing is affected by nonuniform motion of the quantum shares. Furthermore, we fully characterize the canonical form of the Gaussian channel, which can be utilized in quantum-information-processing protocols to include relativistic effects.

  14. Extending Current Theories of Cross-Boundary Information Sharing and Integration: A Case Study of Taiwan e-Government

    ERIC Educational Resources Information Center

    Yang, Tung-Mou

    2011-01-01

    Information sharing and integration has long been considered an important approach for increasing organizational efficiency and performance. With advancements in information and communication technologies, sharing and integrating information across organizations becomes more attractive and practical to organizations. However, achieving…

  15. Business Value of Information Sharing and the Role of Emerging Technologies

    ERIC Educational Resources Information Center

    Kumar, Sanjeev

    2009-01-01

    Information Technology has brought significant benefits to organizations by allowing greater information sharing within and across firm boundaries leading to performance improvements. Emerging technologies such as Service Oriented Architecture (SOA) and Web2.0 have transformed the volume and process of information sharing. However, a comprehensive…

  16. Two-way sequential time synchronization: Preliminary results from the SIRIO-1 experiment

    NASA Technical Reports Server (NTRS)

    Detoma, E.; Leschiutta, S.

    1981-01-01

    A two-way time synchronization experiment performed in the spring of 1979 and 1980 via the Italian SIRIO-1 experimental telecommunications satellite is described. The experiment was designed and implemented to precisely monitor the satellite motion and to evaluate the possibility of performing a high precision, two-way time synchronization using a single communication channel, time-shared between the participating sites. Results show that the precision of the time synchronization is between 1 and 5 ns, while the evaluation and correction of the satellite motion effect was performed with an accuracy of a few nanoseconds or better over a time interval from 1 up to 20 seconds.
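
    The arithmetic behind two-way time transfer is standard: each site timestamps its transmit and receive events and, assuming a symmetric path, the clock offset and one-way delay follow from four timestamps. A sketch of these textbook relations (not code from the experiment):

    ```python
    def two_way_offset_delay(t1, t2, t3, t4):
        """Two-way time transfer over a symmetric, time-shared channel.

        t1: site A transmits (A's clock);  t2: site B receives (B's clock);
        t3: site B transmits (B's clock);  t4: site A receives (A's clock).
        Returns (offset of B's clock relative to A's, one-way path delay).
        Asymmetry in the path translates directly into offset error.
        """
        offset = ((t2 - t1) - (t4 - t3)) / 2.0
        delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, delay
    ```

    Because the satellite moves during the exchange, the path is not perfectly symmetric; correcting for that motion to a few nanoseconds is precisely the effect the SIRIO-1 experiment set out to evaluate.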

  17. The Effects of Job Sharing on Student Performance Literature Review.

    ERIC Educational Resources Information Center

    Garman, Dorothy

    The River Forest (Illinois) District 90 wished to examine the educational literature on the effects of job sharing by teachers on student performance. This document presents a review of the literature and summarizes and synthesizes this information. Only limited information was found on this subject. However, anecdotal reports of the impact of job…

  18. The development of children's knowledge of attention and resource allocation in single and dual tasks.

    PubMed

    Dossett, D; Burns, B

    2000-06-01

    Developmental changes in kindergarten, 1st-, and 4th-grade children's knowledge about the variables that affect attention sharing and resource allocation were examined. Findings from the 2 experiments showed that kindergartners understood that person and strategy variables affect performance in attention-sharing tasks. However, knowledge of how task variables affect performance was not evident to them and was inconsistent for 1st and 4th graders. Children's knowledge about resource allocation revealed a different pattern and varied according to the dissimilarity of task demands in the attention-sharing task. In Experiment 1, in which the dual attention tasks were similar (i.e., visual detection), kindergarten and 1st-grade children did not differentiate performance in single and dual tasks. Fourth graders demonstrated knowledge that performance on a single task would be better than performance on the dual tasks for only 2 of the variables examined. In Experiment 2, in which the dual attention tasks were dissimilar (i.e., visual and auditory detection), kindergarten and 1st-grade children demonstrated knowledge that performance in the single task would be better than in the dual tasks for 1 of the task variables examined. However, 4th-grade children consistently gave higher ratings for performance on the single than on the dual attention tasks for all variables examined. These findings (a) underscore that children's meta-attention is not unitary and (b) demonstrate that children's knowledge about variables affecting attention sharing and resource allocation have different developmental pathways. Results show that knowledge about attention sharing and about the factors that influence the control of attention develops slowly and undergoes reorganization in middle childhood.

  19. Cryptonite: A Secure and Performant Data Repository on Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor

    2012-06-29

    Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs of secure data storage and sharing for domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100 MB.

  20. Spindle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-04-04

    Spindle is software infrastructure that solves file system scalability problems associated with starting dynamically linked applications in HPC environments. When an HPC application starts up thousands of processes at once, and those processes simultaneously access a shared file system to look for shared libraries, it can cause significant performance problems for both the application and other users. Spindle scalably coordinates the distribution of shared libraries to an application to avoid hammering the shared file system.

  1. Retrieval of publications addressing shared decision making: an evaluation of full-text searches on medical journal websites.

    PubMed

    Blanc, Xavier; Collet, Tinh-Hai; Auer, Reto; Iriarte, Pablo; Krause, Jan; Légaré, France; Cornuz, Jacques; Clair, Carole

    2015-04-07

    Full-text searches of articles increase the recall, defined by the proportion of relevant publications that are retrieved. However, this method is rarely used in medical research due to resource constraints. For the purpose of a systematic review of publications addressing shared decision making, a full-text search method was required to retrieve publications where shared decision making does not appear in the title or abstract. The objective of our study was to assess the efficiency and reliability of full-text searches in major medical journals for identifying shared decision making publications. A full-text search was performed on the websites of 15 high-impact journals in general internal medicine to look up publications of any type from 1996-2011 containing the phrase "shared decision making". The search method was compared with a PubMed search of titles and abstracts only. The full-text search was further validated by requesting all publications from the same time period from the individual journal publishers and searching through the collected dataset. The full-text search for "shared decision making" on journal websites identified 1286 publications in 15 journals compared to 119 through the PubMed search. The search within the publisher-provided publications of 6 journals identified 613 publications compared to 646 with the full-text search on the respective journal websites. The concordance rate was 94.3% between both full-text searches. Full-text searching on medical journal websites is an efficient and reliable way to identify relevant articles in the field of shared decision making for review or other purposes. It may be more widely used in biomedical research in other fields in the future, with the collaboration of publishers and journals toward open-access data.
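
    The retrieval comparison in studies like this reduces to set arithmetic over article identifiers. A sketch with hypothetical IDs, computing relative recall of one search against another and one plausible definition of concordance (overlap relative to the union; the paper does not spell out its exact formula):

    ```python
    def relative_recall(retrieved, reference):
        """Fraction of the reference result set that the retrieved set
        also found (relative recall, taking `reference` as the gold set)."""
        retrieved, reference = set(retrieved), set(reference)
        return len(retrieved & reference) / len(reference)

    def concordance(results_a, results_b):
        """Agreement between two result sets: shared items over the union
        (Jaccard overlap). One of several reasonable definitions."""
        a, b = set(results_a), set(results_b)
        return len(a & b) / len(a | b)
    ```

    Applied to the numbers above, a title-and-abstract search that returns 119 of 1286 full-text hits has low relative recall, which is the study's argument for full-text searching.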

  2. Modeling reciprocal team cohesion-performance relationships, as impacted by shared leadership and members' competence.

    PubMed

    Mathieu, John E; Kukenberger, Michael R; D'Innocenzo, Lauren; Reilly, Greg

    2015-05-01

    Despite the lengthy history of team cohesion-performance research, little is known about their reciprocal relationships over time. Using meta-analysis, we synthesize findings from 17 CLP design studies, and analyze their results using SEM. Results support that team cohesion and performance are related reciprocally with each other over time. We then used longitudinal data from 205 members of 57 student teams who competed in a complex business simulation over 10 weeks, to test: (a) whether team cohesion and performance were related reciprocally over multiple time periods, (b) the relative magnitude of those relationships, and (c) whether they were stable over time. We also considered the influence of team members' academic competence and degree of shared leadership on these dynamics. As anticipated, cohesion and performance were related positively, and reciprocally, over time. However, the cohesion → performance relationship was significantly higher than the performance → cohesion relationship. Moreover, the cohesion → performance relationship grew stronger over time whereas the performance → cohesion relationship remained fairly consistent over time. As expected, shared leadership related positively to team cohesion but not directly to their performance; whereas average team member academic competence related positively to team performance but was unrelated to team cohesion. Finally, we conducted and report a replication using a second sample of students competing in a business simulation. Our earlier substantive relationships were mostly replicated, and we illustrated the dynamic temporal properties of shared leadership. We discuss these findings in terms of theoretical importance, applied implications, and directions for future research. (c) 2015 APA, all rights reserved.

  3. Kaiser Permanente's performance improvement system, Part 4: Creating a learning organization.

    PubMed

    Schilling, Lisa; Dearing, James W; Staley, Paul; Harvey, Patti; Fahey, Linda; Kuruppu, Francesca

    2011-12-01

    In 2006, recognizing variations in performance in quality, safety, service, and efficiency, Kaiser Permanente leaders initiated the development of a performance improvement (PI) system. Kaiser Permanente has implemented a strategy for creating the systemic capacity for continuous improvement that characterizes a learning organization. Six "building blocks" were identified to enable Kaiser Permanente to make the transition to becoming a learning organization: real-time sharing of meaningful performance data; formal training in problem-solving methodology; workforce engagement and informal knowledge sharing; leadership structures, beliefs, and behaviors; internal and external benchmarking; and technical knowledge sharing. Putting each building block into place required multiple complex strategies combining top-down and bottom-up approaches. Although the strategies have largely been successful, challenges remain. The demand for real-time meaningful performance data can conflict with prioritized changes to health information systems. It is an ongoing challenge to teach PI, change management, innovation, and project management to all managers and staff without consuming too much training time. Challenges with workforce engagement include low initial use of tools intended to disseminate information through virtual social networking. Uptake of knowledge-sharing technologies is still primarily by innovators and early adopters. Leaders adopt new behaviors at varying speeds and have a range of abilities to foster an environment that is psychologically safe and stimulates inquiry. A learning organization has the capability to improve, and it develops structures and processes that facilitate the acquisition and sharing of knowledge.

  4. Lowering Cost Share May Improve Rates of Home Glucose Monitoring Among Patients with Diabetes Using Insulin.

    PubMed

    Xie, Yiqiong; Agiro, Abiy; Bowman, Kevin; DeVries, Andrea

    2017-08-01

    Not much is known about the extent to which lower cost share for blood glucose strips is associated with persistent filling. To evaluate the relationship between cost sharing for blood glucose testing strips and continued use of testing strips. This is a retrospective observational study using medical and pharmacy claims data integrated with laboratory hemoglobin A1c (A1c) values for patients using insulin and blood glucose testing strips. Diabetic patients using insulin who had at least 1 fill of blood glucose testing strips between 2010 and 2012 were included. Patients were divided into a low cost-share group (out-of-pocket cost percentage of total testing strip costs over a 1-year period from the initial fill < 20%; n = 3,575) and a high cost-share group (out-of-pocket cost percentage ≥ 20%; n = 3,580). We compared the likelihood of continued testing strip fills after the initial fill between the 2 groups by using modified Poisson regression models. Patients with low cost share had higher rates of continued testing strip fills compared with those with high cost share (89% vs. 82%, P < 0.001). Lower cost share was associated with greater probability of continued fills (adjusted risk ratio [aRR] = 1.05, 95% CI = 1.03-1.07, P < 0.001). Other patient characteristics associated with continued fills included type 1 diabetes diagnosis, types of insulin regimens, and health insurance plan type. In a subset analysis of patients whose A1c values at baseline were above the target level (8%) set by the National Committee for Quality Assurance guidelines, we saw a slight increase in magnitude of relationship between cost share and continued fills (RR = 1.06, 95% CI = 1.03-1.10, P < 0.01). There was a statistically significant association between cost share for testing strips and continued blood glucose self-monitoring. Among patients not achieving A1c control at baseline, there was an increase in the magnitude of relationship. 
    Lowering cost share for testing strips can remove a barrier to persistence in diabetes self-management. Funding for this study was provided by Anthem, which had no role in the study design, data interpretation, or preparation or review of the manuscript. The decision to publish was strictly that of the authors. Xie, Agiro, and DeVries are employees of HealthCore, a wholly owned subsidiary of Anthem. Bowman is an employee of Anthem. Study concept and design were contributed by all the authors. Xie took the lead in data collection, along with Agiro, and data interpretation was performed by all the authors. The manuscript was written by Xie and Agiro, along with DeVries, and revised by Xie, Agiro, and DeVries, along with Bowman.
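    As a rough arithmetic check, the unadjusted risk ratio implied by the reported continuation rates can be computed directly; the paper's adjusted RR of 1.05 additionally controls for patient characteristics:

```python
# Unadjusted risk ratio from the reported rates of continued
# testing strip fills (89% low cost-share vs. 82% high cost-share).
low_cost_share = 0.89
high_cost_share = 0.82

rr_unadjusted = low_cost_share / high_cost_share
print(round(rr_unadjusted, 2))  # 1.09; covariate adjustment brings it to 1.05
```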

  5. Data Sharing and Scientific Impact in Eddy Covariance Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond-Lamberty, B.

    Do the benefits of data sharing outweigh its perceived costs? This is a critical question, and one with the potential to change culture and behavior. Dai et al. (2018) examine how data sharing is related to scientific impact in the field of eddy covariance (EC), and find that data sharers are disproportionately high-impact researchers, and vice versa; they also note strong regional differences in EC data sharing norms. The current policies and restrictions of EC journals and repositories are highly uneven. Incentivizing data sharing and enhancing computational reproducibility are critical next steps for EC, ecology, and science more broadly.

  6. Performance pressure and caffeine both affect cognitive performance, but likely through independent mechanisms.

    PubMed

    Boere, Julia J; Fellinger, Lizz; Huizinga, Duncan J H; Wong, Sebastiaan F; Bijleveld, Erik

    2016-02-01

    A prevalent combination in daily life, performance pressure and caffeine intake have both been shown to impact people's cognitive performance. Here, we examined the possibility that pressure and caffeine affect cognitive performance via a shared pathway. In an experiment, participants performed a modular arithmetic task. Performance pressure and caffeine intake were orthogonally manipulated. Findings indicated that pressure and caffeine both negatively impacted performance. However, (a) pressure vs. caffeine affected performance on different trial types, and (b) there was no hint of an interactive effect. So, though the evidence is indirect, findings suggest that pressure and caffeine shape performance via distinct mechanisms, rather than a shared one. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Study of a GaAs:Cr-based Timepix detector using synchrotron facility

    NASA Astrophysics Data System (ADS)

    Smolyanskiy, P.; Kozhevnikov, D.; Bakina, O.; Chelkov, G.; Dedovich, D.; Kuper, K.; Leyva Fabelo, A.; Zhemchugov, A.

    2017-11-01

    High resistivity gallium arsenide compensated by chromium, fabricated by Tomsk State University, has demonstrated good suitability as a sensor material for hybrid pixel detectors used in X-ray imaging systems with photon energies up to 60 keV. The material is available with a thickness up to 1 mm and, owing to its high atomic number, provides high absorption efficiency in this energy region. However, in spectroscopic applications the performance of thick GaAs:Cr-based detectors read out with relatively small pixels is limited by the charge sharing effect. In this paper, we present an experimental investigation of the charge sharing contribution in a GaAs:Cr-based Timepix detector. By scanning the detector with a pencil photon beam generated by the synchrotron facility, a geometrical mapping of pixel sensitivity is obtained, as well as the energy resolution of a single pixel. The experimental results are supported by numerical simulations. The observed limitations of the GaAs:Cr-based Timepix detector for high-flux X-ray imaging are discussed.

  8. Cake: Enabling High-level SLOs on Shared Storage Systems

    DTIC Science & Technology

    2012-11-07

    Cake: Enabling High-level SLOs on Shared Storage Systems. Andrew Wang, Shivaram Venkataraman, Sara Alspaugh, Randy H. Katz, Ion Stoica.

  9. An Efficient Multiparty Quantum Secret Sharing Protocol Based on Bell States in the High Dimension Hilbert Space

    NASA Astrophysics Data System (ADS)

    Gao, Gan; Wang, Li-Ping

    2010-11-01

    We propose a quantum secret sharing protocol in which Bell states in the high dimension Hilbert space are employed. The biggest advantage of our protocol is its high source capacity. Compared with previous secret sharing protocols, ours has higher controlling efficiency. In addition, as decoy states in the high dimension Hilbert space are used, we need not destroy quantum entanglement to check the channel security.

  10. The Effect of Socially Shared Regulation Approach on Learning Performance in Computer-Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Zheng, Lanqin; Li, Xin; Huang, Ronghuai

    2017-01-01

    Students' abilities to engage in socially shared regulation of their learning are crucial to productive and successful collaborative learning. However, how group members sustain and regulate collaborative processes is a neglected area in the field of collaborative learning. Furthermore, how group members engage in socially shared regulation still remains to…

  11. Can Universities Encourage Students' Continued Motivation for Knowledge Sharing and How Can This Help Organizations?

    ERIC Educational Resources Information Center

    Shoemaker, Nikki

    2014-01-01

    Both practitioners and researchers recognize the increasing importance of knowledge sharing in organizations (Bock, Zmud, Kim, & Lee, 2005; Vera-Muñoz, Ho, & Chow, 2006). Knowledge sharing influences a firm's knowledge creation, organizational learning, performance achievement, growth, and competitive advantage (Bartol &…

  12. Roles of the eye care workforce for task sharing in management of diabetic retinopathy in Cambodia

    PubMed Central

    Shah, Mufarriq; Ormsby, Gail M.; Noor, Ayesha; Chakrabarti, Rahul; Mörchen, Manfred; Islam, Fakir M Amirul; Harper, C Alex; Keeffe, Jill E

    2018-01-01

    AIM To identify the current roles of eye and health care workers in eye care delivery and investigate their potential roles in screening and detection for management of diabetic retinopathy (DR) through task sharing. METHODS Twenty-four participants, including health administrators, members of non-government organizations, and all available eye care workers in Takeo province, were recruited through purposive sampling. This cross-sectional mixed-method study comprised a survey and in-depth interviews. Data were collected from medical records at Caritas Takeo Eye Hospital (CTEH) and Kiri Vong District Referral Hospital Vision Centre, and a survey and interviews with participants were conducted to explore the potential roles for task sharing in DR management. Qualitative data were transcribed into a text program and then entered into N-Vivo (version 10) software for data management and analysis. RESULTS From 2009 to 2012, a total of 105 178 patients were examined and 14 030 eye surgeries were performed at CTEH by three ophthalmologists supported by ophthalmic nurses in surgery and patient eye examinations. Between January 2011 and September 2012, 151 patients (72 males) with retinal pathology, including 125 (83%) with DR, visited CTEH. In addition, 170 patients with diabetes were referred to CTEH for eye examinations from Mo Po Tsyo screening programs for people with diabetes. Factors favouring task sharing included high demand for eye care services and scarcity of ophthalmologists. CONCLUSION Task sharing and team work for eye care services is functional. Participants favor the potential role of ophthalmic nurses in screening for DR through task sharing. PMID:29375999

  13. Inter-strand current sharing and ac loss measurements in superconducting YBCO Roebel cables

    DOE PAGES

    Majoros, M.; Sumption, M. D.; Collings, E. W.; ...

    2015-04-08

    A Roebel cable, one twist pitch long, was modified from its as-received state by soldering copper strips between the strands to provide inter-strand connections enabling current sharing. Various DC transport currents (representing different percentages of its critical current) were applied to a single strand of such a modified cable at 77 K in a liquid nitrogen bath. Simultaneous monitoring of I–V curves in different parts of the strand, as well as in its interconnections with other strands, was made using a number of sensitive Keithley nanovoltmeters in combination with a multichannel high-speed data acquisition card, all controlled via LabView software. Current sharing onset was observed at about 1.02 of the strand Ic. At a strand current of 1.3 Ic, about 5% of the current was shared through the copper strip interconnections. Finite element modeling was performed to estimate the inter-strand resistivities required to enable different levels of current sharing. The relative contributions of coupling and hysteretic magnetization (and loss) were compared; for our cable and tape geometry, at dB/dt = 1 T/s and an inter-strand resistance of 0.77 mΩ (enabling current sharing of 5% at 1.3 Ic), the coupling component was 0.32% of the hysteretic component. However, inter-strand contact resistance values 100–1000 times smaller (close to those of NbTi- and Nb3Sn-based accelerator cables) would make the coupling components comparable in size to the hysteretic components.
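    The closing claim follows from the roughly inverse dependence of coupling loss on inter-strand resistance; a minimal sketch, assuming strict 1/R proportionality:

```python
# Coupling loss scales roughly as 1/R_contact. At R = 0.77 mOhm the
# coupling component was 0.32% of the hysteretic component; reducing
# R by 100-1000x (toward NbTi/Nb3Sn cable values) raises that ratio.
baseline_ratio = 0.0032  # coupling / hysteretic at 0.77 mOhm (from the paper)

for factor in (100, 1000):
    ratio = baseline_ratio * factor  # assumes strict 1/R proportionality
    print(f"R reduced {factor}x -> coupling/hysteretic ~ {ratio:.2f}")
```

At a 100-fold reduction the coupling component reaches about a third of the hysteretic one, and at 1000-fold it exceeds it, which is consistent with "comparable in size".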

  14. Inter-strand current sharing and ac loss measurements in superconducting YBCO Roebel cables

    DOE PAGES

    sumption, Mike; Majoros, Milan; Collings, E. W.; ...

    2014-11-07

    A Roebel cable, one twist pitch long, was modified from its as-received state by soldering copper strips between the strands to provide inter-strand connections enabling current sharing. Various DC transport currents (representing different percentages of its critical current) were applied to a single strand of such a modified cable at 77 K in a liquid nitrogen bath. Simultaneous monitoring of I–V curves in different parts of the strand, as well as in its interconnections with other strands, was made using a number of sensitive Keithley nanovoltmeters in combination with a multichannel high-speed data acquisition card, all controlled via LabView software. Current sharing onset was observed at about 1.02 of the strand Ic. At a strand current of 1.3 Ic, about 5% of the current was shared through the copper strip interconnections. Finite element modeling was performed to estimate the inter-strand resistivities required to enable different levels of current sharing. The relative contributions of coupling and hysteretic magnetization (and loss) were compared; for our cable and tape geometry, at dB/dt = 1 T/s and an inter-strand resistance of 0.77 mΩ (enabling current sharing of 5% at 1.3 Ic), the coupling component was 0.32% of the hysteretic component. However, inter-strand contact resistance values 100–1000 times smaller (close to those of NbTi- and Nb3Sn-based accelerator cables) would make the coupling components comparable in size to the hysteretic components.

  17. Drug preparation, injection, and sharing practices in Tajikistan: a qualitative study in Kulob and Khorog.

    PubMed

    Otiashvili, David; Latypov, Alisher; Kirtadze, Irma; Ibragimov, Umedjon; Zule, William

    2016-06-02

    Sharing injection equipment remains an important route of transmission of HIV and HCV infections in the region of Eastern Europe and Central Asia. Tajikistan is one of the most affected countries, with high rates of injection drug use and related epidemics. The aim of this qualitative study was to describe drug use practices and related behaviors in two Tajik cities, Kulob and Khorog. Twelve focus group discussions (6 per city) were conducted in May 2014 with 100 people who inject drugs, recruited through needle and syringe program (NSP) outreach. Topics covered included specific drugs injected, drug prices and purity, access to sterile equipment, safe injection practices, and types of syringes and needles used. Qualitative thematic analysis was performed using NVivo 10 software. All participants were male and ranged in age from 20 to 78 years. Thematic analysis showed that cheap Afghan heroin, often adulterated by dealers with other admixtures, was the only drug injected. Drug injectors often added Dimedrol (diphenhydramine) to increase the potency of "low quality" heroin. NSPs were a major source of sterile equipment. Very few participants reported direct sharing of needles and syringes. Conversely, many participants reported preparing drugs jointly and sharing injection paraphernalia. Using drugs in an outdoor setting and experiencing withdrawal were major contributors to sharing equipment, using non-sterile water, and not boiling or filtering the drug solution. Qualitative research can provide insights into risk behaviors that may be missed in quantitative studies. These findings have important implications for planning risk reduction interventions in Tajikistan. Prevention should specifically focus on indirect sharing practices.

  18. On the Structure of Neuronal Population Activity under Fluctuations in Attentional State

    PubMed Central

    Denfield, George H.; Bethge, Matthias; Tolias, Andreas S.

    2016-01-01

    Attention is commonly thought to improve behavioral performance by increasing response gain and suppressing shared variability in neuronal populations. However, both the focus and the strength of attention are likely to vary from one experimental trial to the next, thereby inducing response variability unknown to the experimenter. Here we study analytically how fluctuations in attentional state affect the structure of population responses in a simple model of spatial and feature attention. In our model, attention acts on the neural response exclusively by modulating each neuron's gain. Neurons are conditionally independent given the stimulus and the attentional gain, and correlated activity arises only from trial-to-trial fluctuations of the attentional state, which are unknown to the experimenter. We find that this simple model can readily explain many aspects of neural response modulation under attention, such as increased response gain, reduced individual and shared variability, increased correlations with firing rates, limited range correlations, and differential correlations. We therefore suggest that attention may act primarily by increasing response gain of individual neurons without affecting their correlation structure. The experimentally observed reduction in correlations may instead result from reduced variability of the attentional gain when a stimulus is attended. Moreover, we show that attentional gain fluctuations, even if unknown to a downstream readout, do not impair the readout accuracy despite inducing limited-range correlations, whereas fluctuations of the attended feature can in principle limit behavioral performance. SIGNIFICANCE STATEMENT Covert attention is one of the most widely studied examples of top-down modulation of neural activity in the visual system. Recent studies argue that attention improves behavioral performance by shaping of the noise distribution to suppress shared variability rather than by increasing response gain. 
    Our work shows, however, that latent, trial-to-trial fluctuations of the focus and strength of attention lead to shared variability that is highly consistent with known experimental observations. Interestingly, fluctuations in the strength of attention do not affect coding performance. As a consequence, the experimentally observed changes in response variability may not be a mechanism of attention, but rather a side effect of attentional allocation strategies in different behavioral contexts. PMID:26843656

  19. Cognitive performance and BMI in childhood: Shared genetic influences between reaction time but not response inhibition

    USDA-ARS?s Scientific Manuscript database

    The aim of this work is to understand whether shared genetic influences can explain the association between obesity and cognitive performance, including slower and more variable reaction times (RTs) and worse response inhibition. RT on a four-choice RT task and the go/no-go task, and commission errors...

  20. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.

  1. Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver

    NASA Astrophysics Data System (ADS)

    Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre

    2014-06-01

    This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10^6 spatial cells and 1 × 10^12 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool.

  2. RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware

    NASA Astrophysics Data System (ADS)

    Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi

    2016-08-01

    Processing astronomical data to science readiness was and remains a challenge, in particular in the case of multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework combining the best of both high performance (large number of nodes, internal communication) and high throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing, commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
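    The Server-Manager-Worker pattern described above can be sketched with in-process queues standing in for the AMQP broker. This is a deliberate simplification: the real framework distributes workers across shared-use cluster nodes, and the "calibration" step here is a placeholder.

```python
import queue
import threading

# In-process sketch of the Server-Manager-Worker pattern: the server
# enqueues tasks, the manager owns the worker pool, and workers pull
# tasks until a sentinel tells them to stop.

tasks = queue.Queue()
results = queue.Queue()
STOP = object()  # sentinel signalling a worker to exit

def worker(worker_id):
    while True:
        item = tasks.get()
        if item is STOP:
            break
        results.put((worker_id, item * 2))  # stand-in for a calibration step

# Manager: start a (resizable) pool of workers.
pool = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in pool:
    t.start()

# Server: direct the workflow by enqueuing frames, then drain the pool.
for frame in range(8):
    tasks.put(frame)
for _ in pool:
    tasks.put(STOP)
for t in pool:
    t.join()

processed = sorted(results.get()[1] for _ in range(8))
print(processed)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Because workers pull tasks rather than being assigned them, a slow or removed worker never stalls the queue, which is the load-balancing property the abstract highlights.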

  3. Production and evaluation of measuring equipment for shear viscosity of polymer melts including nanofiller with an injection molding machine

    NASA Astrophysics Data System (ADS)

    Kameda, Takao; Sugino, Naoto; Takei, Satoshi

    2016-10-01

    A shear viscosity measurement device was produced to evaluate the injection molding workability of high-performance resins. Measuring with the plasticization cylinder of the injection molding machine made observations possible at shear rates from 10 to 10000 [1/sec], higher than those of a rotary rheometer. The measurement results extrapolated those obtained with the rotary rheometer.

  4. Data Serving Climate Simulation Science at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2011-01-01

    The NASA Center for Climate Simulation (NCCS) provides high performance computational resources, a multi-petabyte archive, and data services in support of climate simulation research and other NASA-sponsored science. This talk describes the NCCS's data-centric architecture and processing, which are evolving in anticipation of researchers' growing requirements for higher resolution simulations and increased data sharing among NCCS users and the external science community.

  5. A Highly Flexible and Efficient Passive Optical Network Employing Dynamic Wavelength Allocation

    NASA Astrophysics Data System (ADS)

    Hsueh, Yu-Li; Rogge, Matthew S.; Yamamoto, Shu; Kazovsky, Leonid G.

    2005-01-01

    A novel and high-performance passive optical network (PON), the SUCCESS-DWA PON, employs dynamic wavelength allocation to provide bandwidth sharing across multiple physical PONs. In the downstream, tunable lasers, an arrayed waveguide grating, and coarse/fine filtering combine to create a flexible new optical access solution. In the upstream, several distributed and centralized schemes are proposed and investigated. The network performance is compared to conventional TDM-PONs under different traffic models, including the self-similar traffic model and the transaction-oriented model. Broadcast support and deployment issues are addressed. The network's excellent scalability can bridge the gap between conventional TDM-PONs and WDM-PONs. The powerful architecture is a promising candidate for next generation optical access networks.

  6. RC64, a Rad-Hard Many-Core High-Performance DSP for Space Applications

    NASA Astrophysics Data System (ADS)

    Ginosar, Ran; Aviely, Peleg; Gellis, Hagay; Liran, Tuvia; Israeli, Tsvika; Nesher, Roy; Lange, Fredy; Dobkin, Reuven; Meirov, Henri; Reznik, Dror

    2015-09-01

    RC64, a novel rad-hard 64-core signal processing chip, targets DSP performance of 75 GMACs (16-bit), 150 GOPS and 38 single-precision GFLOPS while dissipating less than 10 Watts. RC64 integrates advanced DSP cores with a multi-bank shared memory and a hardware scheduler, and also supports DDR2/3 memory and twelve 3.125 Gbps full-duplex high-speed serial links using SpaceFibre and other protocols. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 300 MHz integrated circuit in 65 nm CMOS technology, assembled in a hermetically sealed ceramic CCGA624 package, and qualified to the highest space standards.
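
    The programming model's separation of sequential fine-grain tasks from a task map of dependencies can be sketched as a topological-order dispatcher (Kahn's algorithm). This is an illustrative single-threaded sketch of the concept, not RC64's hardware scheduler; all names are hypothetical:

```python
from collections import deque

def run_task_map(tasks, deps):
    """Execute tasks in dependency order (Kahn's algorithm).

    tasks: dict mapping task name -> zero-argument callable
    deps:  dict mapping task name -> list of prerequisite names (the "task map")
    Returns the order in which tasks ran.
    """
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    children = {t: [] for t in tasks}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()          # run the task body
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:   # all prerequisites done, dispatch
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("cyclic task map")
    return order
```

    A hardware scheduler would dispatch every task in `ready` to a free core concurrently; the task map itself stays unchanged.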

  7. RC64, a Rad-Hard Many-Core High-Performance DSP for Space Applications

    NASA Astrophysics Data System (ADS)

    Ginosar, Ran; Aviely, Peleg; Liran, Tuvia; Alon, Dov; Mandler, Alberto; Lange, Fredy; Dobkin, Reuven; Goldberg, Miki

    2014-08-01

    RC64, a novel rad-hard 64-core signal processing chip, targets DSP performance of 75 GMACs (16-bit), 150 GOPS and 20 single-precision GFLOPS while dissipating less than 10 Watts. RC64 integrates advanced DSP cores with a multi-bank shared memory and a hardware scheduler, and also supports DDR2/3 memory and twelve 2.5 Gbps full-duplex high-speed serial links using SpaceFibre and other protocols. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 300 MHz integrated circuit in 65 nm CMOS technology, assembled in a hermetically sealed ceramic CCGA624 package, and qualified to the highest space standards.

  8. Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes.

    PubMed

    Khalifa, Abdulrahman; Meystre, Stéphane

    2015-12-01

    The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications, and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfying performance with limited development efforts. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
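
    The shared frailty model referenced above is conventionally written as follows (standard notation, not reproduced from the paper): for member j of cluster i,

```latex
\lambda_{ij}(t \mid \omega_i) = \omega_i \, \lambda_0(t) \, \exp\!\big(\boldsymbol{\beta}^{\top} \mathbf{Z}_{ij}\big),
```

    where the latent frailty \(\omega_i\) is shared by all members of cluster i (commonly gamma-distributed with mean 1 and variance \(\theta\)), \(\lambda_0(t)\) is the baseline hazard, and \(\mathbf{Z}_{ij}\) are covariates. The H-likelihood approach treats the log-frailties \(v_i = \log \omega_i\) as parameters and maximizes the joint likelihood of the observed data and the \(v_i\), which is what makes its estimators sensitive to censoring and motivates the bias correction proposed here.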

  10. Determinants of success in Shared Savings Programs: An analysis of ACO and market characteristics.

    PubMed

    Ouayogodé, Mariétou H; Colla, Carrie H; Lewis, Valerie A

    2017-03-01

    Medicare's Accountable Care Organization (ACO) programs introduced shared savings to traditional Medicare, which allow providers who reduce health care costs for their patients to retain a percentage of the savings they generate. To examine ACO and market factors associated with superior financial performance in Medicare ACO programs. We obtained financial performance data from the Centers for Medicare and Medicaid Services (CMS); we derived market-level characteristics from Medicare claims; and we collected ACO characteristics from the National Survey of ACOs for 215 ACOs. We examined the association between ACO financial performance and ACO provider composition, leadership structure, beneficiary characteristics, risk bearing experience, quality and process improvement capabilities, physician performance management, market competition, CMS-assigned financial benchmark, and ACO contract start date. We examined two outcomes from Medicare ACOs' first performance year: savings per Medicare beneficiary and earning shared savings payments (a dichotomous variable). When modeling the ACO ability to save and earn shared savings payments, we estimated positive regression coefficients for a greater proportion of primary care providers in the ACO, more practicing physicians on the governing board, physician leadership, active engagement in reducing hospital re-admissions, a greater proportion of disabled Medicare beneficiaries assigned to the ACO, financial incentives offered to physicians, a larger financial benchmark, and greater ACO market penetration. No characteristic of organizational structure was significantly associated with both outcomes of savings per beneficiary and likelihood of achieving shared savings. ACO prior experience with risk-bearing contracts was positively correlated with savings and significantly increased the likelihood of receiving shared savings payments. 
In the first year, performance is quite heterogeneous, yet organizational structure does not consistently predict performance. Organizations with large financial benchmarks at baseline have greater opportunities to achieve savings. Findings on prior risk bearing suggest that ACOs learn over time under risk-bearing contracts. Given the lack of predictive power for organizational characteristics, CMS should continue to encourage diversity in organizational structures for ACO participants, and provide alternative funding and risk bearing mechanisms to continue to allow a diverse group of organizations to participate. Level of Evidence: III. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Determinants of Success in Shared Savings Programs: An Analysis of ACO and Market Characteristics

    PubMed Central

    Colla, Carrie H.; Lewis, Valerie A.

    2016-01-01

    Background Medicare’s Accountable Care Organization (ACO) programs introduced shared savings to traditional Medicare, which allow providers who reduce health care costs for their patients to retain a percentage of the savings they generate. Objective To examine ACO and market factors associated with superior financial performance in Medicare ACO programs. Methods We obtained financial performance data from the Centers for Medicare and Medicaid Services (CMS); we derived market-level characteristics from Medicare claims; and we collected ACO characteristics from the National Survey of ACOs for 215 ACOs. We examined the association between ACO financial performance and ACO provider composition, leadership structure, beneficiary characteristics, risk bearing experience, quality and process improvement capabilities, physician performance management, market competition, CMS-assigned financial benchmark, and ACO contract start date. We examined two outcomes from Medicare ACOs’ first performance year: savings per Medicare beneficiary and earning shared savings payments (a dichotomous variable). Results When modeling the ACO ability to save and earn shared savings payments, we estimated positive regression coefficients for a greater proportion of primary care providers in the ACO, more practicing physicians on the governing board, physician leadership, active engagement in reducing hospital re-admissions, a greater proportion of disabled Medicare beneficiaries assigned to the ACO, financial incentives offered to physicians, a larger financial benchmark, and greater ACO market penetration. No characteristic of organizational structure was significantly associated with both outcomes of savings per beneficiary and likelihood of achieving shared savings. ACO prior experience with risk-bearing contracts was positively correlated with savings and significantly increased the likelihood of receiving shared savings payments. 
Conclusions In the first year performance is quite heterogeneous, yet organizational structure does not consistently predict performance. Organizations with large financial benchmarks at baseline have greater opportunities to achieve savings. Findings on prior risk bearing suggest that ACOs learn over time under risk-bearing contracts. Implications Given the lack of predictive power for organizational characteristics, CMS should continue to encourage diversity in organizational structures for ACO participants, and provide alternative funding and risk bearing mechanisms to continue to allow a diverse group of organizations to participate. Level of evidence III PMID:27687917

  12. Investigating the Effects of Exam Length on Performance and Cognitive Fatigue

    PubMed Central

    Jensen, Jamie L.; Berry, Dane A.; Kummer, Tyler A.

    2013-01-01

    This study examined the effects of exam length on student performance and cognitive fatigue in an undergraduate biology classroom. Exams tested higher order thinking skills. To test our hypothesis, we administered standard- and extended-length high-level exams to two populations of non-majors biology students. We gathered exam performance data between conditions as well as performance on the first and second half of exams within conditions. We showed that lengthier exams led to better performance on assessment items shared between conditions, possibly lending support to the spreading activation theory. It also led to greater performance on the final exam, lending support to the testing effect in creative problem solving. Lengthier exams did not result in lower performance due to fatiguing conditions, although students perceived subjective fatigue. Implications of these findings are discussed with respect to assessment practices. PMID:23950918

  13. Advanced Performance Modeling with Combined Passive and Active Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dovrolis, Constantine; Sim, Alex

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks, as well as in sharing resources with predictable performance for scientific collaborations, especially in data-intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.
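
    As an illustration only (not the APM project's actual model), one simple way to blend cheap passive history with sparse but accurate active probes is an exponentially weighted moving average corrected by the probe mean. All names, weights, and the blending rule below are hypothetical:

```python
def predict_throughput(history, probes=(), alpha=0.3, probe_weight=0.5):
    """Blend passive log measurements with sparse active probes.

    history: past throughput samples from activity logs (passive, cheap)
    probes:  recent active-probe measurements (accurate, but add load)
    Returns a blended throughput estimate.
    """
    est = None
    for x in history:
        # Exponentially weighted moving average over the passive history.
        est = x if est is None else alpha * x + (1 - alpha) * est
    if est is None:
        raise ValueError("no passive history available")
    if probes:
        # Pull the estimate toward the (sparser, more trusted) probe mean.
        probe_mean = sum(probes) / len(probes)
        est = probe_weight * probe_mean + (1 - probe_weight) * est
    return est
```

    The design mirrors the abstract's point: the passive term costs the network nothing, while the active term is applied "judiciously" only when probe data exist.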

  14. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
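
    The core idea of distributable block volumes, splitting a volume into blocks, processing blocks in parallel, and stitching the results back in order, can be sketched as below. This is an illustrative sketch, not the platform's API; a flat voxel list stands in for a 3D volume, and the function names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def process_volume(volume, block_size, op, workers=4):
    """Split a flat voxel array into distributable blocks, apply `op` to
    each block in parallel, and stitch the results back in order."""
    # Size-adaptive decomposition: the last block may be shorter.
    blocks = [volume[i:i + block_size]
              for i in range(0, len(volume), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order, so reassembly is trivial.
        processed = pool.map(op, blocks)
    out = []
    for b in processed:
        out.extend(b)
    return out
```

    Per-block operations keep each worker's memory footprint bounded, which is what makes the same decomposition usable on a multi-core workstation, a cluster node, or a cloud instance.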

  15. Ubiquitous Mobile Knowledge Construction in Collaborative Learning Environments

    PubMed Central

    Baloian, Nelson; Zurita, Gustavo

    2012-01-01

    Knowledge management is a critical activity for any organization. It has been said to be a differentiating factor and an important source of competitiveness if this knowledge is constructed and shared among its members, thus creating a learning organization. Knowledge construction is critical for any collaborative organizational learning environment. Nowadays, workers must perform knowledge creation tasks while in motion, not just in static physical locations; therefore, knowledge construction activities must also be performed in ubiquitous scenarios and supported by mobile and pervasive computational systems. These knowledge creation systems should help people in or outside organizations convert their tacit knowledge into explicit knowledge, thus supporting the knowledge construction process. We therefore consider it highly relevant that undergraduate university students learn about the knowledge construction process supported by mobile and ubiquitous computing, a little-explored issue in this field. This paper presents the design, implementation, and evaluation of a system called MCKC (Mobile Collaborative Knowledge Construction), supporting collaborative face-to-face tacit knowledge construction and sharing in ubiquitous scenarios. The MCKC system can be used by undergraduate students to learn how to construct knowledge, allowing them anytime and anywhere to create, make explicit, and share their knowledge with their co-learners, using visual metaphors, gestures, and sketches to implement the human-computer interface of mobile devices (PDAs). PMID:22969333

  16. Implicit Object Naming in Visual Search: Evidence from Phonological Competition

    PubMed Central

    Walenchok, Stephen C.; Hout, Michael C.; Goldinger, Stephen D.

    2016-01-01

    During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for one or three potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. PMID:27531018

  17. Ubiquitous mobile knowledge construction in collaborative learning environments.

    PubMed

    Baloian, Nelson; Zurita, Gustavo

    2012-01-01

    Knowledge management is a critical activity for any organization. It has been said to be a differentiating factor and an important source of competitiveness if this knowledge is constructed and shared among its members, thus creating a learning organization. Knowledge construction is critical for any collaborative organizational learning environment. Nowadays, workers must perform knowledge creation tasks while in motion, not just in static physical locations; therefore, knowledge construction activities must also be performed in ubiquitous scenarios and supported by mobile and pervasive computational systems. These knowledge creation systems should help people in or outside organizations convert their tacit knowledge into explicit knowledge, thus supporting the knowledge construction process. We therefore consider it highly relevant that undergraduate university students learn about the knowledge construction process supported by mobile and ubiquitous computing, a little-explored issue in this field. This paper presents the design, implementation, and evaluation of a system called MCKC (Mobile Collaborative Knowledge Construction), supporting collaborative face-to-face tacit knowledge construction and sharing in ubiquitous scenarios. The MCKC system can be used by undergraduate students to learn how to construct knowledge, allowing them anytime and anywhere to create, make explicit, and share their knowledge with their co-learners, using visual metaphors, gestures, and sketches to implement the human-computer interface of mobile devices (PDAs).

  18. Not of One Mind: Mental Models of Clinical Practice Guidelines in the Veterans Health Administration

    PubMed Central

    Hysong, Sylvia J; Best, Richard G; Pugh, Jacqueline A; Moore, Frank I

    2005-01-01

    Objective The purpose of this paper is to present differences in mental models of clinical practice guidelines (CPGs) among 15 Veterans Health Administration (VHA) facilities throughout the United States. Data Sources Two hundred and forty-four employees from 15 different VHA facilities across four service networks around the country were invited to participate. Participants were selected from different levels throughout each service setting from primary care personnel to facility leadership. Study Design This qualitative study used purposive sampling, a semistructured interview process for data collection, and grounded theory techniques for analysis. Data Collection A semistructured interview was used to collect information on participants' mental models of CPGs, as well as implementation strategies and barriers in their facility. Findings Analysis of these interviews using grounded theory techniques indicated that there was wide variability in employees' mental models of CPGs. Findings also indicated that high-performing facilities exhibited both (a) a clear, focused shared mental model of guidelines and (b) a tendency to use performance feedback as a learning opportunity, thus suggesting that a shared mental model is a necessary but not sufficient step toward successful guideline implementation. Conclusions We conclude that a clear shared mental model of guidelines, in combination with a learning orientation toward feedback are important components for successful guideline implementation and improved quality of care. PMID:15960693

  19. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  20. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  1. High Blood Pressure (Hypertension)

    MedlinePlus

    Consumer information for women on high blood pressure (hypertension): who is at risk, how high blood pressure is treated, and understanding your blood pressure readings. Also available in Spanish.

  2. Shared and Disorder-Specific Neurocomputational Mechanisms of Decision-Making in Autism Spectrum Disorder and Obsessive-Compulsive Disorder.

    PubMed

    Carlisi, Christina O; Norman, Luke; Murphy, Clodagh M; Christakou, Anastasia; Chantiluke, Kaylita; Giampietro, Vincent; Simmons, Andrew; Brammer, Michael; Murphy, Declan G; Mataix-Cols, David; Rubia, Katya

    2017-12-01

    Autism spectrum disorder (ASD) and obsessive-compulsive disorder (OCD) often share phenotypes of repetitive behaviors, possibly underpinned by abnormal decision-making. To compare neural correlates underlying decision-making between these disorders, brain activation of boys with ASD (N = 24), OCD (N = 20) and typically developing controls (N = 20) during gambling was compared, and computational modeling compared performance. Patients were unimpaired on number of risky decisions, but modeling showed that both patient groups had lower choice consistency and relied less on reinforcement learning compared to controls. ASD individuals had disorder-specific choice perseverance abnormalities compared to OCD individuals. Neurofunctionally, ASD and OCD boys shared dorsolateral/inferior frontal underactivation compared to controls during decision-making. During outcome anticipation, patients shared underactivation compared to controls in lateral inferior/orbitofrontal cortex and ventral striatum. During reward receipt, ASD boys had disorder-specific enhanced activation in inferior frontal/insular regions relative to OCD boys and controls. Results showed that ASD and OCD individuals shared decision-making strategies that differed from controls to achieve comparable performance to controls. Patients showed shared abnormalities in lateral-(orbito)fronto-striatal reward circuitry, but ASD boys had disorder-specific lateral inferior frontal/insular overactivation, suggesting that shared and disorder-specific mechanisms underpin decision-making in these disorders. Findings provide evidence for shared neurobiological substrates that could serve as possible future biomarkers. © The Author 2017. Published by Oxford University Press.

  3. Adapted shared reading at school for minimally verbal students with autism.

    PubMed

    Mucchetti, Charlotte A

    2013-05-01

    Almost nothing is known about the capacity of minimally verbal students with autism to develop literacy skills. Shared reading is a regular practice in early education settings and is widely thought to encourage language and literacy development. There is some evidence that children with severe disabilities can be engaged in adapted shared reading activities. The current study examines the impact of teacher-led adapted shared reading activities on engagement and story comprehension in minimally verbal 5-6-year-old children with autism using a multiple baseline/alternating treatment design. Four students and three teachers participated. Teachers conducted adapted shared reading activities with modified books (visual supports, three-dimensional objects, simplified text) and used specific strategies for increasing student engagement. Student performance during adapted activities was compared to performance during standard shared reading sessions. All four students showed increased story comprehension and engagement during adapted shared reading. Average percentage of session engaged was 87%-100% during adapted sessions, compared with 41%-52% during baseline. Average number of correct responses to story comprehension questions was 4.2-4.8 out of 6 during adapted sessions compared with 1.2-2 during baseline. Visual supports, tactile objects, and specific teaching strategies offer ways for minimally verbal students to meaningfully participate in literacy activities. Future research should investigate adapted shared reading activities implemented classroom-wide as well as joint engagement, language, and literacy outcomes after using such activities over time.

  4. Which comes first: employee attitudes or organizational financial and market performance?

    PubMed

    Schneider, Benjamin; Hanges, Paul J; Smith, D Brent; Salvaggio, Amy Nicole

    2003-10-01

    Employee attitude data from 35 companies over 8 years were analyzed at the organizational level of analysis against financial (return on assets; ROA) and market performance (earnings per share; EPS) data using lagged analyses permitting exploration of priority in likely causal ordering. Analyses revealed statistically significant and stable relationships across various time lags for 3 of 7 scales. Overall Job Satisfaction and Satisfaction With Security were predicted by ROA and EPS more strongly than the reverse (although some of the reverse relationships were also significant); Satisfaction With Pay suggested a more reciprocal relationship with ROA and EPS. The discussion of results provides a preliminary framework for understanding issues surrounding employee attitudes, high-performance work practices, and organizational financial and market performance.
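
    A lagged analysis of the kind described, correlating attitudes at year t with performance at year t + lag and the reverse, can be illustrated with a plain Pearson correlation at a chosen lag. This sketch is for illustration only and is not the authors' statistical procedure:

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag].

    A positive lag asks whether x leads y; running it again with the
    series swapped asks whether y leads x (the "which comes first" test).
    """
    if lag > 0:
        a, b = x[:-lag], y[lag:]
    elif lag < 0:
        a, b = x[-lag:], y[:lag]
    else:
        a, b = x, y
    n = len(a)
    mx, my = sum(a) / n, sum(b) / n
    cov = sum((ai - mx) * (bi - my) for ai, bi in zip(a, b))
    sx = sum((ai - mx) ** 2 for ai in a) ** 0.5
    sy = sum((bi - my) ** 2 for bi in b) ** 0.5
    return cov / (sx * sy)
```

    Comparing the two directions across several lags is what lets the authors argue that ROA/EPS predict satisfaction more strongly than the reverse.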

  5. Coordinating Cognition: The Costs and Benefits of Shared Gaze during Collaborative Search

    ERIC Educational Resources Information Center

    Brennan, Susan E.; Chen, Xin; Dickinson, Christopher A.; Neider, Mark B.; Zelinsky, Gregory J.

    2008-01-01

    Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one…

  6. The Process of Sharing Stories with Young People

    ERIC Educational Resources Information Center

    Sturm, Brian W.

    2008-01-01

    Storytelling is a wonderful way to share the rich emotions of life. It allows adults to connect with children in personal and powerful ways, building a sense of trust and community between the performer and the audience. In this article, the author offers an approach to learning and sharing stories. The author also provides twenty-two…

  7. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
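
    The scalability analysis cited above rests on Amdahl's law: with parallelizable fraction p and n cores, the speedup is S(n) = 1 / ((1 - p) + p/n), which caps at 1/(1 - p) no matter how many cores are added. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: S(n) = 1 / ((1 - p) + p / n).

    parallel_fraction: fraction p of the work that parallelizes (0..1)
    n_cores:           number of cores n
    """
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)
```

    For example, a workload that is 95% parallel can never exceed a 20x speedup, which is why the 12-fold gain on 12 cores reported above indicates a highly parallel implementation.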

  8. Simulation modelling of central order processing system under resource sharing strategy in demand-driven garment supply chains

    NASA Astrophysics Data System (ADS)

    Ma, K.; Thomassey, S.; Zeng, X.

    2017-10-01

In this paper, we proposed a central order processing system under a resource sharing strategy for demand-driven garment supply chains to improve supply chain performance. We examined this system using simulation. Simulation results showed that significant improvements in various performance indicators were obtained in the new collaborative model with the proposed system.

  9. Should Earnings per Share (EPS) Be Taught as a Means of Comparing Intercompany Performance?

    ERIC Educational Resources Information Center

    Jordan, Charles E.; Clark, Stanley J.; Smith, W. Robert

    2007-01-01

    Accounting standards state that the purpose of presenting earnings per share (EPS) is to provide financial statement users with information on the performance of a single entity. Yet, several textbook authors go further to state that EPS can be used to make comparisons among firms. In this article, the authors show that although EPS comparisons…

  10. An Inquiry into the Relationship between Projected Changes in Earnings per Share and Subsequent Security Performance.

    ERIC Educational Resources Information Center

    Barbee, William C., Jr.

    The purpose of this study was to examine the variable of estimated earnings in order to determine how forecasts might be utilized to develop a securities portfolio strategy. The hypothesis stated that there is an inverse relationship between projected change in earnings per share and security performance. Ninety-one New York Stock Exchange…

  11. Structural and Psychological Empowerment Climates, Performance, and the Moderating Role of Shared Felt Accountability: A Managerial Perspective

    ERIC Educational Resources Information Center

    Wallace, J. Craig; Johnson, Paul D.; Mathe, Kimberly; Paul, Jeff

    2011-01-01

    The authors proposed and tested a model in which data were collected from managers (n = 539) at 116 corporate-owned quick service restaurants to assess the structural and psychological empowerment process as moderated by shared-felt accountability on indices of performance from a managerial perspective. The authors found that empowering leadership…

  12. Shared Mental Models on the Performance of e-Learning Content Development Teams

    ERIC Educational Resources Information Center

    Jo, Il-Hyun

    2012-01-01

    The primary purpose of the study was to investigate team-based e-Learning content development projects from the perspective of the shared mental model (SMM) theory. The researcher conducted a study of 79 e-Learning content development teams in Korea to examine the relationship between taskwork and teamwork SMMs and the performance of the teams.…

  13. Lessons Learned for Improving Spacecraft Ground Operations

    NASA Technical Reports Server (NTRS)

    Bell, Michael; Henderson, Gena; Stambolian, Damon

    2013-01-01

NASA policy requires each Program or Project to develop a plan for how it will address lessons learned. Projects have the flexibility to determine how best to promote and implement lessons learned; a large project might budget for a lessons learned position to coordinate elicitation, documentation, and archival of the project lessons. The lessons learned process crosses all NASA Centers and includes the contractor community. The Office of the Chief Engineer at NASA Headquarters in Washington, D.C., is the overall process owner, and field locations manage the local implementation. One tool used to transfer knowledge between programs and projects is the Lessons Learned Information System (LLIS). Most lessons come from NASA in partnership with support contractors, and a search for lessons that might affect a new design is often performed by a contractor team member. Knowledge is not found with only one person, one project team, or one organization; sometimes another project team or person knows something that can help your project or your task. Knowledge sharing is an everyday activity at the Kennedy Space Center through storytelling, Kennedy Engineering Academy presentations, and searches of the Lessons Learned Information System. Project teams search the lessons repository to ensure the best possible results are delivered, and although ideas from the past are not always directly applicable, they usually spark new ideas and innovations. Teams have a great responsibility to collect and disseminate these lessons so that they are shared with future generations of space systems designers. Leaders should set a goal to host a set number of lessons learned events each year and do more to promote multiple methods of lessons learned activities. High-performing employees are expected to share their lessons, yet formal knowledge sharing presentations are not the norm for many employees.

  14. Multivariable confounding adjustment in distributed data networks without sharing of patient-level data.

    PubMed

    Toh, Sengwee; Reichman, Marsha E; Houstoun, Monika; Ding, Xiao; Fireman, Bruce H; Gravel, Eric; Levenson, Mark; Li, Lingling; Moyneur, Erick; Shoaibi, Azadeh; Zornberg, Gwen; Hennessy, Sean

    2013-11-01

    It is increasingly necessary to analyze data from multiple sources when conducting public health safety surveillance or comparative effectiveness research. However, security, privacy, proprietary, and legal concerns often reduce data holders' willingness to share highly granular information. We describe and compare two approaches that do not require sharing of patient-level information to adjust for confounding in multi-site studies. We estimated the risks of angioedema associated with angiotensin-converting enzyme inhibitors (ACEIs), angiotensin receptor blockers (ARBs), and aliskiren in comparison with beta-blockers within Mini-Sentinel, which has created a distributed data system of 18 health plans. To obtain the adjusted hazard ratios (HRs) and 95% confidence intervals (CIs), we performed (i) a propensity score-stratified case-centered logistic regression analysis, a method identical to a stratified Cox regression analysis but needing only aggregated risk set data, and (ii) an inverse variance-weighted meta-analysis, which requires only the site-specific HR and variance. We also performed simulations to further compare the two methods. Compared with beta-blockers, the adjusted HR was 3.04 (95% CI: 2.81, 3.27) for ACEIs, 1.16 (1.00, 1.34) for ARBs, and 2.85 (1.34, 6.04) for aliskiren in the case-centered analysis. The corresponding HRs were 2.98 (2.76, 3.21), 1.15 (1.00, 1.33), and 2.86 (1.35, 6.04) in the meta-analysis. Simulations suggested that the two methods may produce different results under certain analytic scenarios. The case-centered analysis and the meta-analysis produced similar results without the need to share patient-level data across sites in our empirical study, but may provide different results in other study settings. Copyright © 2013 John Wiley & Sons, Ltd.
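The inverse variance-weighted meta-analysis described above pools site-specific hazard ratios using only each site's HR and variance, with weights equal to the reciprocal of the variance of the log HR. A minimal fixed-effect sketch of that pooling step (the function name and the sample inputs below are illustrative, not Mini-Sentinel data):

```python
import math

def pool_hazard_ratios(site_results):
    """Fixed-effect, inverse variance-weighted pooling of site-specific
    hazard ratios. Each input is a tuple (HR, lower 95% CI, upper 95% CI)."""
    weighted_sum, weight_total = 0.0, 0.0
    for hr, lo, hi in site_results:
        log_hr = math.log(hr)
        # Recover the standard error of log(HR) from the 95% CI width.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        weighted_sum += w * log_hr
        weight_total += w
    pooled = weighted_sum / weight_total
    se_pooled = math.sqrt(1.0 / weight_total)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))
```

Because each site contributes only (HR, CI), no patient-level records ever leave the site, which is the point of the approach compared in the abstract.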

  15. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
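The Hockney-style model referenced above characterizes the cost of moving data as a fixed startup latency plus a size-dependent term set by the asymptotic bandwidth. A minimal sketch of that linear cost model (parameter names are illustrative, not taken from the paper):

```python
def transfer_time(message_bytes, latency_s, bandwidth_bytes_per_s):
    """Hockney's linear communication-cost model:
    time = startup latency + message size / asymptotic bandwidth."""
    return latency_s + message_bytes / bandwidth_bytes_per_s

def half_performance_size(latency_s, bandwidth_bytes_per_s):
    """Message size at which half the asymptotic bandwidth is achieved,
    i.e. where the latency term equals the transfer term."""
    return latency_s * bandwidth_bytes_per_s
```

Small messages are dominated by latency and large messages by bandwidth; the half-performance size marks the crossover, which is why algorithms that aggregate many small transfers into fewer large ones perform better under this model.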

  16. The impact of the Balanced Budget Act on the utilization and financial condition of children's services in California hospitals.

    PubMed

    McCue, Michael J

    2002-01-01

    The objective of this study was to evaluate the utilization and financial performance of children's services after the Balanced Budget Act of 1997. The author analyzed these performance factors by hospital ownership, HMO penetration, and disproportionate share hospitals. Using data from California hospitals and conducting an analysis from 1997 to 1999, the author found that public hospitals were able to increase their profits from pediatric and neonatal intensive care services. The study also revealed that DSH hospitals located in high HMO penetration markets reduced their operating losses in nursery and pediatric services.

  17. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  18. The relationship between human resource investments and organizational performance: a firm-level examination of equilibrium theory.

    PubMed

    Subramony, Mahesh; Krause, Nicole; Norton, Jacqueline; Burns, Gary N

    2008-07-01

    It is commonly believed that human resource investments can yield positive performance-related outcomes for organizations. Utilizing the theory of organizational equilibrium (H. A. Simon, D. W. Smithburg, & V. A. Thompson, 1950; J. G. March & H. A. Simon, 1958), the authors proposed that organizational inducements in the form of competitive pay will lead to 2 firm-level performance outcomes--labor productivity and customer satisfaction--and that financially successful organizations would be more likely to provide these inducements to their employees. To test their hypotheses, the authors gathered employee-survey and objective performance data from a sample of 126 large publicly traded U.S. organizations over a period of 3 years. Results indicated that (a) firm-level financial performance (net income) predicted employees' shared perceptions of competitive pay, (b) shared pay perceptions predicted future labor productivity, and (c) the relationship between shared pay perceptions and customer satisfaction was fully mediated by employee morale.

  19. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high-level languages and, better still, automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation, and the pros and cons of the different approaches, are discussed along with our experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of these techniques to realistic aerospace applications is presented.

  20. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)

    1999-01-01

Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but this task can be simplified by high-level languages and, better still, automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation, and the pros and cons of the different approaches, are discussed along with our experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of these techniques to realistic aerospace applications is presented.

  1. 76 FR 9065 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Approving a Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-16

    ...-Regulatory Organizations; NYSE Arca, Inc.; Order Approving a Proposed Rule Change To List and Trade Shares of...,\\2\\ a proposed rule change to list and trade shares (``Shares'') of the SPDR Nuveen S&P High Yield... shares (``Shares'') under NYSE Arca Equities Rule 5.2(j)(3), Commentary .02, which governs the listing...

  2. Highly effective cystic fibrosis clinical research teams: critical success factors.

    PubMed

    Retsch-Bogart, George Z; Van Dalfsen, Jill M; Marshall, Bruce C; George, Cynthia; Pilewski, Joseph M; Nelson, Eugene C; Goss, Christopher H; Ramsey, Bonnie W

    2014-08-01

    Bringing new therapies to patients with rare diseases depends in part on optimizing clinical trial conduct through efficient study start-up processes and rapid enrollment. Suboptimal execution of clinical trials in academic medical centers not only results in high cost to institutions and sponsors, but also delays the availability of new therapies. Addressing the factors that contribute to poor outcomes requires novel, systematic approaches tailored to the institution and disease under study. To use clinical trial performance metrics data analysis to select high-performing cystic fibrosis (CF) clinical research teams and then identify factors contributing to their success. Mixed-methods research, including semi-structured qualitative interviews of high-performing research teams. CF research teams at nine clinical centers from the CF Foundation Therapeutics Development Network. Survey of site characteristics, direct observation of team meetings and facilities, and semi-structured interviews with clinical research team members and institutional program managers and leaders in clinical research. Critical success factors noted at all nine high-performing centers were: 1) strong leadership, 2) established and effective communication within the research team and with the clinical care team, and 3) adequate staff. Other frequent characteristics included a mature culture of research, customer service orientation in interactions with study participants, shared efficient processes, continuous process improvement activities, and a businesslike approach to clinical research. Clinical research metrics allowed identification of high-performing clinical research teams. Site visits identified several critical factors leading to highly successful teams that may help other clinical research teams improve clinical trial performance.

  3. Value-based cost sharing in the United States and elsewhere can increase patients' use of high-value goods and services.

    PubMed

    Thomson, Sarah; Schang, Laura; Chernew, Michael E

    2013-04-01

    This article reviews efforts in the United States and several other member countries of the Organization for Economic Cooperation and Development to encourage patients, through cost sharing, to use goods such as medications, services, and providers that offer better value than other options--an approach known as value-based cost sharing. Among the countries we reviewed, we found that value-based approaches were most commonly applied to drug cost sharing. A few countries, including the United States, employed financial incentives, such as lower copayments, to encourage use of preferred providers or preventive services. Evidence suggests that these efforts can increase patients' use of high-value services--although they may also be associated with high administrative costs and could exacerbate health inequalities among various groups. With careful design, implementation, and evaluation, value-based cost sharing can be an important tool for aligning patient and provider incentives to pursue high-value care.

  4. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.

  5. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.

  6. Verbal learning in schizopsychotic outpatients and healthy volunteers as a function of cognitive performance levels.

    PubMed

    Karilampi, Ulla; Helldin, Lars; Hjärthag, Fredrik; Norlander, Torsten; Archer, Trevor

    2007-02-01

The aim was to analyze and compare neurocognitive test profiles related to different levels of verbal learning performance among schizopsychotic patients and healthy volunteers. A single-center patient cohort of 196 participants was compared with an equal-sized volunteer group, and three cognitive subgroups were formed on the basis of shared verbal learning performance. Of the patients, 43.9% had normal learning ability. Despite this, all patients underperformed the volunteers on all subtests with the exception of working memory and, for those with high learning ability, even verbal facility. All patients also presented equally poor visuomotor processing speed/efficacy. A global neurocognitive retardation of speed-related processing in schizophrenia is suggested.

  7. Shared susceptibility loci at 2q33 region for lung and esophageal cancers in high-incidence areas of esophageal cancer in northern China

    PubMed Central

    Song, Xin; Hu, Shou Jia; Lv, Shuang; Cheng, Rang; Zhang, Tang Juan; Han, Xue Na; Ren, Jing Li; Qi, Yi Jun

    2017-01-01

Background Cancers of the lung and esophagus are the leading causes of cancer-related deaths in China and share many similarities in terms of histological type, risk factors and genetic variants. Recent genome-wide association studies (GWAS) in Chinese esophageal cancer patients have identified six high-risk candidate single nucleotide polymorphisms (SNPs). Thus, the present study aimed to determine the risk of these SNPs predisposing to lung cancer in the Chinese population. Methods A total of 1170 lung cancer patients and 1530 normal subjects were enrolled in this study from high-incidence areas for esophageal cancer in Henan, northern China. Five milliliters of blood were collected from all subjects for genotyping. Genotyping of 20 high-risk SNP loci identified from GWAS on esophageal, lung and gastric cancers was performed using TaqMan allelic discrimination assays. Polymorphisms were examined for deviation from Hardy-Weinberg equilibrium (HWE) using the chi-square test. Bonferroni correction was applied when testing the association of the 20 SNPs with lung cancer risk. Pearson's chi-square test was used to compare the distributions of gender, TNM stage, histopathological type, smoking and family history across lung cancer susceptibility genotypes. Kaplan-Meier and Cox regression analyses were carried out to evaluate the associations between genetic variants and overall survival. Results Four of the 20 SNPs identified as high-risk SNPs in Chinese esophageal cancer showed increased risk for Chinese lung cancer: rs3769823 (OR = 1.26; 95% CI = 1.107–1.509; P = 0.02), rs10931936 (OR = 1.283; 95% CI = 1.100–1.495; P = 0.04), rs2244438 (OR = 1.294; 95% CI = 1.098–1.525; P = 0.04) and rs13016963 (OR = 1.268; 95% CI = 1.089–1.447; P = 0.04). All of these SNPs are located in the 2q33 region harboring the genes CASP8, ALS2CR12 and TRAK2. However, none of these susceptibility SNPs was significantly associated with gender, TNM stage, histopathological type, smoking, family history or overall survival. Conclusions The present study identified four high-risk SNPs at the 2q33 locus for Chinese lung cancer and demonstrated shared susceptibility loci at the 2q33 region for Chinese lung and esophageal cancers. PMID:28542283

  8. Shared susceptibility loci at 2q33 region for lung and esophageal cancers in high-incidence areas of esophageal cancer in northern China.

    PubMed

    Zhao, Xue Ke; Mao, Yi Min; Meng, Hui; Song, Xin; Hu, Shou Jia; Lv, Shuang; Cheng, Rang; Zhang, Tang Juan; Han, Xue Na; Ren, Jing Li; Qi, Yi Jun; Wang, Li Dong

    2017-01-01

Cancers of the lung and esophagus are the leading causes of cancer-related deaths in China and share many similarities in terms of histological type, risk factors and genetic variants. Recent genome-wide association studies (GWAS) in Chinese esophageal cancer patients have identified six high-risk candidate single nucleotide polymorphisms (SNPs). Thus, the present study aimed to determine the risk of these SNPs predisposing to lung cancer in the Chinese population. A total of 1170 lung cancer patients and 1530 normal subjects were enrolled in this study from high-incidence areas for esophageal cancer in Henan, northern China. Five milliliters of blood were collected from all subjects for genotyping. Genotyping of 20 high-risk SNP loci identified from GWAS on esophageal, lung and gastric cancers was performed using TaqMan allelic discrimination assays. Polymorphisms were examined for deviation from Hardy-Weinberg equilibrium (HWE) using the chi-square test. Bonferroni correction was applied when testing the association of the 20 SNPs with lung cancer risk. Pearson's chi-square test was used to compare the distributions of gender, TNM stage, histopathological type, smoking and family history across lung cancer susceptibility genotypes. Kaplan-Meier and Cox regression analyses were carried out to evaluate the associations between genetic variants and overall survival. Four of the 20 SNPs identified as high-risk SNPs in Chinese esophageal cancer showed increased risk for Chinese lung cancer: rs3769823 (OR = 1.26; 95% CI = 1.107-1.509; P = 0.02), rs10931936 (OR = 1.283; 95% CI = 1.100-1.495; P = 0.04), rs2244438 (OR = 1.294; 95% CI = 1.098-1.525; P = 0.04) and rs13016963 (OR = 1.268; 95% CI = 1.089-1.447; P = 0.04). All of these SNPs are located in the 2q33 region harboring the genes CASP8, ALS2CR12 and TRAK2. However, none of these susceptibility SNPs was significantly associated with gender, TNM stage, histopathological type, smoking, family history or overall survival. The present study identified four high-risk SNPs at the 2q33 locus for Chinese lung cancer and demonstrated shared susceptibility loci at the 2q33 region for Chinese lung and esophageal cancers.

  9. Describing functional requirements for knowledge sharing communities

    NASA Technical Reports Server (NTRS)

    Garrett, Sandra; Caldwell, Barrett

    2002-01-01

    Human collaboration in distributed knowledge sharing groups depends on the functionality of information and communication technologies (ICT) to support performance. Since many of these dynamic environments are constrained by time limits, knowledge must be shared efficiently by adapting the level of information detail to the specific situation. This paper focuses on the process of knowledge and context sharing with and without mediation by ICT, as well as issues to be resolved when determining appropriate ICT channels. Both technology-rich and non-technology examples are discussed.

  10. Lightweight Metal Matrix Composite Segmented for Manufacturing High-Precision Mirrors

    NASA Technical Reports Server (NTRS)

    Vudler, Vladimir

    2012-01-01

    High-precision mirrors for space applications are traditionally manufactured from one piece of material, such as lightweight glass sandwich or beryllium. The purpose of this project was to develop and test the feasibility of a manufacturing process capable of producing mirrors out of welded segments of AlBeMet(Registered Trademark) (AM162H). AlBeMet(Registered Trademark) is a HIP'd (hot isostatic pressed) material containing approximately 62% beryllium and 38% aluminum. As a result, AlBeMet shares many of the benefits of both of those materials for use in high performance mirrors, while minimizing many of their weaknesses.

  11. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
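Reconciliation efficiency in this setting is commonly defined as the ratio of the information rate actually extracted to the Gaussian channel capacity at the working signal-to-noise ratio. A small sketch of that bookkeeping (illustrative only; the paper's slice reconciliation layers multilevel coding and multistage decoding on top of this definition, and the function names are ours):

```python
import math

def gaussian_capacity(snr):
    """Shannon capacity of an AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def reconciliation_efficiency(extracted_rate, snr):
    """beta = R / C(snr): the fraction of the available mutual
    information actually recovered by the reconciliation code."""
    return extracted_rate / gaussian_capacity(snr)
```

At SNR = 1 the capacity is 0.5 bits per channel use, so the abstract's greater-than-95% efficiency corresponds to extracting more than 0.475 bits per channel use at that operating point.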

  12. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provides flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can serve as the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing on the host machine.

  13. Shared and differentiated motor skill impairments in children with dyslexia and/or attention deficit disorder: From simple to complex sequential coordination

    PubMed Central

    Morin-Moncet, Olivier; Bélanger, Anne-Marie; Beauchamp, Miriam H.; Leonard, Gabriel

    2017-01-01

Dyslexia and Attention deficit disorder (AD) are prevalent neurodevelopmental conditions in children and adolescents. They have high comorbidity rates and have both been associated with motor difficulties. Little is known, however, about what is shared or differentiated in dyslexia and AD in terms of motor abilities. Even when motor skill problems are identified, few studies have used the same measurement tools, resulting in inconsistent findings. The present study assessed increasingly complex gross motor skills in children and adolescents with dyslexia, AD, and with both dyslexia and AD. Our results suggest normal performance on simple motor-speed tests, whereas all three groups share a common impairment on unimanual and bimanual sequential motor tasks. Children in these groups generally improve with practice to the same level as normal subjects, though they make more errors. In addition, children with AD are the most impaired on complex bimanual out-of-phase movements and manual dexterity. These latter findings are examined in light of the Multiple Deficit Model. PMID:28542319

  14. Simulating cloud environment for HIS backup using secret sharing.

    PubMed

    Kuroda, Tomohiro; Kimura, Eizen; Matsumura, Yasushi; Yamashita, Yoshinori; Hiramatsu, Haruhiko; Kume, Naoto

    2013-01-01

    In the face of a disaster, hospitals are expected to be able to continue providing efficient and high-quality care to patients. It is therefore crucial for hospitals to develop business continuity plans (BCPs) that identify their vulnerabilities and prepare procedures to overcome them. A key aspect of most hospitals' BCPs is creating backups of the hospital information system (HIS) data at multiple remote sites. However, the need to keep the data confidential dramatically increases the costs of making such backups. Secret sharing is a method of splitting an original secret message so that individual pieces are meaningless, but putting a sufficient number of pieces together reveals the original message. It allows creation of pseudo-redundant arrays of independent disks for privacy-sensitive data over the Internet. We developed a secret sharing environment for StarBED, a large-scale network experiment environment, and evaluated its potential and performance during disaster recovery. Simulation results showed that the entire main HIS database of Kyoto University Hospital could be retrieved within three days even if one of the distributed storage systems crashed during a disaster.
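
The abstract does not name the exact construction, but Shamir's threshold scheme is the standard way to realize secret sharing; a minimal sketch of a (k, n) split and reconstruction, assuming a prime-field implementation (the field prime and function names here are illustrative, not from the paper):

```python
import random

PRIME = 2**127 - 1  # Mersenne prime defining the field for polynomial arithmetic

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(1, PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any k shares recover the secret, while k-1 or fewer reveal nothing about it; this is what makes pseudo-redundant backups over mutually untrusted remote sites possible.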

  15. 14 CFR 1274.801 - Adjustments to performance costs.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... NASA's initial cost share or funding levels, detailed cost analysis techniques may be applied, which... shall continue to maintain the share ratio requirements (normally 50/50) stated in § 1274.204(b). ...

  16. A Trusted Platform for Transportation Data Sharing & Stakeholder Engagement

    DOT National Transportation Integrated Search

    2018-03-01

    Information sharing to support critical transportation systems presents numerous challenges given the diversity of information sources and visual representations typically used to portray system performance and characteristics12. This research projec...

  17. FELIN: tailored optronics and systems solutions for dismounted combat

    NASA Astrophysics Data System (ADS)

    Milcent, A. M.

    2009-05-01

    The FELIN French modernization program for dismounted combat provides the Armies with info-centric systems that dramatically enhance the performance of the soldier and the platoon. Sagem now offers a portfolio of equipment providing C4I, digital data and voice communication, and enhanced vision for day and night operations through compact, high-performance electro-optics. The FELIN system provides the infantryman with a high-tech, integrated, and modular system that significantly increases detection, recognition, and identification capabilities, situation awareness, and information sharing in any dismounted close combat situation. Among the key technologies used in this system, infrared and intensified vision provide a significant improvement in capability, observation performance, and protection of ground soldiers. This paper presents the developed equipment in detail, with an emphasis on lessons learned from technical and operational feedback from dismounted close combat field tests.

  18. LTR-Retrotransposons from Bdelloid Rotifers Capture Additional ORFs Shared between Highly Diverse Retroelement Types.

    PubMed

    Rodriguez, Fernando; Kenefick, Aubrey W; Arkhipova, Irina R

    2017-04-11

    Rotifers of the class Bdelloidea, microscopic freshwater invertebrates, possess a highly diversified repertoire of transposon families, which, however, occupy less than 4% of genomic DNA in the sequenced representative Adineta vaga. We performed a comprehensive analysis of A. vaga retroelements, and found that bdelloid long terminal repeat (LTR) retrotransposons, in addition to conserved open reading frame (ORF) 1 and ORF2 corresponding to gag and pol genes, code for an unusually high variety of ORF3 sequences. Retrovirus-like LTR families in A. vaga belong to four major lineages, three of which are rotifer-specific and encode a dUTPase domain. However, only one lineage contains a canonical env-like fusion glycoprotein acquired from paramyxoviruses (non-segmented negative-strand RNA viruses), although smaller ORFs with transmembrane domains may perform similar roles. A different ORF3 type encodes a GDSL esterase/lipase, which was previously identified as ORF1 in several clades of non-LTR retrotransposons, and implicated in membrane targeting. Yet another ORF3 type appears in unrelated LTR-retrotransposon lineages, and displays strong homology to DEDDy-type exonucleases involved in 3'-end processing of RNA and single-stranded DNA. Unexpectedly, each of the enzymatic ORF3s is also associated with different subsets of Penelope-like Athena retroelement families. The unusual association of the same ORF types with retroelements from different classes reflects their modular structure with a high degree of flexibility, and points to gene sharing between different groups of retroelements.

  19. Video streaming technologies using ActiveX and LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2015-06-01

    The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data, being incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few, if any, capabilities for video streaming, and the methods it does offer are usually not high performance; however, it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming along with LabVIEW, and to capture the data it provides for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.

  20. Generalist genes and learning disabilities: a multivariate genetic analysis of low performance in reading, mathematics, language and general cognitive ability in a sample of 8000 12-year-old twins.

    PubMed

    Haworth, Claire M A; Kovas, Yulia; Harlaar, Nicole; Hayiou-Thomas, Marianna E; Petrill, Stephen A; Dale, Philip S; Plomin, Robert

    2009-10-01

    Our previous investigation found that the same genes influence poor reading and mathematics performance in 10-year-olds. Here we assess whether this finding extends to language and general cognitive disabilities, as well as replicating the earlier finding for reading and mathematics in an older and larger sample. Using a representative sample of 4000 pairs of 12-year-old twins from the UK Twins Early Development Study, we investigated the genetic and environmental overlap between internet-based batteries of language and general cognitive ability tests in addition to tests of reading and mathematics for the bottom 15% of the distribution using DeFries-Fulker extremes analysis. We compared these results to those for the entire distribution. All four traits were highly correlated at the low extreme (average group phenotypic correlation = .58) and in the entire distribution (average phenotypic correlation = .59). Genetic correlations for the low extreme were consistently high (average = .67), and non-shared environmental correlations were modest (average = .23). These results are similar to those seen across the entire distribution (.68 and .23, respectively). The 'Generalist Genes Hypothesis' holds for language and general cognitive disabilities, as well as reading and mathematics disabilities. Genetic correlations were high, indicating a strong degree of overlap in genetic influences on these diverse traits. In contrast, non-shared environmental influences were largely specific to each trait, causing phenotypic differentiation of traits.

  1. Shared Solar. Current Landscape, Market Potential, and the Impact of Federal Securities Regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, David; Brockway, Anna M.; Ulrich, Elaine

    2015-04-07

    This report provides a high-level overview of the current U.S. shared solar landscape, the impact that a given shared solar program’s structure has on requiring federal securities oversight, as well as an estimate of market potential for U.S. shared solar deployment.

  3. Shared vision promotes family firm performance.

    PubMed

    Neff, John E

    2015-01-01

    A clear picture of the influential drivers of private family firm performance has proven to be an elusive target. The unique characteristics of private family owned firms necessitate a broader, non-financial approach to reveal firm performance drivers. This research study sought to specify and evaluate the themes that distinguish successful family firms from less successful family firms. In addition, this study explored the possibility that these themes collectively form an effective organizational culture that improves longer-term firm performance. At an organizational level of analysis, research findings identified four significant variables: Shared Vision (PNS), Role Clarity (RCL), Confidence in Management (CON), and Professional Networking (OLN) that positively impacted family firm financial performance. Shared Vision exhibited the strongest positive influence among the significant factors. In addition, Family Functionality (APGAR), the functional integrity of the family itself, exhibited a significant supporting role. Taken together, the variables collectively represent an effective family business culture (EFBC) that positively impacted the long-term financial sustainability of family owned firms. The index of effective family business culture also exhibited potential as a predictive non-financial model of family firm performance.

  5. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    DTIC Science & Technology

    2015-09-28

    the performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication...whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a...efficiently. SUBJECT TERMS: High-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java

  6. Shortcomings in Information Sharing Facilitates Transnational Organized Crime

    DTIC Science & Technology

    2017-06-09

    A thesis presented to the Faculty of the U.S. Army Command and General Staff College, ATTN: ATZL-SWD-GD, Fort Leavenworth, KS 66027.

  7. He Asked Me What!?--Using Shared Online Accounts as Training Tools for Distance Learning Librarians

    ERIC Educational Resources Information Center

    Robinson, Kelly; Casey, Anne Marie; Citro, Kathleen

    2017-01-01

    This study explores the idea of creating a knowledge base from shared online accounts to use in training librarians who perform distance reference services. Through a survey, follow-up interviews and a case study, the investigators explored current and potential use of shared online accounts as training tools. This study revealed that the…

  8. The Feasibility of Job Sharing by Public Employees in Hawaii. Some Preliminary Considerations.

    ERIC Educational Resources Information Center

    Nishimura, Charles H.; And Others

    A two-part study was conducted to determine the feasibility of implementing job-sharing in state and county governments in Hawaii. First, a literature review was performed to obtain an overview of the job-sharing concept and of the results of its implementation in other state and local governments and businesses. The legislation relating to…

  9. A Foundation for Understanding Knowledge Sharing: Organizational Culture, Informal Workplace Learning, Performance Support, and Knowledge Management

    ERIC Educational Resources Information Center

    Caruso, Shirley J.

    2017-01-01

    This paper serves as an exploration into some of the ways in which organizations can promote, capture, share, and manage the valuable knowledge of their employees. The problem is that employees typically do not share valuable information, skills, or expertise with other employees or with the entire organization. The author uses research as well as…

  10. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

    PubMed

    Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L

    2016-03-18

    Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. Trial registration: NCT01364480 and NCT01894802.
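
The abstract does not specify the authors' arbitration policy, but the general idea of blending a user command with an autonomous one can be sketched as a per-axis linear mix, with a hypothetical distance-based assistance schedule (all parameter values below are illustrative assumptions, not from the paper):

```python
def blend_command(bmi_cmd, auto_cmd, alpha):
    """Linearly blend user (BMI) and autonomous velocity commands per axis.

    alpha = 0.0 -> pure BMI control; alpha = 1.0 -> pure autonomous control.
    """
    return [(1 - alpha) * b + alpha * a for b, a in zip(bmi_cmd, auto_cmd)]

def arbitration(dist_to_object, d_near=0.05, d_far=0.30):
    """Hypothetical schedule: ramp assistance up as the hand nears the object.

    Distances in meters; beyond d_far the user is fully in control,
    within d_near the autonomous grasp controller takes over.
    """
    if dist_to_object >= d_far:
        return 0.0
    if dist_to_object <= d_near:
        return 1.0
    return (d_far - dist_to_object) / (d_far - d_near)
```

Far from any object the BMI signal passes through unchanged, preserving the user's sense of agency, while near a tracked object the autonomous grasp command dominates, which is one way to realize the "balance between BMI-derived intention and computer assistance" the abstract describes.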

  11. Medications for High Blood Pressure

    MedlinePlus

    Hypertension tends to worsen with age and you cannot ...

  12. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  13. Single-Parent Family Forms and Children's Educational Performance in a Comparative Perspective: Effects of School's Share of Single-Parent Families

    ERIC Educational Resources Information Center

    de Lange, Marloes; Dronkers, Jaap; Wolbers, Maarten H. J.

    2014-01-01

    Living in a single-parent family is negatively related with children's educational performance compared to living with 2 biological parents. In this article, we aim to find out to what extent the context of the school's share of single-parent families affects this negative relationship. We use pooled data from the Organisation for Economic…

  14. Ephedrine QoS: An Antidote to Slow, Congested, Bufferless NoCs

    PubMed Central

    Fang, Juan; Yao, Zhicheng; Sui, Xiufeng; Bao, Yungang

    2014-01-01

    Datacenters consolidate diverse applications to improve utilization. However, when multiple applications are colocated on such platforms, contention for shared resources like networks-on-chip (NoCs) can degrade the performance of latency-critical online services (high-priority applications). Recently proposed bufferless NoCs (Nychis et al.) have the advantages of requiring less area and power, but they pose challenges in quality-of-service (QoS) support, which usually relies on buffer-based virtual channels (VCs). We propose QBLESS, a QoS-aware bufferless NoC scheme for datacenters. QBLESS consists of two components: a routing mechanism (QBLESS-R) that can substantially reduce flit deflection for high-priority applications, and a congestion-control mechanism (QBLESS-CC) that guarantees performance for high-priority applications and improves overall system throughput. We use trace-driven simulation to model a 64-core system, finding that, compared to BLESS, a previous state-of-the-art bufferless NoC design, QBLESS improves performance of high-priority applications by an average of 33.2% and reduces network hops by an average of 42.8%. PMID:25250386

  15. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path. This resolves the metadata bottleneck of traditional storage models and provides parallel data access, data sharing across platforms, intelligent storage devices, and secure data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored on the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.
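
As a toy illustration of the path separation the abstract describes (every class and function name here is invented for illustration, not taken from the paper), a metadata server sits only on the control path to resolve object placement, while reads and writes flow directly between client and storage device:

```python
class StorageDevice:
    """Data path: clients read and write object payloads directly here."""
    def __init__(self, name):
        self.name = name
        self.objects = {}

    def put(self, obj_id, data):
        self.objects[obj_id] = data

    def get(self, obj_id):
        return self.objects[obj_id]

class MetadataServer:
    """Control path: maps object IDs to devices; no payload passes through it."""
    def __init__(self, devices):
        self.devices = devices
        self.placement = {}

    def locate(self, obj_id):
        # Placement policy is illustrative (hash-based); record it so
        # subsequent lookups are stable.
        if obj_id not in self.placement:
            self.placement[obj_id] = self.devices[hash(obj_id) % len(self.devices)]
        return self.placement[obj_id]

def write_image(mds, obj_id, data):
    mds.locate(obj_id).put(obj_id, data)  # payload goes straight to the device

def read_image(mds, obj_id):
    return mds.locate(obj_id).get(obj_id)
```

Because the metadata server only answers "where is this object?", many clients can stream image tiles to and from different devices in parallel, which is the bottleneck relief the abstract attributes to object-based storage.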

  16. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  17. Oculomotor responses and visuospatial perceptual judgments compete for common limited resources

    PubMed Central

    Tibber, Marc S.; Grant, Simon; Morgan, Michael J.

    2010-01-01

    While there is evidence for multiple spatial and attentional maps in the brain it is not clear to what extent visuoperceptual and oculomotor tasks rely on common neural representations and attentional mechanisms. Using a dual-task interference paradigm we tested the hypothesis that eye movements and perceptual judgments made to simultaneously presented visuospatial information compete for shared limited resources. Observers undertook judgments of stimulus collinearity (perceptual extrapolation) using a pointer and Gabor patch and/or performed saccades to a peripheral dot target while their eye movements were recorded. In addition, observers performed a non-spatial control task (contrast discrimination), matched for task difficulty and stimulus structure, which on the basis of previous studies was expected to represent a lesser load on putative shared resources. Greater mutual interference was indeed found between the saccade and extrapolation task pair than between the saccade and contrast discrimination task pair. These data are consistent with visuoperceptual and oculomotor responses competing for common limited resources as well as spatial tasks incurring a relatively high attentional cost. PMID:20053112

  18. MLP: A Parallel Programming Alternative to MPI for New Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Taft, James R.

    1999-01-01

    Recent developments at the NASA AMES Research Center's NAS Division have demonstrated that the new generation of NUMA based Symmetric Multi-Processing systems (SMPs), such as the Silicon Graphics Origin 2000, can successfully execute legacy vector oriented CFD production codes at sustained rates far exceeding processing rates possible on dedicated 16 CPU Cray C90 systems. This high level of performance is achieved via shared memory based Multi-Level Parallelism (MLP). This programming approach, developed at NAS and outlined below, is distinct from the message passing paradigm of MPI. It offers parallelism at both the fine and coarse grained level, with communication latencies that are approximately 50-100 times lower than typical MPI implementations on the same platform. Such latency reductions offer the promise of performance scaling to very large CPU counts. The method draws on, but is also distinct from, the newly defined OpenMP specification, which uses compiler directives to support a limited subset of multi-level parallel operations. The NAS MLP method is general, and applicable to a large class of NASA CFD codes.
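
NAS MLP itself targets Fortran/C production codes on NUMA SMPs, but the core idea (coarse-grained worker processes operating on a single shared array with no message passing) can be sketched with Python's `multiprocessing` module; this is an illustrative analogy only, not the NAS implementation:

```python
from multiprocessing import Process, Array

def partial_sum(shared, lo, hi, out, slot):
    # Fine-grained work on one slice of the shared array; the data is read
    # in place, with no send/receive step as MPI would require.
    out[slot] = sum(shared[lo:hi])

def mlp_style_sum(data, nworkers=4):
    """Coarse-grained: one process per domain partition, all sharing one array."""
    shared = Array('d', data, lock=False)      # shared-memory input array
    out = Array('d', nworkers, lock=False)     # one result slot per worker
    chunk = (len(data) + nworkers - 1) // nworkers
    procs = []
    for i in range(nworkers):
        lo, hi = i * chunk, min((i + 1) * chunk, len(data))
        p = Process(target=partial_sum, args=(shared, lo, hi, out, i))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
    return sum(out[:])
```

Each worker reads its slice directly from shared memory, so no data is copied between address spaces; avoiding that copy-and-transmit step is the source of the latency advantage the abstract reports for MLP over typical MPI implementations on the same platform.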

  19. Avoiding unintended incentives in ACO payment models.

    PubMed

    Douven, Rudy; McGuire, Thomas G; McWilliams, J Michael

    2015-01-01

    One goal of the Medicare Shared Savings Program for accountable care organizations (ACOs) is to reduce Medicare spending for ACOs' patients relative to the organizations' spending history. However, we found that current rules for setting ACO spending targets (or benchmarks) diminish ACOs' incentives to generate savings and may even encourage higher instead of lower Medicare spending. Spending in the three years before ACOs enter or renew a contract is weighted unequally in the benchmark calculation, with a high weight of 0.6 given to the year just before a new contract starts. Thus, ACOs have incentives to increase spending in that year to inflate their benchmark for future years and thereby make it easier to obtain shared savings from Medicare in the new contract period. We suggest strategies to improve incentives for ACOs, including changes to the weights used to determine benchmarks and new payment models that base an ACO's spending target not only on its own past performance but also on the performance of other ACOs or Medicare providers. Project HOPE—The People-to-People Health Foundation, Inc.
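
To see the incentive problem concretely, consider a benchmark computed as a weighted average of the three baseline years. The abstract gives only the 0.6 weight for the final year; the 0.3 and 0.1 weights below follow the commonly cited Shared Savings Program schedule and should be read as illustrative:

```python
def benchmark(spend_y1, spend_y2, spend_y3, weights=(0.1, 0.3, 0.6)):
    """Weighted average of three baseline years of per-beneficiary spending.

    spend_y3 is the year just before the new contract starts, which carries
    the heaviest weight in the benchmark.
    """
    w1, w2, w3 = weights
    return w1 * spend_y1 + w2 * spend_y2 + w3 * spend_y3

# Holding real costs flat at $10,000 per beneficiary across all three years:
flat = benchmark(10_000, 10_000, 10_000)
# Inflating spending by $500 in only the final baseline year:
inflated = benchmark(10_000, 10_000, 10_500)
```

Inflating the final baseline year by $500 raises the benchmark by about $300 (0.6 × 500), so an ACO can make future "shared savings" easier to claim without any real efficiency gain, which is the unintended incentive the authors describe.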

  20. Information trade-offs for optical quantum communication.

    PubMed

    Wilde, Mark M; Hayden, Patrick; Guha, Saikat

    2012-04-06

    Recent work has precisely characterized the achievable trade-offs between three key information processing tasks-classical communication (generation or consumption), quantum communication (generation or consumption), and shared entanglement (distribution or consumption), measured in bits, qubits, and ebits per channel use, respectively. Slices and corner points of this three-dimensional region reduce to well-known protocols for quantum channels. A trade-off coding technique can attain any point in the region and can outperform time sharing between the best-known protocols for accomplishing each information processing task by itself. Previously, the benefits of trade-off coding that had been found were too small to be of practical value (viz., for the dephasing and the universal cloning machine channels). In this Letter, we demonstrate that the associated performance gains are in fact remarkably high for several physically relevant bosonic channels that model free-space or fiber-optic links, thermal-noise channels, and amplifiers. We show that significant performance gains from trade-off coding also apply when trading photon-number resources between transmitting public and private classical information simultaneously over secret-key-assisted bosonic channels. © 2012 American Physical Society

  1. How a Spatial Arrangement of Secondary Structure Elements Is Dispersed in the Universe of Protein Folds

    PubMed Central

    Minami, Shintaro; Sawada, Kengo; Chikenji, George

    2014-01-01

    It has been known that topologically different proteins of the same class sometimes share the same spatial arrangement of secondary structure elements (SSEs). However, the frequency by which topologically different structures share the same spatial arrangement of SSEs is unclear. It is important to estimate this frequency because it provides both a deeper understanding of the geometry of protein folds and a valuable suggestion for predicting protein structures with novel folds. Here we clarified the frequency with which protein folds share the same SSE packing arrangement with other folds, the types of spatial arrangement of SSEs that are frequently observed across different folds, and the diversity of protein folds that share the same spatial arrangement of SSEs with a given fold, using a protein structure alignment program MICAN, which we have been developing. By performing comprehensive structural comparison of SCOP fold representatives, we found that approximately 80% of protein folds share the same spatial arrangement of SSEs with other folds. We also observed that many protein pairs that share the same spatial arrangement of SSEs belong to the different classes, often with an opposing N- to C-terminal direction of the polypeptide chain. The most frequently observed spatial arrangement of SSEs was the 2-layer α/β packing arrangement and it was dispersed among as many as 27% of SCOP fold representatives. These results suggest that the same spatial arrangements of SSEs are adopted by a wide variety of different folds and that the spatial arrangement of SSEs is highly robust against the N- to C-terminal direction of the polypeptide chain. PMID:25243952

  2. USDOT guidance summary for connected vehicle deployments : data sharing.

    DOT National Transportation Integrated Search

    2016-07-01

    The document provides guidance to Pilot Deployers in the timely and successful completion of Concept Development Phase deliverables, specifically in developing the Data Sharing Framework portion of the Performance Measurement and Evaluation S...

  3. Real-time dynamic pricing for bicycle sharing programs.

    DOT National Transportation Integrated Search

    2014-10-01

    This paper presents a new conceptual approach to improving the operational performance of public bike sharing systems using pricing schemes. Its methodological developments are accompanied by experimental analyses with bike demand data from Capital...

  4. Membership Eligibility and Performance Measures for PESP

    EPA Pesticide Factsheets

    PESP members represent diverse segments of the pesticide-user community. They often share common pesticide challenges. PESP membership is divided into four groups of members who share pesticide interests, e.g., community IPM and sustainable agriculture.

  5. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely done in virtual machines (VMs). In this paper, we present a remake of ivshmem that can change this. Ivshmem provided a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about five years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored the SHM-access synchronization feature, which is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and, especially, MPICH, an implementation of MPI, the standard HPC communication library. Additionally, we transparently modified MPICH to use ivshmem, resulting in a three- to ten-fold performance improvement compared to TCP/IP. Furthermore, we transparently replaced MPI_PUT, a one-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
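
    The zero-copy idea behind ivshmem (two parties attach the same memory segment and exchange data without traversing a network stack) can be illustrated with Python's named shared memory. This is only an analogy sketch, not the ivshmem or MPICH code itself:

```python
from multiprocessing import shared_memory

payload = b"hello via shared memory"

# "Host" side: create a named SHM segment, much as the hypervisor
# exposes the ivshmem region to its guests.
server = shared_memory.SharedMemory(create=True, size=len(payload))

# "Guest" side: attach to the same segment by name and write in place;
# no serialization or TCP/IP copy is involved.
client = shared_memory.SharedMemory(name=server.name)
client.buf[:len(payload)] = payload

# The other party reads the bytes directly from the shared mapping.
received = bytes(server.buf[:len(payload)])

client.close()
server.close()
server.unlink()
print(received.decode())  # hello via shared memory
```

    In the real system, synchronization of such accesses (the feature the authors restored) is what makes the segment safe for MPI-style message passing.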

  6. Incentivizing shared decision making in the USA--where are we now?

    PubMed

    Durand, Marie-Anne; Barr, Paul J; Walsh, Thom; Elwyn, Glyn

    2015-06-01

    The Affordable Care Act raised significant interest in the process of shared decision making, the role of patient decision aids, and incentivizing their utilization. However, it has not been clear how best to put incentives into practice or how the implementation of shared decision making and the use of patient decision aids would be measured. Our goal was to review the developments and proposals put forward. We performed a qualitative document analysis following a pragmatic search of Medline, Google, Google Scholar, Business Source Complete (Ebscohost), and LexisNexis from 2009 to 2013 using the following key words: "Patient Protection and Affordable Care Act", "Decision Making", "Affordable Care Act", "Shared Decision Making", "measurement", "incentives", and "payment." We observed a lack of clarity about how to measure shared decision making, about how best to reward the use of patient decision aids, and therefore about how best to incentivize the process. Many documents clearly imply that providing and disseminating patient decision aids might be equivalent to shared decision making. However, there is little evidence that these tools, when used by patients in advance of clinical encounters, lead to significant change in patient-provider communication. The assessment of shared decision making for performance management remains challenging. Efforts to incentivize shared decision making are at risk of being limited to the promotion of patient decision aids, passing over the opportunity to influence the communication processes between patients and providers. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Designing HIV Testing Algorithms Based on 2015 WHO Guidelines Using Data from Six Sites in Sub-Saharan Africa

    PubMed Central

    Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng'ang'a, Anne; Bita, André; Zahinda, Jean-Paul B. N.; Fransen, Katrien

    2017-01-01

    Our objective was to evaluate the performance of HIV testing algorithms based on WHO recommendations, using data from specimens collected at six HIV testing and counseling sites in sub-Saharan Africa (Conakry, Guinea; Kitgum and Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; Baraka, Democratic Republic of Congo). A total of 2,780 samples, including 1,306 HIV-positive samples, were included in the analysis. HIV testing algorithms were designed using Determine as a first test. Second and third rapid diagnostic tests (RDTs) were selected based on site-specific performance, adhering where possible to the WHO-recommended minimum requirements of ≥99% sensitivity and specificity. The threshold for specificity was reduced to 98% or 96% if necessary. We also simulated algorithms consisting of one RDT followed by a simple confirmatory assay. The positive predictive values (PPV) of the simulated algorithms ranged from 75.8% to 100% using strategies recommended for high-prevalence settings, 98.7% to 100% using strategies recommended for low-prevalence settings, and 98.1% to 100% using a rapid test followed by a simple confirmatory assay. Although we were able to design algorithms that met the recommended PPV of ≥99% in five of six sites using the applicable high-prevalence strategy, options were often very limited due to suboptimal performance of individual RDTs and to shared falsely reactive results. These results underscore the impact of the sequence of HIV tests and of shared false-reactivity data on algorithm performance. Where it is not possible to identify tests that meet WHO-recommended specifications, the low-prevalence strategy may be more suitable. PMID:28747371
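
    The dependence of a serial algorithm's PPV on prevalence follows from a standard Bayes computation. The sketch below is ours (the function name and the conditional-independence assumption are illustrative; as the study notes, shared false reactivity between RDTs violates independence in practice):

```python
def serial_algorithm_ppv(se1, sp1, se2, sp2, prevalence):
    """PPV of a two-test serial algorithm in which both tests must be
    reactive, assuming the tests err independently (a simplification)."""
    se = se1 * se2                         # both tests detect a true positive
    sp = 1.0 - (1.0 - sp1) * (1.0 - sp2)   # a false positive must fool both
    tp = prevalence * se                   # probability: infected and reactive
    fp = (1.0 - prevalence) * (1.0 - sp)   # probability: uninfected and reactive
    return tp / (tp + fp)

# Two identical 99%-sensitive/99%-specific RDTs at 1% prevalence:
print(round(serial_algorithm_ppv(0.99, 0.99, 0.99, 0.99, 0.01), 4))  # 0.99
```

    Under these assumptions, even two good tests in series only just reach the WHO-recommended ≥99% PPV at low prevalence, which is consistent with the narrow range of viable options the study reports.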

  8. A Large-Scale Analysis of Impact Factor Biased Journal Self-Citations.

    PubMed

    Chorus, Caspar; Waltman, Ludo

    2016-01-01

    Based on three decades of citation data from across the sciences, we study trends in impact-factor-biased self-citations of scholarly journals, using a purpose-built, easy-to-use citation-based measure. Our measure is the ratio between (i) the relative share of journal self-citations to papers published in the last two years and (ii) the relative share of journal self-citations to papers published in preceding years. A ratio higher than one suggests that a journal's impact factor is disproportionately affected (inflated) by self-citations. Using recently reported survey data, we show that there is a relation between high values of our proposed measure and coercive journal self-citation malpractices. We use our measure to perform a large-scale analysis of impact-factor-biased journal self-citations. Our main empirical result is that the share of journals for which our measure has a (very) high value remained stable between the 1980s and the early 2000s but has since risen strongly in all fields of science. This time span corresponds well with the growing obsession with the impact factor as a journal evaluation measure over the last decade. Taken together, this suggests a trend of increasingly pervasive journal self-citation malpractices, with all due unwanted consequences, such as inflated perceived importance of journals and biased journal rankings.
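
    The measure described above is a ratio of two self-citation shares. A minimal sketch (variable names are ours, not the paper's):

```python
def impact_factor_bias_ratio(self_recent, total_recent, self_older, total_older):
    """Ratio between (i) the share of self-citations among citations to
    papers from the two preceding years (the impact factor window) and
    (ii) the share of self-citations among citations to older papers.
    Values well above 1 suggest self-citations disproportionately
    inflate the impact factor."""
    recent_share = self_recent / total_recent
    older_share = self_older / total_older
    return recent_share / older_share

# A journal whose self-citation share is 30% inside the IF window
# but only 10% outside it:
print(round(impact_factor_bias_ratio(30, 100, 50, 500), 2))  # 3.0
```

    Because both shares are computed within their own citation windows, the ratio is insensitive to the journal's overall citation volume and isolates where the self-citations are aimed.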

  9. Improving Fatigue Performance of AHSS Welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Zhili; Yu, Xinghua; Erdman, III, Donald L.

    Reported herein is technical progress on a U.S. Department of Energy CRADA project, with industry cost-share, aimed at developing the technical basis for and demonstrating the viability of innovative in-situ weld residual stress mitigation technology that can substantially improve the weld fatigue performance and durability of auto-body structures. The developed technology would be cost-effective and practical in a high-volume vehicle production environment. Enhancing weld fatigue performance would address a critical technology gap that impedes the widespread use of advanced high-strength steels (AHSS) and other lightweight materials for auto-body structure light-weighting. This means that the automotive industry can take full advantage of AHSS in strength, durability and crashworthiness without concern over the relatively weak weld fatigue performance. The project comprises both technological innovations in weld residual stress mitigation and due-diligence residual stress measurement and fatigue performance evaluation. Two approaches were investigated. The first was the use of low temperature phase transformation (LTPT) weld filler wire; the second focused on a novel thermo-mechanical stress management technique. Both technical approaches resulted in considerable improvement in the fatigue lives of welded joints made of high-strength steels. Synchrotron diffraction measurement confirmed the reduction of high tensile weld residual stresses by the two weld residual stress mitigation techniques.

  10. Sharing-based social capital associated with harvest production and wealth in the Canadian Arctic

    PubMed Central

    2018-01-01

    Social institutions that facilitate sharing and redistribution may help mitigate the impact of resource shocks. In the North American Arctic, traditional food sharing may direct food to those who need it and provide a form of natural insurance against temporal variability in hunting returns within households. Here, network properties that facilitate resource flow (network size, quality, and density) are examined in a country food sharing network comprising 109 Inuit households from a village in Nunavik (Canada), using regressions to investigate the relationships between these network measures and household socioeconomic attributes. The results show that although single women and elders have larger networks, the sharing network is not structured to prioritize sharing towards households with low food availability. Rather, much food sharing appears to be driven by reciprocity between high-harvest households, meaning that poor, low-harvest households tend to have less sharing-based social capital than more affluent, high-harvest households. This suggests that poor, low-harvest households may be more vulnerable to disruptions in the availability of country food. PMID:29529040

  11. Test anxiety and a high-stakes standardized reading comprehension test: A behavioral genetics perspective.

    PubMed

    Wood, Sarah G; Hart, Sara A; Little, Callie W; Phillips, Beth M

    2016-07-01

    Past research suggests that reading comprehension test performance does not rely solely on targeted cognitive processes such as word reading, but also on other non-target aspects such as test anxiety. Using a genetically sensitive design, we sought to understand the genetic and environmental etiology of the association between test anxiety and reading comprehension as measured by a high-stakes test. Mirroring the behavioral literature of test anxiety, three different dimensions of test anxiety were examined in relation to reading comprehension, namely intrusive thoughts, autonomic reactions, and off-task behaviors. Participants included 426 sets of twins from the Florida Twin Project on Reading. The results indicated test anxiety was negatively associated with reading comprehension test performance, specifically through common shared environmental influences. The significant contribution of test anxiety to reading comprehension on a high-stakes test supports the notion that non-targeted factors may be interfering with accurately assessing students' reading abilities.

  12. Implementing California's School Funding Formula: Will High-Need Students Benefit? Technical Appendix

    ERIC Educational Resources Information Center

    Hill, Laura; Ugo, Iwunze

    2015-01-01

    Intended to accompany "Implementing California's School Funding Formula: Will High-Need Students Benefit?," this appendix examines the extent to which school shares of high-need students vary relative to their district concentrations by grouping approximately 950 school districts by their share of high-need students, arraying them into…

  13. The effects of voice and manual control mode on dual task performance

    NASA Technical Reports Server (NTRS)

    Wickens, C. D.; Zenyuh, J.; Culp, V.; Marshak, W.

    1986-01-01

    Two fundamental principles of human performance, compatibility and resource competition, are combined with two structural dichotomies in the human information processing system, manual versus voice output and left versus right cerebral hemisphere, in order to predict the optimum combination of voice and manual control with either hand for time-sharing performance of a discrete and a continuous task. Eight right-handed male subjects performed a discrete first-order tracking task, time-shared with an auditorily presented Sternberg memory search task. Each task could be controlled by voice, or by the left or right hand, in all possible combinations except a dual voice mode. When performance was analyzed in terms of the dual-task decrement from single-task control conditions, the following variables influenced time-sharing efficiency, in diminishing order of magnitude: (1) the modality of control (discrete manual control of tracking was superior to discrete voice control of tracking, and the converse was true for the memory search task); (2) response competition (performance was degraded when both tasks were responded to manually); (3) hemispheric competition (performance was degraded whenever both tasks were controlled by the left hemisphere, i.e., voice or right-handed control). The results confirm the value of predictive models in voice control implementation.

  14. Decomposing the relation between Rapid Automatized Naming (RAN) and reading ability.

    PubMed

    Arnell, Karen M; Joanisse, Marc F; Klein, Raymond M; Busseri, Michael A; Tannock, Rosemary

    2009-09-01

    The Rapid Automatized Naming (RAN) test involves rapidly naming sequences of items presented in a visual array. RAN has generated considerable interest because RAN performance predicts reading achievement. This study sought to determine what elements of RAN are responsible for the shared variance between RAN and reading performance using a series of cognitive tasks and a latent variable modelling approach. Participants performed RAN measures, a test of reading speed and comprehension, and six tasks, which tapped various hypothesised components of the RAN. RAN shared 10% of the variance with reading comprehension and 17% with reading rate. Together, the decomposition tasks explained 52% and 39% of the variance shared between RAN and reading comprehension and between RAN and reading rate, respectively. Significant predictors suggested that working memory encoding underlies part of the relationship between RAN and reading ability.

  15. Quantum secret sharing with identity authentication based on Bell states

    NASA Astrophysics Data System (ADS)

    Abulkasim, Hussein; Hamad, Safwat; Khalifa, Amal; El Bahnasy, Khalid

    Quantum secret sharing techniques allow two or more parties to securely share a key, while the same number of parties or fewer can efficiently deduce the secret key. In this paper, we propose an authenticated quantum secret sharing protocol in which a quantum dialogue protocol is adopted to authenticate the identity of the parties. The participants simultaneously authenticate each other's identity based on parts of a prior shared key. Moreover, the whole prior shared key can be reused for deducing the secret data. Although the proposed scheme does not significantly improve efficiency, it is more secure than some existing quantum secret sharing schemes owing to the identity authentication process. In addition, the proposed scheme can withstand participant attacks, man-in-the-middle attacks, impersonation attacks and Trojan-horse attacks, as well as information leakage.
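
    For background (these are the standard definitions, not details taken from this paper), the four maximally entangled Bell states on which such protocols are built are:

```latex
|\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle \pm |11\rangle\right),
\qquad
|\Psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\left(|01\rangle \pm |10\rangle\right)
```

    Distributing one qubit of such a pair to each party correlates their measurement outcomes perfectly in the Bell basis, which is what makes Bell pairs a natural carrier for both key material and authentication checks.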

  16. Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.

    PubMed

    Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L

    2018-01-01

    Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed for their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had an increased response bias toward fear (p = 0.03), and had increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.

  17. Team Collaboration Software

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Schrock, Mitchell; Baldwin, John R.; Borden, Charles S.

    2010-01-01

    The Ground Resource Allocation and Planning Environment (GRAPE 1.0) is a Web-based, collaborative team environment based on the Microsoft SharePoint platform, which provides Deep Space Network (DSN) resource planners tools and services for sharing information and performing analysis.

  18. 12 CFR 341.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... agents under this part. 1. A transfer agent of stock or shares in a mutual fund maintains the records of... performs these functions. 2. A registrar of stock or shares in a mutual fund monitors the issuance of such...

  19. 12 CFR 341.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... agents under this part. 1. A transfer agent of stock or shares in a mutual fund maintains the records of... performs these functions. 2. A registrar of stock or shares in a mutual fund monitors the issuance of such...

  20. Shared Decision-Making as the Future of Emergency Cardiology.

    PubMed

    Probst, Marc A; Noseworthy, Peter A; Brito, Juan P; Hess, Erik P

    2018-02-01

    Shared decision-making is playing an increasingly large role in emergency cardiovascular care. Although there are many challenges to successfully performing shared decision-making in the emergency department, there are numerous clinical scenarios in which it should be used. In this article, we explore new research and emerging decision aids in the following emergency care scenarios: (1) low-risk chest pain; (2) new-onset atrial fibrillation; and (3) moderate-risk syncope. These decision aids are designed to engage patients and facilitate shared decision-making for specific treatment and disposition (admit vs discharge) decisions. We then offer a 3-step, practical approach to performing shared decision-making in the acute care setting, on the basis of broad stakeholder input and previous conceptual work. Step 1 involves simply acknowledging that a clinical decision needs to be made. Step 2 involves a shared discussion about the working diagnosis and the options for care in the context of the patient's values, preferences, and circumstances. The third and final step requires the patient and provider to agree on a plan of action regarding further medical care. The implementation of shared decision-making in emergency cardiology has the potential to shift the paradigm of clinical practice from paternalism toward mutualism and improve the quality and experience of care for our patients. Copyright © 2017 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  1. Managing hydroclimatological risk to water supply with option contracts and reservoir index insurance

    NASA Astrophysics Data System (ADS)

    Brown, Casey; Carriquiry, Miguel

    2007-11-01

    This paper explores the performance of a system of economic instruments designed to reduce the impacts of hydroclimatological variability on stakeholders of a shared water supply. The system is composed of bulk water option contracts between urban water suppliers and agricultural users, and of insurance indexed on reservoir inflows. The insurance is designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised, providing the irregularly needed funds for exercising the water options. The combined option contract and reservoir index insurance system creates risk sharing between sectors that is currently lacking in many shared water situations. Contracts are designed for a shared agricultural-urban water system in Metro Manila, Philippines, using optimization and Monte Carlo analysis. Observed reservoir inflows are used to simulate contract performance. Results indicate that the option-insurance design effectively smooths the water supply costs of hydrologic variability for both agricultural and urban users.
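
    The contract mechanics can be sketched with a toy Monte Carlo model. Everything below (inflow distribution, trigger level, prices) is an illustrative assumption, not the paper's calibrated Metro Manila design; the sketch only shows how an option plus indexed insurance smooths the supplier's year-to-year costs:

```python
import random
from statistics import pvariance

def simulate_costs(n_years, trigger, option_fee, exercise_price,
                   premium, payout, seed=0):
    """Toy Monte Carlo of an option + reservoir-index-insurance scheme.
    Returns per-year urban-supplier costs with and without the instruments."""
    rng = random.Random(seed)
    hedged, unhedged = [], []
    for _ in range(n_years):
        inflow = rng.lognormvariate(0.0, 0.5)   # stylized annual inflow index
        shortage = inflow < trigger             # low inflow triggers exercise
        # Without instruments: expensive emergency supply in shortage years.
        unhedged.append(10.0 if shortage else 0.0)
        # With instruments: pay option fee + premium every year; in shortage
        # years, exercise the option and receive the indexed insurance payout.
        cost = option_fee + premium
        if shortage:
            cost += exercise_price - payout
        hedged.append(cost)
    return hedged, unhedged

hedged, unhedged = simulate_costs(1000, trigger=0.7, option_fee=0.5,
                                  exercise_price=4.0, premium=0.3, payout=3.0)
# The instruments trade a small fixed annual cost for far lower variance.
print(pvariance(hedged), pvariance(unhedged))
```

    The qualitative result (a small, steady hedging cost in place of rare, large emergency costs) is the risk-sharing effect the paper quantifies with observed inflows.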

  2. Ethnic diversity and value sharing: A longitudinal social network perspective on interactive group processes.

    PubMed

    Meeussen, Loes; Agneessens, Filip; Delvaux, Ellen; Phalet, Karen

    2018-04-01

    People often collaborate in groups that are increasingly diverse. As research has predominantly investigated the effects of diversity, the processes behind these effects remain understudied. Following recent research showing that creating shared values is important for group functioning but seems hindered in high-diversity groups, we use longitudinal social network analyses to study two interpersonal processes behind value sharing: creating relations between members, or 'social bonding' (network tie formation and homophily), and sharing values, potentially through these relationships, or 'social norming' (network convergence and influence). We investigate these processes in small interactive groups with low and high ethnic diversity as they collaborate over time. In both low- and high-diversity groups, members showed social bonding, and this creation of relations between members was not organized along ethnic lines. Low-diversity groups also showed social norming: members adjusted their relational values to others they liked, and achievement values converged regardless of liking. In high-diversity groups, however, there was no evidence for social norming. Thus, ethnic diversity seems to especially affect processes of social norming in groups, suggesting that targeted interventions should focus on facilitating social norming to stimulate value sharing in high-diversity groups. © 2018 The British Psychological Society.

  3. The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows

    PubMed Central

    O'Connor, Brian D.; Yuen, Denis; Chung, Vincent; Duncan, Andrew G.; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent

    2017-01-01

    As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH). PMID:28344774

  4. The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows.

    PubMed

    O'Connor, Brian D; Yuen, Denis; Chung, Vincent; Duncan, Andrew G; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent

    2017-01-01

    As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH).

  5. A cloud-based home health care information sharing system to connect patients with home healthcare staff: a case report of a study in a mountainous region.

    PubMed

    Nomoto, Shinichi; Utsumi, Momoe; Sasayama, Satoshi; Dekigai, Hiroshi

    2017-01-01

    We have developed a cloud system, the e-Renraku Notebook (e-RN), for sharing home care information based on the concept of "patient-centricity". To assess the likelihood that our system will enhance communication and information sharing between home healthcare staff members and home-care patients, we selected patients residing in mountainous regions for inclusion in our study. We herein report the findings. Eighteen staff members from 7 medical facilities and 9 patients participated in the present study. The e-RN was developed for two reasons: to allow patients to independently report their health status, and to have staff members view and respond to the information received. The patients and staff members were given iPads with the applications pre-installed, and the information being exchanged was reviewed over a 54-day period. Information was mainly input by the patients (61.6%), followed by the nurses who performed home visits (19.9%). The amount of information input by patients requiring high-level nursing care and their corresponding staff members was significantly greater than that input by patients requiring low-level nursing care. This patient-centric system, in which patients can independently report and share information with healthcare staff members, provides a sense of security. It also allows staff members to understand a patient's health status before making a home visit, thereby giving them a sense of security and confidence. It was also noteworthy that elderly patients requiring high-level nursing care and their staff counterparts input information into the system significantly more frequently than patients requiring low-level care.

  6. Piscivory limits diversification of feeding morphology in centrarchid fishes.

    PubMed

    Collar, David C; O'Meara, Brian C; Wainwright, Peter C; Near, Thomas J

    2009-06-01

    Proximity to an adaptive peak influences a lineage's potential to diversify. We tested whether piscivory, a high quality but functionally demanding trophic strategy, represents an adaptive peak that limits morphological diversification in the teleost fish clade, Centrarchidae. We synthesized published diet data and applied a well-resolved, multilocus and time-calibrated phylogeny to reconstruct ancestral piscivory. We measured functional features of the skull and performed principal components analysis on species' values for these variables. To assess the role of piscivory on morphological diversification, we compared the fit of several models of evolution for each principal component (PC), where model parameters were allowed to vary between lineages that differed in degree of piscivory. According to the best-fitting model, two adaptive peaks influenced PC 1 evolution, one peak shared between highly and moderately piscivorous lineages and another for nonpiscivores. Brownian motion better fit PCs 2, 3, and 4, but the best Brownian models infer a slow rate of PC 2 evolution shared among all piscivores and a uniquely slow rate of PC 4 evolution in highly piscivorous lineages. These results suggest that piscivory limits feeding morphology diversification, but this effect is most severe in lineages that exhibit an extreme form of this diet.

  7. The use of concept maps for knowledge management: from classrooms to research labs.

    PubMed

    Correia, Paulo Rogério Miranda

    2012-02-01

    Our contemporary society asks for new strategies to manage knowledge. The main activities developed by academics involve knowledge transmission (teaching) and production (research). Creativity and collaboration are valuable assets for establishing learning organizations in classrooms and research labs. Concept mapping is a useful graphical technique to foster some of the disciplines required to create and develop high-performance teams. The need for a linking phrase to clearly state conceptual relationships makes concept maps (Cmaps) very useful for organizing our own ideas (externalization), as well as, sharing them with other people (elicitation and consensus building). The collaborative knowledge construction (CKC) is supported by Cmaps because they improve the communication signal-to-noise ratio among participants with high information asymmetry. In other words, we can identify knowledge gaps and insightful ideas in our own Cmaps when discussing them with our counterparts. Collaboration involving low and high information asymmetry can also be explored through peer review and student-professor/advisor interactions, respectively. In conclusion, when it is used properly, concept mapping can provide a competitive advantage to produce and share knowledge in our contemporary society. To map is to know, as stated by Wandersee in 1990.

  8. Army Incentives for the PCMH

    DTIC Science & Technology

    2011-01-24

    Slide excerpt, 2011 Military Health System Conference ("Sharing Knowledge: Achieving Breakthrough Performance," 24 January 2011, Mr. Ken...): performance metrics for community-based medical homes (Slide 8 of 10) include increasing the primary care market share (net increase in primary...), enrolling patients as soon as clinics are fully staffed, operating at an economic advantage to DoD, and improving ER/UCC usage and utilization rates (business rules, Army).

  9. The Etiology of Science Performance: Decreasing Heritability and Increasing Importance of the Shared Environment from 9 to 12 Years of Age

    ERIC Educational Resources Information Center

    Haworth, Claire M. A.; Dale, Philip S.; Plomin, Robert

    2009-01-01

    During childhood and adolescence, increases in heritability and decreases in shared environmental influences have typically been found for cognitive abilities. A sample of more than 2,500 pairs of twins from the Twins Early Development Study was used to investigate whether a similar pattern would be found for science performance from 9 to 12…

  10. A Simulation-based Approach to Measuring Team Situational Awareness in Emergency Medicine: A Multicenter, Observational Study.

    PubMed

    Rosenman, Elizabeth D; Dixon, Aurora J; Webb, Jessica M; Brolliar, Sarah; Golden, Simon J; Jones, Kerin A; Shah, Sachita; Grand, James A; Kozlowski, Steve W J; Chao, Georgia T; Fernandez, Rosemarie

    2018-02-01

    Team situational awareness (TSA) is critical for effective teamwork and supports dynamic decision making in unpredictable, time-pressured situations. Simulation provides a platform for developing and assessing TSA, but these efforts are limited by suboptimal measurement approaches. The objective of this study was to develop and evaluate a novel approach to TSA measurement in interprofessional emergency medicine (EM) teams. We performed a multicenter, prospective, simulation-based observational study to evaluate an approach to TSA measurement. Interprofessional emergency medical teams, consisting of EM resident physicians, nurses, and medical students, were recruited from the University of Washington (Seattle, WA) and Wayne State University (Detroit, MI). Each team completed a simulated emergency resuscitation scenario. Immediately following the simulation, team members completed a TSA measure, a team perception of shared understanding measure, and a team leader effectiveness measure. Subject matter expert reviews and pilot testing of the TSA measure provided evidence of content and response process validity. Simulations were recorded and independently coded for team performance using a previously validated measure. The relationships between the TSA measure and other variables (team clinical performance, team perception of shared understanding, team leader effectiveness, and team experience) were explored. The TSA agreement metric was indexed by averaging the pairwise agreement for each dyad on a team and then averaging across dyads to yield agreement at the team level. For the team perception of shared understanding and team leadership effectiveness measures, individual team member scores were aggregated within a team to create a single team score. We computed descriptive statistics for all outcomes. We calculated Pearson's product-moment correlations to determine bivariate correlations between outcome variables with two-tailed significance testing (p < 0.05). 
A total of 123 participants were recruited and formed three-person teams (n = 41 teams). All teams completed the assessment scenario and postsimulation measures. TSA agreement ranged from 0.19 to 0.9 and had a mean (±SD) of 0.61 (±0.17). TSA correlated with team clinical performance (p < 0.05) but did not correlate with team perception of shared understanding, team leader effectiveness, or team experience. Team situational awareness supports adaptive teams and is critical for high reliability organizations such as healthcare systems. Simulation can provide a platform for research aimed at understanding and measuring TSA. This study provides a feasible method for simulation-based assessment of TSA in interdisciplinary teams that addresses prior measure limitations and is appropriate for use in highly dynamic, uncertain situations commonly encountered in emergency department systems. Future research is needed to understand the development of and interactions between individual-, team-, and system (distributed)-level cognitive processes. © 2017 by the Society for Academic Emergency Medicine.
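
    The dyad-averaging scheme described above can be sketched as a short computation. The sketch below is illustrative only: the member names, item responses, and the item-level agreement rule (simple response matching) are assumptions, not the study's actual instrument.

```python
from itertools import combinations

def tsa_agreement(member_scores):
    """Team-level TSA agreement: compute pairwise (dyad) agreement,
    then average across dyads -- the indexing scheme described in the
    abstract.  Dyad agreement here is the fraction of items on which
    two members gave the same response (an assumption; the study's
    exact item-level scoring rule is not given)."""
    members = list(member_scores)
    dyad_scores = []
    for a, b in combinations(members, 2):
        xs, ys = member_scores[a], member_scores[b]
        matched = sum(1 for x, y in zip(xs, ys) if x == y)
        dyad_scores.append(matched / len(xs))
    return sum(dyad_scores) / len(dyad_scores)

# Hypothetical item responses for one three-person team.
team = {
    "physician": [1, 0, 1, 1, 0],
    "nurse":     [1, 0, 0, 1, 0],
    "student":   [1, 1, 1, 1, 0],
}
print(round(tsa_agreement(team), 2))  # 0.73
```

    With perfect agreement across all dyads the metric is 1.0; the study's observed mean of 0.61 would correspond to moderate overlap in team members' situational pictures.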

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework and a potent programming model. The most popular open-source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as TeraGrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues but also affects overall performance due to the added overhead of HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices that yield better performance gains in those settings. By leveraging inherent distributed file system functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) both allows the use of the model in an expanding number of HPC environments and achieves better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach over Apache Hadoop in a data-intensive setting on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
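
    The shared-filesystem motivation can be made concrete with a toy MapReduce word count. This is a generic sketch of the programming model, not MARIANE's API; the chunk data are invented, and on NFS/GPFS each worker would read its chunk with ordinary POSIX file I/O rather than through an HDFS layer.

```python
from collections import defaultdict

def map_phase(chunk):
    """Map: emit (word, 1) pairs for one input chunk."""
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    """Shuffle + reduce: sum the counts for each key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# On a shared POSIX filesystem (NFS/GPFS), each worker would read its
# chunk directly from disk; these in-memory strings are stand-ins.
chunks = ["shared file systems", "file systems for hpc", "shared hpc"]
pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
print(reduce_phase(pairs)["shared"])  # 2
```

    The design point the paper makes is that when the filesystem is already shared and POSIX compliant, the input-splitting and data-placement machinery of HDFS is redundant overhead, so delegating those functions to the filesystem itself can improve performance.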

  12. Modeling and Dynamic Analysis of Paralleled dc/dc Converters With Master-Slave Current Sharing Control

    NASA Technical Reports Server (NTRS)

    Rajagopalan, J.; Xing, K.; Guo, Y.; Lee, F. C.; Manners, Bruce

    1996-01-01

    A simple, application-oriented, transfer function model of paralleled converters employing Master-Slave Current-sharing (MSC) control is developed. Dynamically, the Master converter retains its original design characteristics; all the Slave converters are forced to depart significantly from their original design characteristics into current-controlled current sources. Five distinct loop gains to assess system stability and performance are identified and their physical significance is described. A design methodology for the current share compensator is presented. The effect of this current sharing scheme on 'system output impedance' is analyzed.

  13. Optimization of Single-Sided Charge-Sharing Strip Detectors

    NASA Technical Reports Server (NTRS)

    Hamel, L.A.; Benoit, M.; Donmez, B.; Macri, J. R.; McConnell, M. L.; Ryan, J. M.; Narita, T.

    2006-01-01

    Simulations of the charge-sharing properties of single-sided CZT strip detectors with small anode pads are presented. The effects of initial event size, carrier repulsion, diffusion, drift, trapping, and detrapping are considered. These simulations indicate that such a detector with a 150 μm pitch will provide good charge sharing between neighboring pads. This is supported by a comparison of simulations and measurements for a similar detector with a coarser pitch of 225 μm that could not provide sufficient sharing. The performance of such a detector used as a gamma-ray imager is discussed.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This presentation provides a high-level overview of the current U.S. shared solar landscape, the impact that a given shared solar program's structure has on requiring federal securities oversight, and an estimate of the market potential for U.S. shared solar deployment.

  15. Friendly-Sharing: Improving the Performance of City Sensoring through Contact-Based Messaging Applications.

    PubMed

    Herrera-Tapia, Jorge; Hernández-Orallo, Enrique; Tomás, Andrés; Manzoni, Pietro; Tavares Calafate, Carlos; Cano, Juan-Carlos

    2016-09-18

    Regular citizens equipped with smart devices are increasingly used as "sensors" by Smart City applications. Using contacts among users, data in the form of messages are obtained and shared. Contact-based messaging applications rely on establishing short-range communication directly between mobile devices and on storing the messages in these devices for subsequent delivery to cloud-based services. An effective way to increase the number of messages that can be shared is to increase the contact duration. We thus introduce the Friendly-Sharing diffusion approach, in which, during a contact, the users are aware of the time needed to interchange the messages stored in their buffers and can therefore decide to wait longer in order to increase the message-sharing probability. The performance of this approach is, however, closely related to the size of the buffer in the device. We therefore compare various policies both for message selection at forwarding time and for message dropping when the buffer is full. We evaluate our proposal with a modified version of the Opportunistic Networking Environment (ONE) simulator and real human mobility traces.
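
    One of the buffer policies being compared can be sketched in a few lines. Drop-oldest (FIFO eviction) is used here purely as an example; the paper evaluates several selection and dropping policies, and this function is a hypothetical stand-in, not the ONE simulator's implementation.

```python
from collections import deque

def enqueue_drop_oldest(buffer, msg, capacity):
    """One possible dropping policy for a full device buffer: evict
    the oldest stored message (FIFO) to make room for the new one.
    Illustrative only -- the paper compares several such policies."""
    while len(buffer) >= capacity:
        buffer.popleft()  # drop the oldest message
    buffer.append(msg)

buf = deque()
for m in ["m1", "m2", "m3", "m4"]:
    enqueue_drop_oldest(buf, m, capacity=3)
print(list(buf))  # ['m2', 'm3', 'm4']
```

    Alternative policies, e.g. dropping the largest message or the one with the fewest remaining hops, trade delivery probability against transfer time in different ways, which is exactly the trade-off the simulation study measures.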

  16. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

    A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach, and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), whereas, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability across all of the multi-socket, multi-core systems used.

  17. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and the associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, with the complex user interfaces of these machines, and with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research with high performance computing at the back end. We also look at the parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway.

  18. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas. T.

    2015-01-01

    SUMMARY The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and the associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, with the complex user interfaces of these machines, and with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research with high performance computing at the back end. We also look at the parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway. PMID:26523124

  19. Long term performance stability of silicon sensors

    NASA Astrophysics Data System (ADS)

    Mori, R.; Betancourt, C.; Kühn, S.; Hauser, M.; Messmer, I.; Hasenfratz, A.; Thomas, M.; Lohwasser, K.; Parzefall, U.; Jakobs, K.

    2015-10-01

    The HL-LHC investigations of silicon particle sensor performance are carried out with the intention of reproducing the harsh environments foreseen, but usually in individual short measurements. Recently, several groups have observed a decrease in the charge collection of silicon strip sensors after several days, in particular in sensors showing charge multiplication. This phenomenon has been explained by a surface effect: an increase in charge sharing due to the accumulation of positive charge in the silicon oxide, originating from the source used for the charge collection measurements. Observing a similar behaviour in other sensors for which we can exclude this surface effect, we propose and investigate alternative explanations, namely trapping-related effects (change of polarization) and annealing-related effects. Several n-on-p strip sensors, as-processed and irradiated with protons and neutrons up to 5 × 10^15 n_eq/cm^2, were subjected to charge collection efficiency measurements for several days, while parameters such as the impedance were monitored. The stressing conditions were varied in an attempt to recover the collected charge in case of a decrease. The results show that, for the investigated sensors, the effect of charge sharing induced by a radioactive source is not important, and the main detrimental factor is very high voltage, while at lower voltages the performance is stable.

  20. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Caubet, Jordi; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In this paper we describe how to apply powerful performance analysis techniques to understand the behavior of multilevel parallel applications. We use the Paraver/OMPItrace performance analysis system for our study. This system consists of two major components: the OMPItrace dynamic instrumentation mechanism, which allows the tracing of processes and threads, and the Paraver graphical user interface for inspection and analysis of the generated traces. We describe how to use the system to conduct a detailed comparative study of a benchmark code implemented in five different programming paradigms applicable for shared memory

  1. Spectrum Sharing in an ISM Band: Outage Performance of a Hybrid DS/FH Spread Spectrum System with Beamforming

    NASA Astrophysics Data System (ADS)

    Li, Hanyu; Syed, Mubashir; Yao, Yu-Dong; Kamakaris, Theodoros

    2009-12-01

    This paper investigates spectrum sharing issues in the unlicensed industrial, scientific, and medical (ISM) bands. It presents a radio frequency measurement setup and measurement results in the 2.4 GHz band. It then develops an analytical model to characterize the coexistence interference in the ISM bands, based on these radio frequency measurement results. Outage performance using the interference model is examined for a hybrid direct-sequence frequency-hopping spread spectrum system. The use of beamforming techniques in the system is also investigated, and a simplified beamforming model is proposed to analyze the system performance with beamforming. Numerical results show that beamforming significantly improves the system's outage performance. The work presented in this paper provides a quantitative evaluation of signal outages in a spectrum sharing environment. It can be used as a tool in the development of future dynamic spectrum access models as well as in engineering designs for applications in unlicensed bands.

  2. A study on haptic collaborative game in shared virtual environment

    NASA Astrophysics Data System (ADS)

    Lu, Keke; Liu, Guanyang; Liu, Lingzhi

    2013-03-01

    A study of a collaborative game in a shared virtual environment with haptic feedback over computer networks is introduced in this paper. A collaborative task was used in which the players, located at remote sites, played the game together. Unlike in traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only, and combined visual-haptic feedback. The goal of the experiment was to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback is beneficial for performance enhancement in collaborative games in shared virtual environments. The outcomes of this research can have a powerful impact on networked computer games.

  3. Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.

  4. Fail or flourish? Cognitive appraisal moderates the effect of solo status on performance.

    PubMed

    White, Judith B

    2008-09-01

    When everyone in a group shares a common social identity except one individual, the one who is different from the majority has solo status. Solo status increases one's visibility and performance pressure, which may result in stress. Stress has divergent effects on performance, and individuals' response to stressful situations is predicted by their cognitive appraisal (challenge or threat) of the situation. Two experiments test the hypothesis that cognitive appraisal moderates the effect of solo status on performance. Experiment 1 finds that at relatively high appraisal levels (resources exceed demands), solo status improves men's and women's performance; at relatively low appraisal levels, solo status hurts performance. Experiment 2 replicates this effect for solo status based on minimal group assignment. Results suggest that for individuals who feel challenged and not threatened by their work, it may help to be a solo.

  5. Theorizing Food Sharing Practices in a Junior High Classroom

    ERIC Educational Resources Information Center

    Rice, Mary

    2013-01-01

    This reflective essay analyzes interactions where food was shared between a teacher and her junior high school students. The author describes the official uses of food in junior high school classrooms and in educational contexts in general. The author then theorizes these interactions, suggesting other semiotic, dialogic, and culturally encoded…

  6. Telehealth solutions to enable global collaboration in rheumatic heart disease screening.

    PubMed

    Lopes, Eduardo Lv; Beaton, Andrea Z; Nascimento, Bruno R; Tompsett, Alison; Dos Santos, Julia Pa; Perlman, Lindsay; Diamantino, Adriana C; Oliveira, Kaciane Kb; Oliveira, Cassio M; Nunes, Maria do Carmo P; Bonisson, Leonardo; Ribeiro, Antônio Lp; Sable, Craig

    2018-02-01

    Background The global burden of rheumatic heart disease is nearly 33 million people. Telemedicine, using cloud-server technology, provides an ideal solution for sharing images performed by non-physicians with cardiologists who are experts in rheumatic heart disease. Objective We describe our experience in using telemedicine to support a large rheumatic heart disease outreach screening programme in the Brazilian state of Minas Gerais. Methods The Programa de Rastreamento da Valvopatia Reumática (PROVAR) is a prospective cross-sectional study aimed at gathering epidemiological data on the burden of rheumatic heart disease in Minas Gerais and testing a non-expert, telemedicine-supported model of outreach rheumatic heart disease screening. The primary goal is to enable expert support of remote rheumatic heart disease outreach through cloud-based sharing of echocardiographic images between Minas Gerais and Washington. Secondary goals include (a) developing and sharing online training modules for non-physicians in echocardiography performance and interpretation and (b) utilising a secure web-based system to share clinical and research data. Results PROVAR included 4615 studies that were performed by non-experts at 21 schools and shared via cloud-telemedicine technology. Latent rheumatic heart disease was found in 251 subjects (4.2% of subjects: 3.7% borderline and 0.5% definite disease). Of the studies, 50% were performed on fully functional echocardiography machines and transmitted via Digital Imaging and Communications in Medicine (DICOM), and 50% were performed on handheld echocardiography machines and transferred via a secure Dropbox connection. The average time between the study performance date and interpretation was 10 days. There was 100% success in initial image transfer. Less than 1% of studies performed by non-experts could not be interpreted. 
Discussion A sustainable, low-cost telehealth model using task-shifting with non-medical personnel in low- and middle-income countries can improve access to echocardiography for rheumatic heart disease.

  7. Deconstructing Bipolar Disorder and Schizophrenia: A cross-diagnostic cluster analysis of cognitive phenotypes.

    PubMed

    Lee, Junghee; Rizzo, Shemra; Altshuler, Lori; Glahn, David C; Miklowitz, David J; Sugar, Catherine A; Wynn, Jonathan K; Green, Michael F

    2017-02-01

    Bipolar disorder (BD) and schizophrenia (SZ) show substantial overlap. It has been suggested that a subgroup of patients might contribute to these overlapping features. This study employed a cross-diagnostic cluster analysis to identify subgroups of individuals with shared cognitive phenotypes. 143 participants (68 BD patients, 39 SZ patients, and 36 healthy controls) completed a battery of EEG and performance assessments of perception, nonsocial cognition, and social cognition. A K-means cluster analysis was conducted with all participants across diagnostic groups. Clinical symptoms, functional capacity, and functional outcome were assessed in patients. A two-cluster solution across the 3 groups was the most stable. One cluster, including 44 BD patients, 31 controls, and 5 SZ patients, showed better cognition (High cluster) than the other cluster, with 24 BD patients, 35 SZ patients, and 5 controls (Low cluster). BD patients in the High cluster performed better than BD patients in the Low cluster across cognitive domains. Within each cluster, participants with different clinical diagnoses showed different profiles across cognitive domains. All patients were in the chronic phase and not in a mood episode at the time of assessment, and most of the assessments were behavioral measures. This study identified two clusters with shared cognitive phenotype profiles that were not proxies for clinical diagnoses. The finding of better social cognitive performance in BD patients than in SZ patients in the Low cluster suggests that relatively preserved social cognition may be important for identifying disease processes distinct to each disorder. Copyright © 2016 Elsevier B.V. All rights reserved.
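
    The cross-diagnostic clustering step can be illustrated with a minimal K-means sketch. Everything below is synthetic: the two-dimensional "cognitive profile" points and the pure-Python implementation stand in for the study's actual EEG/performance battery and its standard K-means procedure.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal K-means over tuples of floats (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Update step: recompute each center as its cluster mean.
        centers = [tuple(sum(vals) / len(cl) for vals in zip(*cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Synthetic two-dimensional "cognitive profile" scores for a
# higher-performing and a lower-performing group (invented values).
high = [(0.9, 0.8), (0.8, 0.9), (1.0, 0.7)]
low = [(0.2, 0.1), (0.1, 0.3), (0.3, 0.2)]
_, clusters = kmeans(high + low, k=2)
print(sorted(len(cl) for cl in clusters))  # [3, 3]
```

    The key property the study exploits is that K-means groups participants by profile similarity alone: diagnosis labels play no role in the assignment, so any alignment between clusters and diagnoses is an empirical finding, not an artifact of the method.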

  8. Funding Early College High School: Hold Harmless or Shared Commitment

    ERIC Educational Resources Information Center

    Leonard, Jack

    2013-01-01

    Early college high schools are a promising but expensive pathway to college readiness. Most such schools are supported with state funds and/or grants. This descriptive case study presents an early college program, now in its fourth year in a traditional high school, in which the families, high school and local community college shared the entire…

  9. Highway performance monitoring system catalog : new technology and techniques

    DOT National Transportation Integrated Search

    1999-03-01

    The Share the Road Campaign Research Study Final Report documents the independent study and review of the Federal Highway Administration (FHWA), Office of Motor Carrier and Highway Safety's (OMCHS), Share the Road program called the No-Zone Campaign....

  10. ISBP: Understanding the Security Rule of Users' Information-Sharing Behaviors in Partnership

    PubMed Central

    Wu, Hongchen; Wang, Xinjun

    2016-01-01

    The rapid growth of social network data has given rise to high security awareness among users, especially when they exchange and share their personal information. However, because users have different feelings about sharing their information, they are often puzzled about who their partners for exchanging information can be and what information they can share. Is it possible to assist users in forming a partnership network in which they can exchange and share information with little worry? We propose a modified information sharing behavior prediction (ISBP) model that can help in understanding the underlying rules by which users share their information with partners in light of three common aspects: what types of items users are likely to share, what characteristics of users make them likely to share information, and what features of users' sharing behavior are easy to predict. This model is applied with machine learning techniques in WEKA to predict users' decisions pertaining to information sharing behavior and to form users into trustable partnership networks by learning their features. In the experiments, using two real-life datasets of citizens' sharing behavior, we identify the effect of highly sensitive requests on sharing behavior alongside individual variables: the younger participants' partners are more difficult to predict than those of the older participants, whereas the partners of people who are not computer majors are easier to predict than those of people who are computer majors. Based on these findings, we believe that it is necessary and feasible to offer users personalized suggestions on information sharing decisions; this is pioneering work that could benefit college researchers focusing on user-centric strategies and website owners who want to collect more user information without raising users' privacy awareness or losing their trustworthiness. PMID:26950064
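
    A prediction step of this kind can be sketched with a tiny nearest-neighbor classifier. The features (normalized age, computer-major flag, request sensitivity), the labels, and the classifier choice are all hypothetical illustrations; the study itself applied WEKA's learners to real survey data.

```python
def knn_predict(train, query, k=3):
    """Tiny k-NN classifier standing in for the paper's WEKA models.
    train is a list of ((features...), label) pairs; the prediction is
    the majority label among the k nearest training points."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda row: dist(row[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical rows: (age_norm, computer_major, request_sensitivity)
# mapped to a share/no-share decision (1/0).
train = [
    ((0.2, 1, 0.9), 0),  # young CS major, highly sensitive request: no
    ((0.3, 1, 0.8), 0),
    ((0.25, 1, 0.7), 0),
    ((0.7, 0, 0.2), 1),  # older non-major, low-sensitivity request: yes
    ((0.8, 0, 0.3), 1),
    ((0.6, 0, 0.1), 1),
]
print(knn_predict(train, (0.75, 0, 0.2)))  # 1 (predicted to share)
```

    In this toy setup the query user resembles the older non-major group, so the predicted decision is to share; the paper's finding is precisely that such predictability varies with age and academic background.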

  11. ISBP: Understanding the Security Rule of Users' Information-Sharing Behaviors in Partnership.

    PubMed

    Wu, Hongchen; Wang, Xinjun

    2016-01-01

    The rapid growth of social network data has given rise to high security awareness among users, especially when they exchange and share their personal information. However, because users have different feelings about sharing their information, they are often puzzled about who their partners for exchanging information can be and what information they can share. Is it possible to assist users in forming a partnership network in which they can exchange and share information with little worry? We propose a modified information sharing behavior prediction (ISBP) model that can help in understanding the underlying rules by which users share their information with partners in light of three common aspects: what types of items users are likely to share, what characteristics of users make them likely to share information, and what features of users' sharing behavior are easy to predict. This model is applied with machine learning techniques in WEKA to predict users' decisions pertaining to information sharing behavior and to form users into trustable partnership networks by learning their features. In the experiments, using two real-life datasets of citizens' sharing behavior, we identify the effect of highly sensitive requests on sharing behavior alongside individual variables: the younger participants' partners are more difficult to predict than those of the older participants, whereas the partners of people who are not computer majors are easier to predict than those of people who are computer majors. Based on these findings, we believe that it is necessary and feasible to offer users personalized suggestions on information sharing decisions; this is pioneering work that could benefit college researchers focusing on user-centric strategies and website owners who want to collect more user information without raising users' privacy awareness or losing their trustworthiness.

  12. Listeners' and Performers' Shared Understanding of Jazz Improvisations.

    PubMed

    Schober, Michael F; Spiro, Neta

    2016-01-01

    This study explores the extent to which a large set of musically experienced listeners share understanding with a performing saxophone-piano duo, and with each other, of what happened in three improvisations on a jazz standard. In an online survey, 239 participants listened to audio recordings of three improvisations and rated their agreement with 24 specific statements that the performers and a jazz-expert commenting listener had made about them. Listeners endorsed statements that the performers had agreed upon significantly more than they endorsed statements that the performers had disagreed upon, even though the statements gave no indication of performers' levels of agreement. The findings show some support for a more-experienced-listeners-understand-more-like-performers hypothesis: Listeners with more jazz experience and with experience playing the performers' instruments endorsed the performers' statements more than did listeners with less jazz experience and experience on different instruments. The findings also strongly support a listeners-as-outsiders hypothesis: Listeners' ratings of the 24 statements were far more likely to cluster with the commenting listener's ratings than with either performer's. But the pattern was not universal; particular listeners even with similar musical backgrounds could interpret the same improvisations radically differently. The evidence demonstrates that it is possible for performers' interpretations to be shared with very few listeners, and that listeners' interpretations about what happened in a musical performance can be far more different from performers' interpretations than performers or other listeners might assume.

  13. Listeners' and Performers' Shared Understanding of Jazz Improvisations

    PubMed Central

    Schober, Michael F.; Spiro, Neta

    2016-01-01

    This study explores the extent to which a large set of musically experienced listeners share understanding with a performing saxophone-piano duo, and with each other, of what happened in three improvisations on a jazz standard. In an online survey, 239 participants listened to audio recordings of three improvisations and rated their agreement with 24 specific statements that the performers and a jazz-expert commenting listener had made about them. Listeners endorsed statements that the performers had agreed upon significantly more than they endorsed statements that the performers had disagreed upon, even though the statements gave no indication of performers' levels of agreement. The findings show some support for a more-experienced-listeners-understand-more-like-performers hypothesis: Listeners with more jazz experience and with experience playing the performers' instruments endorsed the performers' statements more than did listeners with less jazz experience and experience on different instruments. The findings also strongly support a listeners-as-outsiders hypothesis: Listeners' ratings of the 24 statements were far more likely to cluster with the commenting listener's ratings than with either performer's. But the pattern was not universal; particular listeners even with similar musical backgrounds could interpret the same improvisations radically differently. The evidence demonstrates that it is possible for performers' interpretations to be shared with very few listeners, and that listeners' interpretations about what happened in a musical performance can be far more different from performers' interpretations than performers or other listeners might assume. PMID:27853438

  14. Human resource constraints and the prospect of task-sharing among community health workers for the detection of early signs of pre-eclampsia in Ogun State, Nigeria.

    PubMed

    Akeju, David O; Vidler, Marianne; Sotunsa, J O; Osiberu, M O; Orenuga, E O; Oladapo, Olufemi T; Adepoju, A A; Qureshi, Rahat; Sawchuck, Diane; Adetoro, Olalekan O; von Dadelszen, Peter; Dada, Olukayode A

    2016-09-30

    The dearth of health personnel in low-income countries has attracted global attention, and ways to deliver health care services more efficiently and effectively with the available personnel are being explored. Task-sharing expands the responsibilities of low-cadre health workers, allowing them to share these responsibilities with highly qualified health care providers so as to make the best use of available human resources. This is appropriate in a country like Nigeria, where there is a shortage of qualified health professionals and a huge burden of maternal mortality resulting from obstetric complications such as pre-eclampsia. This study examines the prospects for task-sharing among Community Health Extension Workers (CHEWs) for the detection of early signs of pre-eclampsia in Ogun State, Nigeria. It is part of a larger community-based trial evaluating the acceptability of community treatment for severe pre-eclampsia in Ogun State. Data were collected between 2011 and 2012 using focus group discussions: seven with CHEWs (n = 71), three with male decision-makers (n = 35), six with community leaders (n = 68), and one with members of the Society of Obstetricians and Gynaecologists of Nigeria (n = 9). In addition, interviews were conducted with the heads of the local government administration (n = 4), directors of planning (n = 4), medical officers (n = 4), and Chief Nursing Officers (n = 4). Qualitative data were analysed using NVivo version 10.0 software. The non-availability of health personnel is a major challenge and has resulted in a high proportion of facility-based care being performed by CHEWs. As a result, CHEWs often take on roles designated for senior health workers. This role expansion has exposed CHEWs to the basics of obstetric care and has resulted in informal task-sharing among health workers.
    The knowledge and ability of CHEWs to perform basic clinical assessments, such as measuring blood pressure, are not in doubt. Nevertheless, senior and junior cadres of health practitioners held divergent views about CHEWs' abilities in providing obstetric care. Similarly, various stakeholders, particularly the CHEWs themselves, raised concerns about the regulatory restrictions placed on them by the Standing Order. In general, the extent to which obstetric tasks can be shifted to community health workers will be determined by the training provided and the extent to which the observed barriers are addressed. Trial registration: NCT01911494.

  15. Part-time work and job sharing in health care: is the NHS a family-friendly employer?

    PubMed

    Branine, Mohamed

    2003-01-01

    This paper examines the nature and level of flexible employment in the National Health Service (NHS) by investigating the extent to which part-time work and job-sharing arrangements are used in the provision and delivery of health care. It attempts to explain why the NHS has an increasing number of part-timers but very few job sharers, and to weigh the advantages and disadvantages of each pattern of employment. Data collected through questionnaires and interviews from 55 NHS trusts reveal that the use of part-time work is a tradition that fits well with the cost-saving measures imposed on the management of the service, but one that has led to increasing employee dissatisfaction. Job-sharing arrangements would suit many NHS employees, since the majority are women who wish to combine family commitments with career prospects, yet very few employees have had the opportunity to job share. It is therefore concluded that, to attract and retain the quality of staff needed to ensure high performance standards in the provision and delivery of health care, the NHS should accept the diversity that exists within its workforce and take a more proactive approach to promoting a variety of flexible working practices and family-friendly policies.

  16. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piyush

    2008-01-01

    The POWER5+ processor has a faster memory bus than the previous-generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason is that in the POWER5+, the two cores on the chip share the L2 cache, the L3 cache, and the memory bus. The memory controller is also on the chip and is shared by the two cores, which serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM designed the POWER6 processor to avoid the bottlenecks caused by the L2 cache, memory controller, and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB, double that of the POWER5+), memory controller, and buffer chips, and each core runs at 4.7 GHz instead of the POWER5+'s 1.9 GHz. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system and compare it with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we used the High-Performance Computing Challenge (HPCC) benchmarks, the NAS Parallel Benchmarks (NPB), and four real-world applications: three from computational fluid dynamics and one from climate modeling.

  17. Using exploratory data analysis to identify and predict patterns of human Lyme disease case clustering within a multistate region, 2010-2014.

    PubMed

    Hendricks, Brian; Mark-Carew, Miguella

    2017-02-01

    Lyme disease is the most commonly reported vectorborne disease in the United States. The objective of our study was to identify patterns of Lyme disease reporting after multistate inclusion to mitigate potential border effects. County-level human Lyme disease surveillance data were obtained from the Kentucky, Maryland, Ohio, Pennsylvania, Virginia, and West Virginia state health departments. Rate smoothing and Local Moran's I were performed to identify clusters of reporting activity and to detect spatial outliers. A logistic generalized estimating equation was fitted to identify significant associations in disease clustering over time. The analyses identified statistically significant (P = 0.05) clusters of high reporting activity and trends over time. High reporting activity aggregated near border counties in high-incidence states, while low reporting aggregated near shared county borders in non-high-incidence states. The findings highlight the need for exploratory surveillance approaches to describe the extent to which state-level reporting affects accurate estimation of Lyme disease progression. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Voluntary Movement Frequencies in Submaximal One- and Two-Legged Knee Extension Exercise and Pedaling

    PubMed Central

    Stang, Julie; Wiig, Håvard; Hermansen, Marte; Hansen, Ernst Albin

    2016-01-01

    Understanding of the behavior and control of human voluntary rhythmic stereotyped leg movements is useful in work to improve performance, function, and rehabilitation of exercising, healthy, and injured humans. The present study aimed at adding to the existing understanding within this field. To pursue the aim, correlations between freely chosen movement frequencies in relatively simple, single-joint, one- and two-legged knee extension exercise were investigated. The same was done for more complex, multiple-joint, one- and two-legged pedaling. These particular activities were chosen because they could be considered related to some extent, as they shared a key aspect of knee extension, and because they at the same time were different. The activities were performed at submaximal intensities by healthy individuals (n = 16, eight of them women; 23.4 ± 2.7 years; 1.70 ± 0.11 m; 68.6 ± 11.2 kg). High and fair correlations (R-values of 0.99 and 0.75) occurred between frequencies generated with the dominant leg and the nondominant leg during knee extension exercise and pedaling, respectively. Fair to high correlations (R-values between 0.71 and 0.95) occurred between frequencies performed with each of the two legs in an activity and the two-legged frequency performed in the same type of activity. In general, the correlations were higher for knee extension exercise than for pedaling. Correlations between knee extension and pedaling frequencies occurred only to a modest extent. The correlations between movement frequencies generated separately by each of the legs might be interpreted to support the following working hypothesis, which was based on existing literature. It is likely that the involved central pattern generators (CPGs) of the two legs share a common frequency generator or that separate frequency generators of each leg are attuned via interneuronal connections. Further, activity type appeared to be relevant.
Thus, the apparent common rhythmogenesis for the two legs appeared to be stronger for the relatively simple single-joint activity of knee extension exercise as compared to the more complex multi-joint activity of pedaling. Finally, it appeared that the shared aspect of knee extension in the related types of activities of knee extension exercise and pedaling was insufficient to cause obvious correlations between generated movement frequencies in the two types of activities. PMID:26973486

  19. Shared leadership in a medical practice: keys to success.

    PubMed

    Daiker, Barbara L

    2009-01-01

    Medical practices operate in a complex industry and require the expertise of both physician and business leaders to be successful. Sharing leadership between these two professionals brings challenges that are best met if the environment is supportive. This support comes in the form of external aspects such as selection, role definition, organizational hierarchy, time, and process. Critical to shared leadership is communication, both its frequency and its quality. Conflicts are likely to occur, and how they are resolved determines the strength of a shared governance relationship. The reality is that finding the balance in shared governance is difficult, but with effort and commitment, it can provide the organization with the performance it hopes to achieve.

  20. A review on the benchmarking concept in Malaysian construction safety performance

    NASA Astrophysics Data System (ADS)

    Ishak, Nurfadzillah; Azizan, Muhammad Azizi

    2018-02-01

    The construction industry is one of the major industries propelling Malaysia's economy and contributes substantially to the nation's GDP growth, yet high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need for benchmarking the performance of Malaysia's construction industry, especially in terms of safety. The concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most in establishing an improved safety culture. This research was conducted to study awareness of the importance of benchmarking, to evaluate current practice and improvement, and to identify the constraints on implementing benchmarking of safety performance in the industry. Interviews with construction professionals yielded different views on the concept, and a comparison was made to show the differing understandings of the benchmarking approach and of how safety performance can be benchmarked. These views converge on one mission: to evaluate objectives identified through benchmarking that will improve an organization's safety performance. The expected result of this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.

  1. Enhancing Application Performance Using Mini-Apps: Comparison of Hybrid Parallel Programming Paradigms

    NASA Technical Reports Server (NTRS)

    Lawson, Gary; Poteat, Michael; Sosonkina, Masha; Baurle, Robert; Hammond, Dana

    2016-01-01

    In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms using the Message Passing Interface (MPI) for distributed-memory accesses and either Shared MPI (SMPI) or OpenMP for shared-memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23X was measured for MPI+SMPI, but only 10X for MPI+OpenMP.

  2. Optimized design and performance of a shared pump single clad 2 μm TDFA

    NASA Astrophysics Data System (ADS)

    Tench, Robert E.; Romano, Clément; Delavaux, Jean-Marc

    2018-05-01

    We report the design, experimental performance, and simulation of a single stage, co- and counter-pumped Tm-doped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and small signal noise figures of <3.5 dB are demonstrated. Simulations of TDFA performance agree well with the experimental data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.

  3. Clicks versus Citations: Click Count as a Metric in High Energy Physics Publishing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitton, Ayelet; /UC, San Diego /SLAC

    2011-06-22

    High-energy physicists worldwide rely on online resources such as SPIRES and arXiv to gather research and share their own publications. SPIRES is a tool designed to search the literature within high-energy physics, while arXiv provides the actual full-text documents of this literature. In high-energy physics, papers are often ranked according to the number of citations they acquire, meaning the number of times a later paper references the original. This paper investigates the correlation between the number of times a paper is clicked in order to be downloaded and the number of citations it receives following the click. It also explores how physicists truly read what they cite.

  4. The planarian regeneration transcriptome reveals a shared but temporally shifted regulatory program between opposing head and tail scenarios.

    PubMed

    Kao, Damian; Felix, Daniel; Aboobaker, Aziz

    2013-11-16

    Planarians can regenerate entire animals from a small fragment of the body. The regenerating fragment is able to create new tissues and remodel existing tissues to form a complete animal, so different fragments with very different starting components eventually converge on the same solution. In this study, we performed an extensive RNA-seq time-course on regenerating head and tail fragments to observe the differences and similarities of the transcriptional landscape between head and tail fragments during regeneration. We consolidated existing transcriptomic data for S. mediterranea to generate a high-confidence set of transcripts for use in genome-wide expression studies, and performed an RNA-seq time-course on regenerating head and tail fragments from 0 hours to 3 days. We found that the transcriptome profiles of head and tail regeneration were very different at the start of regeneration; however, an unexpected convergence of transcriptional profiles occurred at 48 hours, when head and tail fragments are still morphologically distinct. By comparing differentially expressed transcripts at various time-points, we revealed that this divergence/convergence pattern is caused by a shared regulatory program that runs early in heads and later in tails. Additionally, we performed RNA-seq on smed-prep(RNAi) tail fragments, which ultimately fail to regenerate anterior structures. We find that the gene regulation program in response to smed-prep(RNAi) displays the opposite regulatory trend to the shared regulatory program described above. Using annotation data and comparative approaches, we also identified a set of approximately 4,800 triclad-specific transcripts that were enriched amongst the genes displaying differential expression during the regeneration time-course.
The regeneration transcriptomes of head and tail fragments provide us with a rich resource for investigating the global expression changes that occur during regeneration. We show that very different regenerative scenarios utilize a shared core regenerative program. Furthermore, our consolidated transcriptome and annotations allowed us to identify triclad-specific transcripts that are enriched within this core regulatory program. Our data support the hypothesis that both conserved aspects of animal developmental programs and recent evolutionary innovations work in concert to control regeneration.

  5. Team dynamics in isolated, confined environments - Saturation divers and high altitude climbers

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Gregorich, Steven E.

    1992-01-01

    The effects of leadership dynamics and social organization factors on team performance under conditions of high altitude climbing and deep sea diving are studied. Teams of two to four members that know each other well and have a relaxed informal team structure with much sharing of responsibilities are found to do better than military teams with more than four members who do not know each other well and have a formal team structure with highly specialized rules. Professionally guided teams with more than four members, a formally defined team structure, and clearly designated role assignments did better than 'club' teams of more than four members with a fairly informal team structure and little role specialization.

  6. A Bridge Over Troubled Waters: The Vital Role of Intelligence Sharing in Shaping the Anglo-American Special Relationship

    DTIC Science & Technology

    2008-12-01

    Clark, David B. (Naval Postgraduate School, Monterey, CA)

  7. Measures of Time-Sharing Skill and Gender as Predictors of Flight Simulator Performance.

    DTIC Science & Technology

    1979-01-01

    ...well as overall equations including gender as a variable. Besides gender in the overall equations, measures of time-sharing skill were the best ...study indicated the best predictors of dual or whole-task performance were other dual-tasks. Furthermore, the particular components involved in a dual...switching between tasks, or the use of efficient response strategies" (Damos and Wickens, 1977, p. 2). Attentional flexibility. According to Keele

  8. Hybrid MPI+OpenMP Programming of an Overset CFD Solver and Performance Investigations

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Jin, Haoqiang H.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    This report describes a two-level parallelization of a Computational Fluid Dynamics (CFD) solver with multi-zone overset structured grids. The approach is based on a hybrid MPI+OpenMP programming model suitable for shared memory machines and clusters of shared memory machines. The performance of the hybrid application on an SGI Origin2000 (O2K) machine is reported using medium and large scale test problems.

  9. The Structure of Liquid and Amorphous Hafnia.

    PubMed

    Gallington, Leighanne C; Ghadar, Yasaman; Skinner, Lawrie B; Weber, J K Richard; Ushakov, Sergey V; Navrotsky, Alexandra; Vazquez-Mayagoitia, Alvaro; Neuefeind, Joerg C; Stan, Marius; Low, John J; Benmore, Chris J

    2017-11-10

    Understanding the atomic structure of amorphous solids is important in predicting and tuning their macroscopic behavior. Here, we use a combination of high-energy X-ray diffraction, neutron diffraction, and molecular dynamics simulations to benchmark the atomic interactions in the high temperature stable liquid and low-density amorphous solid states of hafnia. The diffraction results reveal that an average Hf-O coordination number of ~7 exists in both the liquid and amorphous nanoparticle forms studied. The measured pair distribution functions are compared to those generated from several simulation models in the literature. We have also performed ab initio and classical molecular dynamics simulations that show density has a strong effect on the polyhedral connectivity. The liquid shows a broad distribution of Hf-Hf interactions, while the formation of low-density amorphous nanoclusters can reproduce the sharp split peak in the Hf-Hf partial pair distribution function observed in experiment. The agglomeration of amorphous nanoparticles condensed from the gas phase is associated with the formation of both edge-sharing and corner-sharing HfO6,7 polyhedra resembling that observed in the monoclinic phase.

  10. Cofiring lignite with hazelnut shell and cotton residue in a pilot-scale fluidized bed combustor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuhal Gogebakan; Nevin Selcuk

    In this study, cofiring of high ash and sulfur content lignite with hazelnut shell and cotton residue was investigated in the 0.3 MWt METU Atmospheric Bubbling Fluidized Bed Combustion (ABFBC) Test Rig in terms of the combustion and emission performance of different fuel blends. The results reveal that cofiring of hazelnut shell and cotton residue with lignite increases the combustion efficiency and freeboard temperatures compared to those of lignite firing with limestone addition only. CO2 emission is not found sensitive to an increase in the hazelnut shell and cotton residue share of the fuel blend. Cofiring lowers SO2 emissions considerably. Cofiring of hazelnut shell reduces NO and N2O emissions; on the contrary, cofiring cotton residue results in higher NO and N2O emissions. A higher share of biomass in the fuel blend results in coarser cyclone ash particles. Hazelnut shell and cotton residue can be cofired with high ash and sulfur-containing lignite without operational problems. 32 refs., 12 figs., 11 tabs.

  11. The Structure of Liquid and Amorphous Hafnia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallington, Leighanne; Ghadar, Yasaman; Skinner, Lawrie

    Understanding the atomic structure of amorphous solids is important in predicting and tuning their macroscopic behavior. Here, we use a combination of high-energy X-ray diffraction, neutron diffraction, and molecular dynamics simulations to benchmark the atomic interactions in the high temperature stable liquid and low-density amorphous solid states of hafnia. The diffraction results reveal that an average Hf–O coordination number of ~7 exists in both the liquid and amorphous nanoparticle forms studied. The measured pair distribution functions are compared to those generated from several simulation models in the literature. We have also performed ab initio and classical molecular dynamics simulations that show density has a strong effect on the polyhedral connectivity. The liquid shows a broad distribution of Hf–Hf interactions, while the formation of low-density amorphous nanoclusters can reproduce the sharp split peak in the Hf–Hf partial pair distribution function observed in experiment. The agglomeration of amorphous nanoparticles condensed from the gas phase is associated with the formation of both edge-sharing and corner-sharing HfO6,7 polyhedra resembling that observed in the monoclinic phase.

  12. The Structure of Liquid and Amorphous Hafnia

    DOE PAGES

    Gallington, Leighanne; Ghadar, Yasaman; Skinner, Lawrie; ...

    2017-11-10

    Understanding the atomic structure of amorphous solids is important in predicting and tuning their macroscopic behavior. Here, we use a combination of high-energy X-ray diffraction, neutron diffraction, and molecular dynamics simulations to benchmark the atomic interactions in the high temperature stable liquid and low-density amorphous solid states of hafnia. The diffraction results reveal that an average Hf–O coordination number of ~7 exists in both the liquid and amorphous nanoparticle forms studied. The measured pair distribution functions are compared to those generated from several simulation models in the literature. We have also performed ab initio and classical molecular dynamics simulations that show density has a strong effect on the polyhedral connectivity. The liquid shows a broad distribution of Hf–Hf interactions, while the formation of low-density amorphous nanoclusters can reproduce the sharp split peak in the Hf–Hf partial pair distribution function observed in experiment. The agglomeration of amorphous nanoparticles condensed from the gas phase is associated with the formation of both edge-sharing and corner-sharing HfO6,7 polyhedra resembling that observed in the monoclinic phase.

  13. Remote sensing frequency sharing studies, tasks 1, 2, 5, and 6

    NASA Technical Reports Server (NTRS)

    Boyd, Douglas; Tillotson, Tom

    1986-01-01

    The following tasks are discussed: adjacent and harmonic band analysis; analysis of impact of sensor resolution on interference; development of performance criteria, interference criteria, sharing criteria, and coordination criteria; and spectrum engineering for NASA microwave sensor projects.

  14. Secondary Heat Exchanger Design and Comparison for Advanced High Temperature Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piyush Sabharwall; Ali Siahpush; Michael McKellar

    2012-06-01

    The goals of next generation nuclear reactors, such as the high temperature gas-cooled reactor and the advanced high temperature reactor (AHTR), are to increase energy efficiency in the production of electricity and to provide high temperature heat for industrial processes. The efficient transfer of energy for industrial applications depends on the ability to incorporate effective heat exchangers between the nuclear heat transport system and the industrial process heat transport system. The need for efficiency, compactness, and safety challenges the boundaries of existing heat exchanger technology, giving rise to the following study. Various studies have attempted to update the secondary heat exchanger downstream of the primary heat exchanger, mostly because its performance is strongly tied to the ability to employ more efficient conversion cycles, such as the Rankine supercritical and subcritical cycles. This study considers two types of heat exchangers, helical coiled and printed circuit, as possible options for the AHTR secondary heat exchanger, in three configurations: (1) a single heat exchanger transfers all the heat (3,400 MW(t)) from the intermediate heat transfer loop to the power conversion system or process plants; (2) two heat exchangers in a parallel configuration share the total heat of 3,400 MW(t), each transferring 1,700 MW(t); and (3) three heat exchangers in a parallel configuration share the total heat of 3,400 MW(t), each transferring 1,130 MW(t). A preliminary cost comparison is provided for all cases, along with challenges and recommendations.

  15. Testing New Programming Paradigms with NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed with the aim of scaling up to thousands of processors on both distributed and shared memory systems. Developing parallel programs on these computers remains a challenging task. Today, writing parallel programs with message passing (e.g., MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development is the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. The NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3.
Optimization of memory and cache usage was applied to several benchmarks, notably BT and SP, resulting in better sequential performance. To overcome the lack of an HPF performance model and to guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195 MHz) with the MIPSpro-f77 compiler 7.2.1 for the OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for the HPF programs.
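The outermost-loop parallelization strategy described above can be sketched as follows. OpenMP itself targets C and Fortran; this Python toy (a hypothetical stencil sweep, not an NPB kernel) uses a thread pool only to show how dividing work at the outermost index gives each worker the largest possible grain:

```python
from concurrent.futures import ThreadPoolExecutor

def relax_row(grid, i):
    """Update one row of the grid; rows are independent within a sweep,
    so the outer loop over i can be parallelized."""
    row = grid[i]
    return [0.5 * (row[j - 1] + row[j + 1]) if 0 < j < len(row) - 1 else row[j]
            for j in range(len(row))]

def parallel_sweep(grid, workers=4):
    # Parallelize at the outermost loop level for the largest granularity,
    # mirroring the OpenMP strategy described in the abstract.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: relax_row(grid, i), range(len(grid))))

grid = [[float(i + j) for j in range(6)] for i in range(4)]
serial = [relax_row(grid, i) for i in range(len(grid))]
assert parallel_sweep(grid) == serial  # same result, outer loop split across workers
```

In real OpenMP code the same decomposition is expressed by placing a `parallel do`/`parallel for` directive on the outermost loop rather than on an inner one.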

  16. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael

    1994-01-01

Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. The design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files, and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.
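The kind of per-request trace analysis described above can be sketched in a few lines. This is not the CHARISMA tooling; it is a minimal illustration, assuming hypothetical `(file, offset, size)` records and the common convention that a request is sequential when it starts exactly where the previous request on the same file ended:

```python
def fraction_sequential(trace):
    """Classify each I/O request as sequential if it begins exactly where
    the previous request on the same file left off."""
    last_end = {}           # file id -> end offset of its previous request
    sequential = total = 0
    for fileid, offset, size in trace:
        if fileid in last_end and offset == last_end[fileid]:
            sequential += 1
        total += 1
        last_end[fileid] = offset + size
    return sequential / total if total else 0.0

# Hypothetical trace: small requests, mostly sequential on file "a"
trace = [("a", 0, 512), ("a", 512, 512), ("a", 1024, 512),
         ("b", 4096, 512), ("a", 0, 512)]
print(fraction_sequential(trace))  # 2 of 5 requests continue the previous one: 0.4
```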

  17. Modeling and Dynamic Analysis of Paralleled of dc/dc Converters with Master-Slave Current Sharing Control

    NASA Technical Reports Server (NTRS)

    Rajagopalan, J.; Xing, K.; Guo, Y.; Lee, F. C.; Manners, Bruce

    1996-01-01

A simple, application-oriented, transfer-function model of paralleled converters employing Master-Slave Current-sharing (MSC) control is developed. Dynamically, the Master converter retains its original design characteristics; all the Slave converters are forced to depart significantly from their original design characteristics and behave as current-controlled current sources. Five distinct loop gains for assessing system stability and performance are identified, and their physical significance is described. A design methodology for the current-share compensator is presented. The effect of this current-sharing scheme on the 'system output impedance' is analyzed.
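The Master-Slave behavior described above (Slaves acting as current-controlled current sources that track the Master's current) can be illustrated with a toy discrete-time simulation. This is not the paper's transfer-function model; the gain, step count, and reference current are hypothetical:

```python
def simulate_msc(n_slaves=2, i_master=10.0, gain=0.5, steps=50):
    """Toy master-slave current sharing: each slave repeatedly corrects
    its output current toward the master's current, behaving as a
    current-controlled current source."""
    slaves = [0.0] * n_slaves
    for _ in range(steps):
        slaves = [i + gain * (i_master - i) for i in slaves]
    return slaves

currents = simulate_msc()
assert all(abs(i - 10.0) < 1e-3 for i in currents)  # slaves converge to the master current
total_load = 10.0 + sum(currents)                   # load current shared across all converters
```

Each slave's error toward the master shrinks geometrically (by `1 - gain` per step), which is the steady-state outcome the MSC scheme enforces; the paper's loop gains characterize the actual dynamics of that convergence.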

  18. How do high cost-sharing policies for physician care affect inpatient care use and costs among people with chronic disease?

    PubMed

    Xin, Haichang

    2015-01-01

Rapidly rising health care costs continue to be a significant concern in the United States, and high cost-sharing strategies have been widely used to address them. Since high cost-sharing policies can reduce needed care as well as unneeded care, this raises the concern of whether such policies for physician care are a good strategy for controlling costs among chronically ill patients, and in particular whether utilization and costs in inpatient care will increase in response. This study examined whether high cost sharing in physician care affects inpatient care utilization and costs differently between individuals with and without chronic conditions. Findings from this study can inform insurance benefit designs that control care utilization and costs for chronically ill individuals. Prior studies suffered from gaps that limit both the internal and external validity of their findings; this study makes a unique contribution by filling these gaps jointly. The study used data from the 2007 Medical Expenditure Panel Survey, a nationally representative sample, with a cross-sectional study design. An instrumental variable technique was used to address the endogeneity between health care utilization and cost-sharing levels. We used negative binomial regression to analyze the count data and generalized linear models for the cost data. To account for the national survey's sampling design, weights and variance estimates were adjusted. The study compared the effects of high cost-sharing policies on inpatient care utilization and costs between individuals with and without chronic conditions to answer the research question. The final study sample consisted of 4523 individuals; among them, 752 had hospitalizations. The multivariate analysis demonstrated consistent patterns.
Compared with low cost-sharing policies, high cost-sharing policies for physician care were not associated with a greater increase in inpatient care utilization (P = .86 for chronically ill people and P = .67 for healthy people, respectively) or costs (P = .38 for chronically ill people and P = .68 for healthy people, respectively). A sensitivity analysis with a 10% cost-sharing level also generated consistently insignificant results for both the chronically ill and healthy groups. Relative to nonchronically ill individuals, chronically ill individuals may increase their utilization of and expenditures on inpatient care to a similar extent in response to increased physician care cost sharing. This may be due to cost pressure from inpatient care and the short observation window. Although this study did not find evidence that high cost-sharing policies for physician care increase inpatient care differently for individuals with and without chronic conditions, this finding should be interpreted with caution. It is possible that in the long run these sick patients would demonstrate substantial demand for medical care, ultimately leading to a total cost increase for health plans. Health plans need to be cautious when designing policies for chronically ill enrollees.

  19. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve the fundamental problems. We address some of the main issues in resource allocation and job scheduling that we have encountered on two parallel computers: a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing them according to difficulty and importance, and advocating a return to fundamental issues.

  20. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

Many vision-based methods have been proposed to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All of these methods, however, require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten to thirty or more frames per second, depending on the vehicle speed, and will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both distributed-memory and shared-memory parallel computers.
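The "billions of operations per second" requirement follows from back-of-the-envelope arithmetic; a sketch with hypothetical per-frame costs (the frame size and per-pixel operation count below are illustrative, not from the paper):

```python
def required_ops_per_sec(frames_per_sec, pixels_per_frame, ops_per_pixel):
    """Estimate the sustained processing rate a vision pipeline must deliver."""
    return frames_per_sec * pixels_per_frame * ops_per_pixel

# Hypothetical workload: 30 fps, 512x512 imagery, ~200 operations per pixel
ops = required_ops_per_sec(30, 512 * 512, 200)
print(f"{ops / 1e9:.1f} GOPS")  # ~1.6 billion operations per second
```

Even modest per-pixel costs at thirty frames per second land in the billions of operations per second, which is why the authors turn to parallel hardware.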

Top