Sample records for processor sharing gps

  1. Call Admission Control on Single Node Networks under Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) Scheduler

    NASA Astrophysics Data System (ADS)

    Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi

    Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky-bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky-bucket constrained sessions. However, the delay bounds for leaky-bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling policy, like GPS, that controls the service rate in order to lower the delay bounds for leaky-bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS for leaky-bucket constrained sessions with deterministic delay requirements. The CAC algorithm determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with one for GPS in terms of schedulable region and computational complexity.
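    The delay-bound check that a CAC algorithm builds on can be sketched as follows. This is the classical single-node GPS bound for leaky-bucket (sigma, rho) sources, not the ORC-GPS algorithm of the paper; the weights and rates are illustrative assumptions.

```python
def gps_admissible(sessions, capacity):
    """Deterministic admission check for single-node GPS.

    sessions: list of (sigma, rho, weight, delay_req) tuples, where
    (sigma, rho) are the leaky-bucket burst and sustained-rate parameters.
    Under GPS, session i's guaranteed rate is g_i = w_i / sum(w) * C, and
    when g_i >= rho_i its worst-case delay is bounded by sigma_i / g_i.
    """
    if sum(rho for _, rho, _, _ in sessions) > capacity:
        return False  # stability: total sustained rate must not exceed C
    total_w = sum(w for _, _, w, _ in sessions)
    for sigma, rho, w, d_req in sessions:
        g = w / total_w * capacity
        if g < rho or sigma / g > d_req:
            return False  # rate guarantee too small or delay bound violated
    return True
```

    ORC-GPS improves on exactly this kind of bound by varying the service rate over time, which enlarges the schedulable region relative to the constant-weight check above.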

  2. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    PubMed Central

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116
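    A minimal sketch of the adaptive null-steering step at the heart of such a receiver, using the standard MVDR (Capon) beamformer rather than the authors' specific algorithm; the array geometry, diagonal loading, and signal parameters below are illustrative assumptions.

```python
import numpy as np

def mvdr_weights(snapshots, steer):
    """MVDR weights w = R^-1 s / (s^H R^-1 s).

    snapshots: complex array (elements x samples) of antenna-array data.
    steer: steering vector toward the GPS satellite. A jammer that
    dominates the sample covariance R is automatically nulled, while the
    gain toward the steering direction is held at exactly one.
    """
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # small diagonal loading keeps the covariance well conditioned
    r = r + 1e-3 * (np.trace(r).real / r.shape[0]) * np.eye(r.shape[0])
    ri_s = np.linalg.solve(r, steer)
    return ri_s / (steer.conj() @ ri_s)
```

    For a four-element array this is a 4x4 solve per update, which is well within reach of a desktop CPU/GPU combination of the kind the paper describes.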

  3. A real-time capable software-defined receiver using GPU for adaptive anti-jam GPS sensors.

    PubMed

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.

  4. Time Manager Software for a Flight Processor

    NASA Technical Reports Server (NTRS)

    Zoerne, Roger

    2012-01-01

    Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If each flight processor is initially synchronized to GPS, the processors can freewheel and maintain fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
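    The freewheeling scheme described above can be sketched as follows. The tick rate, class layout, and resync interface are hypothetical illustrations, not the flight software.

```python
TICKS_PER_SECOND = 50_000_000  # assumed time-base register frequency

class FlightClock:
    """Synchronize once to GPS, then derive timestamps by freewheeling
    on the processor's time-base register."""

    def __init__(self, gps_epoch_seconds, time_base_at_sync):
        self.epoch = gps_epoch_seconds   # GPS time at the sync instant
        self.base = time_base_at_sync    # time-base register at that instant

    def gettime(self, time_base_now):
        """Timestamp = GPS epoch at sync + elapsed time-base ticks."""
        return self.epoch + (time_base_now - self.base) / TICKS_PER_SECOND

    def resync(self, gps_epoch_seconds, time_base_now):
        """Optional extra GPS time messages tighten accuracy in flight."""
        self.epoch = gps_epoch_seconds
        self.base = time_base_now
```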

  5. Software Defined GPS Receiver for International Space Station

    NASA Technical Reports Server (NTRS)

    Duncan, Courtney B.; Robison, David E.; Koelewyn, Cynthia Lee

    2011-01-01

    JPL is providing a software defined radio (SDR) that will fly on the International Space Station (ISS) as part of the CoNNeCT project under NASA's SCaN program. The SDR consists of several modules including a Baseband Processor Module (BPM) and a GPS Module (GPSM). The BPM executes applications (waveforms) consisting of software components for the embedded SPARC processor and logic for two Virtex II Field Programmable Gate Arrays (FPGAs) that operate on data received from the GPSM. GPS waveforms on the SDR are enabled by an L-Band antenna, low noise amplifier (LNA), and the GPSM that performs quadrature downconversion at L1, L2, and L5. The GPS waveform for the JPL SDR will acquire and track L1 C/A, L2C, and L5 GPS signals from a CoNNeCT platform on ISS, providing the best GPS-based positioning of ISS achieved to date, the first use of multiple frequency GPS on ISS, and potentially the first L5 signal tracking from space. The system will also enable various radiometric investigations on ISS such as local multipath or ISS dynamic behavior characterization. In following the software-defined model, this work will create a highly portable GPS software and firmware package that can be adapted to another platform with the necessary processor and FPGA capability. This paper also describes ISS applications for the JPL CoNNeCT SDR GPS waveform, possibilities for future global navigation satellite system (GNSS) tracking development, and the applicability of the waveform components to other space navigation applications.

  6. Global positioning system for general aviation: Joint FAA-NASA Seminar. [conferences

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Programs to examine and develop means to utilize the global positioning system (GPS) for civil aviation functions are described. User requirements in this regard are discussed, the development of technologies in the areas of antennas, receivers, and signal processors for the GPS are examined, and modifications to the GPS to fit operational and design criteria are evaluated.

  7. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G; Salapura, Valentina

    2014-12-02

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device generating signals representing occurrences of events in that device, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.

  8. The geophysical processor system: Automated analysis of ERS-1 SAR imagery

    NASA Technical Reports Server (NTRS)

    Stern, Harry L.; Rothrock, D. Andrew; Kwok, Ronald; Holt, Benjamin

    1994-01-01

    The Geophysical Processor System (GPS) at the Alaska (U.S.) SAR (Synthetic Aperture Radar) Facility (ASF) uses ERS-1 SAR images as input to generate three types of products: sea ice motion, sea ice type, and ocean wave spectra. The GPS, operating automatically with minimal human intervention, delivers its output to the Archive and Catalog System (ACS) where scientists can search and order the products on line. The GPS has generated more than 10,000 products since it became operational in Feb. 1992, and continues to deliver 500 new products per month to the ACS. These products cover the Beaufort and Chukchi Seas and the western portion of the central Arctic Ocean. More geophysical processing systems are needed to handle the large volumes of data from current and future satellites. Images must be routinely and consistently analyzed to yield useful information for scientists. The current GPS is a good, working prototype on the way to more sophisticated systems.

  9. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G.; Salapura, Valentina

    2012-07-24

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device generating signals representing occurrences of events in that device, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each counting signals that represent occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and a plurality of input devices for receiving the event signals from one or more processor devices of the plurality of processor units, the input devices programmable to select event signals for receipt by one or more of the performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
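    The programmable input-mux structure the claim describes can be modeled in a few lines. The class below is an illustrative software model, not the patented hardware; names and the event encoding are assumptions.

```python
class SharedPMU:
    """Shared pool of counters; each counter is programmed to count one
    (source, event) pair from any processor core or non-processor device."""

    def __init__(self, n_counters):
        self.counts = [0] * n_counters
        self.select = {}  # counter index -> (source id, event name)

    def program(self, counter, source, event):
        """Route an event signal to a counter (the input-mux step)."""
        self.select[counter] = (source, event)
        self.counts[counter] = 0

    def on_event(self, source, event):
        """Deliver an event signal; only matching counters increment."""
        for idx, sel in self.select.items():
            if sel == (source, event):
                self.counts[idx] += 1
```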

  10. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  11. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  12. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  13. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  14. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  15. Innovative Navigation Systems to Support Digital Geophysical Mapping, ESTCP #200129, Phase II Demonstrations

    DTIC Science & Technology

    2004-09-25

    Fragments recovered from the report's front matter: Figure 2-3, "Blackhawk/Applanix GPS/INS System"; acronym-list entries (ms, millisecond; NP, navigation processor; OE, ordnance and explosive; POSLV, Applanix Positioning and Orientation...); and text noting that Phase II demonstrated a man-portable modified version of the demonstration GPS/INS positioning system, called the POSLV 310 UXO.

  16. First Results from a Hardware-in-the-Loop Demonstration of Closed-Loop Autonomous Formation Flying

    NASA Technical Reports Server (NTRS)

    Gill, E.; Naasz, Bo; Ebinuma, T.

    2003-01-01

    A closed-loop system for the demonstration of autonomous satellite formation flying technologies using hardware-in-the-loop has been developed. Making use of a GPS signal simulator with a dual radio frequency outlet, the system includes two GPS space receivers as well as a powerful onboard navigation processor dedicated to the GPS-based guidance, navigation, and control of a satellite formation in real time. The closed-loop system allows realistic simulations of autonomous formation flying scenarios, enabling research in the fields of tracking and orbit control strategies for a wide range of applications. The autonomous closed-loop formation acquisition and keeping strategy is based on Lyapunov's direct control method as applied to the standard set of Keplerian elements. This approach not only assures global and asymptotic stability of the control but also maintains valuable physical insight into the applied control vectors. Furthermore, the approach can account for system uncertainties and effectively avoids a computationally expensive solution of the two-point boundary value problem, which renders the concept particularly attractive for implementation in onboard processors. A guidance law has been developed which strictly separates the relative from the absolute motion, thus avoiding the numerical integration of a target trajectory in the onboard processor. Moreover, upon using precise kinematic relative GPS solutions, dynamical modeling or filtering is avoided, which provides for an efficient implementation of the process on an onboard processor. A sample formation flying scenario has been created aiming at the autonomous transition of a Low Earth Orbit satellite formation from an initial along-track separation of 800 m to a target distance of 100 m.
Assuming a low-thrust actuator which may be accommodated on a small satellite, a typical control accuracy of less than 5 m has been achieved which proves the applicability of autonomous formation flying techniques to formations of satellites as close as 50 m.

  17. GPS Metric Tracking Unit

    NASA Technical Reports Server (NTRS)

    2008-01-01

    As Global Positioning Satellite (GPS) applications become more prevalent for land- and air-based vehicles, GPS applications for space vehicles will also increase. The Applied Technology Directorate of Kennedy Space Center (KSC) has developed a lightweight, low-cost GPS Metric Tracking Unit (GMTU), the first of two steps in developing a lightweight, low-cost Space-Based Tracking and Command Subsystem (STACS) designed to meet Range Safety's link margin and latency requirements for vehicle command and telemetry data. The goals of STACS are to improve Range Safety operations and expand tracking capabilities for space vehicles. STACS will track the vehicle, receive commands, and send telemetry data through the space-based asset, which will dramatically reduce dependence on ground-based assets. The other step was the Low-Cost Tracking and Data Relay Satellite System (TDRSS) Transceiver (LCT2), developed by the Wallops Flight Facility (WFF), which allows the vehicle to communicate with a geosynchronous relay satellite. Although the GMTU and LCT2 were independently implemented and tested, the design collaboration of KSC and WFF engineers allowed GMTU and LCT2 to be integrated into one enclosure, leading to the final STACS. In operation, GMTU needs only a radio frequency (RF) input from a GPS antenna and outputs position and velocity data to the vehicle through a serial or pulse code modulation (PCM) interface. GMTU includes one commercial GPS receiver board and a custom board, the Command and Telemetry Processor (CTP) developed by KSC. The CTP design is based on a field-programmable gate array (FPGA) with embedded processors to support GPS functions.

  18. A wideband software reconfigurable modem

    NASA Astrophysics Data System (ADS)

    Turner, J. H., Jr.; Vickers, H.

    A wideband modem is described which provides signal processing capability for four Lx-band signals employing QPSK, MSK and PPM waveforms, and which uses a software-reconfigurable architecture for maximum system flexibility and graceful degradation. The current processor uses a 2901 and two 8086 microprocessors per channel and performs acquisition, tracking, and data demodulation for JTIDS, GPS, IFF and TACAN systems. The next-generation processor will be implemented using a VHSIC chip set employing a programmable complex array vector processor module, a GP computer module, customized gate-array modules, and a digital array correlator. This integrated processor has application to a wide number of diverse system waveforms, and will bring the benefits of VHSIC technology insertion into avionic antijam communications systems.

  19. Communications systems and methods for subsea processors

    DOEpatents

    Gutierrez, Jose; Pereira, Luis

    2016-04-26

    A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.
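    The fixed-slot TDMA discipline mentioned above can be sketched in a few lines. The slot length, frame layout, and device names are invented for illustration; the patent does not specify them.

```python
SLOT_MS = 10  # assumed slot duration on the shared bus
DEVICES = ["subsea_processor", "bop_sensors", "surface_link"]  # assumed frame

def owner_of(time_ms):
    """Return which device may transmit on the shared bus at a given time.

    Time is divided into a repeating frame of len(DEVICES) slots; each
    device transmits only during its own slot, so no arbitration or
    collision handling is needed on the bus.
    """
    frame_slot = (time_ms // SLOT_MS) % len(DEVICES)
    return DEVICES[frame_slot]
```

    The appeal of TDMA here is determinism: every device, including the safety-critical BOP path, gets guaranteed bus time on a fixed schedule.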

  20. Multiprocessor shared-memory information exchange

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.

    1989-02-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.
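    A toy single-writer model of the unidirectional buffer hand-off such a protocol provides. This is only an illustration of the publish step under an assumed double-buffer layout; the real MSMIE protocol defines explicit buffer states and handles true concurrency, which this sketch does not.

```python
class SharedExchange:
    """Writer fills the buffer that is not currently published, then
    flips a single index ("publish"), so the reader always sees the
    newest complete buffer and neither side blocks the other."""

    def __init__(self):
        self.buffers = [None, None]
        self.newest = -1  # index of the last completely written buffer

    def write(self, data):
        free = 1 - self.newest if self.newest >= 0 else 0
        self.buffers[free] = data  # fill the unpublished buffer
        self.newest = free         # single-word publish step

    def read(self):
        return self.buffers[self.newest] if self.newest >= 0 else None
```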

  1. Distributed processor allocation for launching applications in a massively connected processors complex

    DOEpatents

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  2. Digital signal processor and processing method for GPS receivers

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1989-01-01

    A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consists of an all-digital, minimum bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.
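    The "digital phase advancer" idea behind the bit-exactness property can be sketched as a numerically controlled oscillator (NCO): an integer phase accumulator advances by a fixed step each sample, so the carrier phase sequence is exactly reproducible run to run. The register width and step values below are illustrative assumptions.

```python
ACC_BITS = 32  # assumed phase-accumulator width

def nco_phases(phase_step, n_samples, acc=0):
    """Return the accumulator value after each sample.

    Phase in cycles is acc / 2**ACC_BITS; the masked integer add wraps
    exactly at one cycle, so there is no roundoff accumulation at all,
    matching the 'same bits in, same phase out' property of the patent.
    """
    out = []
    mask = (1 << ACC_BITS) - 1
    for _ in range(n_samples):
        acc = (acc + phase_step) & mask  # wrap = carrier cycle rollover
        out.append(acc)
    return out
```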

  3. Hypercluster - Parallel processing for computational mechanics

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1988-01-01

    An account is given of the development status, performance capabilities and implications for further development of NASA-Lewis' testbed 'hypercluster' parallel computer network, in which multiple processors communicate through a shared memory. Processors have local as well as shared memory; the hypercluster is expanded in the same manner as the hypercube, with processor clusters replacing the normal single processor node. The NASA-Lewis machine has three nodes with a vector personality and one node with a scalar personality. Each of the vector nodes uses four board-level vector processors, while the scalar node uses four general-purpose microcomputer boards.

  4. Ordering of guarded and unguarded stores for no-sync I/O

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2013-06-25

    A parallel computing system processes at least one store instruction. A first processor core issues a store instruction. A first queue, associated with the first processor core, stores the store instruction. A second queue, associated with a first local cache memory device of the first processor core, stores the store instruction. The first processor core updates first data in the first local cache memory device according to the store instruction. A third queue, associated with at least one shared cache memory device, stores the store instruction. The first processor core invalidates second data, associated with the store instruction, in the at least one shared cache memory. The first processor core invalidates third data, associated with the store instruction, in other local cache memory devices of other processor cores. The first processor core flushes only the first queue.

  5. GPS Ocean Reflection Experiment (GORE) Wind Explorer (WindEx) Instrument Design and Development

    NASA Astrophysics Data System (ADS)

    Ganoe, G.

    2004-12-01

    This paper describes the design and development of the WindEx instrument, and the technology implemented by it. The important design trades will be covered along with the justification for the options selected. An evaluation of the operation of the instrument, and plans for continued development and enhancements, will also be given. The WindEx instrument consists of a processor that receives data from an included GPS surface-reflection receiver, and computes ocean surface wind speeds in real time utilizing an algorithm developed at LaRC by Dr. Stephen J. Katzberg. The WindEx performs a wind-speed server function as well as acting as a repository for the client moving map applications, and providing a web page with instructions on the installation and use of the WindEx system. The server receives the GPS reflection data produced by the receiver, performs wind speed processing, then makes the wind speed data available as a moving map display to requesting client processors on the aircraft network. The client processors are existing systems used by the research personnel onboard. They can be configured to be WindEx clients by downloading the Java client application from the WindEx server. The client application provides a graphical display of a moving map that shows the aircraft position along with the position of the reflection point from the surface of the ocean where the wind speed is being estimated, and any coastlines within the field of view. Information associated with the reflection point includes the estimated wind speed, and a confidence factor that gives the researcher an idea about the reliability of the wind speed measurement. The instrument has been installed on one of NOAA's Hurricane Hunters, a Gulfstream IV, whose nickname is "Gonzo". Based at MacDill AFB, Florida, "Gonzo" flies around the periphery of the storm deploying GPS-based dropsondes which measure local winds.
The dropsondes are the "gold-standard" for determining surface winds, but can only be deployed sparingly. The GPS WindEx system allows for a continuous map between dropsonde releases as well as monitoring the ocean surface for suspicious areas. The GPS technique is insensitive to clouds or rain and can give information concerning surface conditions not available to the flight crew.

  6. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first order Gauss Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types.
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
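    A discrete first-order Gauss-Markov process of the kind mentioned for the solar-pressure scale coefficient can be simulated in a few lines; the parameter values are illustrative, not those used in the processor.

```python
import numpy as np

def gauss_markov(sigma, tau, dt, n, seed=0):
    """Simulate x_{k+1} = exp(-dt/tau) * x_k + w_k.

    The driving-noise variance q = sigma^2 * (1 - phi^2) is chosen so the
    process stays stationary with standard deviation sigma and correlation
    time tau; this is the usual form used in sequential filters.
    """
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)  # draw from the stationary distribution
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    return x
```

    A random walk, used here for the tropospheric correction, is the limiting case tau -> infinity (phi -> 1) with a fixed driving-noise variance.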

  7. Conditional load and store in a shared memory

    DOEpatents

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
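    A toy model of the reservation-register mechanism described in the abstract. This is a single-threaded illustration under assumed semantics (a successful store clears every reservation on that address); the patented hardware handles many more invalidation cases.

```python
class SharedCache:
    """Shared memory cache with per-processor reservation registers
    backing load-reserve / store-conditional."""

    def __init__(self):
        self.mem = {}
        self.reservation = {}  # processor id -> reserved address

    def load_reserve(self, proc, addr):
        """Record a reservation for this processor and return the data."""
        self.reservation[proc] = addr
        return self.mem.get(addr, 0)

    def store_conditional(self, proc, addr, value):
        """Store only if this processor still holds a reservation on addr."""
        if self.reservation.get(proc) != addr:
            return False  # reservation lost or never made: store fails
        # a successful store invalidates all other reservations on addr
        for p, a in list(self.reservation.items()):
            if a == addr:
                del self.reservation[p]
        self.mem[addr] = value
        return True
```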

  8. TOGA - A GNSS Reflections Instrument for Remote Sensing Using Beamforming

    NASA Technical Reports Server (NTRS)

    Esterhuizen, S.; Meehan, T. K.; Robison, D.

    2009-01-01

    Remotely sensing the Earth's surface using GNSS signals as bi-static radar sources is one of the most challenging applications for radiometric instrument design. As part of NASA's Instrument Incubator Program, our group at JPL has built a prototype instrument, TOGA (Time-shifted, Orthometric, GNSS Array), to address a variety of GNSS science needs. Observing GNSS reflections is major focus of the design/development effort. The TOGA design features a steerable beam antenna array which can form a high-gain antenna pattern in multiple directions simultaneously. Multiple FPGAs provide flexible digital signal processing logic to process both GPS and Galileo reflections. A Linux OS based science processor serves as experiment scheduler and data post-processor. This paper outlines the TOGA design approach as well as preliminary results of reflection data collected from test flights over the Pacific ocean. This reflections data demonstrates observation of the GPS L1/L2C/L5 signals.

  9. Method for prefetching non-contiguous data structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.
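    The pointer-embedded prefetch idea can be illustrated with a small software model (the patent describes hardware; the names here are invented for illustration): each memory line carries a pointer to its successor, and the walker "prefetches" that successor before using the current line.

```python
class MemoryLine:
    """A memory line extended with a prefetch pointer, as in the scheme above."""
    def __init__(self, payload, next_line=None):
        self.payload = payload
        self.next_line = next_line   # index of the line to prefetch next, or None

def walk_with_prefetch(memory, start):
    """Follow the embedded pointers, prefetching each successor before it is used."""
    visited, prefetched = [], []
    idx = start
    while idx is not None:
        line = memory[idx]
        if line.next_line is not None:
            prefetched.append(line.next_line)   # hardware would fetch this line now
        visited.append(line.payload)
        idx = line.next_line
    return visited, prefetched

# A non-contiguous but repetitive access pattern: line 0 -> line 5 -> line 2.
memory = {0: MemoryLine("a", 5), 5: MemoryLine("b", 2), 2: MemoryLine("c")}
assert walk_with_prefetch(memory, 0) == (["a", "b", "c"], [5, 2])
```

    Because the pointer is stored with the data, the prefetcher needs no predictive heuristic: the pattern is replayed exactly, however scattered the lines are.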

  10. On nonlinear finite element analysis in single-, multi- and parallel-processors

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R.; Islam, M.; Salama, M.

    1982-01-01

    Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.
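    The observation that each Newton-Raphson step corresponds to solving a linear problem can be shown with a one-degree-of-freedom sketch (a hypothetical hardening spring, not an example from the paper): the residual is linearized about the current state, and the "linear solve" degenerates to a scalar division.

```python
def newton_equilibrium(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: each step solves the linearized problem
    K_t * du = -r(u), here in a single unknown."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u
        u -= r / tangent(u)      # the linear solve of this iteration
    return u

# Hardening spring: internal force k*u + c*u^3 balancing an applied load P.
k, c, P = 2.0, 0.5, 3.0
residual = lambda u: k * u + c * u**3 - P
tangent = lambda u: k + 3 * c * u**2    # consistent tangent stiffness
u = newton_equilibrium(residual, tangent, u0=1.0)
assert abs(k * u + c * u**3 - P) < 1e-8
```

    In the finite element setting the scalars become vectors and matrices, so each iteration is exactly one assembly plus one linear system solve, which is why the memory/data-flow organization discussed in the paper matters.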

  11. Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Butler, Bryan P.

    1990-01-01

    The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.

  12. Design and realization of the baseband processor in satellite navigation and positioning receiver

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Hu, Xiulin; Li, Chen

    2007-11-01

    This paper focuses on the design and realization of the baseband processor in a satellite navigation and positioning receiver. The baseband processor is the most important part of the satellite positioning receiver; its main functions include multi-channel digital down-conversion (DDC), acquisition, code tracking, carrier tracking, and demodulation. The realization is based on an Altera FPGA device, which allows the system to be improved and upgraded without modifying the hardware. It embodies the theory of software-defined radio (SDR) and puts spread-spectrum theory into practice. The paper emphasizes the FPGA realization of the baseband processor, presenting the design flow in detail, from chip selection through design entry, debugging, and synthesis. Additionally, it details the realization of a digital PLL to explain a method for reducing FPGA resource consumption. Finally, the paper presents the synthesis results. This design has been used with BD-1, BD-2, and GPS.
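    As a loose illustration of what a carrier-tracking digital PLL does (this is a generic first-order loop, not the paper's FPGA design; all names and the gain value are assumptions), the loop measures the phase error between the incoming carrier and a local oscillator and feeds a fraction of it back:

```python
import cmath

def dpll(samples, gain=0.5):
    """First-order digital PLL sketch: steer a local oscillator's phase toward
    an incoming carrier by feeding back a fraction of each phase error."""
    est = 0.0
    errors = []
    for s in samples:
        err = cmath.phase(s * cmath.exp(-1j * est))  # phase discriminator
        errors.append(err)
        est += gain * err                            # loop filter + NCO update
    return est, errors

true_phase = 1.0
samples = [cmath.exp(1j * true_phase)] * 20          # constant-phase carrier
est, errors = dpll(samples)
assert abs(est - true_phase) < 1e-3                  # loop has locked
assert abs(errors[-1]) < abs(errors[0])              # error shrinks each step
```

    In a fixed-point FPGA implementation the discriminator and the complex exponential would be replaced by table lookups or CORDIC stages, which is where the resource-consumption trade-offs discussed in the paper arise.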

  13. Configurable Multi-Purpose Processor

    NASA Technical Reports Server (NTRS)

    Valencia, J. Emilio; Forney, Christopher; Morrison, Robert; Birr, Richard

    2010-01-01

    Advancements in technology have allowed the miniaturization of systems used in aerospace vehicles. This technology is driven by the need for next-generation systems that provide reliable, responsive, and cost-effective range operations while providing increased capabilities such as simultaneous mission support, increased launch trajectories, improved launch and landing opportunities, etc. Leveraging the newest technologies, the command and telemetry processor (CTP) concept provides a compact, flexible, and integrated solution for flight command and telemetry systems and range systems. The CTP is a relatively small circuit board that serves as a processing platform for high-dynamic, high-vibration environments. The CTP can be reconfigured and reprogrammed, allowing it to be adapted for many different applications. The design is centered around a configurable field-programmable gate array (FPGA) device that contains numerous logic cells that can be used to implement traditional integrated circuits. The FPGA contains two PowerPC processors that run the VxWorks real-time operating system and execute software programs specific to each application. The CTP was designed and developed specifically to provide telemetry functions; namely, the command processing, telemetry processing, and GPS metric tracking of a flight vehicle. However, it can be used as a general-purpose processor board to perform numerous functions implemented in either hardware or software using the FPGA's processors and/or logic cells. Functionally, the CTP was designed for range safety applications where it would ultimately become part of a vehicle's flight termination system. Consequently, the major functions of the CTP are to perform the forward-link command processing, GPS metric tracking, return-link telemetry data processing, error detection and correction, data encryption/decryption, and initiation of flight termination action commands.
Also, the CTP had to be designed to survive and operate in a launch environment. Additionally, the CTP was designed to interface with the WFF (Wallops Flight Facility) custom-designed transceiver board which is used in the Low Cost TDRSS Transceiver (LCT2), also developed by WFF. The LCT2's transceiver board demodulates commands received from the ground via the forward link and sends them to the CTP, where they are processed. The CTP inputs and processes data from the inertial measurement unit (IMU) and the GPS receiver board, generates status data, and then sends the data to the transceiver board, where it is modulated and sent to the ground via the return link. Overall, the CTP combines processing with the ability to interface to a GPS receiver, an IMU, and a pulse code modulation (PCM) communication link, while providing the capability to support common interfaces, including Ethernet and serial interfaces, in a relatively small, lightweight package.

  14. Toward shared care for people with cancer: developing the model with patients and GPs.

    PubMed

    Hall, Susan J; Samuel, Leslie M; Murchie, Peter

    2011-10-01

    The number of people surviving cancer for extended periods is increasing. Consequently, due to workload and quality issues, there is considerable interest in alternatives to traditional secondary care-led cancer follow-up. To explore the views of potential recipients of shared follow-up of cancer. To conduct a modelling exercise for shared follow-up and to explore the opinions and experiences of both the patients and GPs involved. Semi-structured audio-taped telephone or face-to-face interviews were conducted with 18 patients with a range of cancers currently attending for structured follow-up in secondary care. Six GPs and five patients (four with melanoma and one with stable metastatic colorectal cancer) took part in a shared follow-up modelling exercise. During the modelling exercise, the GPs attended four review meetings, which included brief training seminars, and at the conclusion 10 individuals took part in semi-structured audio-taped telephone or face-to-face interviews. Many rural patients, and some urban patients, would appreciate follow-up being available nearer to home, with the associated benefits of time saved, easier parking, and continuity of care. Patients have concerns related to the level of extra training received by the GP and loss of contact with their consultant. GPs have concerns about gaining and maintaining the clinical skills needed to conduct follow-up, especially if the numbers of patients seen are small. They also have concerns about lack of support from other GPs, and some administrative and organizational issues. Many patients would be willing to have GPs share their cancer follow-up, with the caveat that they had received extra training and were appropriately supported by secondary care specialists. Patients attending shared care clinics appreciated a local service and longer appointment times. GPs stress the importance of maintaining their own clinical skills and of reliable clinical and administrative support from secondary care.

  15. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted dealt with rule-based expert systems and investigated algorithms that may lead to their effective parallelization. Both the forward- and backward-chained control paradigms were examined in the course of this work, along with the computer architectures best suited to the developed algorithms. Two experimental vehicles were developed to facilitate this research: Backpac, a parallel backward-chained rule-based reasoning system, and Datapac, a parallel forward-chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct future; applying future to a function causes that function to be evaluated as a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors: an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32-processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines, while the Multimax has all its processors attached to a common bus. All are shared-memory machines, but they differ in how the memory is shared and where the shared memory resides. The main results of the investigations come from experiments on the 10-processor Encore and on the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped-down version of EMYCIN.

  16. Integrated Reconfigurable Aperture, Digital Beam Forming, and Software GPS Receiver for UAV Navigation

    DTIC Science & Technology

    2007-12-11

    Implemented both carrier and code phase tracking loops for performance evaluation of a minimum-power beamforming algorithm and a null steering algorithm. [Figure residue omitted: Fig. 5, schematic of a K-element antenna array spatial adaptive processor; Fig. 6, schematic of a K-element antenna array space-time adaptive processor.]

  17. Seeing through the glass darkly? A qualitative exploration of GPs' drinking and their alcohol intervention practices.

    PubMed

    Kaner, Eileen; Rapley, Tim; May, Carl

    2006-08-01

    Brief alcohol intervention is influenced by patients' personal characteristics as well as their clinical risk. Risk-drinkers from higher social-status groups are less likely to receive brief intervention from GPs than those from lower social-status groups. Thus GPs' perception of social similarity or distance may influence brief intervention. To explore the role that GPs' drinking behaviour plays in their recognition of alcohol-related risk in patients. A qualitative interview study with 29 GPs recruited according to maximum variation sampling. All interviews were audio-recorded and transcribed verbatim. Analysis was inductive, with constant comparison within and between themes plus deviant case analysis. Analysis continued until category saturation was reached. GPs described a range of personal drinking practices that broadly mirrored population drinking patterns. Many saw themselves as part of mainstream society, sharing in culturally sanctioned behaviour. For some GPs, shared drinking practices could increase empathy for patients who drank, and facilitate discussion about alcohol. However, several GPs regarded themselves as distinct from 'others', separating their own drinking from that of patients. Several GPs described a form of benchmarking, wherein only patients who drank more than, or differently from, themselves were felt to be 'at risk'. Alcohol is clearly a complex and emotive health and social issue, and GPs are not immune to its effects. For some GPs, shared drinking behaviour can act as a window of opportunity, enabling insight on alcohol issues and facilitating discussion. However, other GPs may see through the glass more darkly and selectively recognize risk only in those patients who are least like them.

  18. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
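    The conservative (Chandy-Misra) synchronization rule the experiments rely on can be sketched minimally: a logical process may consume an event only when it is the earliest across all of its input channels, and an empty channel blocks progress (in the full algorithm, null messages advance empty channels' clocks; that machinery, and the names below, are illustrative omissions/assumptions).

```python
def conservative_consume(channels):
    """Minimal sketch of the Chandy-Misra safety rule. Each channel is a
    time-ordered list of (timestamp, event) pairs; the process repeatedly
    consumes the earliest head event, and blocks once any channel is empty
    (null messages, which would unblock it, are omitted for brevity)."""
    processed = []
    while all(ch for ch in channels):          # an empty channel blocks the process
        # The safe event is the earliest head across all input channels.
        i = min(range(len(channels)), key=lambda j: channels[j][0][0])
        processed.append(channels[i].pop(0))
    return processed

# Two input channels feeding one simulated queueing node.
a = [(1, "arrive"), (4, "arrive")]
b = [(2, "depart"), (3, "depart")]
assert [t for t, _ in conservative_consume([a, b])] == [1, 2, 3]
```

    Note that the event at time 4 is left unconsumed: once channel b empties, the process cannot prove no earlier event will arrive there, which is exactly the blocking behavior that null messages (and the paper's synchronization overhead) exist to address.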

  19. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  20. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  1. A message passing kernel for the hypercluster parallel processing test bed

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Quealy, Angela; Cole, Gary L.

    1989-01-01

    A Message-Passing Kernel (MPK) for the Hypercluster parallel-processing test bed is described. The Hypercluster is being developed at the NASA Lewis Research Center to support investigations of parallel algorithms and architectures for computational fluid and structural mechanics applications. The Hypercluster resembles the hypercube architecture except that each node consists of multiple processors communicating through shared memory. The MPK efficiently routes information through the Hypercluster, using a message-passing protocol when necessary and faster shared-memory communication whenever possible. The MPK also interfaces all of the processors with the Hypercluster operating system (HYCLOPS), which runs on a Front-End Processor (FEP). This approach distributes many of the I/O tasks to the Hypercluster processors and eliminates the need for a separate I/O support program on the FEP.
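    The MPK's core routing decision, use shared memory when both processors are on the same node and the message-passing protocol otherwise, can be sketched as follows (a simplified model; the function and parameter names are invented, not the MPK's actual API):

```python
def route(src, dst, payload, node_of, shared_queues, network_send):
    """Deliver payload from processor src to processor dst: use the fast
    shared-memory queue when both live on the same node, otherwise fall
    back to the message-passing protocol (here a stand-in callback)."""
    if node_of[src] == node_of[dst]:
        shared_queues[dst].append(payload)   # same node: shared-memory hand-off
        return "shared"
    network_send(src, dst, payload)          # different nodes: message passing
    return "message"

# Processors 0 and 1 share node "A"; processor 2 sits on node "B".
node_of = {0: "A", 1: "A", 2: "B"}
queues = {p: [] for p in node_of}
sent = []
assert route(0, 1, "hello", node_of, queues, lambda *m: sent.append(m)) == "shared"
assert route(0, 2, "hi", node_of, queues, lambda *m: sent.append(m)) == "message"
assert queues[1] == ["hello"] and sent == [(0, 2, "hi")]
```

    The point of the optimization is that the intra-node path touches only memory, so its latency is a queue append rather than a protocol round trip.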

  2. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Crab Quota Share (QS), Processor QS (PQS... established based on the regional designations determined on August 1, 2005. QS or PQS issued after this date... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  3. The GPS Burst Detector W-Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrady, D.D.; Phipps, P.

    1994-08-01

    The NAVSTAR satellites have two missions: navigation and nuclear detonation detection. The main objective of this paper is to describe one of the key elements of the Nuclear Detonation Detection System (NDS), the Burst Detector W-Sensor (BDW) that was developed for the Air Force Space and Missile Systems Center, its mission on GPS Block IIR, and how it utilizes GPS timing signals to precisely locate nuclear detonations (NUDET). The paper will also cover the interface to the Burst Detector Processor (BDP), which links the BDW to the ground station where the BDW is controlled and where data from multiple satellites are processed to determine the location of the NUDET. The Block IIR BDW is the culmination of a development program that has produced a state-of-the-art, space-qualified digital receiver/processor that dissipates only 30 Watts, weighs 57 pounds, and has a 12 in. × 14.2 in. × 7.16 in. footprint. The paper will highlight several of the key multilayer printed circuit cards without which the required power, weight, size, and radiation requirements could not have been met. In addition, key functions of the system software will be covered. The paper will be concluded with a discussion of the high speed digital signal processing and algorithm used to determine the time-of-arrival (TOA) of the electromagnetic pulse (EMP) from the NUDET.

  4. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  5. Characterization of Stationary Distributions of Reflected Diffusions

    DTIC Science & Technology

    2014-01-01

    Reiman, M. I. (2003). Fluid and heavy traffic limits for a generalized processor sharing model. Ann. Appl. Probab., 13, 100-139. [37] Ramanan, K. and Reiman, M. I. (2008). The heavy traffic limit of an unbalanced generalized processor sharing model. Ann. Appl. Probab., 18, 22-58. [38] Reed, J. and ... Control and Computing. [39] Reiman, M. I. and Williams, R. J. (1988). A boundary property of semimartingale reflecting Brownian motions. Probab. Theor...

  6. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    NASA Astrophysics Data System (ADS)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transfered in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
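    The two ideas in the abstract, the Verlet neighbour list and the master-slave split of the nonbonded loop, can be sketched together (a 1-D toy, with invented names and a static round-robin partition standing in for the paper's actual scheduling):

```python
def build_neighbour_list(positions, cutoff, skin=0.3):
    """Verlet neighbour list: pair (i, j) is listed if within cutoff + skin,
    so the expensive O(N^2) search need only be redone every few steps."""
    r_list = cutoff + skin
    neighbours = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(positions[i] - positions[j]) <= r_list:
                neighbours.append((i, j))
    return neighbours

def split_for_slaves(neighbours, num_slaves):
    """Master-slave division of the nonbonded loop: slave k takes every
    num_slaves-th pair, a simple static load balance."""
    return [neighbours[k::num_slaves] for k in range(num_slaves)]

# 1-D toy system; the cutoff is chosen so only the close pairs interact.
positions = [0.0, 0.5, 3.0, 3.4]
pairs = build_neighbour_list(positions, cutoff=1.0)
assert pairs == [(0, 1), (2, 3)]
chunks = split_for_slaves(pairs, 2)
assert chunks == [[(0, 1)], [(2, 3)]]
```

    In the shared-memory setting of the paper, each slave would loop over its chunk of pairs and accumulate forces into shared arrays, with the master handling everything else.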

  7. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  8. Medication decision making and patient outcomes in GP, nurse and pharmacist prescriber consultations.

    PubMed

    Weiss, Marjorie C; Platt, Jo; Riley, Ruth; Chewning, Betty; Taylor, Gordon; Horrocks, Susan; Taylor, Andrea

    2015-09-01

    Aim: The aims of this study were twofold: (a) to explore whether specific components of shared decision making were present in consultations involving nurse prescribers (NPs), pharmacist prescribers (PPs) and general practitioners (GPs) and (b) to relate these to self-reported patient outcomes including satisfaction, adherence and patient perceptions of practitioner empathy. There are a range of ways for defining and measuring the process of concordance, or shared decision making, as it relates to decisions about medicines. As a result, demonstrating a convincing link between shared decision making and patient benefit is challenging. In the United Kingdom, nurses and pharmacists can now take on a prescribing role, engaging in shared decision making. Given the different professional backgrounds of GPs, NPs and PPs, this study sought to explore the process of shared decision making across these three prescriber groups. Analysis of audio-recordings of consultations in primary care in South England between patients and GPs, NPs and PPs. Analysis of patient questionnaires completed post consultation. Findings: A total of 532 consultations were audio-recorded with 20 GPs, 19 NPs and 12 PPs. Prescribing decisions occurred in 421 (79%). Patients were given treatment options in 21% (102/482) of decisions, the prescriber elicited the patient's treatment preference in 18% (88/482) and the patient expressed a treatment preference in 24% (118/482) of decisions. PPs were more likely to ask for the patient's preference about their treatment regimen (χ²=6.6, P=0.036, Cramer's V=0.12) than either NPs or GPs. Of the 275 patient questionnaires, 192 (70%) could be matched with a prescribing decision. NP patients had higher satisfaction levels than patients of GPs or PPs. More time describing treatment options was associated with increased satisfaction, adherence and greater perceived practitioner empathy.
While defining, measuring and enabling the process of shared decision making remains challenging, it may have patient benefit.

  9. Use of a 17-Gene Prognostic Assay in Contemporary Urologic Practice: Results of an Interim Analysis in an Observational Cohort.

    PubMed

    Eure, Gregg; Germany, Raymond; Given, Robert; Lu, Ruixiao; Shindel, Alan W; Rothney, Megan; Glowacki, Richard; Henderson, Jonathan; Richardson, Tim; Goldfischer, Evan; Febbo, Phillip G; Denes, Bela S

    2017-09-01

    To study the impact of genomic testing in shared decision making for men with clinically low-risk prostate cancer (PCa). Patients with clinically low-risk PCa were enrolled in a prospective, multi-institutional study of a validated 17-gene tissue-based reverse transcription polymerase chain reaction assay (Genomic Prostate Score [GPS]). In this paper we report on outcomes in the first 297 patients enrolled in the study with valid 17-gene assay results and decision-change data. The primary end points were shared decision on initial management and persistence on active surveillance (AS) at 1 year post diagnosis. AS utilization and persistence were compared with similar end points in a group of patients who did not have genomic testing (baseline cohort). Secondary end points included perceived utility of the assay and patient decisional conflict before and after testing. One-year results were available on 258 patients. A shift between the initial recommendation and the shared decision occurred in 23% of patients. Utilization of AS was higher in the GPS-tested cohort than in the untested baseline cohort (62% vs 40%). The proportion of men who selected and persisted on AS at 1 year was 55% and 34% in the GPS and baseline cohorts, respectively. Physicians reported that GPS was useful in 90% of cases. Mean decisional conflict scores declined in patients after GPS testing. Patients who received GPS testing were more likely to select and persist on AS for initial management compared with a matched baseline group. These data indicate that GPS helps guide shared decisions in clinically low-risk PCa.

  10. Vascular system modeling in parallel environment - distributed and shared memory approaches

    PubMed Central

    Jurczuk, Krzysztof; Kretowski, Marek; Bezy-Wendling, Johanne

    2011-01-01

    The paper presents two approaches in parallel modeling of vascular system development in internal organs. In the first approach, new parts of tissue are distributed among processors and each processor is responsible for perfusing its assigned parts of tissue to all vascular trees. Communication between processors is accomplished by passing messages and therefore this algorithm is perfectly suited for distributed memory architectures. The second approach is designed for shared memory machines. It parallelizes the perfusion process during which individual processing units perform calculations concerning different vascular trees. The experimental results, performed on a computing cluster and multi-core machines, show that both algorithms provide a significant speedup. PMID:21550891

  11. Error recovery in shared memory multiprocessors using private caches

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1990-01-01

    The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
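    The core invariant of cache-based checkpointing can be sketched in a few lines (a deliberately simplified model with invented names; the paper's scheme also handles coherence and checkpoint identifiers): dirty lines stay in the private cache and reach shared memory only at a checkpoint, so discarding the cache rolls the processor back to the last committed state.

```python
class CheckpointedCache:
    """Toy model: writes are buffered in a private cache and committed to
    shared memory only at checkpoints; recovery discards uncommitted work."""

    def __init__(self):
        self.memory = {}    # committed (checkpointed) state
        self.cache = {}     # uncommitted writes since the last checkpoint

    def write(self, addr, value):
        self.cache[addr] = value

    def read(self, addr):
        # Reads see the newest value: cache first, then committed memory.
        return self.cache.get(addr, self.memory.get(addr))

    def checkpoint(self):
        self.memory.update(self.cache)   # commit all dirty lines
        self.cache.clear()

    def recover(self):
        self.cache.clear()               # drop uncommitted work: rollback

c = CheckpointedCache()
c.write("x", 1)
c.checkpoint()          # state {x: 1} is now safe
c.write("x", 99)        # speculative work after the checkpoint
c.recover()             # transient fault detected: roll back
assert c.read("x") == 1
```

    Because no uncommitted value ever escapes the private cache, a rollback on one processor cannot force rollbacks on others, which is the "prevents rollback propagation" property claimed above.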

  12. Management of patients with sore throats in relation to guidelines: an interview study in Sweden.

    PubMed

    Hedin, Katarina; Strandberg, Eva Lena; Gröndal, Hedvig; Brorsson, Annika; Thulesius, Hans; André, Malin

    2014-12-01

To explore how a group of Swedish general practitioners (GPs) manage patients with a sore throat in relation to current guidelines, as expressed in interviews. Qualitative content analysis was used to analyse semi-structured interviews. Swedish primary care. A strategic sample of 25 GPs. Perceived management of sore throat patients. It was found that nine of the interviewed GPs were adherent to current guidelines for sore throat and 16 were non-adherent. The two groups differed in terms of guideline knowledge, which was shared within the team for adherent GPs while idiosyncratic knowledge dominated for the non-adherent GPs. Adherent GPs had no or low concern for bacterial infections and differential diagnosis, whilst non-adherent GPs believed that in patients with a sore throat any bacterial infection should be identified and treated with antibiotics. Patient history and examination were mainly targeted by adherent GPs, whilst for non-adherent GPs they were often considered redundant. Non-adherent GPs reported problems getting patients to abstain from antibiotics, whilst adherent GPs reported no such problems. This interview study of sore throat management in a strategically sampled group of Swedish GPs showed that while two-thirds were non-adherent and had a liberal attitude to antibiotics, one-third were guideline-adherent with a restrictive view of antibiotics. Non-adherent GPs revealed significant knowledge gaps. Adherent GPs had discussed guidelines within the primary care team while non-adherent GPs had not. Guideline implementation thus seemed to be promoted by knowledge shared in team discussions.

  13. System and method for programmable bank selection for banked memory subsystems

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Hoenicke, Dirk; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan

    2010-09-07

A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment to access memory storage distributed across the one or more memory storage structures.
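The "first logic device" described above amounts to extracting programmable bit positions from a physical address and concatenating them into a bank-select value. A minimal sketch of that idea (a toy model, not the patented logic; the bit positions are made up):

```python
def make_bank_selector(bit_positions):
    """Return a function that extracts the given physical-address bits and
    concatenates them into a bank-select value (the 'first logic device')."""
    def select(addr):
        bank = 0
        for i, pos in enumerate(bit_positions):
            bank |= ((addr >> pos) & 1) << i
        return bank
    return select

# Hypothetical configuration: select among 4 banks using address bits 6
# and 13, e.g. to spread power-of-two strides across banks.
selector = make_bank_selector([6, 13])
assert selector(0) == 0
assert selector(1 << 6) == 1
assert selector(1 << 13) == 2
assert selector((1 << 6) | (1 << 13)) == 3
```

Making the bit positions programmable is the point of the patent: different access patterns hash badly on different fixed bits, so software can re-map which address bits steer the bank choice.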

  14. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subbash; Ciotti, Robert

    2006-01-01

We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify limiting factors and bottlenecks in the interconnects of these systems, as well as to compare the interconnects. We measured network bandwidth using different numbers of communicating processors and different communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of network bandwidth and topology on the overall performance of each interconnect.

  15. Patient-centeredness to anticipate and organize an end-of-life project for patients receiving at-home palliative care: a phenomenological study.

    PubMed

    Oude Engberink, Agnès; Badin, Mélanie; Serayet, Philippe; Pavageau, Sylvain; Lucas, François; Bourrel, Gérard; Norton, Joanna; Ninot, Grégory; Senesse, Pierre

    2017-02-23

The development of end-of-life primary care is a socio-medical and ethical challenge. However, general practitioners (GPs) face many difficulties when initiating appropriate discussion on proactive shared palliative care. Anticipating palliative care is increasingly important given the ageing population and is an aim shared by many countries. We aimed to examine how French GPs approached and provided at-home palliative care. We inquired about their strategy for delivering care, and the skills and resources they used to devise new care strategies. Twenty-one GPs from the South of France, recruited by phone according to their various experiences of palliative care, agreed to participate. Semi-structured interview transcripts were examined using a phenomenological approach inspired by Grounded theory, and further studied with semiopragmatic analysis. Offering palliative care was perceived by GPs as a moral obligation. They felt vindicated in a process rooted in the paradigm values of their profession. This study yields two key findings: firstly, the patient-centred approach facilitated the anticipatory discussion of any potential event or intervention, which the GPs openly discussed with patients and their relatives; secondly, this approach contributed to building an "end-of-life project" meeting patients' wishes and needs. The GPs all shared the idea that the end-of-life process required human presence and recommended that at-home care be coordinated and shared by multi-professional referring teams. The main tenets of palliative care as provided by GPs are a patient-centred approach in the anticipatory discussion of potential events, personalized follow-up with referring multi-professional teams, and the collaborative design of an end-of-life project meeting the aspirations of the patient and his or her family.
Consequently, coordination strategies involving specialized teams, GPs and families should be modelled according to the specificities of each care system.

  16. Solutions and debugging for data consistency in multiprocessors with noncoherent caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.

    1995-02-01

We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.
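The false-sharing combining problem described above can be illustrated with a small sketch (an invented toy, not the paper's mechanism): two processors hold copies of the same block and write disjoint bytes, and software must merge both sets of writes by diffing each copy against the original block.

```python
def merge_block(original, copy_a, copy_b):
    """Software resolution of false sharing: two processors updated
    disjoint bytes of the same block; combine both sets of writes by
    diffing each copy against the original block contents."""
    merged = bytearray(original)
    for i in range(len(original)):
        if copy_a[i] != original[i] and copy_b[i] != original[i]:
            # both wrote the same byte: that is true sharing, a real race
            raise ValueError(f"true sharing conflict at byte {i}")
        if copy_a[i] != original[i]:
            merged[i] = copy_a[i]
        elif copy_b[i] != original[i]:
            merged[i] = copy_b[i]
    return bytes(merged)

orig = b"\x00\x00\x00\x00"
a    = b"\xaa\x00\x00\x00"   # processor A wrote byte 0
b    = b"\x00\x00\xbb\x00"   # processor B wrote byte 2
assert merge_block(orig, a, b) == b"\xaa\x00\xbb\x00"
```

Note the conflict branch: when both copies changed the same byte the writes cannot be combined, which is exactly why detecting false sharing (combinable) versus a true race (not combinable) matters for the debugging environment the paper describes.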

  17. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Simpson, James

    2010-01-01

The Autonomous Flight Safety System (AFSS) is an independent, self-contained subsystem mounted onboard a launch vehicle. AFSS was developed by and is owned by the US Government. It autonomously makes flight termination/destruct decisions using configurable software-based rules implemented on redundant flight processors, using data from redundant GPS/IMU navigation sensors. AFSS implements rules determined by the appropriate Range Safety officials.

  18. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API-based programming.

  19. An efficient ASIC implementation of 16-channel on-line recursive ICA processor for real-time EEG system.

    PubMed

    Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

This paper proposes an efficient very-large-scale integration (VLSI) design: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to real-time EEG systems for separating artifacts because of its highly efficient, real-time processing. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed ORICA processor achieves greater effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen-channel random signals containing 8 super-Gaussian and 8 sub-Gaussian components are used to analyze the dependence of the source components, and the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC consumes 15.72 mW at a 100 MHz operating frequency.

  20. A qualitative study to explore influences on general practitioners' decisions to prescribe new drugs.

    PubMed

    Jacoby, Ann; Smith, Monica; Eccles, Martin

    2003-02-01

    Ensuring appropriate prescribing is an important challenge for the health service, and the need for research that takes account of the reasons behind individual general practitioners' (GPs) prescribing decisions has been highlighted. To explore differences among GPs in their decisions to prescribe new drugs. Qualitative approach, using in-depth semistructured interviews. Northern and Yorkshire Health Authority Region. Participants were identified from a random sample of 520 GPs in a quantitative study of patterns of uptake of eight recently introduced drugs. Purposeful sampling ensured inclusion of GPs prescribing any of the eight drugs and working in a range of practice settings. Fifty-six GPs were interviewed, using a topic guide. Interviews were recorded on audiotape. Transcribed text was methodically coded and data were analysed by constantly comparing emerging themes. Both low and high prescribers shared a view of themselves as conservative in their prescribing behaviour. Low prescribers appeared to conform more strongly to group norms and identified a consensus among practice partners in prescribing and cost-consciousness. Conformism to group norms was represented by a commitment to practice formularies. High prescribers more often expressed themselves to be indifferent to drug costs and a shared practice ethos. A shift in the attitudes of some GPs is required before cost-effectiveness is routinely incorporated in drug prescribing. The promotion of rational prescribing is likely to be more successful if efforts are focused on GPs' appreciation of cost issues and attitudes towards shared decision-making and responsibility.

  1. FPGA-based real-time embedded system for RISS/GPS integrated navigation.

    PubMed

    Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd

    2012-01-01

Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms that integrate the data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthy, providing a more consistent and reliable navigation solution than standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated by the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm.
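The loosely coupled RISS/GPS idea can be sketched in one dimension (an invented toy, not the paper's filter): dead-reckoned position from odometer increments is the prediction, and each GPS fix is the measurement update. All noise values here are made up for illustration.

```python
# 1-D Kalman illustration: odometer increments predict, GPS fixes correct.
def kalman_update(x, P, z, R):
    K = P / (P + R)                # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                    # position estimate and its variance
Q, R = 0.1, 4.0                    # process (odometer) and GPS noise (made up)
for odo_dx, gps_pos in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
    x, P = x + odo_dx, P + Q       # predict: dead-reckon with the odometer
    x, P = kalman_update(x, P, gps_pos, R)  # correct with the GPS fix
assert abs(x - 3.0) < 0.3          # estimate tracks the true ~3.0 m position
```

The real system fuses 2D position, heading from the gyroscope, and speed from the wheel encoders, and the FPGA's job is to timestamp all three sensor streams against the GPS pulse-per-second signal before each filter iteration.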

  2. FPGA-Based Real-Time Embedded System for RISS/GPS Integrated Navigation

    PubMed Central

    Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd

    2012-01-01

Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms that integrate the data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthy, providing a more consistent and reliable navigation solution than standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated by the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm. PMID:22368460

  3. Direct access inter-process shared memory

    DOEpatents

    Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B

    2013-10-22

    A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
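The cross-mapping step can be modeled with a toy data structure (an invented sketch, not the patented page-table layout): each process's top-level table has one entry for its own address space, and copying every other process's first entry into subsequent slots lets any process address any other's memory directly.

```python
# Toy model of cross-mapping top-level virtual address tables.
def cross_map(tables):
    """tables: {pid: top_level_table}; entry 0 maps the process's own
    address space.  Append every other process's entry 0 so each process
    can directly address every other process's memory."""
    pids = sorted(tables)
    for pid in pids:
        for other in pids:
            if other != pid:
                tables[pid].append(tables[other][0])
    return tables

tables = {0: ["phys_region_0"], 1: ["phys_region_1"], 2: ["phys_region_2"]}
cross_map(tables)
# Process 0 can now reach process 1's and 2's memory through its own table.
assert tables[0] == ["phys_region_0", "phys_region_1", "phys_region_2"]
```

Because only top-level entries are copied, the sharing costs one table slot per peer rather than duplicating whole page-table trees, which is what makes the technique cheap on many-core nodes.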

  4. Attitude determination for small satellites using GPS signal-to-noise ratio

    NASA Astrophysics Data System (ADS)

    Peters, Daniel

An embedded system for GPS-based attitude determination (AD) using signal-to-noise ratio (SNR) measurements was developed for CubeSat applications. The design serves as an evaluation testbed for conducting ground-based experiments using various computational methods and antenna types to determine the optimum AD accuracy. Raw GPS data is also stored to non-volatile memory for download and post-analysis. Two low-power microcontrollers are used for processing and to display information on a graphic screen for real-time performance evaluations. A new parallel inter-processor communication protocol was developed that is faster and uses less power than existing standard protocols. A shorted annular patch (SAP) antenna was fabricated for the initial ground-based AD experiments with the testbed. Static AD estimations with RMS errors in the range of 2.5° to 4.8° were achieved over a range of off-zenith attitudes.

  5. Detailed Test Objectives (DTOs) and Detailed Supplementary Objectives (DSOs)

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The purpose of this experiment is to demonstrate the performance and operations of the GPS during orbiter ascent, entry and landing phases utilizing a modified military GPS receiver processor and the existing orbiter GPS antennas. The purpose of this experiment is to demonstrate the capability to perform a manually controlled landing in the presence of a crosswind. Changes in gastrointestinal function and physiology as a result of spaceflight affect drug absorption and the bioavailability of oral medications, which can compromise therapeutic effectiveness. This DSO will lead to the design and development of effective pharmocological countermeasures and therapeutic adjustments for spaceflight. A previous observation suggested that discordant sensory stimuli caused by an unusual motion environment disrupted spatial orientation and balance control in a returning crewmember by triggering a state change in central vestibular processing. The findings of the current investigation are expected to demonstrate the degree to which challenging motion environments may affect post-flight (re)adaptation to gravity.

  6. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  7. Tailored chemotherapy information faxed to general practitioners improves confidence in managing adverse effects and satisfaction with shared care: results from a randomized controlled trial.

    PubMed

    Jefford, Michael; Baravelli, Carl; Dudgeon, Paul; Dabscheck, Adrian; Evans, Melanie; Moloney, Michael; Schofield, Penelope

    2008-05-10

    General practitioners (GPs) play a critical role in the treatment of patients with cancer; yet often lack information for optimal care. We developed standardized information for GPs about chemotherapy (CT). In a randomized controlled trial we assessed the impact of sending, by fax, information tailored to the particular patient's CT regimen. Primary end points were: confidence treating patients who were receiving CT (confidence), knowledge of adverse effects and reasons to refer the patient to the treatment center (knowledge), and satisfaction with information and shared care of patients (satisfaction). Focus group work informed the development of the CT information which focused on potential adverse effects and recommended management strategies. GPs of patients due to commence CT were randomly assigned to receive usual correspondence with or without the faxed patient/regimen-specific information. Telephone questionnaire at baseline and 1 week postintervention assessed knowledge, confidence, and satisfaction. Ninety-seven GPs managed 97 patients receiving 23 types of CT. Eighty-one (83.5%) completed the follow-up questionnaire. GPs in the intervention group demonstrated a significantly greater increase in confidence (mean difference, 0.28; 95% CI, 0.10 to 0.47) and satisfaction (mean difference, 0.57; 95% CI, 0.27 to 0.88) compared with usual care, reflecting a 7.1% and 10.5% difference in score, respectively. No differences were detected for knowledge. GPs receiving the CT sheet found correspondence significantly more useful (P < .001) and instructive (P < .001) than GPs who received standard correspondence alone. Information about CT faxed to GPs is a simple, inexpensive intervention that increases confidence managing CT adverse effects and satisfaction with shared care. This intervention could have widespread application.

  8. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Santuro, Steve; Simpson, James; Zoerner, Roger; Bull, Barton; Lanzi, Jim

    2004-01-01

    Autonomous Flight Safety System (AFSS) is an independent flight safety system designed for small to medium sized expendable launch vehicles launching from or needing range safety protection while overlying relatively remote locations. AFSS replaces the need for a man-in-the-loop to make decisions for flight termination. AFSS could also serve as the prototype for an autonomous manned flight crew escape advisory system. AFSS utilizes onboard sensors and processors to emulate the human decision-making process using rule-based software logic and can dramatically reduce safety response time during critical launch phases. The Range Safety flight path nominal trajectory, its deviation allowances, limit zones and other flight safety rules are stored in the onboard computers. Position, velocity and attitude data obtained from onboard global positioning system (GPS) and inertial navigation system (INS) sensors are compared with these rules to determine the appropriate action to ensure that people and property are not jeopardized. The final system will be fully redundant and independent with multiple processors, sensors, and dead man switches to prevent inadvertent flight termination. AFSS is currently in Phase III which includes updated algorithms, integrated GPS/INS sensors, large scale simulation testing and initial aircraft flight testing.

  9. Proximity Operations Nano-Satellite Flight Demonstration (PONSFD) Rendezvous Proximity Operations Design and Trade Studies

    NASA Astrophysics Data System (ADS)

    Griesbach, J.; Westphal, J. J.; Roscoe, C.; Hawes, D. R.; Carrico, J. P.

    2013-09-01

    The Proximity Operations Nano-Satellite Flight Demonstration (PONSFD) program is to demonstrate rendezvous proximity operations (RPO), formation flying, and docking with a pair of 3U CubeSats. The program is sponsored by NASA Ames via the Office of the Chief Technologist (OCT) in support of its Small Spacecraft Technology Program (SSTP). The goal of the mission is to demonstrate complex RPO and docking operations with a pair of low-cost 3U CubeSat satellites using passive navigation sensors. The program encompasses the entire system evolution including system design, acquisition, satellite construction, launch, mission operations, and final disposal. The satellite is scheduled for launch in Fall 2015 with a 1-year mission lifetime. This paper provides a brief mission overview but will then focus on the current design and driving trade study results for the RPO mission specific processor and relevant ground software. The current design involves multiple on-board processors, each specifically tasked with providing mission critical capabilities. These capabilities range from attitude determination and control to image processing. The RPO system processor is responsible for absolute and relative navigation, maneuver planning, attitude commanding, and abort monitoring for mission safety. A low power processor running a Linux operating system has been selected for implementation. Navigation is one of the RPO processor's key tasks. This entails processing data obtained from the on-board GPS unit as well as the on-board imaging sensors. To do this, Kalman filters will be hosted on the processor to ingest and process measurements for maintenance of position and velocity estimates with associated uncertainties. While each satellite carries a GPS unit, it will be used sparsely to conserve power. As such, absolute navigation will mainly consist of propagating past known states, and relative navigation will be considered to be of greater importance. 
For relative observations, each spacecraft hosts 3 electro-optical sensors dedicated to imaging the companion satellite. The image processor will analyze the images to obtain estimates for range, bearing, and pose, with associated rates and uncertainties. These observations will be fed to the RPO processor's relative Kalman filter to perform relative navigation updates. This paper includes estimates for expected navigation accuracies for both absolute and relative position and velocity. Another key task for the RPO processor is maneuver planning. This includes automation to plan maneuvers to achieve a desired formation configuration or trajectory (including docking), as well as automation to safely react to potentially dangerous situations. This will allow each spacecraft to autonomously plan fuel-efficient maneuvers to achieve a desired trajectory as well as compute adjustment maneuvers to correct for thrusting errors. This paper discusses results from a trade study that has been conducted to examine maneuver targeting algorithms required on-board the spacecraft. Ground software will also work in conjunction with the on-board software to validate and approve maneuvers as necessary.

  10. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    NASA Astrophysics Data System (ADS)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also stem from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, namely cache lines residing in the cache longer than required. In image-processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis from tissue specimens is required. Therefore, a fast and reliable shared-memory management system to execute algorithms for processing vast amounts of specimen images is needed. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near-distance promotion, and the concept of ownership in the eviction policy to effectively reduce cache thrashing and to avoid resource stealing among the processors.
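The middle-insertion and near-distance-promotion ideas can be sketched for a single cache set (an illustrative toy, not the authors' exact MI2PP algorithm; the set size and promotion distance are made up): new lines enter at the middle of the recency stack instead of the top, and a hit promotes a line only a fixed number of positions, so a burst of single-use lines cannot thrash the whole set.

```python
class MiddleInsertSet:
    """Toy recency stack: index 0 is most-recent, the last index is the
    eviction candidate.  Misses insert at the middle; hits promote only a
    few positions toward the most-recent end."""

    def __init__(self, ways=8, promote=2):
        self.stack = []
        self.ways, self.promote = ways, promote

    def access(self, tag):
        if tag in self.stack:            # hit: near-distance promotion
            i = self.stack.index(tag)
            self.stack.insert(max(0, i - self.promote), self.stack.pop(i))
            return True
        if len(self.stack) >= self.ways:
            self.stack.pop()             # evict from the LRU end
        self.stack.insert(len(self.stack) // 2, tag)  # middle insertion
        return False

s = MiddleInsertSet(ways=4)
for t in ["a", "b", "c", "d"]:
    s.access(t)
s.access("a")                            # a reused line climbs toward MRU
assert s.access("a") is True             # and stays resident
```

Compared with LRU, a streamed-through line never reaches the top half of the stack, so it is evicted quickly without displacing lines another processor is actively reusing.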

  11. Collaboration between general practitioners (GPs) and mental healthcare professionals within the context of reforms in Quebec

    PubMed Central

    2012-01-01

    Background In the context of the high prevalence and impact of mental disorders worldwide, and less than optimal utilisation of services and adequacy of care, strengthening primary mental healthcare should be a leading priority. This article assesses the state of collaboration among general practitioners (GPs), psychiatrists and psychosocial mental healthcare professionals, factors that enable and hinder shared care, and GPs’ perceptions of best practices in the management of mental disorders. A collaboration model is also developed. Methods The study employs a mixed-method approach, with emphasis on qualitative investigation. Drawing from a previous survey representative of the Quebec GP population, 60 GPs were selected for further investigation. Results Globally, GPs managed mental healthcare patients in solo practice in parallel or sequential follow-up with mental healthcare professionals. GPs cited psychologists and psychiatrists as their main partners. Numerous hindering factors associated with shared care were found: lack of resources (either professionals or services); long waiting times; lack of training, time and incentives for collaboration; and inappropriate GP payment modes. The ideal practice model includes GPs working in multidisciplinary group practice in their own settings. GPs recommended expanding psychosocial services and shared care to increase overall access and quality of care for these patients. Conclusion As increasing attention is devoted worldwide to the development of optimal integrated primary care, this article contributes to the discussion on mental healthcare service planning. A culture of collaboration has to be encouraged as comprehensive services and continuity of care are key recovery factors of patients with mental disorders. PMID:23730332

  12. Asynchronous Communication Scheme For Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Madan, Herb S.

    1988-01-01

A scheme was devised for an asynchronous-message communication system for the Mark III hypercube concurrent-processor network. The network consists of up to 1,024 processing elements connected electrically as though they were at the corners of a 10-dimensional cube. Each node contains two Motorola 68020 processors along with a Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. The scheme is intended to support applications requiring passage of both polled (solicited) and unsolicited messages.
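The hypercube topology itself is easy to make concrete: each node is a d-bit ID, and its neighbors are the nodes whose IDs differ in exactly one bit, so d = 10 gives the 1,024 nodes mentioned above. A small sketch of that addressing scheme (illustrative only):

```python
def hypercube_neighbors(node, dimensions=10):
    """In a d-dimensional hypercube, each node is a d-bit ID and its
    neighbors differ in exactly one bit (d=10 gives 1,024 nodes)."""
    return [node ^ (1 << d) for d in range(dimensions)]

# Node 0's ten neighbors are the ten powers of two.
assert hypercube_neighbors(0) == [1 << d for d in range(10)]
# Routing a message flips one differing bit per hop:
assert bin(0b0000000101 ^ 0b0000000100).count("1") == 1  # one hop apart
```

The number of set bits in the XOR of two node IDs is their hop distance, which is why asynchronous message passing on such a network needs at most 10 hops between any two of the 1,024 nodes.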

  13. Tactical Operations Analysis Support Facility.

    DTIC Science & Technology

    1981-05-01

Punch/Reader; 2 DMC-11AR DDCMP microprocessors; 2 DMC-11DA network link line units; 2 DL-11E async serial line interfaces; 4 Intel IN-1670 448K-word MOS memories ... 5.3 Virtual Processors - VAX-11/750 ... 5.4 A Relational Data Management System - ORACLE ... The Central Processing Unit (CPU) is a 16-bit processor for high-speed, real-time applications, and for large multi-user, multi-task, time-shared

  14. DMA shared byte counters in a parallel computer

    DOEpatents

    Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos

    2010-04-06

A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status and control registers. The injection FIFO metadata maintains the memory locations of the injection FIFOs, including each FIFO's current head and tail, and the reception FIFO metadata maintains the memory locations of the reception FIFOs, including each FIFO's current head and tail. The injection byte counters and reception byte counters may be shared between messages.
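The shared-counter bookkeeping described in this abstract can be illustrated with a small sketch (a software analogy of the patent's hardware, with invented names): several messages post their expected byte counts to one counter, the DMA engine decrements it as payload bytes arrive, and completion of the whole group is detected with a single zero test.

```python
# Illustrative sketch, not the patented hardware: one reception byte counter
# shared between several expected messages.
class SharedByteCounter:
    def __init__(self):
        self.remaining = 0

    def post_message(self, nbytes):
        """Software adds each expected message's length before it arrives."""
        self.remaining += nbytes

    def bytes_received(self, nbytes):
        """Called by the (simulated) DMA engine as packets land in memory."""
        self.remaining -= nbytes

    def all_complete(self):
        """A single zero check covers the whole group of messages."""
        return self.remaining == 0

counter = SharedByteCounter()
for size in (1024, 512, 2048):          # three messages share one counter
    counter.post_message(size)
for chunk in (1024, 512, 1024, 1024):   # packets may arrive in any interleaving
    counter.bytes_received(chunk)
print(counter.all_complete())           # True once all 3584 bytes have arrived
```

Sharing one counter trades per-message completion detection for cheaper bookkeeping, which is the design point the abstract notes.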

  15. Understanding the inverse care law: a register and survey-based study of patient deprivation and burnout in general practice.

    PubMed

    Pedersen, Anette Fischer; Vedsted, Peter

    2014-12-12

According to the inverse care law, there is a mismatch between patients' medical needs and medical care supply. As an example, the number of doctors is often lower in areas with high deprivation compared to areas with no deprivation, and doctors with a deprived patient population may experience high work pressure, have insufficient time for comprehensive tasks and be at higher risk for developing burnout. The mechanisms responsible for the inverse care law might be mutually reinforcing, but we know very little about this process. In this study, the association between patient deprivation and burnout in general practitioners (GPs) was examined. Active GPs in the Central Denmark Region were invited to participate in a survey on job satisfaction and burnout, and 601 GPs returned the questionnaire (72%). The Danish Regions provided information about which persons were registered with each practice, and information concerning socioeconomic characteristics for each patient on the list was obtained from Statistics Denmark. A composite deprivation index was also used. There was significantly more burnout among GPs in the highest quartile of the deprivation index compared to GPs in the lowest quartile (OR: 1.91; 95% CI: 1.06-3.44; p-value: 0.032). Among the eight variables included in the deprivation index, a high share of patients on social benefits was most strongly associated with burnout (OR: 2.62; 95% CI: 1.45-4.71; p-value: 0.001). A higher propensity of GP burnout was found among GPs with a high share of deprived patients on their lists compared to GPs with a low share of deprived patients. This applied in particular to patients on social benefits. This indicates that, besides a lower supply of GPs in deprived areas, people in these areas may also be served by GPs who are at higher risk of burnout and may not be performing optimally.

  16. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
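The core run-time check can be sketched in a few lines (a toy model, not the patented tool's API; the 64-byte cache line size is an assumption): record which threads touch which byte offsets of each cache line, and flag lines where two different threads access two different portions of the same line.

```python
# Toy false-sharing detector over an access trace of (thread_id, byte_address)
# pairs. Assumes 64-byte cache lines; names are illustrative.
LINE = 64

def false_sharing_candidates(trace):
    lines = {}  # cache line number -> set of (thread_id, offset-in-line)
    for tid, addr in trace:
        lines.setdefault(addr // LINE, set()).add((tid, addr % LINE))
    flagged = []
    for line, touches in lines.items():
        threads = {t for t, _ in touches}
        offsets = {o for _, o in touches}
        # Multiple threads touching multiple distinct portions of one line
        # is the false-sharing pattern; one shared offset is true sharing.
        if len(threads) > 1 and len(offsets) > 1:
            flagged.append(line)
    return flagged

# Threads 0 and 1 update adjacent 8-byte slots of the same 64-byte line at
# 0x1000, but touch the *same* word of the line at 0x2000 (true sharing):
trace = [(0, 0x1000), (1, 0x1008), (0, 0x2000), (1, 0x2000)]
print(false_sharing_candidates(trace))  # [64]  (0x1000 // 64 == 64)
```

The real tool does this with binary instrumentation while the region re-runs; the dictionary here stands in for its mapping-detection library.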

  17. Testing the Tester: Lessons Learned During the Testing of a State-of-the-Art Commercial 14nm Processor Under Proton Irradiation

    NASA Technical Reports Server (NTRS)

    Szabo, Carl M., Jr.; Duncan, Adam R.; Label, Kenneth A.

    2017-01-01

    Testing of an Intel 14nm desktop processor was conducted under proton irradiation. We share lessons learned, demonstrating that complex devices beget further complex challenges requiring practical and theoretical investigative expertise to solve.

  18. Helpful strategies for GPs seeing patients with medically unexplained physical symptoms: a focus group study.

    PubMed

    Aamland, Aase; Fosse, Anette; Ree, Eline; Abildsnes, Eirik; Malterud, Kirsti

    2017-08-01

    Patients with long-lasting and disabling medically unexplained physical symptoms (MUPS) are common in general practice. GPs have previously described the challenges regarding management and treatment of patients with MUPS. To explore GPs' experiences of the strategies perceived as helpful when seeing patients with MUPS. Three focus group interviews with a purposive sample of 24 experienced GPs were held in southern Norway. Discussions were audiotaped and transcribed. Systematic text condensation was used for analysis. Several strategies were considered helpful during consultations with patients with MUPS. A comprehensive outline of the patient's medical past and present could serve as the foundation of the dialogue. Reviewing the patient's records and sharing relevant information with them or conducting a thorough clinical examination could offer 'golden moments' of trust and common understanding. A very concrete exchange of symptoms and diagnosis interpretation sometimes created a space for explanations and action, and confrontations could even strengthen the alliance between the GP and the patient. Bypassing conventional answers and transcending tensions by negotiating innovative explanations could help patients resolve symptoms and establish innovative understanding. GPs use tangible, down-to-earth strategies in consultations with patients with MUPS. Important strategies were: thorough investigation of the patient's symptoms and story; sharing of interpretations; and negotiation of different explanations. Sharing helpful strategies with colleagues in a field in which frustration and dissatisfaction are not uncommon can encourage GPs to develop sustainable responsibility and innovative solutions. © British Journal of General Practice 2017.

  19. Applications considerations in the system design of highly concurrent multiprocessors

    NASA Technical Reports Server (NTRS)

    Lundstrom, Stephen F.

    1987-01-01

    A flow model processor approach to parallel processing is described, using very-high-performance individual processors, high-speed circuit switched interconnection networks, and a high-speed synchronization capability to minimize the effect of the inherently serial portions of applications on performance. Design studies related to the determination of the number of processors, the memory organization, and the structure of the networks used to interconnect the processor and memory resources are discussed. Simulations indicate that applications centered on the large shared data memory should be able to sustain over 500 million floating point operations per second.

  20. Reader set encoding for directory of shared cache memory in multiprocessor system

    DOEpatents

Ahn, Daniel; Ceze, Luis H.; Gara, Alan; Ohmacht, Martin; Zhuang, Xiaotong

    2014-06-10

    In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated that access. The directory includes a dynamic reader set encoding, indicating what speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify particular threads that have read the line.
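The reader-set encoding can be illustrated with a minimal sketch (class and method names are invented for illustration): each directory entry keeps a bitset with one bit per speculative thread that has read the line, and a write is conflict-checked against every other thread's read bit.

```python
# Minimal model of a directory entry with a dynamic reader-set bitset.
class DirectoryEntry:
    def __init__(self):
        self.readers = 0  # bitset: bit i set => speculative thread i read this line

    def record_read(self, tid):
        self.readers |= 1 << tid

    def check_write(self, tid):
        """Return the set of threads this speculative write conflicts with."""
        conflicts = self.readers & ~(1 << tid)   # a thread never conflicts with itself
        return {i for i in range(conflicts.bit_length()) if (conflicts >> i) & 1}

entry = DirectoryEntry()
entry.record_read(0)
entry.record_read(3)
print(entry.check_write(0))  # {3}: only thread 3's earlier read conflicts
```

Because the same physical address always maps to the same cache set, one directory lookup suffices for this check regardless of which processor issued the access, which is the property the abstract highlights.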

  1. Cache-based error recovery for shared memory multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
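The checkpoint-identifier idea can be mimicked in software (a conceptual analogy, not the paper's cache hardware; names are invented): values written since the last checkpoint are kept apart from committed state, so rollback after a transient error simply discards them.

```python
# Software analogy of cache-based checkpointing: dirty data written since the
# last checkpoint is held separately and dropped on rollback.
class CheckpointedStore:
    def __init__(self):
        self.committed = {}    # address -> value as of the last checkpoint
        self.dirty = {}        # address -> value written since the checkpoint
        self.checkpoint_id = 0

    def write(self, addr, value):
        self.dirty[addr] = value

    def read(self, addr):
        return self.dirty.get(addr, self.committed.get(addr))

    def checkpoint(self):
        self.committed.update(self.dirty)  # promote dirty values to committed
        self.dirty.clear()
        self.checkpoint_id += 1

    def rollback(self):
        self.dirty.clear()  # discard everything newer than the last checkpoint

store = CheckpointedStore()
store.write("x", 1)
store.checkpoint()
store.write("x", 99)    # work in progress when a transient error strikes
store.rollback()        # error detected: roll back to the checkpoint
print(store.read("x"))  # 1
```

In the actual scheme the "dirty" side lives in the private caches and the coherence protocol does the promotion, which is why rollback does not propagate to other processors.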

  2. Fault-Tolerant, Real-Time, Multi-Core Computer System

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2012-01-01

    A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions with thousands of simple cores, achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, the processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards that message on to its neighbor until reaching the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not necessarily nearest neighbors, providing short cuts for some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.

  3. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  4. [Improving the physician-dental surgeon relationship to improve patient care].

    PubMed

    Tenenbaum, Annabelle; Folliguet, Marysette; Berdougo, Brice; Hervé, Christian; Moutel, Grégoire

    2008-04-01

    This study had two aims: to assess the nature of the relationship between general practitioners (GPs) and dental surgeons in relation to patient care and to evaluate qualitatively their interest in the changes that health networks and shared patient medical files could bring. Questionnaires were completed by 12 GPs belonging to ASDES, a private practitioner-hospital health network that seeks to promote a partnership between physicians and dental surgeons, and by 13 private dental surgeons in the network catchment area. The GPs and dentists had quite different perceptions of their relationship. Most dentists rated their relationship with GPs as "good" to "excellent" and did not wish to modify it, while GPs rated their relationship with dentists as nonexistent and expressed a desire to change the situation. Some GPs and some dentists supported data exchange by sharing personal medical files through the network. Many obstacles hinder communication between GPs and dentists. There is insufficient coordination between professionals. Health professionals must be made aware of how changes in the health care system (health networks, personal medical files, etc) can help to provide patients with optimal care. Technical innovations in medicine will not be beneficial to patients unless medical education and training begins to include interdisciplinary and holistic approaches to health care and preventive care.

  5. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.

  6. Systems and Methods for Locating a Target in a GPS-Denied Environment

    NASA Technical Reports Server (NTRS)

    Mackay, John D. (Inventor); Murdock, Ronald G. (Inventor); Cummins, Douglas A. (Inventor)

    2017-01-01

A system for locating an object in a GPS-denied environment includes first and second stationary nodes of a network and an object out of synchronization with a common time base of the network. The system includes one or more processors that are configured to estimate a distance between the first stationary node and the object and a distance between the second stationary node and the object by comparing time-stamps of messages relayed between the object and the nodes. A position of the object can then be trilaterated using the location of each of the first and second stationary nodes and the estimated distances between the object and each node.
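The trilateration step can be sketched in 2D (an illustrative computation, not the patented system's code): with only two anchors, two candidate positions generally remain, and extra information, such as a third range or side knowledge, is needed to pick one.

```python
import math

def trilaterate_2d(p1, r1, p2, r2):
    """Candidate positions from two anchor locations and measured ranges.
    Two range circles generally intersect in two points."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)          # baseline between the anchors
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # distance from p1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))     # half-chord length
    mx = x1 + a * (x2 - x1) / d               # foot of the chord on the baseline
    my = y1 + a * (y2 - y1) / d
    ux, uy = (y2 - y1) / d, -(x2 - x1) / d    # unit normal to the baseline
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

# Anchors at (0, 0) and (10, 0); the object is actually at (5, 5):
r = math.hypot(5, 5)
cands = trilaterate_2d((0, 0), r, (10, 0), r)
print(cands)  # the two candidates are (5.0, -5.0) and (5.0, 5.0)
```

The ranges themselves come from the time-stamp comparison the abstract describes; this sketch only covers the geometry once those ranges are known.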

  7. An experimental distributed microprocessor implementation with a shared memory communications and control medium

    NASA Technical Reports Server (NTRS)

    Mejzak, R. S.

    1980-01-01

The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors to be varied in the configuration without any modifications to the control structure. Decompositional elements of the DFT application function in terms of tasks and subtasks are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together to form a multiple processing system by means of a shared memory facility. This facility consists of hardware which provides a bus structure to enable up to six microcomputers to be interconnected. It provides polling and arbitration logic so that only one processor has access to shared memory at any one time.
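The shared task/subtask pointer scheme can be mimicked with threads and a lock standing in for the hardware lock (an analogy, not the original IMSAI code): workers claim DFT output bins from a shared pointer, so the number of workers can vary without changing the control structure.

```python
import cmath
import threading

def parallel_dft(signal, n_workers=3):
    """Naive DFT where workers claim output bins via a shared subtask pointer."""
    N = len(signal)
    out = [0j] * N
    next_task = [0]            # shared "subtask pointer" into the work list
    lock = threading.Lock()    # stands in for the hardware arbitration lock

    def worker():
        while True:
            with lock:                 # exclusive access to the shared pointer
                k = next_task[0]
                if k >= N:
                    return
                next_task[0] += 1
            # Each worker owns bin k exclusively, so no lock is needed here.
            out[k] = sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                         for n in range(N))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

# DFT of a unit impulse is flat: every bin has magnitude 1.
print([round(abs(x)) for x in parallel_dft([1, 0, 0, 0])])  # [1, 1, 1, 1]
```

Changing `n_workers` changes nothing else, which mirrors the abstract's claim that processor count can vary without modifying the control structure.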

  8. Fault tolerant onboard packet switch architecture for communication satellites: Shared memory per beam approach

    NASA Technical Reports Server (NTRS)

Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.

    1994-01-01

    The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.

  9. Caring for cancer survivors: perspectives of oncologists, general practitioners and patients in Italy.

    PubMed

    Puglisi, Fabio; Agostinetto, Elisa; Gerratana, Lorenzo; Bozza, Claudia; Cancian, Maurizio; Iannelli, Elisabetta; Ratti, Giovanni; Cinieri, Saverio; Numico, Gianmauro

    2017-02-01

The present survey investigates the views of medical oncologists, general practitioners (GPs) and patients about various surveillance strategies. An online survey was conducted in Italy on a population of 329 medical oncologists, 380 GPs and 350 patients. Most GPs (n = 291; 76%) claim that follow-up should be provided through collaboration between GPs and medical oncologists. Most medical oncologists report having a poor relationship with GPs (n = 151; 46%) or no relationship at all (n = 14; 4%). Most patients believe there is no real collaboration between medical oncologists and GPs (n = 138; 54%). GPs, medical oncologists and patients share the idea that the collaboration between oncologists and GPs for surveillance of cancer survivors is poor and should be improved.

  10. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  11. First Results from a Hardware-in-the-Loop Demonstration of Closed-Loop Autonomous Formation Flying

    NASA Technical Reports Server (NTRS)

    Gill, E.; Naasz, Bo; Ebinuma, T.

    2003-01-01

A closed-loop system for the demonstration of formation flying technologies has been developed at NASA's Goddard Space Flight Center. Making use of a GPS signal simulator with a dual radio frequency outlet, the system includes two GPS space receivers as well as a powerful onboard navigation processor dedicated to the GPS-based guidance, navigation, and control of a satellite formation in real time. The closed-loop system allows realistic simulations of autonomous formation flying scenarios, enabling research in the fields of tracking and orbit control strategies for a wide range of applications. A sample scenario has been set up in which the autonomous transition of a satellite formation from an initial along-track separation of 800 m to a final distance of 100 m has been demonstrated. As a result, a typical control accuracy of about 5 m has been achieved, which proves the applicability of autonomous formation flying techniques to formations of satellites as close as 50 m.

  12. General practitioners' perceptions of irritable bowel syndrome: a Q-methodological study.

    PubMed

    Bradley, Stephen; Alderson, Sarah; Ford, Alexander C; Foy, Robbie

    2018-01-16

Irritable bowel syndrome (IBS) is a common disorder that imposes a significant burden upon societies, health care and quality of life, worldwide. While a diverse range of patient viewpoints on IBS have been explored, the opinions of the GPs they ideally need to develop therapeutic partnerships with are less well defined. To explore how GPs perceive IBS, using Q-methodology, which allows quantitative interpretation of qualitative data. A Q-methodological study of GPs in Leeds, UK. Thirty-three GPs completed an online Q-sort in which they ranked their level of agreement with 66 statements. Factor analysis of the Q-sorts was performed to determine the accounts that predominated in understandings of IBS. Ten of the GPs were interviewed in person and their responses to the statements recorded to help explain the accounts. Analysis yielded one predominant account shared by all GPs: that IBS was a largely psychological disorder. This account overshadowed a debate represented by a minority, polarized between those who viewed IBS as almost exclusively psychological, versus those who believed IBS had an organic basis, with a psychological component. The overwhelming similarity in responses indicates that all GPs shared a common perspective on IBS. Interviews suggested degrees of uncertainty and discomfort around the aetiology of IBS. There was overwhelming agreement in the way GPs perceived IBS. This contrasts with the range of patient accounts of IBS and may explain why both GPs and their patients face difficult negotiations in achieving therapeutic relationships. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Delayed prescribing for upper respiratory tract infections: a qualitative study of GPs' views and experiences

    PubMed Central

Høye, Sigurd; Frich, Jan; Lindbæk, Morten

    2010-01-01

    Background Delayed prescribing has been promoted as a strategy that meets patients' expectations and helps to avoid unnecessary use of antibiotics in upper respiratory tract infections. Aim To explore GPs' views on and experiences with delayed prescribing in patients with acute upper respiratory tract infections. Design of study Qualitative study involving focus groups. Setting Norwegian general practice. Method Qualitative analysis of data collected from five focus groups comprising 33 GPs who took part in a quality-improvement programme of antibiotic prescribing. Results The views of GPs differed on the usefulness of delayed prescribing. GPs who endorsed the strategy emphasised shared decision making and the creation of opportunities for educating patients, whereas GPs who were negative applied the strategy mainly when being pressed to prescribe. Mild and mainly harmless conditions of a possible bacterial origin, such as acute sinusitis and acute otitis, were considered most suitable for delayed prescribing. A key argument for issuing a wait-and-see prescription was that it helped patients avoid seeking after-hours care. For issuing a wait-and-see prescription, the GPs required that the patient was ‘knowledgeable’, able to understand the indications for antibiotics, and motivated for shared decision making. GPs emphasised that patients should be informed thoroughly when receiving a wait-and-see prescription. Conclusion Not all GPs endorse delayed prescribing; however, it appears to be a feasible approach for managing patients with early symptoms of mild upper respiratory tract infections of a possible bacterial origin. Informing the patients properly while issuing wait-and-see prescriptions is essential. PMID:21144201

  14. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation of graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and next-generation machines, they will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of that time.

  15. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
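The flavour of this dynamic program can be sketched for the simplest case, a linear chain of tasks (an illustration consistent with the stated O(np^2) bound, not the paper's full series-parallel algorithm): `resp[i][k-1]` holds the measured response time of task i on k processors, and P processors are split among the tasks to minimise total response time.

```python
# DP sketch: allocate P processors among a linear chain of tasks to minimise
# the sum of per-task response times. O(n * P^2) time.
def assign_processors(resp, P):
    n = len(resp)
    INF = float("inf")
    best = [[INF] * (P + 1) for _ in range(n + 1)]   # best[i][p]: first i tasks, p procs
    best[0][0] = 0.0
    choice = [[0] * (P + 1) for _ in range(n + 1)]   # processors given to task i
    for i in range(1, n + 1):
        for p in range(i, P + 1):                    # each task needs >= 1 processor
            for k in range(1, p - i + 2):            # leave >= i-1 for earlier tasks
                cand = best[i - 1][p - k] + resp[i - 1][k - 1]
                if cand < best[i][p]:
                    best[i][p], choice[i][p] = cand, k
    alloc, p = [], P
    for i in range(n, 0, -1):                        # walk back to recover allocation
        alloc.append(choice[i][p])
        p -= choice[i][p]
    return best[n][P], alloc[::-1]

# Two tasks, four processors; resp[i][k-1] = time of task i on k processors:
resp = [[8.0, 4.0, 3.0, 2.5], [6.0, 3.5, 2.8, 2.4]]
print(assign_processors(resp, 4))  # (7.5, [2, 2])
```

Because response times come from measurement rather than a model, the table lookup replaces any analytic speedup assumption, matching the problem statement above.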

  16. Android Protection Mechanism: A Signed Code Security Mechanism for Smartphone Applications

    DTIC Science & Technology

    2011-03-01

status registers, exceptions, endian support, unaligned access support, synchronization primitives, the Jazelle Extension, and saturated integer... supports comprehensive non-blocking shared-memory synchronization primitives that scale for multiple-processor system designs. This is an improvement... synchronization. Memory semaphores can be loaded and altered without interruption because the load and store operations are atomic. Processor

  17. System and method for memory allocation in a multiclass memory system

    DOEpatents

    Loh, Gabriel; Meswani, Mitesh; Ignatowski, Michael; Nutter, Mark

    2016-06-28

    A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.
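The type-directed dispatch can be sketched as follows (memory-class and type names are invented for illustration; the patent describes a library of software functions, not this code): a malloc-like wrapper inspects the data-structure type and places it in the memory class suited to its access pattern.

```python
# Illustrative type-to-memory-class dispatch for a multiclass memory system.
# The class names and type labels are hypothetical.
MEMORY_CLASSES = {
    "streaming_array": "high-bandwidth",   # e.g. stacked DRAM
    "pointer_chasing": "low-latency",      # e.g. on-package SRAM-like memory
    "cold_archive":    "high-capacity",    # e.g. NVM
}

class MulticlassAllocator:
    def __init__(self):
        self.placements = []  # record of (type, size, memory class) decisions

    def allocate(self, nbytes, ds_type):
        """Pick a memory class from the data structure's type, like the
        library-mediated allocation call in the abstract."""
        mem_class = MEMORY_CLASSES.get(ds_type, "default")
        self.placements.append((ds_type, nbytes, mem_class))
        return mem_class

alloc = MulticlassAllocator()
print(alloc.allocate(1 << 20, "streaming_array"))  # high-bandwidth
print(alloc.allocate(4096, "pointer_chasing"))     # low-latency
```

The unified address space in the abstract is what makes this a placement decision rather than a data-copying one: any class can be addressed directly once the allocation lands there.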

  18. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  19. Parallel processing on the Livermore VAX 11/780-4 parallel processor system with compatibility to Cray Research, Inc. (CRI) multitasking. Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, N.E.; Van Matre, S.W.

    1985-05-01

    This manual describes the CRI Subroutine Library and Utility Package. The CRI library provides Cray multitasking functionality on the four-processor shared memory VAX 11/780-4. Additional functionality has been added for more flexibility. A discussion of the library, utilities, error messages, and example programs is provided.

  20. Opinions of general practitioners about psychotherapy and their relationships with mental health professionals in the management of major depression: A qualitative survey

    PubMed Central

    Dumesnil, Hélène; Apostolidis, Thémis; Verger, Pierre

    2018-01-01

Background French general practitioners (GPs) refer their patients with major depression to psychiatrists or for psychotherapy at particularly low rates. Objectives This qualitative study aims to explore GPs' opinions about psychotherapy, their relationships with mental health professionals, their perceptions of their role and that of psychiatrists in treating depression, and the relations between these factors and the GPs' strategies for managing depression. Methods In 2011, in-depth interviews based on a semi-structured interview guide were conducted with 32 GPs practicing in southeastern France. Verbatim transcripts were examined through thematic content analysis. Results We identified three profiles of physicians according to their opinions and practices regarding treatment strategies for depression: pro-pharmacological treatment, pro-psychotherapy, and mixed practices. Most participants considered their relationships with psychiatrists unsatisfactory, would like more and better collaboration with them, and shared the same concept of management in general practice. This concept was based both on the values and principles of practice shared by GPs and on their strong differentiation of their management practices from those of psychiatrists. Conclusion Several attitudes and values common to GPs might contribute to their low rate of referrals for psychotherapy in France: a strong occupational identity, substantial variations in GPs' attitudes and practices regarding depression treatment strategies, and representations sometimes unfavorable toward psychiatrists. Actions to develop a common culture and improve cooperation between GPs and psychiatrists are essential. They include systems of collaborative care and the development of interdisciplinary training common to GPs and psychiatrists practicing in the same area. PMID:29385155

  1. General practice registrars' views on maternity care in general practice in New Zealand.

    PubMed

    Preston, Hanna; Jaye, Chrystal; Miller, Dawn L

    2015-12-01

    The number of general practitioners (GPs) providing maternity care in New Zealand has declined dramatically since legislative changes of the 1990s. The Ministry of Health wants GPs to provide maternity care again. To investigate New Zealand general practice registrars' perspectives on GPs' role in maternity care; specifically, whether maternity services should be provided by GPs, registrars' preparedness to provide such services, and training opportunities available or required to achieve this. An anonymous online questionnaire was distributed to all registrars enrolled in The Royal New Zealand College of General Practitioners' (RNZCGP's) General Practice Education Programme (GPEP) in 2012, via their online learning platform OWL. 165 of the 643 general practice registrars responded (25.7% response rate). Most (95%) believe that GPs interested and trained in maternity care should consider providing antenatal, postnatal or shared care with midwives, and 95% believe women should be able to access maternity care from their general practice. When practising as a GP, 90% would consider providing antenatal and postnatal care, 47.3% shared care, and 4.3% full pregnancy care. Professional factors including training and adequate funding were most important when considering providing maternity care as a GP. Ninety-five percent of general practice registrars who responded to our survey believe that GPs should provide some maternity services, and about 90% would consider providing maternity care in their future practice. Addressing professional issues of training, support and funding are essential if more GPs are to participate in maternity care in New Zealand.

  2. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.
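The periodic task execution the run-time system supports can be illustrated with a toy dispatcher. Python is used here purely for illustration (the actual system is an Ada run-time), and the `dispatch` function with its earliest-release-time heap is an assumption for the sketch, not the paper's implementation.

```python
# Illustrative periodic dispatcher: each task carries a period, the
# dispatcher always runs the task whose next release time is earliest,
# and re-arms it by its period. Not the Ada run-time's actual mechanism.
import heapq

def dispatch(tasks, horizon):
    """tasks: list of (period, name). Returns (release_time, name) pairs
    in dispatch order for all releases strictly before `horizon`."""
    heap = [(0, name, period) for period, name in tasks]
    heapq.heapify(heap)
    order = []
    while heap and heap[0][0] < horizon:
        t, name, period = heapq.heappop(heap)
        order.append((t, name))
        heapq.heappush(heap, (t + period, name, period))  # re-arm
    return order

print(dispatch([(2, "control"), (3, "telemetry")], 7))
```

A real run-time would preempt on a timer interrupt rather than iterate a loop, but the release-time bookkeeping is the same.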

  3. 7 CFR 1435.315 - Adjustments to proportionate shares.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.315 Adjustments to proportionate shares. Whenever CCC determines that, because of... sufficient to enable state processors to produce sufficient sugar to meet the State's cane sugar allotment...

  4. 7 CFR 1435.315 - Adjustments to proportionate shares.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.315 Adjustments to proportionate shares. Whenever CCC determines that, because of... sufficient to enable state processors to produce sufficient sugar to meet the State's cane sugar allotment...

  5. 7 CFR 1435.315 - Adjustments to proportionate shares.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.315 Adjustments to proportionate shares. Whenever CCC determines that, because of... sufficient to enable state processors to produce sufficient sugar to meet the State's cane sugar allotment...

  6. 7 CFR 1435.315 - Adjustments to proportionate shares.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.315 Adjustments to proportionate shares. Whenever CCC determines that, because of... sufficient to enable state processors to produce sufficient sugar to meet the State's cane sugar allotment...

  7. 7 CFR 1435.315 - Adjustments to proportionate shares.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.315 Adjustments to proportionate shares. Whenever CCC determines that, because of... sufficient to enable state processors to produce sufficient sugar to meet the State's cane sugar allotment...

  8. Data traffic reduction schemes for sparse Cholesky factorizations

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1988-01-01

Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^alpha processors, alpha <= 1, is shown to be O(n^(1+alpha/2)). It is O(n^(3/2)) when n^alpha processors, alpha >= 1, are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the schemes allows better utilization of the data accessed from shared memory than previously published methods.
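The two asymptotic regimes quoted above meet at alpha = 1, which a one-line helper makes concrete (a sketch of the stated bounds only, not of the load distribution schemes themselves):

```python
# Numeric illustration of the data-traffic bounds: with n^alpha
# processors, total traffic grows as O(n^(1+alpha/2)) for alpha <= 1
# and saturates at O(n^(3/2)) for alpha >= 1.
def traffic_exponent(alpha: float) -> float:
    """Exponent e such that total data traffic is O(n^e)."""
    return min(1.0 + alpha / 2.0, 1.5)

assert traffic_exponent(0.5) == 1.25   # sub-linear processor count
assert traffic_exponent(1.0) == 1.5    # crossover point
assert traffic_exponent(2.0) == 1.5    # saturated regime
```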

  9. Single-Frequency GPS Relative Navigation in a High Ionosphere Orbital Environment

    NASA Technical Reports Server (NTRS)

    Conrad, Patrick R.; Naasz, Bo J.

    2007-01-01

The Global Positioning System (GPS) provides a convenient source for space vehicle relative navigation measurements, especially for low Earth orbit formation flying and autonomous rendezvous mission concepts. For single-frequency GPS receivers, ionospheric path delay can be a significant error source if not properly mitigated. In particular, ionospheric effects are known to cause significant radial position error bias and add dramatically to relative state estimation error if the onboard navigation software does not force the use of measurements from common or shared GPS space vehicles. Results from GPS navigation simulations are presented for a pair of space vehicles flying in formation and using GPS pseudorange measurements to perform absolute and relative orbit determination. With careful measurement selection techniques, relative state estimation accuracy of less than 20 cm with standard GPS pseudorange processing, and less than 10 cm with single-differenced pseudorange processing, is shown.
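The measurement-selection idea can be sketched as follows: keep only pseudoranges from GPS satellites common to both vehicles, then single-difference them so that errors common to a satellite (such as ionospheric delay along similar signal paths) largely cancel. The function and the data values are illustrative assumptions, not the paper's navigation software.

```python
# Hedged sketch of common-satellite selection and single differencing.
# Pseudoranges are in meters; satellite IDs and values are made up.
def single_difference(pr_a: dict, pr_b: dict) -> dict:
    """Difference pseudoranges over the shared satellite set only."""
    common = pr_a.keys() & pr_b.keys()   # forced common-SV selection
    return {sv: pr_a[sv] - pr_b[sv] for sv in common}

pr_chief  = {"G01": 2.10e7, "G05": 2.23e7, "G09": 2.04e7}
pr_deputy = {"G01": 2.10e7 + 150.0, "G09": 2.04e7 - 80.0, "G12": 2.31e7}
print(single_difference(pr_chief, pr_deputy))
```

Satellites G05 and G12 are dropped because only one vehicle tracks them; the differences that remain reflect the relative geometry plus residual, largely decorrelated errors.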

  10. Deprescribing medication in very elderly patients with multimorbidity: the view of Dutch GPs. A qualitative study.

    PubMed

    Schuling, Jan; Gebben, Henkjan; Veehof, Leonardus Johannes Gerardus; Haaijer-Ruskamp, Flora Marcia

    2012-07-09

Elderly patients with multimorbidity who are treated according to guidelines use a large number of drugs. This number of drugs increases the risk of adverse drug events (ADEs). Stopping medication may relieve these effects and thereby improve the patient's wellbeing. To facilitate management of polypharmacy, expert-driven instruments have been developed, so far with little effect on the patient's quality of life. Recently, much attention has been paid to shared decision-making in general practice, mainly focusing on patient preferences. This study explores how experienced GPs feel about deprescribing medication in older patients with multimorbidity and to what extent they involve patients in these decisions. Focus groups of GPs were used to develop a conceptual framework for understanding and categorizing the GPs' views on the subject. Audiotapes were transcribed verbatim and studied by the first and second author, who independently selected relevant text fragments. In a next step they labeled and sorted these fragments, and from them central themes were extracted. GPs distinguish symptomatic medication from preventive medication; deprescribing the latter category is seen as more difficult by the GPs due to a lack of benefit/risk information for these patients. Factors influencing GPs' deprescribing were beliefs concerning patients (patients have no problem with polypharmacy; patients may interpret a proposal to stop preventive medication as a sign of having been given up on; and confronting the patient with a discussion of life expectancy vs quality of life is 'not done'), guidelines for treatment (GPs feel compelled to prescribe by the present guidelines), and the organization of healthcare (collaboration with prescribing medical specialists and dispensing pharmacists). The GPs' beliefs concerning elderly patients are a barrier to exploring patient preferences when reviewing preventive medication. 
GPs would welcome decision support when dealing with several guidelines for one patient. Explicit rules for collaborating with medical specialists in this field are required. Training in shared decision making could help GPs to elicit patient preferences.

  11. Design of a search and rescue terminal based on the dual-mode satellite and CDMA network

    NASA Astrophysics Data System (ADS)

    Zhao, Junping; Zhang, Xuan; Zheng, Bing; Zhou, Yubin; Song, Hao; Song, Wei; Zhang, Meikui; Liu, Tongze; Zhou, Li

    2010-12-01

    The current goal is to create a set of portable terminals with GPS/BD2 dual-mode satellite positioning, vital signs monitoring and wireless transmission functions. The terminal depends on an ARM processor to collect and combine data related to vital signs and GPS/BD2 location information, and sends the message to headquarters through the military CDMA network. It integrates multiple functions as a whole. The satellite positioning and wireless transmission capabilities are integrated into the motherboard, and the vital signs sensors used in the form of belts communicate with the board through Bluetooth. It can be adjusted according to the headquarters' instructions. This kind of device is of great practical significance for operations during disaster relief, search and rescue of the wounded in wartime, non-war military operations and other special circumstances.

  12. Validation of GOME-2/Metop total column water vapour with ground-based and in situ measurements

    NASA Astrophysics Data System (ADS)

    Kalakoski, Niilo; Kujanpää, Jukka; Sofieva, Viktoria; Tamminen, Johanna; Grossi, Margherita; Valks, Pieter

    2016-04-01

    The total column water vapour product from the Global Ozone Monitoring Experiment-2 on board Metop-A and Metop-B satellites (GOME-2/Metop-A and GOME-2/Metop-B) produced by the Satellite Application Facility on Ozone and Atmospheric Chemistry Monitoring (O3M SAF) is compared with co-located radiosonde observations and global positioning system (GPS) retrievals. The validation is performed using recently reprocessed data by the GOME Data Processor (GDP) version 4.7. The time periods for the validation are January 2007-July 2013 (GOME-2A) and December 2012-July 2013 (GOME-2B). The radiosonde data are from the Integrated Global Radiosonde Archive (IGRA) maintained by the National Climatic Data Center (NCDC). The ground-based GPS observations from the COSMIC/SuomiNet network are used as the second independent data source. We find a good general agreement between the GOME-2 and the radiosonde/GPS data. The median relative difference of GOME-2 to the radiosonde observations is -2.7 % for GOME-2A and -0.3 % for GOME-2B. Against the GPS, the median relative differences are 4.9 % and 3.2 % for GOME-2A and B, respectively. For water vapour total columns below 10 kg m-2, large wet biases are observed, especially against the GPS retrievals. Conversely, at values above 50 kg m-2, GOME-2 generally underestimates both ground-based observations.

  13. Preference for practice: a Danish study on young doctors' choice of general practice using a discrete choice experiment.

    PubMed

    Pedersen, Line Bjørnskov; Gyrd-Hansen, Dorte

    2014-07-01

    This study examines the preferences of general practitioners (GPs) in training for organizational characteristics in general practice with focus on aspects that can mitigate problems with GP shortages. A discrete choice experiment was used to investigate preferences for the attributes practice type, number of GPs in general practice, collaboration with other practices, change in weekly working hours (administrative versus patient related), and change in yearly surplus. In May 2011, all doctors actively engaged in the family medicine program in Denmark were invited to participate in a web-based survey. A total of 485 GPs in training responded to the questionnaire, resulting in a response rate of 56%. A mixed logit model showed that GPs in training prefer to work in smaller shared practices (2 GPs). This stands in contrast to the preferences of current GPs. Hence, a generational change in the GP population is likely to introduce more productive practice forms, and problems with GP shortages are likely to be mitigated over the coming years. Results further showed that a majority of the respondents are willing to work in larger shared practices (with 3-4 GPs) if they receive an increase in surplus (approximately 50,000 DKK/6,719 EUR per year) and that they may be willing to take in more patient-related work if the increase in surplus is sufficient (approximately 200,000 DKK/26,875 EUR per year for 5 extra hours per week). Monetary incentives may therefore be an effective tool for further improving productivity.

  14. Development of a Dynamic Time Sharing Scheduled Environment Final Report CRADA No. TC-824-94E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.; Caliga, D.

Massively parallel computers, such as the Cray T3D, have historically supported resource sharing solely with space sharing. In that method, multiple problems are solved by executing them on distinct processors. This project developed a dynamic time- and space-sharing scheduler to achieve greater interactivity and throughput than could be achieved with space sharing alone. CRI and LLNL worked together on the design, testing, and review aspects of this project. There were separate software deliverables. CRI implemented a general-purpose scheduling system as per the design specifications. LLNL ported the local gang scheduler software to the LLNL Cray T3D. In this approach, processors are allocated simultaneously to all components of a parallel program (in a "gang"). Program execution is preempted as needed to provide for interactivity. Programs are also relocated to different processors as needed to efficiently pack the computer's torus of processors. In phase one, CRI developed an interface specification after discussions with LLNL for system-level software supporting a time- and space-sharing environment on the LLNL T3D. The two parties also discussed interface specifications for external control tools (such as scheduling policy tools and system administration tools) and application programs. CRI assumed responsibility for the writing and implementation of all the necessary system software in this phase. In phase two, CRI implemented job-rolling on the Cray T3D, a mechanism for preempting a program, saving its state to disk, and later restoring its state to memory for continued execution. LLNL ported its gang scheduler to the LLNL T3D utilizing the CRI interface implemented in phases one and two. During phase three, the functionality and effectiveness of the LLNL gang scheduler was assessed to provide input to CRI time- and space-sharing efforts. CRI will utilize this information in the development of general schedulers suitable for other sites and future architectures.
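The gang idea, that all tasks of a parallel program occupy their processors in the same time slice and are preempted together, can be sketched as a round-robin over whole programs. This is a toy illustration, not the CRI/LLNL scheduler; machine-capacity checks and job-rolling are deliberately omitted.

```python
# Illustrative gang scheduling: each time slice is given to one whole
# program, so all of its tasks run simultaneously, and the whole gang
# is preempted together when the slice ends.
from collections import deque

def gang_schedule(programs: dict, slices: int) -> list:
    """programs: name -> number of tasks. Returns (name, tasks) per slice."""
    queue = deque(programs)
    timeline = []
    for _ in range(slices):
        prog = queue.popleft()
        timeline.append((prog, programs[prog]))  # whole gang gets CPUs
        queue.append(prog)                       # preempted gang requeues
    return timeline

print(gang_schedule({"cfd": 4, "qcd": 8}, 4))
```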

  15. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based decompression algorithms into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed, and results from the application of one such modification are described.
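One simple form of automatic load balancing is to assign image rows to processors in proportion to their measured throughput, so faster DSPs receive proportionally more work. The following Python sketch illustrates that proportional split only; the function name and numbers are invented, and the paper's actual technique may differ.

```python
# Hypothetical proportional load balancing: split rows of a frame among
# processors according to relative throughput, handing any integer
# rounding remainder to the first processor.
def balance_rows(total_rows: int, throughputs: list) -> list:
    """Rows assigned to each processor, proportional to throughput."""
    total = sum(throughputs)
    shares = [total_rows * t // total for t in throughputs]
    shares[0] += total_rows - sum(shares)  # absorb rounding remainder
    return shares

print(balance_rows(480, [3, 1, 1, 1]))
```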

  16. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on an SGI Origin 2000 (O2K) computer system. The shared memory version of the solver is used, with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.

  17. 76 FR 3090 - Proposed Information Collection; Comment Request; Alaska Region; Bering Sea and Aleutian Islands...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    ... submitted on or before March 21, 2011. ADDRESSES: Direct all written comments to Diana Hynek, Departmental... fisheries. Program components include quota share allocation, processor quota share allocation, individual... Binding Arbitration process, and fee collection. II. Method of Collection Responses are mailed, except the...

  18. Collaboration between general practitioners and mental health care professionals: a qualitative study.

    PubMed

    Fredheim, Terje; Danbolt, Lars J; Haavet, Ole R; Kjønsberg, Kari; Lien, Lars

    2011-05-23

Collaboration between general practice and mental health care has been recognised as necessary to provide good quality healthcare services to people with mental health problems. Several studies indicate that collaboration is often poor, with the result that patients' needs for coordinated services are not sufficiently met and that resources are used inefficiently. An increasing number of mental health care workers should improve mental health services, but may complicate collaboration and coordination between mental health workers and other professionals in the treatment chain. The aim of this qualitative study is to investigate strengths and weaknesses in today's collaboration, and to suggest improvements in the interaction between general practitioners (GPs) and the specialised mental health service. This paper presents a qualitative focus group study with data drawn from six groups and eight group sessions with 28 health professionals (10 GPs, 12 nurses, and 6 physicians doing post-doctoral training in psychiatry), all working in the same region and assumed to make professional contact with each other. GPs and mental health professionals shared each other's expressions of strengths, weaknesses and suggestions for improvement in today's collaboration. Strengths in today's collaboration were related to common consultations between GPs and mental health professionals, and to GPs being able to receive advice about diagnostic treatment dilemmas. Weaknesses were related to the GPs' limited opportunities to meet mental health professionals, and to a lack of mutual knowledge in mental health services. The results describe experiences and the importance of interpersonal knowledge, mutual accessibility and familiarity with existing systems and resources. 
There is an agreement between GPs and mental health professionals that services will improve with shared knowledge about patients through systematic collaborative services, direct cell-phone lines to mental health professionals and allocated times for telephone consultation. GPs and mental health professionals experience collaboration as important. GPs are the gate-keepers to specialised health care, and lack of collaboration seems to create problems for GPs, mental health professionals, and for the patients. Suggestions for improvement included identification of situations that could increase mutual knowledge, and make it easier for GPs to reach the right mental health care professional when needed.

  19. Formulation of consumables management models. Development approach for the mission planning processor working model

    NASA Technical Reports Server (NTRS)

    Connelly, L. C.

    1977-01-01

    The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA approved software development standards. This development approach: (1) promotes cost effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.

  20. Prescription Patterns and the Cost of Migraine Treatments in German General and Neurological Practices.

    PubMed

    Jacob, Louis; Kostev, Karel

    2017-07-01

    The aim of this study was to analyze prescription patterns and the cost of migraine treatments in general practices (GPs) and neurological practices (NPs) in Germany. This study included 43,149 patients treated in GPs and 13,674 patients treated in NPs who were diagnosed with migraine in 2015. Ten different families of migraine therapy were included in the analysis: triptans, analgesics, anti-emetics, beta-blockers, antivertigo products, gastroprokinetics, anti-epileptics, calcium channel blockers, tricyclic antidepressants, and other medications (all other classes used in the treatment of migraine including homeopathic medications). The share of migraine therapies and their costs were estimated for GPs and NPs. The mean age was 44.4 years in GPs and 44.1 years in NPs. Triptans and analgesics were the 2 most commonly prescribed families of drugs in all patients and in the 9 specific subgroups. Interestingly, triptans were more commonly prescribed in NPs than in GPs (30.9% to 55.0% vs. 30.0% to 44.7%), whereas analgesics were less frequently given in NPs than in GPs (11.5% to 17.2% vs. 35.3% to 42.4%). Finally, the share of patients who received no therapy was higher in NPs than in GPs (33.9% to 58.4% vs. 27.5% to 37.9%). The annual cost per patient was €66.04 in GPs and €94.71 in NPs. Finally, the annual cost per patient increased with age and was higher in women and in individuals with private health insurance coverage than in men and individuals with public health insurance coverage. Triptans and analgesics were the 2 most commonly prescribed drugs for the treatment of migraine. Furthermore, approximately 30% to 40% of patients did not receive any therapy. Finally, the annual cost per patient was higher in NPs than in GPs. © 2016 World Institute of Pain.

  1. GPS Block 2R Time Standard Assembly (TSA) architecture

    NASA Technical Reports Server (NTRS)

    Baker, Anthony P.

    1990-01-01

The underlying philosophy of the Global Positioning System (GPS) 2R Time Standard Assembly (TSA) architecture is to utilize two frequency sources, one fixed reference frequency source and one system frequency source, and to couple the system frequency source to the reference frequency source via a sampled-data loop. The system source provides the basic clock frequency and timing for the space vehicle (SV); it uses a voltage-controlled crystal oscillator (VCXO) with high short-term stability. The reference source is an atomic frequency standard (AFS) with high long-term stability. The architecture can support any type of frequency standard; in the system design, rubidium, cesium, and hydrogen masers outputting a canonical frequency were accommodated. The architecture is software-intensive. All VCXO adjustments are digital and are calculated by a processor. They are applied to the VCXO via a digital-to-analog converter.
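The effect of such a processor-computed correction loop can be sketched numerically. This is a generic illustration of a discrete steering loop in the spirit of the description above, not the flight design: the purely proportional control law and the gain value are assumptions.

```python
# Illustrative sampled-data steering loop: at each sample the processor
# measures the system source's normalized frequency error against the
# atomic reference and applies a proportional digital correction
# (conceptually, a DAC word nudging the VCXO toward the AFS).
def simulate_loop(initial_error: float, gain: float, steps: int) -> list:
    """Normalized frequency error after each correction."""
    e = initial_error
    history = []
    for _ in range(steps):
        e -= gain * e              # proportional correction toward zero
        history.append(e)
    return history

print(simulate_loop(1.0, 0.5, 4))  # error halves at each sample
```

With gain 0.5 the short-term-stable VCXO is steered geometrically onto the long-term-stable reference, which is the intended division of labor between the two sources.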

  2. Sea ice type maps from Alaska synthetic aperture radar facility imagery: An assessment

    NASA Technical Reports Server (NTRS)

    Fetterer, Florence M.; Gineris, Denise; Kwok, Ronald

    1994-01-01

    Synthetic aperture radar (SAR) imagery received at the Alaskan SAR Facility is routinely and automatically classified on the Geophysical Processor System (GPS) to create ice type maps. We evaluated the wintertime performance of the GPS classification algorithm by comparing ice type percentages from supervised classification with percentages from the algorithm. The root mean square (RMS) difference for multiyear ice is about 6%, while the inconsistency in supervised classification is about 3%. The algorithm separates first-year from multiyear ice well, although it sometimes fails to correctly classify new ice and open water owing to the wide distribution of backscatter for these classes. Our results imply a high degree of accuracy and consistency in the growing archive of multiyear and first-year ice distribution maps. These results have implications for heat and mass balance studies which are furthered by the ability to accurately characterize ice type distributions over a large part of the Arctic.

  3. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

To study the feasibility of developing an architecture for real-time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible to all processors. Each idle processor fetches a task and its associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real-time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
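The pattern above, idle workers pulling tasks from a shared queue and posting results to shared memory, can be sketched with Python threads. This is a minimal illustration of the coordination scheme only (the `run_pool` name and task format are invented); the original runs on dedicated array-processor hardware, not threads.

```python
# Minimal task-queue-server sketch: idle workers fetch (sub)tasks from a
# shared queue and post results to a shared store, so load balances
# itself without a centralized controller.
import queue
import threading

def run_pool(tasks, n_workers=4):
    """tasks: list of (name, fn, arg). Returns {name: fn(arg)}."""
    q = queue.Queue()
    results = {}
    lock = threading.Lock()
    for t in tasks:
        q.put(t)

    def worker():
        while True:
            try:
                name, fn, arg = q.get_nowait()   # idle worker fetches
            except queue.Empty:
                return                           # no tasks left
            out = fn(arg)                        # process the task
            with lock:                           # post to shared store
                results[name] = out
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

print(run_pool([("t%d" % i, lambda x: x * x, i) for i in range(8)]))
```

Because every worker draws from the same queue, a slow task on one worker simply lets the others drain the remaining work, which is the self-balancing property the abstract highlights.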

  4. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

Little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, this improvement is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing single-processor code. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need to develop a new algorithm. The procedure for building a code using this approach is automated with the Unix stream editor.

  5. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s), and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  6. Smartphones for Geological Data Collection- an Android Phone Application

    NASA Astrophysics Data System (ADS)

    Sun, F.; Weng, Y.; Grigsby, J. D.

    2010-12-01

    Recently, smartphones have attracted great attention in the wireless device market because of their powerful processors, ample memory capacity, advanced connectivity, and numerous utility programs. Considering the prominent features a smartphone has, such as the large touch screen, speaker, microphone, camera, GPS receiver, accelerometer, and Internet connections, it can serve as a perfect digital aide for data recording on any geological field trip. We have designed and developed an application using the aforementioned features of an Android phone to provide functionality used in field studies. For example, employing the accelerometer in the Android phone, the application turns the handset into a Brunton-like device with which users can measure directions, strike and dip of a bedding plane, or trend and plunge of a fold. Our application also includes image capture, GPS coordinate tracking, videotaping, audio recording, and note writing. Data recorded by the application are tied together by a time log, which makes it easy to track all data regarding a specific geologic object. The application pulls the GPS reading from the phone’s built-in GPS receiver and uses it as a spatial index to link the other types of data, then maps them to Google Maps/Earth for visualization. In this way, notes, pictures, and audio or video recordings depicting the characteristics of the outcrops and their spatial relations can all be well documented and organized in one handy gadget.

  7. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones to a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which run in parallel. 
The total volume of grid points in each group is approximately balanced. A proper number of threads is initially allocated to each group, and in subsequent iterations during the run the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in OVERFLOW.
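    The "Jacobi" versus "Gauss-Seidel" distinction drawn above can be illustrated with a minimal sketch (not the OVERFLOW code itself; a generic linear system stands in for the Chimera boundary update). A Jacobi step updates every unknown from the previous iterate, which is why it parallelizes naturally across processes, while a Gauss-Seidel step consumes each freshly updated value immediately and is inherently sequential:

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every update reads only the previous iterate,
    so all rows could be computed in parallel (the _MPI/_MLP style)."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each update immediately uses the freshest
    values computed earlier in the same sweep (the serial-code style)."""
    x = list(x)
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```

    Both sweeps converge to the same solution for diagonally dominant systems; the difference lies only in data dependencies, and hence in parallelizability.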

  8. WATERLOOP V2/64: A highly parallel machine for numerical computation

    NASA Astrophysics Data System (ADS)

    Ostlund, Neil S.

    1985-07-01

    Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to approach the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple-processor architecture which attempts to solve the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.

  9. Parallel algorithms for boundary value problems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A general approach to solving boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step, where all P available processors work in parallel, and the global step, where one processor solves a tridiagonal linear system of order P. The main advantages of this approach are twofold. First, the approach is very flexible, especially in the local step, so the algorithm can be used with any number of processors and with any SIMD or MIMD machine. Second, the communication complexity is very small, so the algorithm can be used just as easily on shared-memory machines. Several examples of using this strategy are discussed.
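    A minimal sketch of the two-step structure described above (illustrative only: the "local step" here is a placeholder computation, and the Thomas algorithm plays the role of the order-P tridiagonal solve performed by a single processor):

```python
from concurrent.futures import ThreadPoolExecutor

def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system of order P by forward elimination
    and back substitution (the 'global step')."""
    n = len(diag)
    b, c, d = list(diag), list(sup), list(rhs)
    for i in range(1, n):
        m = sub[i - 1] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def local_step(chunk):
    # placeholder for the independent per-processor computation
    return sum(chunk)

def solve_bvp_sketch(chunks):
    # local step: all P workers run in parallel
    with ThreadPoolExecutor() as pool:
        local = list(pool.map(local_step, chunks))
    P = len(local)
    # global step: one processor solves a tridiagonal system of order P
    return thomas_solve([-1.0] * (P - 1), [2.0] * P, [-1.0] * (P - 1), local)
```

    The only communication is gathering one value per processor for the global solve, reflecting the abstract's claim of very small communication complexity.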

  10. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  11. Methods for synchronizing a countdown routine of a timer key and electronic device

    DOEpatents

    Condit, Reston A.; Daniels, Michael A.; Clemens, Gregory P.; Tomberlin, Eric S.; Johnson, Joel A.

    2015-06-02

    A timer key relating to monitoring a countdown time of a countdown routine of an electronic device is disclosed. The timer key comprises a processor configured to respond to a countdown time associated with operation of the electronic device, a display operably coupled with the processor, and a housing configured to house at least the processor. The housing has an associated structure configured to engage with the electronic device to share the countdown time between the electronic device and the timer key. The processor is configured to begin a countdown routine based at least in part on the countdown time, wherein the countdown routine is at least substantially synchronized with a countdown routine of the electronic device when the timer key is removed from the electronic device. A system and method for synchronizing countdown routines of a timer key and an electronic device are also disclosed.

  12. Apparatus, system, and method for synchronizing a timer key

    DOEpatents

    Condit, Reston A; Daniels, Michael A; Clemens, Gregory P; Tomberlin, Eric S; Johnson, Joel A

    2014-04-22

    A timer key relating to monitoring a countdown time of a countdown routine of an electronic device is disclosed. The timer key comprises a processor configured to respond to a countdown time associated with operation of the electronic device, a display operably coupled with the processor, and a housing configured to house at least the processor. The housing has an associated structure configured to engage with the electronic device to share the countdown time between the electronic device and the timer key. The processor is configured to begin a countdown routine based at least in part on the countdown time, wherein the countdown routine is at least substantially synchronized with a countdown routine of the electronic device when the timer key is removed from the electronic device. A system and method for synchronizing countdown routines of a timer key and an electronic device are also disclosed.

  13. Data preprocessing for determining outer/inner parallelization in the nested loop problem using OpenMP

    NASA Astrophysics Data System (ADS)

    Handhika, T.; Bustamam, A.; Ernastuti; Kerami, D.

    2017-07-01

    Multi-thread programming using OpenMP on a shared-memory architecture with hyper-threading technology allows a resource to be accessed by multiple processors simultaneously, with each processor executing more than one thread over a given period. The achievable speedup, however, depends on the limited number of threads a processor can execute; this matters especially for sequential algorithms containing a nested loop in which the number of outer-loop iterations is greater than the maximum number of threads a processor can execute. The thread distribution technique found previously could only be applied by high-level programmers. This paper presents a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads a processor can execute is smaller than the number of outer-loop iterations. Data preprocessing on the number of outer- and inner-loop iterations, the computational time required to execute each iteration, and the maximum number of threads a processor can execute is used as a strategy to determine which parallel region will produce the optimal speedup.
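    The outer-versus-inner decision can be sketched as follows (a deliberately simplified heuristic, not the paper's preprocessing procedure; Python threads stand in for OpenMP threads): parallelize the outer loop when it offers at least as many iterations as available threads, otherwise parallelize the inner loop.

```python
from concurrent.futures import ThreadPoolExecutor

def nested_sum(outer, inner, work, max_threads=4):
    """Compute sum of work(i, j) over a 2-level nested loop, choosing
    outer or inner parallelization from the iteration counts."""
    if outer >= max_threads:
        # enough outer iterations to keep every thread busy
        with ThreadPoolExecutor(max_threads) as pool:
            rows = pool.map(lambda i: sum(work(i, j) for j in range(inner)),
                            range(outer))
        return sum(rows)
    # too few outer iterations: parallelize the inner loop instead
    with ThreadPoolExecutor(max_threads) as pool:
        total = 0
        for i in range(outer):
            total += sum(pool.map(lambda j: work(i, j), range(inner)))
        return total
```

    Either branch yields the same result; the paper's contribution is deciding which branch gives the better speedup from measured per-iteration costs.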

  14. A Hybrid-Cloud Science Data System Enabling Advanced Rapid Imaging & Analysis for Monitoring Hazards

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Moore, A. W.; Fielding, E. J.; Radulescu, C.; Sacco, G.; Stough, T. M.; Mattmann, C. A.; Cervelli, P. F.; Poland, M. P.; Cruz, J.

    2012-12-01

    Volcanic eruptions, landslides, and levee failures are some examples of hazards that can be more accurately forecasted with sufficient monitoring of precursory ground deformation, such as the high-resolution measurements from GPS and InSAR. In addition, coherence and reflectivity change maps can be used to detect surface change due to lava flows, mudslides, tornadoes, floods, and other natural and man-made disasters. However, it is difficult for many volcano observatories and other monitoring agencies to process GPS and InSAR products in the automated scenario needed for continual monitoring of events. Additionally, numerous interoperability barriers exist in multi-sensor observation data access, preparation, and fusion to create actionable products. Combining high spatial resolution InSAR products with high temporal resolution GPS products, and automating this data preparation and processing across global-scale areas of interest, presents an untapped science and monitoring opportunity. The global coverage offered by satellite-based SAR observations, and the rapidly expanding GPS networks, can provide orders of magnitude more data on these hazardous events if we have a data system that can efficiently and effectively analyze the voluminous raw data and provide users the tools to access data from their regions of interest. Currently, combined GPS and InSAR time series are primarily generated for specific research applications, and are not implemented to run on large-scale continuous data sets and delivered to decision-making communities. We are developing an advanced service-oriented architecture for hazard monitoring, leveraging NASA-funded algorithms and data management, to enable both science and decision-making communities to monitor areas of interest via seamless data preparation, processing, and distribution. 
Our objectives: * Enable high-volume and low-latency automatic generation of NASA Solid Earth science data products (InSAR and GPS) to support hazards monitoring. * Facilitate NASA-USGS collaborations to share NASA InSAR and GPS data products, which are difficult to process at high volume and low latency, for decision support. * Enable interoperable discovery, access, and sharing of NASA observations and derived actionable products between the observation and decision-making communities. * Enable improved understanding of these products through visualization, mining, and cross-agency sharing. Existing InSAR and GPS processing packages and other software are integrated for generating geodetic decision-support monitoring products. We employ semantic and cloud-based data management and processing techniques for handling large data volumes, reducing end-product latency, codifying data system information with semantics, and deploying interoperable services for actionable products to decision-making communities.

  15. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPS (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days. Columbia is a 20-node supercomputer built on proven 512-processor nodes and is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  16. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    The Hypercluster computer system includes multiple digital processors whose operation is coordinated through specialized software. It is configurable according to various parallel-computing architectures of the shared-memory or distributed-memory class, including scalar, vector, reduced-instruction-set, and complex-instruction-set computers. It is designed as a flexible, relatively inexpensive system that provides a single programming and operating environment within which one can investigate the effects of various parallel-computing architectures, and combinations thereof, on performance in the solution of complicated problems such as three-dimensional flows in turbomachines. The Hypercluster software and architectural concepts are in the public domain.

  17. GPs' interactional styles in consultations with Dutch and ethnic minority patients.

    PubMed

    Schouten, Barbara C; Meeuwesen, Ludwien; Harmsen, Hans A M

    2009-12-01

    The aim of this study was to examine interactional styles of general practitioners (GPs) in consultations with Dutch patients as compared to ethnic minority patients, from the perspective of level of mutual understanding between patient and GP. Data of 103 transcripts of video-registered medical interviews were analyzed to assess GPs' communication styles in terms of involvement, detachment, shared decision-making and patient-centeredness. Surveys were used to collect data on patients' characteristics and mutual understanding. Results show that overall, GPs communicate less adequately with ethnic minority patients than with Dutch patients; they involve them less in decision-making and check their understanding of what has been discussed less often. Intercultural consultations are thus markedly distinguishable from intracultural consultations by a lack of adequate communicative behavior by GPs. As every patient has a moral and legal right to make informed decisions, it is concluded that GPs should check more often whether their ethnic minority patients have understood what has been said during the medical consultation.

  18. The prescribing of specialist medicines: what factors influence GPs' decision making?

    PubMed

    Crowe, Sarah; Tully, Mary P; Cantrill, Judith A

    2009-08-01

    As Governments worldwide strive to integrate efficient health care delivery across the primary-secondary care divide, particular significance has been placed on the need to understand GPs' prescribing of specialist drugs. To explore the factors which influence GPs' decision-making process when requested to prescribe specialist drugs. A qualitative approach was used to explore the perspectives of a wide range of practice-, primary care trust-, strategic health authority-level staff and other relevant stakeholders in the North-West of England. All semi-structured interviews (n = 47) were analysed comprehensively using the five-stage 'framework' approach. Six diverse factors were identified as having a crucial bearing on how GPs evaluate initial requests and subsequently decide whether or not to prescribe. These include GPs' lack of knowledge and expertise in using specialist drugs, the shared care arrangement, the influence of a locally agreed advisory list, financial and resource considerations, patient convenience and understanding and GPs' specific areas of interest. This exploration of GPs' decision-making process is needed to support future integrated health care delivery.

  19. Improving specialist drug prescribing in primary care using task and error analysis: an observational study.

    PubMed

    Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan

    2017-03-01

    Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. To identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. Semi-structured interviews with key informants followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common errors, which could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.

  20. General practitioners' management of mental disorders: a rewarding practice with considerable obstacles.

    PubMed

    Fleury, Marie-Josée; Imboua, Armelle; Aubé, Denise; Farand, Lambert; Lambert, Yves

    2012-03-16

    Primary care improvement is the cornerstone of current reforms. Mental disorders (MDs) are a leading cause of morbidity worldwide and widespread in industrialised countries. MDs are treated mainly in primary care by general practitioners (GPs), even though the latter's ability to detect, diagnose, and treat patients with MDs is often considered unsatisfactory. This article examines GPs' management of MDs in an effort to acquire more information regarding the means by which GPs deal with MD cases, the impact of such cases on their practices, factors that enable or hinder MD management, and patient-management strategies. This study employs a mixed-method approach with emphasis on qualitative investigation. Based on a previous survey of 398 GPs in Quebec, Canada, 60 GPs representing a variety of practice settings were selected for further study. A 10-minute-long questionnaire comprising 27 items was administered, and 70-minute-long interviews were conducted. Quantitative (SPSS) and qualitative (NVivo) analyses were performed. At least 20% of GP visits were MD-related. GPs were comfortable managing common MDs, but not serious MDs. GPs based their treatment of MDs on pharmacotherapy, support therapy, and psycho-education. They used clinical intuition with few clinical tools, and closely followed their patients with MDs. Practice features (salary or hourly-fee payment; psycho-social teams on-site; strong informal networks) and GPs' individual characteristics (continuing medical education; exposure and interest in MDs; traits like empathy) favoured MD management. Collaboration with psychologists and psychiatrists was considered key to good MD management. Limited access to specialists, system fragmentation, and underdeveloped group practice and shared-care models were impediments. MD management was seen as burdensome because it required more time, flexibility, and emotional investment. Strategies exist to reduce the burden (one-problem-per-visit rule; longer time slots). 
GPs found MD practice rewarding, as patients were seen as grateful and more compliant with medical recommendations than other patients, generally leading to positive outcomes. To improve MD management, this study highlights the importance of extending multidisciplinary GP practice settings with salary or hourly-fee payment; access to psychotherapeutic and psychiatric expertise; and case-discussion training involving local networks of GPs and MD specialists that encourages both knowledge transfer and shared care.

  1. General practitioners' management of mental disorders: A rewarding practice with considerable obstacles

    PubMed Central

    2012-01-01

    Background Primary care improvement is the cornerstone of current reforms. Mental disorders (MDs) are a leading cause of morbidity worldwide and widespread in industrialised countries. MDs are treated mainly in primary care by general practitioners (GPs), even though the latter's ability to detect, diagnose, and treat patients with MDs is often considered unsatisfactory. This article examines GPs' management of MDs in an effort to acquire more information regarding the means by which GPs deal with MD cases, the impact of such cases on their practices, factors that enable or hinder MD management, and patient-management strategies. Methods This study employs a mixed-method approach with emphasis on qualitative investigation. Based on a previous survey of 398 GPs in Quebec, Canada, 60 GPs representing a variety of practice settings were selected for further study. A 10-minute-long questionnaire comprising 27 items was administered, and 70-minute-long interviews were conducted. Quantitative (SPSS) and qualitative (NVivo) analyses were performed. Results At least 20% of GP visits were MD-related. GPs were comfortable managing common MDs, but not serious MDs. GPs based their treatment of MDs on pharmacotherapy, support therapy, and psycho-education. They used clinical intuition with few clinical tools, and closely followed their patients with MDs. Practice features (salary or hourly-fee payment; psycho-social teams on-site; strong informal networks) and GPs' individual characteristics (continuing medical education; exposure and interest in MDs; traits like empathy) favoured MD management. Collaboration with psychologists and psychiatrists was considered key to good MD management. Limited access to specialists, system fragmentation, and underdeveloped group practice and shared-care models were impediments. MD management was seen as burdensome because it required more time, flexibility, and emotional investment. 
Strategies exist to reduce the burden (one-problem-per-visit rule; longer time slots). GPs found MD practice rewarding, as patients were seen as grateful and more compliant with medical recommendations than other patients, generally leading to positive outcomes. Conclusions To improve MD management, this study highlights the importance of extending multidisciplinary GP practice settings with salary or hourly-fee payment; access to psychotherapeutic and psychiatric expertise; and case-discussion training involving local networks of GPs and MD specialists that encourages both knowledge transfer and shared care. PMID:22423592

  2. Comparison of GOME-2/Metop total column water vapour with ground-based and in situ measurements

    NASA Astrophysics Data System (ADS)

    Kalakoski, N.; Kujanpää, J.; Sofieva, V.; Tamminen, J.; Grossi, M.; Valks, P.

    2014-12-01

    Total column water vapour product from the Global Ozone Monitoring Experiment-2 on board the Metop-A and Metop-B satellites (GOME-2/Metop-A and GOME-2/Metop-B), produced by the Satellite Application Facility on Ozone and Atmospheric Chemistry Monitoring (O3M SAF), is compared with co-located radiosonde and Global Positioning System (GPS) observations. The comparisons use data recently reprocessed with the GOME Data Processor (GDP) version 4.7 and cover the period January 2007-July 2013 for GOME-2A and December 2012-July 2013 for GOME-2B. Radiosonde data are from the Integrated Global Radiosonde Archive (IGRA) maintained by the National Climatic Data Center (NCDC), screened to exclude soundings with an incomplete tropospheric column. Ground-based GPS observations from the COSMIC/SuomiNet network are used as a second independent data source. Good general agreement between GOME-2 and the ground-based observations is found. The median relative difference of GOME-2 to radiosonde observations is -2.7% for GOME-2A and -0.3% for GOME-2B; against GPS observations, the median relative differences are 4.9% and 3.2% for GOME-2A and GOME-2B, respectively. For water vapour total columns below 10 kg m-2, large wet biases are observed, especially against GPS observations. Conversely, at values above 50 kg m-2, GOME-2 generally underestimates both ground-based observations.

  3. [On the front line: survey on shared responsibility. General practitioners and schizophrenia].

    PubMed

    Stip, Emmanuel; Boyer, Richard; Sepehry, Amir Ali; Rodriguez, Jean Pierre; Umbricht, Daniel; Tempier, Adrien; Simon, Andor E

    2007-01-01

    General practitioners (GPs) play a preponderant role in the treatment of patients suffering from schizophrenia. The objectives were to determine the number of patients with schizophrenia who are treated by GPs, the needs and attitudes of GPs, their knowledge concerning diagnosis, and the treatment they provide. A postal survey was conducted with randomly chosen Quebec GPs; a total of 1003 GPs participated. Among them, a small percentage treat early-onset schizophrenia, and the GPs expressed a wish to be better informed about the accessibility of specialized services. Results pertaining to questions on diagnosis and knowledge of treatments are inconsistent. The majority of GPs treat first psychotic episodes with antipsychotic medication, but only a third of the GPs surveyed propose maintaining treatment after a first psychotic episode, in accordance with international recommendations and the recent Canadian practice guidelines, which recommend at least 6 to 12 months of treatment after a partial or complete clinical response. The time given by male GPs to a first contact varies between 10 and 20 minutes, while 80% of female GPs spend at least 20 minutes. The adverse effect of antipsychotic medication raising most concern is weight gain, ahead of neurological signs. Some of this survey's data should be considered by professional and governmental associations in order to improve the place of GPs in a health plan for treating schizophrenia.

  4. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions Using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
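    The approach described, fitting a logistic classifier on the ground pre-flight and executing only the fitted weights in real time, can be sketched in miniature. This is illustrative only: the single noisy-altitude feature, the 3 km threshold, and all values below are hypothetical, not EFT-1 parameters.

```python
import math
import random

def train_logistic(samples, lr=0.1, epochs=2000):
    """Fit w, b by batch gradient descent on (feature, label) pairs;
    sigmoid(w*x + b) is the trigger probability (the 'ground processor')."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical training set: noisy altitude measurements (km) labelled
# 1 when the true altitude is below a 3 km deployment threshold.
random.seed(0)
data = [(alt + random.gauss(0.0, 0.3), 1.0 if alt < 3.0 else 0.0)
        for alt in [random.uniform(0.0, 6.0) for _ in range(200)]]
w, b = train_logistic(data)

def trigger(measured_alt, w=w, b=b):
    """Flight-software side: evaluate the pre-fitted classifier only."""
    return 1.0 / (1.0 + math.exp(-(w * measured_alt + b))) > 0.5
```

    The statistical fit makes the decision boundary robust to measurement noise, which is the property the paper exploits for inaccurate or scarce sensor information.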

  5. An Alternative Flight Software Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly; Gay, Robert; Stachowiak, Susan

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  6. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter. In order to increase overall robustness, the vehicle also has an alternate method of triggering the drogue parachute deployment based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this velocity-based trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers excellent performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
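
    The trigger design sketched in these abstracts — fit a logistic-regression classifier offline on simulated entry data, then evaluate it in flight as a cheap deploy/hold decision — can be illustrated with a toy example. The sketch below is a minimal illustration in plain Python; the feature set (a pseudo altitude and velocity), the 5 km deploy threshold, and the training data are invented for demonstration and are not the EFT-1 quantities.

```python
import math
import random

def _sigmoid(z):
    z = max(min(z, 500.0), -500.0)   # guard against math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.1, epochs=500):
    """Ground processor: SGD fit of w, b for P(deploy) = sigmoid(w.x + b)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
            b -= lr * err
    return w, b

def trigger(w, b, x, threshold=0.5):
    """Flight side: one dot product and a compare per evaluation."""
    return _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > threshold

# Invented training set: pseudo "altitude" (km) and "velocity" (km/s) features,
# labeled 1 (deploy) below a 5 km altitude in this toy example.
random.seed(0)
samples = [[random.uniform(0.0, 12.0), random.uniform(0.0, 0.4)] for _ in range(400)]
labels = [1 if s[0] < 5.0 else 0 for s in samples]

w, b = train_logistic(samples, labels)
print(trigger(w, b, [3.0, 0.2]))   # low "altitude": deploy
print(trigger(w, b, [10.0, 0.2]))  # high "altitude": hold
```

    The expensive part (training) happens pre-flight on the ground; the in-flight check is just a dot product and a comparison, which is what makes the approach attractive for flight software.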

  7. The UNAVCO Real-time GPS Data Processing System and Community Reference Data Sets

    NASA Astrophysics Data System (ADS)

    Sievers, C.; Mencin, D.; Berglund, H. T.; Blume, F.; Meertens, C. M.; Mattioli, G. S.

    2013-12-01

    UNAVCO has constructed a real-time GPS (RT-GPS) network of 420 GPS stations. The majority of the streaming stations come from the EarthScope Plate Boundary Observatory (PBO) through an NSF-ARRA-funded Cascadia Upgrade Initiative that upgraded 100 backbone stations throughout the PBO footprint and 282 stations concentrated in the Pacific Northwest. Additional contributions from NOAA (~30 stations in Southern California) and the USGS (8 stations at Yellowstone) account for the other real-time stations. Based on the outcomes of a community workshop on real-time GPS position data products and formats, hosted by UNAVCO in spring 2011, UNAVCO now provides real-time PPP positions for all 420 stations using Trimble's PIVOT software and for 50 stations using TrackRT at the volcanic centers located at Yellowstone (Figure 1 shows an example ensemble of TrackRT networks used in processing the Yellowstone data), Mt St Helens, and Montserrat. The UNAVCO real-time system has the potential to enhance our understanding of earthquakes, seismic wave propagation, volcanic eruptions, magmatic intrusions, movement of ice, landslides, and the dynamics of the atmosphere. Beyond its increasing uses for science and engineering, RT-GPS has the potential to provide early warning of hazards to emergency managers, utilities, other infrastructure managers, first responders and others. With the goal of characterizing stability and improving software and higher-level products based on real-time GPS time series, UNAVCO is developing an open community standard data set where data processors can provide solutions based on common sets of RT-GPS data which simulate real-world scenarios and events.
UNAVCO is generating standard data sets for playback that include not only real and synthetic events but also background noise, antenna movement (e.g., steps, linear trends, sine waves, and realistic earthquake-like motions), receiver drop-out and online return, interruption of communications (such as bulk regional failures of specific carriers during an actual event), satellites rising and setting, various constellation outages, and differences in performance between real-time and simulated (retroactive) real-time processing. We present an overview of the UNAVCO RT-GPS system, a comparison of the UNAVCO-generated real-time data products, and an overview of available common data sets.

  8. [Barriers to evidence-based medicine encountered among GPs - an issue based on misunderstanding? A qualitative study in the general practice setting].

    PubMed

    Bölter, Regine; Kühlein, Thomas; Ose, Dominik; Götz, Katja; Freund, Tobias; Szecsenyi, Joachim; Miksch, Antje

    2010-01-01

    The Chronic Care Model (CCM) is a framework for the structured care of patients with chronic conditions. It requires that both physicians and patients have access to scientific evidence in order to facilitate shared treatment decision-making on the basis of the patient's individual needs and the best available external evidence. The aim of this study was to find out whether general practitioners (GPs) actually make use of evidence-based information and guidelines and whether and how they communicate this information to their patients. We interviewed 14 general practitioners and conducted a content analysis. The majority of these GPs take a sceptical view towards evidence-based guidelines. Their main point of criticism is that guidelines disregard the individual patient's reality and lifestyle. Instead, GPs emphasize the relevance of their own knowledge of their patients' personal and medical histories and of the continual care they provide. Since GPs themselves often do not accept guidelines, they seldom impart their content to their patients. According to the GPs' experience, there are contradictions between guideline-conformant therapy and individual treatment. The integrative character of evidence-based medicine is not recognized. The reason is that evidence-based medicine is equated with guidelines and trial results by the majority of the GPs interviewed. To facilitate guideline implementation in everyday practice, GPs need to be provided with adequate access to scientific evidence and an understanding of the intentions of guidelines. If the doctors themselves do not accept guidelines, they will not share them with their patients. It must be made clear that guidelines are not intended as normative demands for a specific therapy for every patient, but are rather meant to assist the physician in the search for the best therapy for the individual patient. Copyright © 2010. Published by Elsevier GmbH.

  9. International GPS (Global Positioning System) Service for Geodynamics

    NASA Technical Reports Server (NTRS)

    Zumberge, J. F. (Editor); Liu, R. (Editor); Neilan, R. E. (Editor)

    1995-01-01

    The International GPS (Global Positioning System) Service for Geodynamics (IGS) began formal operation on January 1, 1994. This first annual report is divided into sections that mirror different aspects of the service. Section (1) contains general information, including the history of the IGS, its organization, and the global network of GPS tracking sites; (2) contains information on the Central Bureau Information System; (3) describes the International Earth Rotation Service (IERS); (4) details the collection and distribution of IGS data in the Data Center reports; (6) describes how the IGS Analysis Centers generate their products; (7) contains miscellaneous contributions from other organizations that share common interests with the IGS.

  10. High speed quantitative digital microscopy

    NASA Technical Reports Server (NTRS)

    Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.

    1984-01-01

    Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.

  11. Using SDI-12 with ST microelectronics MCU's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saari, Alexandra; Hinzey, Shawn Adrian; Frigo, Janette Rose

    2015-09-03

    ST Microelectronics microcontrollers and processors are readily available, capable, and economical. Unfortunately, they lack a broad user base like similar offerings from Texas Instruments, Atmel, or Microchip. All of these devices could be useful in economical remote-sensing hardware for environmental applications. With the increased need for environmental studies, and limited budgets, flexibility in hardware is very important. To that end, and in an effort to increase open support of ST devices, I am sharing my team's experience in interfacing a common environmental sensor communication protocol (SDI-12) with ST devices.

  12. Multitasking OS manages a team of processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ripps, D.L.

    1983-07-21

    MTOS-68k is a real-time multitasking operating system designed for the popular MC68000 microprocessors. It approaches task coordination and synchronization in a fashion that uniquely matches the structural simplicity and regularity of the 68000 instruction set. Since in many 68000 applications the speed and power of one CPU are not enough, MTOS-68k has been designed to support multiple processors as well as multiple tasks. Typically, the devices are tightly coupled single-board computers; that is, they share a backplane and parts of global memory.

  13. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  14. Resource and Performance Evaluations of Fixed Point QRD-RLS Systolic Array through FPGA Implementation

    NASA Astrophysics Data System (ADS)

    Yokoyama, Yoshiaki; Kim, Minseok; Arai, Hiroyuki

    At present, when using space-time processing techniques with multiple antennas for mobile radio communication, real-time weight adaptation is necessary. Due to the progress of integrated-circuit technology, dedicated processor implementation with an ASIC or FPGA can be employed for various wireless applications. This paper presents a resource and performance evaluation of a QRD-RLS systolic array processor based on a fixed-point CORDIC algorithm implemented on an FPGA. In this paper, to save hardware resources, we propose a shared architecture for a complex CORDIC processor. The required precision of internal calculation, the circuit area for the number of antenna elements and wordlength, and the processing speed are evaluated. The resource estimation provides a possible processor configuration with a current FPGA on the market. Computer simulations assuming a fading channel show a fast convergence property with a finite number of training symbols. The proposed architecture has also been implemented, and its operation was verified by beamforming evaluation through a radio propagation experiment.
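
    The Givens-rotation primitive that each cell of a QRD-RLS systolic array performs can be realized with vectoring-mode CORDIC. The sketch below is an illustrative software model in plain Python, using floating-point arithmetic for clarity rather than the paper's fixed-point datapath:

```python
import math

def cordic_vectoring(x, y, iterations=32):
    """Vectoring-mode CORDIC: shift-add micro-rotations drive y to 0.
    Returns (magnitude, angle) of the input vector; assumes x > 0."""
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0            # rotate toward the x-axis
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        angle -= d * math.atan(2.0**-i)
    # Undo the fixed CORDIC gain K = prod(sqrt(1 + 2**(-2i)))
    gain = math.prod(math.sqrt(1.0 + 4.0**-i) for i in range(iterations))
    return x / gain, angle

mag, ang = cordic_vectoring(3.0, 4.0)
print(mag, ang)   # converges to (5.0, atan2(4, 3))
```

    In hardware, each iteration is only a shift and an add, and the constant gain is folded into one final scaling, which is why CORDIC maps so well onto FPGA fabric.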

  15. A High-Throughput Processor for Flight Control Research Using Small UAVs

    NASA Technical Reports Server (NTRS)

    Klenke, Robert H.; Sleeman, W. C., IV; Motter, Mark A.

    2006-01-01

    There are numerous autopilot systems that are commercially available for small (<100 lbs) UAVs. However, they all share several key disadvantages for conducting aerodynamic research, chief amongst which is the fact that most utilize older, slower, 8- or 16-bit microcontroller technologies. This paper describes the development and testing of a flight control system (FCS) for small UAVs based on a modern, high-throughput, embedded processor. In addition, this FCS platform contains user-configurable hardware resources in the form of a Field Programmable Gate Array (FPGA) that can be used to implement custom, application-specific hardware. This hardware can be used to off-load routine tasks, such as sensor data collection, from the FCS processor, thereby further increasing the computational throughput of the system.

  16. A study of information management in the patient surgical pathway in NHSScotland.

    PubMed

    Bouamrane, Matt-Mouley; Mair, Frances S

    2013-01-01

    We conducted a study of information management processes across the patient surgical pathway in NHSScotland. While the majority of general practitioners (GPs) consider electronic medical records systems as an essential and integral part of their work during the patient consultation, many were not fully satisfied with the functionalities of these systems. A majority of GPs considered that the national eReferral system streamlined referral processes. Almost all GPs reported marked variability in the quality of discharge information. Preoperative processes vary significantly across Scotland, with most services using paper-based systems. Insufficient use is made of information provided through the patient electronic referral leading to a considerable duplication of tasks already performed in primary care. Three health-boards have implemented electronic preoperative information systems. These have transformed clinical practices and facilitated communication and information-sharing among the multi-disciplinary team and within the health-boards. Substantial progress has been made towards improving information transfer and sharing within the surgical pathway in recent years. However, there remains scope for further improvements at the interface between services.

  17. Negotiating refusal in primary care consultations: a qualitative study.

    PubMed

    Walter, Alex; Chew-Graham, Carolyn; Harrison, Stephen

    2012-08-01

    How GPs negotiate patient requests is vital to their gatekeeper role but also a source of potential conflict, practitioner stress and patient dissatisfaction. Difficulties may arise when demands of shared decision-making conflict with resource allocation, which may be exacerbated by new commissioning arrangements, with GPs responsible for available services. To explore GPs' accounts of negotiating refusal of patient requests and their negotiation strategies. A qualitative design was employed with two focus groups of GPs and GP registrars followed by 20 semi-structured interviews. Participants were sampled by gender, experience, training/non-training, principal versus salaried or locum. Thematic content analysis proceeded in parallel with interviews and further sampling. The setting was GP practices within an English urban primary care trust. Sickness certification, antibiotics and benzodiazepines were cited most frequently as problematic patient requests. GP trainees reported more conflict within interactions than experienced GPs. Negotiation strategies, such as blaming distant third parties like the primary care organization, were designed to prevent conflict and preserve the doctor-patient relationship. GPs reported patients' expectations being strongly influenced by previous encounters with other health care professionals. The findings reiterate the prominence of the doctor-patient relationship in GPs' accounts. GPs' relationships with colleagues and the wider National Health Service (NHS) are of particular relevance in light of provisions in the Health and Social Care Bill for clinical commissioning consortia. The ability of GPs to offset blame for rationing decisions to third parties will be undermined if the same GPs commission services.

  18. A pervasive parallel framework for visualization: final report for FWP 10-014707

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.

    2014-01-01

    We are on the threshold of a transformative change in the basic architecture of high-performance computing. The use of accelerator processors, characterized by large core counts, shared but asymmetrical memory, and heavy thread loading, is quickly becoming the norm in high-performance computing. These accelerators represent significant challenges in updating our existing base of software. An intrinsic problem with this transition is a fundamental programming shift from message-passing processes to much finer-grained thread scheduling with memory sharing. Another problem is the lack of stability in accelerator implementation; processor and compiler technology is currently changing rapidly. This report documents the results of our three-year ASCR project to address these challenges. Our project includes the development of the Dax toolkit, which contains the beginnings of new algorithms for a new generation of computers and the underlying infrastructure to rapidly prototype and build further algorithms as necessary.

  19. Sharing Responsibilities within the General Practice Team - A Cross-Sectional Study of Task Delegation in Germany.

    PubMed

    Mergenthal, Karola; Beyer, Martin; Gerlach, Ferdinand M; Guethlin, Corina

    2016-01-01

    Expected growth in the demand for health services has generated interest in the more effective deployment of health care assistants. Programs encouraging German general practitioners (GPs) to share responsibility for care with specially qualified health care assistants in the family practice (VERAHs) have existed for several years. But no studies have been conducted on the tasks German GPs are willing to rely on specially qualified personnel to perform, what they are prepared to delegate to all non-physician practice staff and what they prefer to do themselves. As part of an evaluation study on the deployment of VERAHs in GP-centered health care, we used a questionnaire to ask about task delegation within the practice team. From a list of tasks that VERAHs are specifically trained to carry out, GPs were asked to indicate which they actually delegate. We also asked GPs why they had employed a VERAH in their practice and for their opinions on the benefits and limitations of assigning tasks to VERAHs. The aim of the study was to find out which tasks GPs delegate to their specially qualified personnel, which they permit all HCAs to carry out, and which tasks they do not delegate at all. The survey was filled in and returned by 245 GPs (83%). Some tasks were exclusively delegated to VERAHs (e.g. home visits), while others were delegated to all HCAs (e.g. vaccinations). About half the GPs rated the assessment of mental health, as part of the comprehensive assessment of a patient's condition, as the sole responsibility of a GP. The possibility to delegate more complex tasks was the main reason given for employing a VERAH. Doctors said the delegation of home visits provided them with the greatest relief. In Germany, where GPs are solely accountable for the health care provided in their practices, experience with the transfer of responsibility to other non-physician health care personnel is still very limited. 
When HCAs have undergone special training, GPs seem to be prepared to delegate tasks that demand a substantial degree of know-how, such as home visits and case management. This "new" role allocation within the practice may signal a shift in the provision of health care by family practice teams in Germany.

  20. Sharing Responsibilities within the General Practice Team – A Cross-Sectional Study of Task Delegation in Germany

    PubMed Central

    Mergenthal, Karola; Beyer, Martin; Gerlach, Ferdinand M.; Guethlin, Corina

    2016-01-01

    Background Expected growth in the demand for health services has generated interest in the more effective deployment of health care assistants. Programs encouraging German general practitioners (GPs) to share responsibility for care with specially qualified health care assistants in the family practice (VERAHs) have existed for several years. But no studies have been conducted on the tasks German GPs are willing to rely on specially qualified personnel to perform, what they are prepared to delegate to all non-physician practice staff and what they prefer to do themselves. Methods As part of an evaluation study on the deployment of VERAHs in GP-centered health care, we used a questionnaire to ask about task delegation within the practice team. From a list of tasks that VERAHs are specifically trained to carry out, GPs were asked to indicate which they actually delegate. We also asked GPs why they had employed a VERAH in their practice and for their opinions on the benefits and limitations of assigning tasks to VERAHs. The aim of the study was to find out which tasks GPs delegate to their specially qualified personnel, which they permit all HCAs to carry out, and which tasks they do not delegate at all. Results The survey was filled in and returned by 245 GPs (83%). Some tasks were exclusively delegated to VERAHs (e.g. home visits), while others were delegated to all HCAs (e.g. vaccinations). About half the GPs rated the assessment of mental health, as part of the comprehensive assessment of a patient’s condition, as the sole responsibility of a GP. The possibility to delegate more complex tasks was the main reason given for employing a VERAH. Doctors said the delegation of home visits provided them with the greatest relief. Conclusions In Germany, where GPs are solely accountable for the health care provided in their practices, experience with the transfer of responsibility to other non-physician health care personnel is still very limited. 
When HCAs have undergone special training, GPs seem to be prepared to delegate tasks that demand a substantial degree of know-how, such as home visits and case management. This “new” role allocation within the practice may signal a shift in the provision of health care by family practice teams in Germany. PMID:27280415

  1. A RLS-SVM Aided Fusion Methodology for INS during GPS Outages

    PubMed Central

    Yao, Yiqing; Xu, Xiaosu

    2017-01-01

    In order to maintain a relatively high accuracy of navigation performance during global positioning system (GPS) outages, a novel robust least squares support vector machine (LS-SVM)-aided fusion methodology is explored to provide the pseudo-GPS position information for the inertial navigation system (INS). The relationship between the yaw, specific force, velocity, and the position increment is modeled. Rather than sharing the same weight across all data, as in the traditional LS-SVM, the proposed algorithm allocates different weights to different data, which makes the system robust to outliers. Field test data was collected to evaluate the proposed algorithm. The comparison results indicate that the proposed algorithm can effectively provide position corrections for standalone INS during the 300 s GPS outage, which outperforms the traditional LS-SVM method. Historical information is also involved to better represent the vehicle dynamics. PMID:28245549

  2. A RLS-SVM Aided Fusion Methodology for INS during GPS Outages.

    PubMed

    Yao, Yiqing; Xu, Xiaosu

    2017-02-24

    In order to maintain a relatively high accuracy of navigation performance during global positioning system (GPS) outages, a novel robust least squares support vector machine (LS-SVM)-aided fusion methodology is explored to provide the pseudo-GPS position information for the inertial navigation system (INS). The relationship between the yaw, specific force, velocity, and the position increment is modeled. Rather than sharing the same weight across all data, as in the traditional LS-SVM, the proposed algorithm allocates different weights to different data, which makes the system robust to outliers. Field test data was collected to evaluate the proposed algorithm. The comparison results indicate that the proposed algorithm can effectively provide position corrections for standalone INS during the 300 s GPS outage, which outperforms the traditional LS-SVM method. Historical information is also involved to better represent the vehicle dynamics.
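
    The central idea of the paper's robust variant — give outlying samples less influence rather than the equal weight a standard LS-SVM assigns — can be demonstrated with a far simpler model. The sketch below (an illustration of the weighting principle, not the paper's LS-SVM algorithm) fits a straight line by iteratively reweighted least squares with Huber-style weights:

```python
def weighted_line_fit(xs, ys, weights):
    """Solve the 2x2 weighted normal equations for y = a*x + b."""
    sw = sum(weights)
    sx = sum(w * x for w, x in zip(weights, xs))
    sy = sum(w * y for w, y in zip(weights, ys))
    sxx = sum(w * x * x for w, x in zip(weights, xs))
    sxy = sum(w * x * y for w, x, y in zip(weights, xs, ys))
    det = sw * sxx - sx * sx
    a = (sw * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

def robust_fit(xs, ys, iterations=10, k=1.0):
    """IRLS: re-derive sample weights from residuals so outliers count less."""
    weights = [1.0] * len(xs)
    for _ in range(iterations):
        a, b = weighted_line_fit(xs, ys, weights)
        residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
        # Huber-style weights: full weight inside k, decaying outside
        weights = [1.0 if abs(r) <= k else k / abs(r) for r in residuals]
    return a, b

# Clean trend y = 2x + 1 with one gross outlier at x = 5
xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 9, 40]   # last point should be y = 11
a, b = robust_fit(xs, ys)
print(a, b)   # slope stays near 2 despite the outlier
```

    An ordinary unweighted fit of the same data gives a slope above 6; down-weighting the outlier pulls the fit back toward the true trend.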

  3. A Hands-on Physical Analog Demonstration of Real-Time Volcano Deformation Monitoring with GNSS/GPS

    NASA Astrophysics Data System (ADS)

    Jones, J. R.; Schobelock, J.; Nguyen, T. T.; Rajaonarison, T. A.; Malloy, S.; Njinju, E. A.; Guerra, L.; Stamps, D. S.; Glesener, G. B.

    2017-12-01

    Teaching about volcano deformation and how scientists study these processes using GNSS/GPS may present a challenge, since volcanoes and/or GNSS/GPS equipment are not readily accessible to most teachers. Educators and curriculum materials specialists have developed and shared a number of activities and demonstrations to help students visualize volcanic processes and the ways scientists use GNSS/GPS in their research. From resources provided by MEDL (the Modeling and Educational Demonstrations Laboratory) in the Department of Geosciences at Virginia Tech, we combined multiple materials and techniques from these previous works to produce a hands-on physical analog model from which students can learn about GNSS/GPS studies of volcano deformation. The model functions as both a qualitative and quantitative learning tool with good analogical affordances. In our presentation, we will describe multiple ways of teaching with the model, what kinds of materials can be used to build it, and ways we think the model could be enhanced with the addition of Vernier sensors for data collection.

  4. Beliefs about menopause of general practitioners and mid-aged women.

    PubMed

    Liao, K; Hunter, M S; White, P

    1994-12-01

    Recent general population studies suggest that experience of the normal menopause transition is relatively unremarkable for the majority of women, but negative stereotyped beliefs about menopause remain pervasive. This study explored GPs' beliefs and opinions about menopause in general, and compared the GPs' beliefs with those of their mid-aged female patients. All GPs at five general practices (n = 24) and 101 45-year-old women registered at the same practices took part. Large proportions of both groups believed that most women experience somatic and psychological difficulties during menopause. GPs expressed more negative beliefs than patients but were also more likely to express positive/neutral beliefs. Some causal attributions of menopausal problems were shared by the two groups, but they differed on others. When both GPs and patients hold negative social stereotypes about menopause, problems of mid-aged women may be misattributed to menopause. Health information on menopause may be biased towards negative images of menopause and of aging women.

  5. Unclassified Information Sharing and Coordination in Security, Stabilization, Transition and Reconstruction Efforts

    DTIC Science & Technology

    2008-03-01

    is implemented using the Drupal (2007) content management system (CMS) and many of the baseline information sharing and collaboration tools have...been contributed through the Drupal open source community. Drupal is a very modular open source software written in PHP (hypertext preprocessor)...needed to suit the particular problem domain. While other frameworks have the potential to provide similar advantages (“Ruby,” 2007), Drupal was

  6. A cache-aided multiprocessor rollback recovery scheme

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent

    1989-01-01

    This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global checkpointing.
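
    The recovery principle described here — let a private cache absorb writes and commit them to the shared memory only at safe points, so a transient fault is undone by discarding the cache contents — can be sketched in miniature. The sketch below is a toy single-processor model of write-back checkpointing, not the paper's snoopy-protocol integration:

```python
class CheckpointedMemory:
    """Global memory plus a private write-back buffer (the 'cache').
    Uncommitted writes stay in the buffer; rollback simply discards it."""

    def __init__(self, size):
        self.global_mem = [0] * size   # consistent, checkpointed state
        self.buffer = {}               # dirty lines not yet committed

    def read(self, addr):
        return self.buffer.get(addr, self.global_mem[addr])

    def write(self, addr, value):
        self.buffer[addr] = value      # absorb the write privately

    def checkpoint(self):
        """Safe point reached: flush dirty lines to global memory."""
        for addr, value in self.buffer.items():
            self.global_mem[addr] = value
        self.buffer.clear()

    def rollback(self):
        """Transient fault: discard everything since the last checkpoint."""
        self.buffer.clear()

mem = CheckpointedMemory(8)
mem.write(0, 42)
mem.checkpoint()        # value 42 is now part of the consistent state
mem.write(0, 99)        # speculative update after the checkpoint
mem.rollback()          # fault detected: the update vanishes
print(mem.read(0))      # → 42
```

    In the multiprocessor scheme, the interesting part is choosing checkpoint instants at which flushing dirty lines leaves the shared memory state globally consistent.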

  7. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piyush

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous-generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the POWER6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB, double that of the POWER5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, the NAS Parallel Benchmarks (NPB), and four real-world applications: three from computational fluid dynamics and one from climate modeling.

  8. Shared versus distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large scale multiprocessors.

  9. OASIS - ORBIT ANALYSIS AND SIMULATION SOFTWARE

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1994-01-01

    The Orbit Analysis and Simulation Software, OASIS, is a software system developed for covariance and simulation analyses of problems involving earth satellites, especially the Global Positioning System (GPS). It provides a flexible, versatile and efficient accuracy analysis tool for earth satellite navigation and GPS-based geodetic studies. To make future modifications and enhancements easy, the system is modular, with five major modules: PATH/VARY, REGRES, PMOD, FILTER/SMOOTHER, and OUTPUT PROCESSOR. PATH/VARY generates satellite trajectories. Among the factors taken into consideration are: 1) the gravitational effects of the planets, moon and sun; 2) space vehicle orientation and shapes; 3) solar pressure; 4) solar radiation reflected from the surface of the earth; 5) atmospheric drag; and 6) space vehicle gas leaks. The REGRES module reads the user's input, then determines if a measurement should be made based on geometry and time. PMOD modifies a previously generated REGRES file to facilitate various analysis needs. FILTER/SMOOTHER is especially suited to a multi-satellite precise orbit determination and geodetic-type problems. It can be used for any situation where parameters are simultaneously estimated from measurements and a priori information. Examples of nonspacecraft areas of potential application might be Very Long Baseline Interferometry (VLBI) geodesy and radio source catalogue studies. OUTPUT PROCESSOR translates covariance analysis results generated by FILTER/SMOOTHER into user-desired easy-to-read quantities, performs mapping of orbit covariances and simulated solutions, transforms results into different coordinate systems, and computes post-fit residuals. The OASIS program was developed in 1986. It is designed to be implemented on a DEC VAX 11/780 computer using VAX VMS 3.7 or higher. It can also be implemented on a Micro VAX II provided sufficient disk space is available.

  10. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  11. Meeting and treating cultural difference in primary care: a qualitative interview study.

    PubMed

    Wachtler, Caroline; Brorsson, Annika; Troein, Margareta

    2006-02-01

    Primary care doctors see patients from diverse cultural backgrounds and communication plays an important role in diagnosis and treatment. Communication problems can arise when patient and doctor do not share the same cultural background. The aim of this study was to examine how consultations with immigrant patients are understood by GPs and how GPs manage these consultations. Semi-structured interviews with GPs about their experiences with immigrant patients were recorded on audio-tape, transcribed and analysed using a qualitative thematic analysis methodology. A constructivist approach was taken to analysis and interpretation. Culture is not in focus when GPs meet immigrant patients. The consultation is seen as a meeting between individuals, where cultural difference is just one of many individual factors that influence how well doctor and patient understand each other. However, when mutual understanding is poor and the consultation not successful, cultural differences are central. The GPs try to conduct their consultations with immigrant patients in the same way that they conduct all their consultations. There is no specific focus on culture; instead, GPs tend to avoid addressing even pronounced cultural differences. This study indicates that cultural difference is not addressed in GPs' consultations with immigrant patients. Learning about the effect of cultural difference on mutual understanding between doctor and patient could improve GPs' cross-cultural communication. Increased awareness of the culture the doctor brings to the consultation could facilitate management of cross-cultural consultations.

  12. Formal Professional Relationships Between General Practitioners and Specialists in Shared Care: Possible Associations with Patient Health and Pharmacy Costs.

    PubMed

    Lublóy, Ágnes; Keresztúri, Judit Lilla; Benedek, Gábor

    2016-04-01

    Shared care in chronic disease management aims at improving service delivery and patient outcomes, and reducing healthcare costs. The introduction of shared-care models is coupled with mixed evidence in relation to both patient health status and cost of care. Professional interactions among health providers are critical to a successful and efficient shared-care model. This article investigates whether the strength of formal professional relationships between general practitioners (GPs) and specialists (SPs) in shared care affects either the health status of patients or their pharmacy costs. In strong GP-SP relationships, the patient health status is expected to be high, due to efficient care coordination, and the pharmacy costs low, due to effective use of resources. This article measures the strength of formal professional relationships between GPs and SPs through the number of shared patients and proxies the patient health status by the number of comorbidities diagnosed and treated. To test the hypotheses and compare the characteristics of the strongest GP-SP connections with those of the weakest, this article concentrates on diabetes, a chronic condition where patient care coordination is likely important. Diabetes generates the largest shared patient cohort in Hungary, with the highest frequency of specialist medication prescriptions. This article finds that stronger ties result in lower pharmacy costs, but not in higher patient health status. Overall drug expenditure may be reduced by lowering patient care fragmentation through channelling a GP's patients to a small number of SPs.

  13. Practical use of a word processor in a histopathology laboratory.

    PubMed Central

    Briggs, J C; Ibrahim, N B; Mackintosh, I; Norris, D

    1982-01-01

    Some of the facilities available with a commercially purchased word processing program, linked to a DEC PDP 11/23 computer, are described, together with an account of their practical histopathological use. The system is based on a share of the computer with a Clinical Chemistry Department. Development was time-consuming and required the constant availability of the Department of Physics. However, once working, considerable saving in secretarial time has resulted and a number of projects have been started which would not have been contemplated without the use of the word processor and its linked computer. PMID:7068906

  14. Reconfigurable tree architectures using subtree oriented fault tolerance

    NASA Technical Reports Server (NTRS)

    Lowrie, Matthew B.

    1987-01-01

    An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees, and the approach is directly extensible to N-ary trees and to fault tolerance through performance degradation.

  15. Parallel Gaussian elimination of a block tridiagonal matrix using multiple microcomputers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1989-01-01

    The solution of a block tridiagonal matrix using parallel processing is demonstrated. The multiprocessor system on which results were obtained and the software environment used to program that system are described. Theoretical partitioning and resource allocation for the Gaussian elimination method used to solve the matrix are discussed. The results obtained from running 1-, 2- and 3-processor versions of the block tridiagonal solver are presented. The PASCAL source code for these solvers is given in the appendix, and may be transportable to other shared memory parallel processors provided that the synchronization routines are reproduced on the target system.
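    The serial kernel being partitioned can be sketched as follows. This is a hypothetical pure-Python Thomas algorithm for the scalar tridiagonal case, not the paper's PASCAL solver; in the block tridiagonal variant each scalar division becomes a small dense block solve.

    ```python
    def thomas(sub, diag, sup, rhs):
        """Gaussian elimination for a tridiagonal system (Thomas algorithm).
        diag has n entries; sub and sup have n - 1; rhs has n.
        A forward sweep eliminates the sub-diagonal, then back substitution."""
        n = len(diag)
        cp = [0.0] * n  # modified super-diagonal
        dp = [0.0] * n  # modified right-hand side
        cp[0] = sup[0] / diag[0]
        dp[0] = rhs[0] / diag[0]
        for i in range(1, n):
            denom = diag[i] - sub[i - 1] * cp[i - 1]  # pivot after elimination
            cp[i] = sup[i] / denom if i < n - 1 else 0.0
            dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):  # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```

    The forward sweep is sequential in i, which is why a parallel version must partition the matrix into blocks and synchronize at the partition boundaries.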

  16. Processor Would Find Best Paths On Map

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1990-01-01

    Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
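    In aggregate, the per-element computation described above is a single-source shortest-path solve over the cost grid. A minimal serial equivalent is Dijkstra's algorithm, assuming 4-connected neighbours and that entering a cell pays that cell's traversal cost (details the abstract leaves open):

    ```python
    import heapq

    def least_cost_paths(cost, origin):
        """Cheapest accumulated traversal cost from `origin` to every cell
        of a 2-D cost grid, moving between 4-connected neighbours."""
        rows, cols = len(cost), len(cost[0])
        dist = [[float("inf")] * cols for _ in range(rows)]
        r0, c0 = origin
        dist[r0][c0] = cost[r0][c0]
        heap = [(dist[r0][c0], r0, c0)]
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > dist[r][c]:
                continue  # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]  # pay the entered cell's cost
                    if nd < dist[nr][nc]:
                        dist[nr][nc] = nd
                        heapq.heappush(heap, (nd, nr, nc))
        return dist
    ```

    Following strictly decreasing cost values backward from any destination cell recovers the least-cost path itself.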

  17. Absolute risk representation in cardiovascular disease prevention: comprehension and preferences of health care consumers and general practitioners involved in a focus group study.

    PubMed

    Hill, Sophie; Spink, Janet; Cadilhac, Dominique; Edwards, Adrian; Kaufman, Caroline; Rogers, Sophie; Ryan, Rebecca; Tonkin, Andrew

    2010-03-04

    Communicating risk is part of primary prevention of coronary heart disease and stroke, collectively referred to as cardiovascular disease (CVD). In Australia, health organisations have promoted an absolute risk approach, thereby raising the question of suitable standardised formats for risk communication. Sixteen formats of risk representation were prepared including statements, icons, graphical formats, alone or in combination, and with variable use of colours. All presented the same risk, i.e., the absolute risk for a 55 year old woman, 16% risk of CVD in five years. Preferences for a five or ten-year timeframe were explored. Australian GPs and consumers were recruited for participation in focus groups, with the data analysed thematically and preferred formats tallied. Three focus groups with health consumers and three with GPs were held, involving 19 consumers and 18 GPs. Consumers and GPs had similar views on which formats were more easily comprehended and which conveyed 16% risk as a high risk. A simple summation of preferences resulted in three graphical formats (thermometers, vertical bar chart) and one statement format as the top choices. The use of colour to distinguish risk (red, yellow, green) and comparative information (age, sex, smoking status) were important ingredients. Consumers found formats which combined information helpful, such as colour, effect of changing behaviour on risk, or comparison with a healthy older person. GPs preferred formats that helped them relate the information about risk of CVD to their patients, and could be used to motivate patients to change behaviour. Several formats were reported as confusing, such as a percentage risk with no contextual information, line graphs, and icons, particularly those with larger numbers. Whilst consumers and GPs shared preferences, the use of one format for all situations was not recommended. Overall, people across groups felt that risk expressed over five years was preferable to a ten-year risk, the latter being too remote. Consumers and GPs shared preferences for risk representation formats. Both groups liked the option to combine formats and tailor the risk information to reflect a specific individual's risk, to maximise understanding and provide a good basis for discussion.

  18. An integrated autonomous rendezvous and docking system architecture using Centaur modern avionics

    NASA Technical Reports Server (NTRS)

    Nelson, Kurt

    1991-01-01

    The avionics system for the Centaur upper stage is in the process of being modernized with the current state-of-the-art in strapdown inertial guidance equipment. This equipment includes an integrated flight control processor with a ring laser gyro based inertial guidance system. This inertial navigation unit (INU) uses two MIL-STD-1750A processors and communicates over the MIL-STD-1553B data bus. Commands are translated into load activation through a Remote Control Unit (RCU) which incorporates the use of solid state relays. Also, a programmable data acquisition system replaces separate multiplexer and signal conditioning units. This modern avionics suite is currently being enhanced through independent research and development programs to provide autonomous rendezvous and docking capability using advanced cruise missile image processing technology and integrated GPS navigational aids. A system concept was developed to combine these technologies in order to achieve a fully autonomous rendezvous, docking, and autoland capability. The current system architecture and the evolution of this architecture using advanced modular avionics concepts being pursued for the National Launch System are discussed.

  19. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
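    The geolocation step itself reduces to the great-circle "direct" problem: given the observer's GPS fix, the compass bearing and the rangefinder distance, compute the target coordinates. A spherical-earth sketch follows; the function name and the simplifications are ours, and a fielded system would also use the elevation angle and correct for magnetic declination.

    ```python
    import math

    EARTH_RADIUS_M = 6371000.0  # mean Earth radius, spherical model

    def geolocate(lat_deg, lon_deg, bearing_deg, range_m):
        """Target latitude/longitude from observer position (GPS), bearing
        (digital compass) and range (laser rangefinder), via the spherical
        great-circle direct formula."""
        lat1 = math.radians(lat_deg)
        lon1 = math.radians(lon_deg)
        brg = math.radians(bearing_deg)
        delta = range_m / EARTH_RADIUS_M  # angular distance travelled
        lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                         + math.cos(lat1) * math.sin(delta) * math.cos(brg))
        lon2 = lon1 + math.atan2(
            math.sin(brg) * math.sin(delta) * math.cos(lat1),
            math.cos(delta) - math.sin(lat1) * math.sin(lat2))
        return math.degrees(lat2), math.degrees(lon2)
    ```

    For example, a target 111.195 km due north of (0°, 0°) lands very close to latitude 1° N, since one degree of latitude spans roughly 111 km.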

  20. Cost analysis of Navy acquisition alternatives for the NAVSTAR Global Positioning System

    NASA Astrophysics Data System (ADS)

    Darcy, T. F.; Smith, G. P.

    1982-12-01

    This research analyzes the life cycle cost (LCC) of the Navy's current and two hypothetical procurement alternatives for NAVSTAR Global Positioning System (GPS) user equipment. Costs are derived by the ARINC Research Corporation ACBEN cost estimating system. Data presentation is in a comparative format describing individual alternative LCC and differential costs between alternatives. Sensitivity analysis explores the impact receiver-processor unit (RPU) first unit production cost has on individual alternative LCC, as well as cost differentials between each alternative. Several benefits are discussed that might provide sufficient cost savings and/or system effectiveness improvements to warrant a procurement strategy other than the existing proposal.

  1. Embedded mobile farm robot for identification of diseased plants

    NASA Astrophysics Data System (ADS)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for identification of diseased plants. It brings together two major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location data and readings from IR (infrared) sensors to avoid obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera and infrared sensors has been used. A Mini2440 microcontroller board running an embedded Linux OS (operating system) serves as the controller.

  2. Development of a real time bistatic radar receiver using signals of opportunity

    NASA Astrophysics Data System (ADS)

    Rainville, Nicholas

    Passive bistatic radar remote sensing offers a novel method of monitoring the Earth's surface by observing reflected signals of opportunity. The Global Positioning System (GPS) has been used as a source of signals for these observations and the scattering properties of GPS signals from rough surfaces are well understood. Recent work has extended GPS signal reflection observations and scattering models to include communications signals such as XM radio signals. However, the communication signal reflectometry experiments to date have relied on collecting raw, high data-rate signals which are then post-processed after the end of the experiment. This thesis describes the development of a communication signal bistatic radar receiver which computes a real time correlation waveform, which can be used to retrieve measurements of the Earth's surface. The real time bistatic receiver greatly reduces the quantity of data that must be stored to perform the remote sensing measurements, as well as offering immediate feedback. This expands the applications for the receiver to include space and bandwidth limited platforms such as aircraft and satellites. It also makes possible the adjustment of flight plans to the observed conditions. This real time receiver required the development of an FPGA based signal processor, along with the integration of commercial Satellite Digital Audio Radio System (SDARS) components. The resulting device was tested both in a lab environment as well as on NOAA WP-3D and NASA WB-57 aircraft.
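    The real-time product described is a delay-domain correlation waveform: the reflected signal correlated against a replica of the direct signal at every candidate lag. A deliberately minimal real-valued sketch (an actual receiver correlates complex baseband samples and also searches over Doppler):

    ```python
    def correlation_waveform(rx, ref):
        """Circular cross-correlation of a received signal `rx` against a
        reference replica `ref`; the peak lag estimates the reflection delay."""
        n = len(ref)
        return [sum(rx[(i + lag) % n] * ref[i] for i in range(n))
                for lag in range(n)]
    ```

    Onboard, only this short waveform (one value per lag) needs to be stored rather than the raw high-rate samples, which is the data-volume reduction the thesis targets.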

  3. Tightly Coupled Inertial Navigation System/Global Positioning System (TCMIG)

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Jackson, Kurt (Technical Monitor)

    2002-01-01

    Many NASA applications planned for execution later this decade are seeking high performance, miniaturized, low power Inertial Measurement Units (IMUs). Much research has gone into Micro-Electro-Mechanical Systems (MEMS) over the past decade as a solution to these needs. While MEMS devices have proven to provide high accuracy acceleration measurements, they have not yet proven to have the accuracy required by many NASA missions in rotational measurements. Therefore, a new solution has been formulated integrating the best of all IMU technologies to address these mid-term needs in the form of a Tightly Coupled Micro Inertial Navigation System (INS)/Global Positioning System (GPS) (TCMIG). The TCMIG consists of an INS and a GPS tightly coupled by a Kalman filter executing on an embedded Field Programmable Gate Array (FPGA) processor. The INS consists of a highly integrated Interferometric Fiber Optic Gyroscope (IFOG) and a MEMS accelerometer. The IFOG utilizes a tightly wound fiber coil to reduce volume and a high level of integration and advanced optical components to reduce power. The MEMS accelerometer utilizes a newly developed deep etch process to increase the proof mass and yield a highly accurate accelerometer. The GPS receiver consists of a low power miniaturized version of the Blackjack receiver. Such an IMU configuration is ideal to meet the mid-term needs of the NASA Science Enterprises and the new launch vehicles being developed for the Space Launch Initiative (SLI).
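    The tight coupling amounts to a Kalman filter that blends INS dead reckoning with GPS measurements each cycle. The flight filter carries many states (position, velocity, attitude, sensor biases); the following is a deliberately minimal one-state sketch of the predict/update cycle, with all names ours:

    ```python
    def ins_gps_fuse(x, P, ins_delta, q, z, r):
        """One predict/update cycle of a scalar Kalman filter fusing a
        dead-reckoned INS position increment with a GPS position fix.
        x, P: prior estimate and variance; q, r: process and measurement
        noise variances; ins_delta: INS increment; z: GPS measurement."""
        # Predict: apply the INS increment and inflate the uncertainty.
        x_pred = x + ins_delta
        P_pred = P + q
        # Update: blend in the GPS fix, weighted by relative confidence.
        K = P_pred / (P_pred + r)           # Kalman gain
        x_new = x_pred + K * (z - x_pred)   # innovation correction
        P_new = (1.0 - K) * P_pred
        return x_new, P_new
    ```

    Each cycle the INS increment drives the prediction and the GPS fix pulls the estimate back toward truth; the gain K grows when the prediction is uncertain relative to the measurement, and the posterior variance always shrinks after an update.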

  4. General practitioners' perceptions of pharmacists' new services in New Zealand.

    PubMed

    Hatah, Ernieda; Braund, Rhiannon; Duffull, Stephen; Tordoff, June

    2012-04-01

    In recent years, the pharmacy profession has moved towards more patient-oriented services. Some examples are medication review, screening and monitoring for disease, and prescribing. The new services are intended to be in close collaboration with general practitioners (GPs), yet little is known of how GPs in New Zealand perceive these new services. Objective: To examine GPs' perceptions of pharmacists' new services. The study was undertaken at GPs' practices in two localities in New Zealand. Qualitative, face-to-face, semi-structured interviews were undertaken with 18 GPs. The cohort included GPs with less/more than 20 years of practice, and GPs who had experience of working in localities where some patients had undergone a medication review (Medicines Use Review, MUR) by community pharmacists. GPs were asked to share their perceptions about pharmacists providing some new services. Data were thematically analysed with constant comparison using NVivo 8 software. Using a business strategic planning approach, themes were further analysed and interpreted as the services' potential Strengths, Weaknesses, Opportunities and Threats (SWOTs). The main outcome measure was GPs' perceptions of pharmacists' new services. GPs were more supportive of pharmacists playing active roles in medication review and less supportive of pharmacists practising screening, monitoring and prescribing. Discussion: Pharmacists' knowledge and skills in medication use and the perceived benefits of the services to patients were considered the potential strengths of the services. Weaknesses centred around potential patient confusion and harm, conflict and irritation to GPs' practice, and the potential to fragment patient care. Opportunities were the possibilities of improving communication, and having a close collaboration and integration with GPs' practice. Apparent threats were the GPs' perceptions of a related, and not remunerated, increase in their workloads, and the perception of limited benefit to patients. Pharmacists should exploit their own strengths and the potential opportunities for these services, and reduce any weaknesses and threats. A possible strategic plan should include increased effective communication, piloting services, and the integration of some services into medical practices.

  5. Performance Evaluation and Modeling Techniques for Parallel Processors. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Dimpsey, Robert Tod

    1992-01-01

    In practice, the performance evaluation of supercomputers is still substantially driven by single-point estimates of metrics (e.g., MFLOPS) obtained by running characteristic benchmarks or workloads. With the rapid increase in the use of time-shared multiprogramming in these systems, such measurements are clearly inadequate. This is because multiprogramming and system overhead, as well as other degradations in performance due to time varying characteristics of workloads, are not taken into account. In multiprogrammed environments, multiple jobs and users can dramatically increase the amount of system overhead and degrade the performance of the machine. Performance techniques, such as benchmarking, which characterize performance on a dedicated machine, ignore this major component of true computer performance. Due to the complexity of analysis, there has been little work done in analyzing, modeling, and predicting the performance of applications in multiprogrammed environments. This is especially true for parallel processors, where the costs and benefits of multi-user workloads are exacerbated. While some may claim that the issue of multiprogramming is not a viable one in the supercomputer market, experience shows otherwise. Even in recent massively parallel machines, multiprogramming is a key component. It has even been claimed that a partial cause of the demise of the CM2 was the fact that it did not efficiently support time-sharing. In the same paper, Gordon Bell postulates that multicomputers will evolve to multiprocessors in order to support efficient multiprogramming. Therefore, it is clear that parallel processors of the future will be required to offer the user a time-shared environment with reasonable response times for the applications. In this type of environment, the most important performance metric is the completion, or response, time of a given application. However, there have been few evaluation efforts addressing this issue.

  6. High order parallel numerical schemes for solving incompressible flows

    NASA Technical Reports Server (NTRS)

    Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.

    1992-01-01

    The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

  7. Efficient Sorting on the Tilera Manycore Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Tumeo, Antonino; Villa, Oreste

    We present an efficient implementation of the radix sort algorithm for the Tilera TILEPro64 processor. The TILEPro64 is one of the first successful commercial manycore processors. It is composed of 64 tiles interconnected through multiple fast networks-on-chip and features a fully coherent, shared distributed cache. The architecture has a large degree of flexibility, and allows various optimization strategies. We describe how we mapped the algorithm to this architecture. We present an in-depth analysis of the optimizations for each phase of the algorithm with respect to the processor's sustained performance. We discuss the overall throughput reached by our radix sort implementation (up to 132 MK/s) and show that it provides comparable or better performance-per-watt with respect to state-of-the-art implementations on x86 processors and graphic processing units.
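    For readers unfamiliar with the phase structure being tuned, LSD radix sort is a sequence of stable counting-sort passes, one per digit. A serial sketch follows; the parallel version scatters keys to per-tile buckets and uses the on-chip networks for the gather.

    ```python
    def radix_sort(keys, key_bits=32, radix_bits=8):
        """LSD radix sort for non-negative integer keys: one stable
        bucketing pass per radix_bits-wide digit, least significant first."""
        radix = 1 << radix_bits
        mask = radix - 1
        for shift in range(0, key_bits, radix_bits):
            buckets = [[] for _ in range(radix)]
            for k in keys:
                buckets[(k >> shift) & mask].append(k)  # scatter by digit
            keys = [k for b in buckets for k in b]       # stable gather
        return keys
    ```

    Each pass touches every key once, so the cost is linear in the number of keys times the number of digit passes; the per-pass scatter and gather are exactly the phases the paper maps onto the TILEPro64's tiles and caches.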

  8. Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.

  9. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC

  10. A Methodology for Distributing the Corporate Database.

    ERIC Educational Resources Information Center

    McFadden, Fred R.

    The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…

  11. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2 processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine), running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data issues that would be of paramount importance in other parallel architectures.

  12. What happens when doctors are patients? Qualitative study of GPs.

    PubMed

    Fox, Fiona; Harris, Michael; Taylor, Gordon; Rodham, Karen; Sutton, Jane; Robinson, Brian; Scott, Jenny

    2009-11-01

    Current evidence about the experiences of doctors who are unwell is limited to poor quality data. To investigate GPs' experiences of significant illness, and how this affects their own subsequent practice. Qualitative study using interpretative phenomenological analysis to conduct and analyse semi-structured interviews with GPs who have experienced significant illness. Two primary care trusts in the West of England. A total of 17 GPs were recruited to take part in semi-structured interviews, which were conducted and analysed using interpretative phenomenological analysis. Four main categories emerged from the data. The category, 'Who cares when doctors are ill?' embodies the tension between perceptions of medicine as a 'caring profession' and as a 'system'. 'Being a doctor-patient' covers the role ambiguity experienced by doctors who experience significant illness. The category 'Treating doctor-patients' reveals the fragility of negotiating shared medical care. 'Impact on practice' highlights ways in which personal illness can inform GPs' understanding of being a patient and their own consultation style. Challenging the culture of immunity to illness among GPs may require interventions at both individual and organisational levels. Training and development of doctors should include opportunities to consider personal health issues as well as how to cope with role ambiguity when being a patient and when treating doctor-patients. Guidelines about being and treating doctor-patients need to be developed, and GPs need easy access to an occupational health service.

  13. An efficient 3-dim FFT for plane wave electronic structure calculations on massively parallel machines composed of multiprocessor nodes

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry

    2003-08-01

    Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane wave electronic structure calculations. Obtaining high performance on a large number of processors is non-trivial on the latest generation of parallel computers, which consist of nodes made up of shared-memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dim FFTs in a combined MPI/OpenMP programming paradigm will be presented. Exploiting the peculiarities of plane wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.
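The decomposition such combined MPI/OpenMP 3-dim FFTs rely on is the separability of the transform: each node transforms its local slab of planes in two directions (a natural OpenMP loop), a global transpose (an MPI all-to-all in a real code) regroups the data, and 1-D FFTs finish the remaining direction. A serial NumPy sketch of that factorization, with an illustrative split into 4 hypothetical ranks, is:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8, 8)) + 1j * rng.standard_normal((8, 8, 8))

# Step 1: each "rank" owns a slab of x-planes and transforms it in y and z;
# within a node these independent plane FFTs are the natural OpenMP loop.
slabs = np.array_split(a, 4, axis=0)
partial = np.concatenate([np.fft.fft2(s, axes=(1, 2)) for s in slabs], axis=0)

# Step 2: after a global transpose (MPI all-to-all in a distributed code),
# the remaining x-direction transform is local; 1-D FFTs complete it.
full = np.fft.fft(partial, axis=0)

# The two-step result matches the direct 3-dim transform.
assert np.allclose(full, np.fft.fftn(a))
```

The plane-wave-specific trick the paper exploits is that many of these 1-D transforms act on all-zero columns (the sphere of plane-wave coefficients) and can be skipped.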

  14. An MPI-IO interface to HPSS

    NASA Technical Reports Server (NTRS)

    Jones, Terry; Mark, Richard; Martin, Jeanne; May, John; Pierce, Elsie; Stanberry, Linda

    1996-01-01

    This paper describes an implementation of the proposed MPI-IO (Message Passing Interface - Input/Output) standard for parallel I/O. Our system uses third-party transfer to move data over an external network between the processors where it is used and the I/O devices where it resides. Data travels directly from source to destination, without the need for shuffling it among processors or funneling it through a central node. Our distributed server model lets multiple compute nodes share the burden of coordinating data transfers. The system is built on the High Performance Storage System (HPSS), and a prototype version runs on a Meiko CS-2 parallel computer.

  15. GPs' considerations in multimorbidity management: a qualitative study.

    PubMed

    Luijks, Hilde D; Loeffen, Maartje J W; Lagro-Janssen, Antoine L; van Weel, Chris; Lucassen, Peter L; Schermer, Tjard R

    2012-07-01

    Scientific evidence on how to manage multimorbidity is limited, but GPs have extensive practical experience with multimorbidity management. To explore GPs' considerations and main objectives in the management of multimorbidity and to explore factors influencing their management of multimorbidity. Focus group study of Dutch GPs; with heterogeneity in characteristics such as sex, age and urbanisation. The moderator used an interview guide in conducting the interviews. Two researchers performed the analysis as an iterative process, based on verbatim transcripts and by applying the technique of constant comparative analysis. Data collection proceeded until saturation was reached. Five focus groups were conducted with 25 participating GPs. The main themes concerning multimorbidity management were individualisation, applying an integrated approach, medical considerations placed in perspective, and sharing decision making and responsibility. A personal patient-doctor relationship was considered a major factor positively influencing the management of multimorbidity. Mental-health problems and interacting conditions were regarded as major barriers in this respect and participants experienced several practical problems. The concept of patient-centredness overarches the participants' main objectives. GPs' main objective in multimorbidity management is applying a patient-centred approach. This approach is welcomed since it counteracts some potential pitfalls of multimorbidity. Further research should include a similar design in a different setting and should aim at developing best practice in multimorbidity management.

  16. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-01-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  18. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  19. Pathway-GPS and SIGORA: identifying relevant pathways based on the over-representation of their gene-pair signatures

    PubMed Central

    Foroushani, Amir B.K.; Brinkman, Fiona S.L.

    2013-01-01

    Motivation. Predominant pathway analysis approaches treat pathways as collections of individual genes and consider all pathway members as equally informative. As a result, at times spurious and misleading pathways are inappropriately identified as statistically significant, solely due to components that they share with more relevant pathways. Results. We introduce the concept of Pathway Gene-Pair Signatures (Pathway-GPS) as pairs of genes that, as a combination, are specific to a single pathway. We devised and implemented a novel approach to pathway analysis, Signature Over-representation Analysis (SIGORA), which focuses on the statistically significant enrichment of Pathway-GPS in a user-specified gene list of interest. In a comparative evaluation of several published datasets, SIGORA outperformed traditional methods by delivering biologically more plausible and relevant results. Availability. An efficient implementation of SIGORA, as an R package with precompiled GPS data for several human and mouse pathway repositories, is available for download from http://sigora.googlecode.com/svn/. PMID:24432194
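The core of the gene-pair signature idea can be sketched in a few lines: enumerate all gene pairs within each pathway and keep only the pairs that occur in exactly one pathway. The pathway names and gene sets below are toy data for illustration, not from any real repository:

```python
from itertools import combinations

# Toy pathway repository (hypothetical gene sets, for illustration only).
pathways = {
    "P1": {"g1", "g2", "g3"},
    "P2": {"g2", "g3", "g4"},
    "P3": {"g5", "g6"},
}

# A gene-pair signature (GPS) is a pair of genes whose co-occurrence is
# unique to one pathway; pairs shared by several pathways are discarded.
pair_owner = {}
for name, genes in pathways.items():
    for pair in combinations(sorted(genes), 2):
        # First sighting records the owner; a second sighting voids the pair.
        pair_owner[pair] = None if pair in pair_owner else name

signatures = {p: owner for p, owner in pair_owner.items() if owner is not None}
# ("g2", "g3") appears in both P1 and P2, so it is not a signature;
# ("g1", "g2") is specific to P1 and ("g5", "g6") to P3.
print(signatures)
```

SIGORA then tests a user's gene list for over-representation of these signatures rather than of individual genes, which is what suppresses the spurious overlapping pathways.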

  20. An ultra-wide bandwidth-based range/GPS tight integration approach for relative positioning in vehicular ad hoc networks

    NASA Astrophysics Data System (ADS)

    Shen, Feng; Wayn Cheong, Joon; Dempster, Andrew G.

    2015-04-01

    Relative position awareness is a vital premise for the implementation of emerging intelligent transportation systems, such as collision warning. However, commercial global navigation satellite systems (GNSS) receivers do not satisfy the requirements of these applications. Fortunately, cooperative positioning (CP) techniques, through sharing the GNSS measurements between vehicles, can improve the performance of relative positioning in a vehicular ad hoc network (VANET). In this paper, while assuming there are no obstacles between vehicles, a new enhanced tightly coupled CP technique is presented by adding ultra-wide bandwidth (UWB)-based inter-vehicular range measurements. In the proposed CP method, each vehicle fuses the GPS measurements and the inter-vehicular range measurements. Based on analytical and experimental results, in the full GPS coverage environment, the new tight integration CP method outperforms the INS-aided tight CP method, tight CP method, and DGPS by 11%, 15%, and 24%, respectively; in the GPS outage scenario, the performance improvement achieves 60%, 65%, and 73%, respectively.

  1. Primary care units in Emilia-Romagna, Italy: an assessment of organizational culture.

    PubMed

    Pracilio, Valerie P; Keith, Scott W; McAna, John; Rossi, Giuseppina; Brianti, Ettore; Fabi, Massimo; Maio, Vittorio

    2014-01-01

    This study investigates the organizational culture and associated characteristics of the newly established primary care units (PCUs)-collaborative teams of general practitioners (GPs) who provide patients with integrated health care services-in the Emilia-Romagna Region (RER), Italy. A survey instrument covering 6 cultural dimensions was administered to all 301 GPs in 21 PCUs in the Local Health Authority (LHA) of Parma, RER; the response rate was 79.1%. Management style, organizational trust, and collegiality proved to be more important aspects of PCU organizational culture than information sharing, quality, and cohesiveness. Cultural dimension scores were positively associated with certain characteristics of the PCUs including larger PCU size and greater proportion of older GPs. The presence of female GPs in the PCUs had a negative impact on collegiality, organizational trust, and quality. Feedback collected through this assessment will be useful to the RER and LHAs for evaluating and guiding improvements in the PCUs. © 2013 by the American College of Medical Quality.

  2. TLALOCNet: A Continuous GPS-Met Array in Mexico for Seismotectonic and Atmospheric Research

    NASA Astrophysics Data System (ADS)

    Cabral-Cano, E.; Salazar-Tlaczani, L.; Galetzka, J.; DeMets, C.; Serra, Y. L.; Feaux, K.; Mattioli, G. S.; Miller, M. M.

    2015-12-01

    TLALOCNet is a network of continuous Global Positioning System (cGPS) and meteorology stations in Mexico for the interrogation of the earthquake cycle, tectonic processes, land subsidence, and atmospheric processes of Mexico. Once completed, TLALOCNet will span all of Mexico and will link existing GPS infrastructure in North America and the Caribbean, aiming towards creating a continuous, federated network of networks in the Americas. Phase 1 (2014-2015), funded by NSF and UNAM, is building and upgrading 30+ cGPS-Met sites to the high standard of the EarthScope Plate Boundary Observatory (PBO). Phase 2 (2016) will add ~25 more cGPS-Met stations to be funded through CONACyT. TLALOCNet provides open and freely available raw GPS data, GPS-PWV, surface meteorology measurements, time series of daily positions, as well as a station velocity field to support a broad range of geoscience investigations. This is accomplished through the development of the TLALOCNet data center (http://tlalocnet.udg.mx) that serves as a collection and distribution point. This data center is based on UNAVCO's Dataworks-GSAC software and can work as part of UNAVCO's seamless archive for discovery, sharing, and access to data. The TLALOCNet data center also contains contributed data from several regional networks in Mexico. By using the same protocols and structure as the UNAVCO and other COCONet regional data centers, the geodetic community has the capability of accessing data from a large number of scientific and academically operated Mexican GPS sites. This archive provides a fully queryable and scriptable GPS and meteorological data retrieval point. Additionally, real-time 1 Hz streams from selected TLALOCNet stations are available in BINEX, RTCM 2.3 and RTCM 3.1 formats via the Networked Transport of RTCM via Internet Protocol (NTRIP).

  3. Application of a distributed systems architecture for increased speed in image processing on an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.

    2010-01-01

    This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course, while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as way points, with given GPS coordinates and avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities including image processing, sensor interfacing and data processing, path planning and navigation algorithms and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, a NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping that is equipped with a real time processor, an FPGA and modular input/output. Under the current system, the real time processor handles the path planning and navigation algorithms, while the FPGA gathers and processes sensor data. This setup leaves the laptop to focus on running the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real time processor due to the deterministic nature of operation. The implementation of this architecture required exploration of various inter-system communication techniques. Data transfer between the laptop and the real time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware based FPGA to further speed up the operations of the vehicle.
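A UDP link of the kind described (laptop to real-time processor) is connectionless: each `sendto` emits one datagram, and each `recvfrom` returns one whole message, with no delivery or ordering guarantee. A minimal loopback sketch in Python (the address, port, and payload format are illustrative; the actual system runs inside LabVIEW):

```python
import socket

# Receiver side: bind to a loopback address; port 0 lets the OS choose.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

# Sender side: one sendto() call emits exactly one datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"waypoint:47.6,-122.3", ("127.0.0.1", port))

# One recvfrom() returns one whole message (or blocks until one arrives);
# UDP itself provides no retransmission, so the application must tolerate loss.
data, _ = recv.recvfrom(1024)
print(data)
send.close()
recv.close()
```

The message-per-datagram framing and low latency are what make UDP attractive for periodic sensor and waypoint updates, at the cost of having to tolerate occasional drops.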

  4. Large-N in Volcano Settings: Volcanosri

    NASA Astrophysics Data System (ADS)

    Lees, J. M.; Song, W.; Xing, G.; Vick, S.; Phillips, D.

    2014-12-01

    We seek a paradigm shift in the approach we take on volcano monitoring, where the compromise from high fidelity to large numbers of sensors is used to increase coverage and resolution. Accessibility, danger and the risk of equipment loss require that we develop systems that are independent and inexpensive. Furthermore, rather than simply record data on hard disk for later analysis, we desire a system that will work autonomously, capitalizing on wireless technology and in-field network analysis. To this end we are currently producing a low cost seismic array which will incorporate, at the very basic level, seismological tools for first cut analysis of a volcano in crisis mode. At the advanced end we expect to perform tomographic inversions in the network in near real time. Geophone (4 Hz) sensors connected to a low cost recording system will be installed on an active volcano, where triggering, earthquake location and velocity analysis will take place independent of human interaction. Stations are designed to be inexpensive and possibly disposable. In one of the first implementations the seismic nodes consist of an Arduino Due processor board with an attached Seismic Shield. The Arduino Due processor board contains an Atmel SAM3X8E ARM Cortex-M3 CPU. This 32 bit 84 MHz processor can filter and perform coarse seismic event detection on a 1600 sample signal in fewer than 200 milliseconds. The Seismic Shield contains a GPS module, 900 MHz high power mesh network radio, SD card, seismic amplifier, and 24 bit ADC. External sensors can be attached to either this 24-bit ADC or to the internal multichannel 12 bit ADC contained on the Arduino Due processor board. This allows the node to support attachment of multiple sensors. By utilizing a high-speed 32 bit processor, complex signal processing tasks can be performed simultaneously on multiple sensors. 
Using a 10 W solar panel, a second system being developed can run autonomously and collect data on 3 channels at 100 Hz for 6 months with the installed 16 GB SD card. Initial designs and test results will be presented and discussed.
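The quoted recording budget is easy to sanity-check. Assuming raw 24-bit samples with no file-system or packet overhead (the abstract does not state the on-disk format), 3 channels at 100 Hz for 6 months comes to roughly 13 GiB, which indeed fits a 16 GB card:

```python
# Back-of-envelope check of the stated budget: 3 channels x 100 Hz for
# 6 months on a 16 GB SD card, assuming raw 24-bit (3-byte) samples.
channels, rate_hz = 3, 100
bytes_per_sample = 3                 # one 24-bit ADC word; format is assumed
seconds = 6 * 30 * 24 * 3600         # ~6 months of 30-day months

total_gib = channels * rate_hz * bytes_per_sample * seconds / 2**30
print(round(total_gib, 1))           # ~13.0 GiB, within a 16 GB card
```

Any timestamping, framing, or compression in the real firmware would shift this figure, but the order of magnitude supports the claim.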

  5. Neuromorphic Computing: A Post-Moore's Law Complementary Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Birdwell, John Douglas; Dean, Mark

    2016-01-01

    We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.

  6. Expert Systems on Multiprocessor Architectures. Volume 2. Technical Reports

    DTIC Science & Technology

    1991-06-01

    Report RC 12936 (#58037), IBM T. J. Watson Research Center, July 1987. Alan Jay Smith, Cache memories, Computing Surveys, 14(3):473-530...basic-shared is an instrument for a shared memory design. The component panels are processor-qload-scrolling-bar-panel, memory-qload-scrolling-bar-panel

  7. Copyright in the Age of Photocopiers, Word Processors, and the Internet

    ERIC Educational Resources Information Center

    Shaw, Marjorie Hodges; Shaw, Brian B.

    2003-01-01

    Widespread digital infringement of the copyrighted material now has made security firms, night-vision goggles, and metal detectors common in movie previews. The current national controversy over peer-to-peer file sharing of music highlights the difficult questions facing colleges and universities as they grapple with dramatic technological…

  8. Memory Network For Distributed Data Processors

    NASA Technical Reports Server (NTRS)

    Bolen, David; Jensen, Dean; Millard, ED; Robinson, Dave; Scanlon, George

    1992-01-01

    Universal Memory Network (UMN) is a modular, digital data-communication system enabling computers with differing bus architectures to share 32-bit-wide data between locations up to 3 km apart with less than one millisecond of latency. It makes it possible to design sophisticated real-time and near-real-time data-processing systems without data-transfer "bottlenecks". This enterprise network permits transmission of a volume of data equivalent to an encyclopedia each second. Facilities benefiting from the Universal Memory Network include telemetry stations, simulation facilities, power plants, and large laboratories, or any facility sharing very large volumes of data. The main hub of the UMN is a reflection center that includes smaller hubs called Shared Memory Interfaces.

  9. Keeping primary care "in the loop": General practitioners want better communication with specialists and hospitals when caring for people diagnosed with cancer.

    PubMed

    Lizama, Natalia; Johnson, Claire E; Ghosh, Manonita; Garg, Neeraj; Emery, Jonathan D; Saunders, Christobel

    2015-06-01

    To investigate general practitioners' (GP) perceptions about communication when providing cancer care. A self-report survey, which included an open response section, was mailed to a random sample of 1969 eligible Australian GPs. Content analysis of open response comments pertaining to communication was undertaken in order to ascertain GPs' views about communication issues in the provision of cancer care. Of the 648 GPs who completed the survey, 68 (10%) included open response comments about interprofessional communication. Participants who commented on communication were a median age of 50 years and worked 33 h/week; 28% were male and 59% practiced in the metropolitan area. Comments pertaining to communication were coded using five non-mutually exclusive categories: being kept in the loop; continuity of care; relationships with specialists; positive communication experiences; and strategies for improving communication. GPs repeatedly noted the importance of receiving detailed and timely communication from specialists and hospitals, particularly in relation to patients' treatment regimes and follow-up care. Several GPs remarked that they were left out of "the information loop" and that patients were "lost" or "dumped" after referral. While many GPs are currently involved in some aspects of cancer management, detailed and timely communication between specialists and GPs is imperative to support shared care and ensure optimal patient outcomes. This research highlights the need for established channels of communication between specialist and primary care medicine to support greater involvement by GPs in cancer care. © 2015 Wiley Publishing Asia Pty Ltd.

  10. How do general practitioners put preventive care recommendations into practice? A cross-sectional study in Switzerland and France.

    PubMed

    Sebo, Paul; Cerutti, Bernard; Fournier, Jean-Pascal; Rat, Cédric; Rougerie, Fabien; Senn, Nicolas; Haller, Dagmar M; Maisonneuve, Hubert

    2017-10-06

    We previously identified that general practitioners (GPs) in French-speaking regions of Europe had a variable uptake of common preventive recommendations. In this study, we describe GPs' reports of how they put different preventive recommendations into practice. Cross-sectional study conducted in 2015 in Switzerland and France. 3400 randomly selected GPs were asked to complete a postal (n=1100) or online (n=2300) questionnaire. GPs who exclusively practiced complementary and alternative medicine were not eligible for the study. 764 GPs (response rate: postal 47%, online 11%) returned the questionnaire (428 in Switzerland and 336 in France). We investigated how the GPs performed five preventive practices (screening for dyslipidaemia, colorectal and prostate cancer, identification of hazardous alcohol consumption and brief intervention), examining which age group they selected, the screening frequency, the test they used, whether they favoured shared decision for prostate cancer screening and their definition of hazardous alcohol use. A large variability was observed in the way in which GPs provide these practices. 41% reported screening yearly for cholesterol, starting and stopping at variable ages. 82% did not use any test to identify hazardous drinking. The most common responses for defining hazardous drinking were, for men, ≥21 drinks/week (24%) and ≥4 drinks/occasion for binge drinking (20%), and for women, ≥14 drinks/week (28%) and ≥3 drinks/occasion (21%). Screening for colorectal cancer, mainly with colonoscopy in Switzerland (86%) and stool-based tests in France (93%), was provided every 10 years in Switzerland (65%) and 2 years in France (91%) to patients between 50 years (87%) and 75 years (67%). Prostate cancer screening, usually with shared decision (82%), was provided yearly (62%) to patients between 50 years (74%) and 75-80 years (32%-34%). 
The large diversity in the way these practices are provided needs to be addressed, as it could be related to some misunderstanding of the current guidelines, to barriers to guideline uptake or, more likely, to the absence of agreement between the various recommendations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  12. TLALOCNet continuous GPS-Met Array in Mexico supporting the 2017 NAM GPS Hydrometeorological Network.

    NASA Astrophysics Data System (ADS)

    Cabral-Cano, E.; Salazar-Tlaczani, L.; Adams, D. K.; Vivoni, E. R.; Grutter, M.; Serra, Y. L.; DeMets, C.; Galetzka, J.; Feaux, K.; Mattioli, G. S.; Miller, M. M.

    2017-12-01

    TLALOCNet is a network of continuous GPS and meteorology stations in Mexico to study atmospheric and solid earth processes. This recently completed network spans most of Mexico with a strong coverage emphasis on southern and western Mexico. This network, funded by NSF, CONACyT and UNAM, recently built 40 cGPS-Met sites to EarthScope Plate Boundary Observatory standards and upgraded 25 additional GPS stations. TLALOCNet provides open and freely available raw GPS data, high-frequency surface meteorology measurements, and time series of daily positions. This is accomplished through the development of the TLALOCNet data center (http://tlalocnet.udg.mx) that serves as a collection and distribution point. This data center is based on UNAVCO's Dataworks-GSAC software and also works as part of UNAVCO's seamless archive for discovery, sharing, and access to GPS data. The TLALOCNet data center also contains contributed data from several regional GPS networks in Mexico for a total of 100+ stations. By using the same protocols and structure as the UNAVCO and other COCONet regional data centers, the scientific community has the capability of accessing data from the largest Mexican GPS network. This archive provides a fully queryable and scriptable GPS and Meteorological data retrieval point. In addition, real-time 1 Hz streams from selected TLALOCNet stations are available in BINEX, RTCM 2.3 and RTCM 3.1 formats via the Networked Transport of RTCM via Internet Protocol (NTRIP) for real-time seismic and weather forecasting applications. TLALOCNet served as a GPS-Met backbone for the binational Mexico-US North American Monsoon GPS Hydrometeorological Network 2017 campaign experiment. This innovative experiment attempts to address water vapor source regions and land-surface water vapor flux contributions to precipitation (i.e., moisture recycling) during the 2017 North American Monsoon in Baja California, Sonora, Chihuahua, and Arizona. 
Models suggest that moisture recycling is a large contributor to summer rainfall. This experiment represents a first attempt to quantify the surface water vapor flux contribution to GPS-derived precipitable water vapor. Preliminary results from this campaign are presented.
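NTRIP, mentioned in both TLALOCNet records above, rides on HTTP: a client requests one caster mountpoint and the caster answers with a continuous RTCM byte stream. A sketch of building the revision-1 request in Python (the mountpoint name and credentials below are illustrative, not real TLALOCNet values):

```python
import base64

def ntrip_request(mountpoint: str, user: str, password: str) -> str:
    """Build an NTRIP rev. 1 request for a single caster mountpoint.
    After sending this over a TCP socket to the caster's host:port,
    the caster replies 'ICY 200 OK' followed by the raw RTCM stream."""
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (f"GET /{mountpoint} HTTP/1.0\r\n"
            "User-Agent: NTRIP pyclient/0.1\r\n"
            f"Authorization: Basic {cred}\r\n"
            "\r\n")

req = ntrip_request("STATION1_RTCM3", "user", "pass")
print(req.splitlines()[0])   # GET /STATION1_RTCM3 HTTP/1.0
```

Because the payload is a plain byte stream over TCP, the same mechanism serves the BINEX, RTCM 2.3, and RTCM 3.1 formats the network offers.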

  13. Integrated care information technology.

    PubMed

    Rowe, Ian; Brimacombe, Phil

    2003-02-21

    Counties Manukau District Health Board (CMDHB) uses information technology (IT) to drive its Integrated Care strategy. IT enables the sharing of relevant health information between care providers. This information sharing is critical to closing the gaps between fragmented areas of the health system. The tragic case of James Whakaruru demonstrates how people have been falling through those gaps. The starting point of the Integrated Care strategic initiative was the transmission of electronic discharges and referral status messages from CMDHB's secondary provider, South Auckland Health (SAH), to GPs in the district. Successful pilots of a Well Child system and a diabetes disease management system embracing primary and secondary providers followed this. The improved information flowing from hospital to GPs now enables GPs to provide better management for their patients. The Well Child system pilot helped improve reported immunization rates in a high health need area from 40% to 90%. The diabetes system pilot helped reduce the proportion of patients with HbA1c >9% from 47% to 16%. IT has been implemented as an integral component of an overall Integrated Care strategic initiative. Within this context, Integrated Care IT has helped to achieve significant improvements in care outcomes, broken down barriers between health system silos, and contributed to the establishment of a system of care continuum that is better for patients.

  14. To give the invisible child priority: children as next of kin in general practice.

    PubMed

    Gullbrå, Frøydis; Smith-Sivertsen, Tone; Rortveit, Guri; Anderssen, Norman; Hafting, Marit

    2014-03-01

    To explore general practitioners' (GPs') experiences in helping children as next of kin of drug-addicted, mentally ill, or severely somatic ill adults. These children are at risk of long-term mental and somatic health problems. Qualitative focus-group study. Focus-group interviews were conducted in western Norway with a total of 27 GPs. Participants were encouraged to share stories from clinical encounters with parents who had one of the above-mentioned problems and to discuss the GP's role in relation to helping the patients' children. The GPs brought up many examples of how they could aid children as next of kin, including identifying children at risk, counselling the parents, and taking part in collaboration with other healthcare professionals and social workers. They also experienced some barriers in fulfilling their potential. There were time constraints, the GPs had their main focus on the patient present in a consultation, and the child was often outside the attention of the doctors, or the GPs could be afraid of hurting or losing their vulnerable patients, thus avoiding bringing up the patients' children as a subject for discussion. Norwegian GPs are in a good position to help children as next of kin and doctors make a great effort to support many of them. Still, support of these children by GPs often seems to depend not on careful consideration of what is best for the patient and the child in the long run, but more on short-term convenience reasons.

  15. Interprofessional communication between community pharmacists and general practitioners: a qualitative study.

    PubMed

    Weissenborn, Marina; Haefeli, Walter E; Peters-Klimm, Frank; Seidling, Hanna M

    2017-06-01

    Background While collaboration between community pharmacists (CPs) and general practitioners (GPs) is essential to provide comprehensive patient care, their communication is often scarce and hampered by multiple barriers. Objective We aimed to assess both professions' perceptions of interprofessional communication with regard to content and methods of communication as a basis to subsequently develop best-practice recommendations for information exchange. Setting Ambulatory care setting in Germany. Method CPs and GPs shared their experience in focus groups and in-depth interviews which were conducted using a semi-structured interview guideline. Transcribed recordings were assessed using qualitative content analysis according to Mayring. Main outcome measure Specification of existing barriers, CPs'/GPs' general perceptions of interprofessional communication, and similarities and differences regarding prioritization of specific information items and how to best communicate with each other. Results Four focus groups and fourteen interviews were conducted. Seven internal (e.g. professions were not personally known to one another) and nine external barriers (e.g. mutual accessibility) were identified. Ten organizational, eight medication-related, and four patient-related information items were identified as requiring interprofessional communication. Their relevance varied between the professions, e.g. CPs rated organizational issues higher than GPs. Both professions indicated communication via phone to be the most frequently used method of communication. Conclusion CPs' and GPs' opinions often differ. However, communication between CPs and GPs is perceived as crucial, suggesting that a future concept has to offer standardized recommendations while leaving CPs and GPs room to adjust it to their individual needs.

  16. Arranging computer architectures to create higher-performance controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1988-01-01

    Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.

  17. Multi-processor including data flow accelerator module

    DOEpatents

    Davidson, George S.; Pierce, Paul E.

    1990-01-01

    An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
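
    The firing rule described above (tag bits marking operand availability, a cell queued for execution once all its slots are full, results fanned out along linking slots) can be sketched in software. This is an illustrative reconstruction, not the patented hardware design; the `Cell` fields and the `PRIMITIVES` lookup table are invented for the example.

```python
PRIMITIVES = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class Cell:
    def __init__(self, primitive, n_operands, links):
        self.primitive = primitive        # key into the primitive lookup table
        self.slots = [None] * n_operands  # operand slots
        self.tags = [False] * n_operands  # tag bits: is each operand present?
        self.links = links                # (cell, slot) pairs fed by our result

def deliver(queue, cell, slot, value):
    """Store an operand; queue the cell once all tag bits are set."""
    cell.slots[slot] = value
    cell.tags[slot] = True
    if all(cell.tags):
        queue.append(cell)

def run(queue):
    fired = []
    while queue:
        cell = queue.pop(0)
        value = PRIMITIVES[cell.primitive](*cell.slots)
        fired.append(value)
        for target, slot in cell.links:   # distribute the result along arcs
            deliver(queue, target, slot, value)
    return fired

# (a + b) * c as a two-cell dependency graph
mul = Cell("mul", 2, [])
add = Cell("add", 2, [(mul, 0)])
queue = []
deliver(queue, mul, 1, 4)   # c = 4
deliver(queue, add, 0, 2)   # a = 2
deliver(queue, add, 1, 3)   # b = 3: add fires, its result triggers mul
results = run(queue)
print(results)              # -> [5, 20]
```

    Note how no cell is ever scheduled explicitly: availability of data alone drives execution, which is the essence of the data flow mode the abstract contrasts with sequential, operating-system-controlled processing.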

  18. Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1989-01-01

    Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that when the load is uniformly distributed, the data traffic associated with factoring an n x n dense matrix using n^alpha (alpha <= 2) processors is Omega(n^(2+alpha/2)). For n x n sparse matrices representing a sqrt(n) x sqrt(n) regular grid graph, the data traffic is shown to be Omega(n^(1+alpha/2)), alpha <= 1. Partitioning schemes that are variations of the block assignment scheme are described, and it is shown that the data traffic generated by these schemes is asymptotically optimal. The schemes allow efficient use of up to O(n^2) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches the maximum values of O(n^3) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes allow better utilization of the data accessed from shared memory, and thus generate less data traffic, than schemes based on column-wise wrap-around assignment.
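
    The advantage of block assignment over column-wise wrap-around can be illustrated with a small counting experiment (a sketch under simplified assumptions, not the paper's exact accounting): charge one remote fetch for each operand of the dense Cholesky update that is not owned by the processor performing the update.

```python
def traffic(owner, n):
    """Remote operand fetches for the dense update A[i][j] -= L[i][k]*L[j][k]
    (j <= i, k < j), charging one fetch per operand not owned locally."""
    remote = 0
    for i in range(n):
        for j in range(i + 1):
            p = owner(i, j)                    # processor doing this update
            for k in range(j):
                remote += (owner(i, k) != p)   # fetch L[i][k]
                remote += (owner(j, k) != p)   # fetch L[j][k]
    return remote

n, q = 16, 2                                   # 16 x 16 matrix, 2 x 2 grid
b = n // q
block = lambda i, j: (i // b) * q + (j // b)   # square-block assignment
wrap = lambda i, j: j % (q * q)                # column-wise wrap-around

print(traffic(block, n), traffic(wrap, n))     # block generates less traffic
```

    Even at this toy scale the block scheme incurs fewer remote fetches, because updates within a block reuse operands that the owning processor already holds.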

  19. Design and implementation of a medium speed communications interface and protocol for a low cost, refreshed display computer

    NASA Technical Reports Server (NTRS)

    Phyne, J. R.; Nelson, M. D.

    1975-01-01

    The design and implementation of hardware and software systems involved in using a 40,000 bit/second communication line as the connecting link between an IMLAC PDS 1-D display computer and a Univac 1108 computer system were described. The IMLAC consists of two independent processors sharing a common memory. The display processor generates the deflection and beam control currents as it interprets a program contained in the memory; the minicomputer has a general instruction set and is responsible for starting and stopping the display processor and for communicating with the outside world through the keyboard, teletype, light pen, and communication line. The processing time associated with each data byte was minimized by designing the input and output processes as finite state machines which automatically sequence from each state to the next. Several tests of the communication link and the IMLAC software were made using a special low capacity computer grade cable between the IMLAC and the Univac.
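
    The finite-state-machine technique described above, where each incoming byte triggers exactly one state transition so per-byte processing time stays minimal, can be sketched as follows. The frame format (start byte, length, payload, checksum) and state names are invented for illustration and are not taken from the original IMLAC protocol.

```python
SOH = 0x01  # hypothetical start-of-frame byte

def receive(stream):
    """Decode frames of the form [SOH, length, payload..., checksum].
    The receiver automatically sequences from each state to the next."""
    frames, state, need, buf = [], "IDLE", 0, []
    for byte in stream:
        if state == "IDLE":
            if byte == SOH:
                state = "LENGTH"
        elif state == "LENGTH":
            need, buf, state = byte, [], "DATA" if byte else "SUM"
        elif state == "DATA":
            buf.append(byte)
            if len(buf) == need:
                state = "SUM"
        elif state == "SUM":
            if byte == sum(buf) % 256:       # keep frame only if checksum matches
                frames.append(bytes(buf))
            state = "IDLE"
    return frames

msg = [SOH, 3, 10, 20, 30, 60]               # checksum: 10 + 20 + 30 = 60
print(receive(msg))                          # -> [b'\n\x14\x1e']
```

    Because the next state is determined by a single comparison per byte, the same structure maps naturally onto interrupt-driven I/O on a minicomputer.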

  20. An efficient parallel-processing method for transposing large matrices in place.

    PubMed

    Portnoff, M R

    1999-01-01

    We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
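
    The square-matrix case can be sketched in a few lines (illustrative only; the published algorithm's blocking and indexing details differ): swap mirror-image blocks so that memory is touched within cache-sized tiles rather than striding across whole rows.

```python
def transpose_inplace(a, n, b=2):
    """Transpose the n x n matrix stored row-major in the flat list a,
    processing it in b x b blocks for cache-friendly access."""
    for bi in range(0, n, b):
        for bj in range(bi, n, b):
            for i in range(bi, min(bi + b, n)):
                # on diagonal blocks, start past the diagonal to swap once
                jstart = bj if bi != bj else i + 1
                for j in range(jstart, min(bj + b, n)):
                    a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

a = list(range(16))            # 4 x 4 matrix holding 0..15
transpose_inplace(a, 4)
print(a[:4])                   # first row is the old first column: [0, 4, 8, 12]
```

    Since each block pair is independent of all others, the block loop is exactly the parallelism the abstract describes: different processors can take different (bi, bj) pairs with no shared state beyond the matrix itself.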

  1. Physicians' assessments of work capacity in patients with severe subjective health complaints: a cross-sectional study on differences between five European countries

    PubMed Central

    Werner, Erik L; Merkus, Suzanne L; Mæland, Silje; Jourdain, Maud; Schaafsma, Frederieke; Canevet, Jean Paul; Weerdesteijn, Kristel H N; Rat, Cédric; Anema, Johannes R

    2016-01-01

    Objectives A comparison of appraisals made by general practitioners (GPs) in France and occupational physicians (OPs) and insurance physicians (IPs) in the Netherlands with those made by Scandinavian GPs on work capacity in patients with severe subjective health complaints (SHCs). Setting GPs in France and OPs/IPs in the Netherlands gathered to watch nine authentic video recordings from a Norwegian general practice. Participants 46 GPs in France and 93 OPs/IPs in the Netherlands were invited to a 1-day course on SHC. Outcomes Recommendation of sick leave (full or partial) or no sick leave for each of the patients. Results Compared with Norwegian GPs, sick leave was less likely to be granted by Swedish GPs (OR 0.51, 95% CI 0.30 to 0.86) and by Dutch OPs/IPs (OR 0.53, 95% CI 0.37 to 0.78). The differences between Swedish and Norwegian GPs were maintained in the adjusted analyses (OR 0.43, 95% CI 0.23 to 0.79). This was also true for the differences between Dutch and Norwegian physicians (OR 0.55, 95% CI 0.36 to 0.86). Overall, compared with the GPs, the Dutch OPs/IPs were less likely to grant sick leave (OR 0.60, 95% CI 0.45 to 0.87). Conclusions Swedish GPs and Dutch OPs/IPs were less likely to grant sick leave to patients with severe SHC compared with GPs from Norway, while GPs from Denmark and France were just as likely to grant sick leave as the Norwegian GPs. We suggest that these findings may be due to the guidelines on sick-listing and on patients with severe SHC which exist in Sweden and the Netherlands, respectively. Differences in the working conditions, relationships with patients and training of specialists in occupational medicine may also have affected the results. However, a pattern was observed in which patients the physicians in all countries thought should be sick-listed, suggesting that the physicians share tacit knowledge regarding sick leave decision-making in patients with severe SHC. PMID:27417198

  2. Implementing the PM Programming Language using MPI and OpenMP - a New Tool for Programming Geophysical Models on Parallel Systems

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2015-04-01

    PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. 
When the program enters a parallel statement, either processors are divided among the newly generated tasks (number of new tasks < number of processors) or tasks are divided among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc. Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science Volume 5759, 185-195.
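
    The two-way distribution rule at the heart of this scheme can be sketched as follows (an illustrative reconstruction of the even-split default, not PM's actual runtime; the function name is invented).

```python
def distribute(processors, n_tasks):
    """Return, per task, the list of processors it owns.
    Fewer tasks than processors: split the processor set among tasks.
    More tasks than processors: assign tasks round-robin, so several
    tasks share one processor (cooperative multi-tasking)."""
    p = len(processors)
    if n_tasks <= p:
        base, extra = divmod(p, n_tasks)
        out, start = [], 0
        for t in range(n_tasks):
            size = base + (1 if t < extra else 0)
            out.append(processors[start:start + size])
            start += size
        return out
    return [[processors[t % p]] for t in range(n_tasks)]

print(distribute([0, 1, 2, 3], 2))   # -> [[0, 1], [2, 3]]
print(distribute([0, 1], 5))         # -> [[0], [1], [0], [1], [0]]
```

    A nested parallel statement would simply call the same function again on the processor sub-list owned by the enclosing task.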

  3. AN Integrated Bibliographic Information System: Concept and Application for Resource Sharing in Special Libraries

    DTIC Science & Technology

    1987-05-01

    workload (beyond that of say an equivalent academic or corporate technical library) for the Defense Department libraries. Figure 9 illustrates the range...summer. The hardware configuration for the system is as follows: Digital Equipment Corporation VAX 11/750 central processor with 6 megabytes of real

  4. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  5. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  6. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  7. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  8. Sharing Writing on an Electronic Network.

    ERIC Educational Resources Information Center

    Schwartz, Jeffrey

    A writing exchange project at Bread Loaf School of English at Middlebury College in Vermont, funded by Apple Education Foundation and McDonnell Douglas, examined what happened when high school students use word processors and a modem to write to distant audiences. In the first exchange, students interviewed each other in pairs and wrote short…

  9. Cache write generate for parallel image processing on shared memory architectures.

    PubMed

    Wittenbrink, C M; Somani, A K; Chen, C H

    1996-01-01

    We investigate cache write generate, our cache mode invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.

  10. Challenging prior evidence for a shared syntactic processor for language and music.

    PubMed

    Perruchet, Pierre; Poulin-Charronnat, Bénédicte

    2013-04-01

    A theoretical landmark in the growing literature comparing language and music is the shared syntactic integration resource hypothesis (SSIRH; e.g., Patel, 2008), which posits that the successful processing of linguistic and musical materials relies, at least partially, on the mastery of a common syntactic processor. Supporting the SSIRH, Slevc, Rosenberg, and Patel (Psychonomic Bulletin & Review 16(2):374-381, 2009) recently reported data showing enhanced syntactic garden path effects when the sentences were paired with syntactically unexpected chords, whereas the musical manipulation had no reliable effect on the processing of semantic violations. The present experiment replicated Slevc et al.'s (2009) procedure, except that syntactic garden paths were replaced with semantic garden paths. We observed the very same interactive pattern of results. These findings suggest that the element underpinning interactions is the garden path configuration, rather than the implication of an alleged syntactic module. We suggest that a different amount of attentional resources is recruited to process each type of linguistic manipulation, hence modulating the resources left available for the processing of music and, consequently, the effects of musical violations.

  11. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

    Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allow the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
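
    The dataflow model described here, encoder stages as tasks connected by explicit message queues with no global synchronization, can be sketched in miniature. This is a conceptual illustration, not the product's programming model; the stage functions merely stand in for motion compensation, transform/quantization, and entropy coding.

```python
from collections import deque

def pipeline(stages, frames):
    """Run a chain of stage functions over frames via explicit queues.
    Each pass moves at most one message through each stage, mimicking
    independent tasks that fire whenever input is available."""
    queues = [deque(frames)] + [deque() for _ in stages]
    done = False
    while not done:
        done = True
        for i, stage in enumerate(stages):
            if queues[i]:                 # task fires only when input exists
                queues[i + 1].append(stage(queues[i].popleft()))
                done = False
    return list(queues[-1])

# stand-ins for the three H.264 stages named in the abstract
motion = lambda f: f + "|mc"
transform = lambda f: f + "|tq"
entropy = lambda f: f + "|ec"

print(pipeline([motion, transform, entropy], ["f0", "f1"]))
# -> ['f0|mc|tq|ec', 'f1|mc|tq|ec']
```

    In a real dataflow runtime the three stages would execute concurrently on different cores, with the queues realized as hardware message channels; the point of the sketch is that ordering emerges from data availability alone.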

  12. Patients' involvement in decisions about medicines: GPs' perceptions of their preferences

    PubMed Central

    Cox, Kate; Britten, Nicky; Hooper, Richard; White, Patrick

    2007-01-01

    Background Patients vary in their desire to be involved in decisions about their care. Aim To assess the accuracy and impact of GPs' perceptions of their patients' desire for involvement. Design of study Consultation-based study. Setting Five primary care centres in south London. Method Consecutive patients completed decision-making preference questionnaires before and after consultation. Eighteen GPs completed a questionnaire at the beginning of the study and reported their perceptions of patients' preferences after each consultation. Patients' satisfaction was assessed using the Medical Interview Satisfaction Scale. Analyses were conducted in 190 patient–GP pairs that identified the same medicine decision about the same main health problem. Results A total of 479 patients participated (75.7% of those approached). Thirty-nine per cent of these patients wanted their GPs to share the decision, 45% wanted the GP to be the main (28%) or only (17%) decision maker regarding their care, and 16% wanted to be the main (14%) or only (2%) decision maker themselves. GPs accurately assessed patients' preferences in 32% of the consultations studied, overestimated patients' preferences for involvement in 45%, and underestimated them in 23% of consultations studied. Factors protective against GPs underestimating patients' preferences were: patients preferring the GP to make the decision (odds ratio [OR] 0.2 per point on the five-point scale; 95% confidence interval [CI] = 0.1 to 0.4), and the patient having discussed their main health problem before (OR 0.3; 95% CI = 0.1 to 0.9). Patients' educational attainment was independently associated with GPs' underestimation of preferences. Conclusion GPs' perceptions of their patients' desire to be involved in decisions about medicines are inaccurate in most cases. Doctors are more likely to underestimate patients' preferred level of involvement when patients have not consulted about their condition before. PMID:17925134

  13. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and IPSC/2 hypercubes is described.
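
    A minimal sketch of the global-name-space idea (illustrative, not the paper's actual environment): a block-distributed array in which a global index resolves to an owning processor and a local offset. A real runtime would turn reads of remotely owned elements into messages; here the per-processor memories are just lists.

```python
class DistributedArray:
    """Block distribution of a 1-D array across n_procs local memories,
    addressed through a single global index space."""
    def __init__(self, data, n_procs):
        self.block = -(-len(data) // n_procs)   # ceiling division
        self.memories = [data[p * self.block:(p + 1) * self.block]
                         for p in range(n_procs)]

    def locate(self, i):
        """Map a global index to (owner processor, local index)."""
        return i // self.block, i % self.block

    def __getitem__(self, i):                   # a 'global' read
        owner, local = self.locate(i)
        return self.memories[owner][local]

a = DistributedArray(list(range(10)), n_procs=4)
print(a.locate(7), a[7])    # -> (2, 1) 7
```

    The compiler transformations described in the abstract amount to inserting the `locate` computation (and the implied communication) automatically wherever the programmer writes a plain global subscript.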

  14. Dual job holding general practitioners: the effect of patient shortage.

    PubMed

    Godager, Geir; Lurås, Hilde

    2009-10-01

    In 2001, a list-patient system with capitation payment was introduced in Norwegian general practice. After an allocation process where each inhabitant was listed with a general practitioner (GP), a considerable share of the GPs got fewer persons listed than they would have preferred. We examine whether GPs who experience a shortage of patients to a larger extent than other GPs seek to hold a second job in the community health service even though the wage rate is low compared with the wage rate in general practice. Assuming utility maximization, we model the effect of patient shortage on a GP's decision to contract for a second job in the community health service. The model predicts a positive relationship between patient shortage and participation in the community health service. This prediction is tested by means of censored regression analyses, taking account of labour supply as a censored variable. We find a significant effect of patient shortage on the number of hours the GPs supply to the community health service. The estimated marginal effect is 1.72 hours per week.

  15. Improving long-term adherence to statin therapy: a qualitative study of GPs' experiences in primary care.

    PubMed

    Krüger, Karen; Leppkes, Niklas; Gehrke-Beck, Sabine; Herrmann, Wolfram; Algharably, Engi A; Kreutz, Reinhold; Heintze, Christoph; Filler, Iris

    2018-06-01

    Statins substantially reduce the risk of cardiovascular disease when taken regularly. Though statins are generally well tolerated, current studies show that one-third of patients discontinue use of statins within 2 years. A qualitative approach may improve the understanding of attitudes and behaviours towards statins, the mechanisms related to discontinuation, and how they are managed in primary care. To identify factors related to statin discontinuation and approaches for long-term statin adherence. A qualitative study of German GPs' experiences with statin therapy in rural and urban settings in primary care. Semi-structured interviews (n = 16) with purposefully recruited GPs were recorded, transcribed, and analysed using qualitative content analysis. Sociodemographic patient factors, the nocebo effect, patient attitudes towards primary prevention, and negative media coverage had significant impacts on statin therapy according to GPs. To overcome these barriers, GPs described useful strategies combining patient motivation and education with person-centred care. GPs used computer programs for individual risk-benefit analyses in the context of shared decision making. They encouraged patients with strong concerns or perceived side effects to continue therapy with a modified medication regimen combined with individual therapy goals. GPs should be aware of barriers to statin therapy and useful approaches to overcome them. They could be supported by guideline recommendations that are more closely aligned to primary care as well as comprehensible patient information about lipid-lowering therapy. Future studies, exploring patients' specific needs and involving them in improving adherence behaviour, are recommended. © British Journal of General Practice 2018.

  16. Making sense of medically unexplained symptoms in general practice: a grounded theory study

    PubMed Central

    2013-01-01

    Background General practitioners often encounter patients with medically unexplained symptoms. These patients share many common features, but there is little agreement about the best diagnostic framework for describing them. Aims This study aimed to explore how GPs make sense of medically unexplained symptoms. Design Semi-structured interviews were conducted with 24 GPs. Each participant was asked to describe a patient with medically unexplained symptoms and discuss their assessment and management. Setting The study was conducted among GPs from teaching practices across Australia. Methods Participants were selected by purposive sampling and all interviews were transcribed. Iterative analysis was undertaken using constructivist grounded theory methodology. Results GPs used a variety of frameworks to understand and manage patients with medically unexplained symptoms. They used different frameworks to reason, to help patients make sense of their suffering, and to communicate with other health professionals. GPs tried to avoid using stigmatising labels such as ‘borderline personality disorder’, which were seen to apply a ‘layer of dismissal’ to patients. They worried about missing serious physical disease, but managed the risk by deliberately attending to physical cues during some consultations, and focusing on coping with medically unexplained symptoms in others. They also used referrals to exclude serious disease, but were wary of triggering a harmful cycle of uncoordinated care. Conclusion GPs were aware of the ethical relevance of psychiatric diagnoses, and attempted to protect their patients from stigma. They crafted helpful explanatory narratives for patients that shaped their experience of suffering. Disease surveillance remained an important role for GPs who were managing medically unexplained symptoms. PMID:24427176

  17. Sharing and interoperation of Digital Dongying geospatial data

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Gaohuan; Han, Lit-tao; Zhang, Rui-ju; Wang, Zhi-an

    2006-10-01

    The Digital Dongying project was put forward by Dongying city, Shandong province, and approved by the Ministry of Information Industry, the Ministry of Science and Technology, and the Ministry of Construction, P.R. China, in 2002. After five years of construction, the informatization level of Dongying has reached an advanced degree. To advance the building of Digital Dongying and to realize geospatial data sharing, geographic information sharing standards were drawn up and put into practice. In addition, the Digital Dongying Geographic Information Sharing Platform has been constructed and developed: a highly integrated platform combining WebGIS, 3S (GIS, GPS, RS), object-oriented RDBMS, the Internet, DCOM, etc. It provides an indispensable basis for the sharing and interoperation of Digital Dongying geospatial data. Based on these standards and the platform, sharing and interoperation of "Digital Dongying" geospatial data have come into practice, with good results. However, a sound leadership group remains necessary for data sharing and interoperation.

  18. A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoemmen, Mark

    2010-11-01

    Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
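
    The TSQR reduction can be sketched in a few lines: factor each row block of the tall-skinny matrix locally, stack the small R factors, and factor the stack. This is a toy single-level version using a plain modified Gram-Schmidt QR; the real implementation uses Householder QR, a reduction tree, and parallel execution of the blocks.

```python
def qr(a):
    """Modified Gram-Schmidt QR of an m x n (m >= n) list-of-rows matrix."""
    m, n = len(a), len(a[0])
    q = [row[:] for row in a]
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(j):
            r[k][j] = sum(q[i][k] * q[i][j] for i in range(m))
            for i in range(m):
                q[i][j] -= r[k][j] * q[i][k]
        r[j][j] = sum(q[i][j] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            q[i][j] /= r[j][j]
    return q, r

def tsqr_r(a, block):
    """R factor via one TSQR reduction step: QR each row block locally
    (these are independent and could run in parallel), stack the small
    R factors, then QR the stack."""
    rs = []
    for s in range(0, len(a), block):
        rs.extend(qr(a[s:s + block])[1])
    return qr(rs)[1]

a = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0],
     [2.0, 0.0], [0.0, 2.0], [1.0, 1.0], [3.0, 1.0]]
r_direct = qr(a)[1]
r_tsqr = tsqr_r(a, block=4)
# with positive diagonals the R factor is unique, so the two must agree
ok = all(abs(x - y) < 1e-9
         for rx, ry in zip(r_direct, r_tsqr) for x, y in zip(rx, ry))
print(ok)  # -> True
```

    The communication saving comes from exchanging only the tiny n x n R factors between processors instead of the full tall blocks.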

  19. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  20. The Data Acquisition System of the Stockholm Educational Air Shower Array

    NASA Astrophysics Data System (ADS)

    Hofverberg, P.; Johansson, H.; Pearce, M.; Rydstrom, S.; Wikstrom, C.

    2005-12-01

    The Stockholm Educational Air Shower Array (SEASA) project is deploying an array of plastic scintillator detector stations on school roofs in the Stockholm area. Signals from GPS satellites are used to time synchronise signals from the widely separated detector stations, allowing cosmic ray air showers to be identified and studied. A low-cost and highly scalable data acquisition system has been produced using embedded Linux processors which communicate station data to a central server running a MySQL database. Air shower data can be visualised in real-time using a Java-applet client. It is also possible to query the database and manage detector stations from the client. In this paper, the design and performance of the system are described.
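The core coincidence-identification step such a server performs can be sketched as follows (hypothetical station names and coincidence window, not the SEASA code): a shower candidate is a GPS time at which two or more stations triggered within a short window.

```python
def find_coincidences(station_hits, window=1e-4):
    """Return GPS times at which >= 2 stations triggered within `window`
    seconds of each other. station_hits maps a station id to a sorted
    list of GPS trigger timestamps (seconds)."""
    # Merge all hits into one time-ordered stream of (time, station) pairs.
    events = sorted((t, s) for s, ts in station_hits.items() for t in ts)
    candidates = []
    i = 0
    while i < len(events):
        t0 = events[i][0]
        j, stations = i, set()
        # Sweep forward while hits fall inside the coincidence window.
        while j < len(events) and events[j][0] - t0 <= window:
            stations.add(events[j][1])
            j += 1
        if len(stations) >= 2:          # multiple stations -> shower candidate
            candidates.append(t0)
            i = j                        # skip past this cluster
        else:
            i += 1
    return candidates

hits = {"A": [1.00000, 5.0], "B": [1.00003, 9.0], "C": [2.0]}
print(find_coincidences(hits))  # [1.0]
```

GPS timestamping is what makes this comparison meaningful across stations that share no common clock cable.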

  1. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools such as the SYLON synthesis system (X90), (CM89), (LM90) have been developed based on this method. A parallel implementation is presented of SYLON-XTRANS (XM89) on an eight processor Encore Multimax shared memory multiprocessor. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.

  2. A Tutorial on Parallel and Concurrent Programming in Haskell

    NASA Astrophysics Data System (ADS)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
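Haskell's semi-explicit parallelism has no direct Python equivalent, but the underlying idea, annotating independent pure computations and letting a runtime schedule them, can be sketched loosely (an analogy only, not the tutorial's Haskell code):

```python
from concurrent.futures import ThreadPoolExecutor

# A pure function: no shared state, so it is safe to evaluate in
# parallel -- the role played by a "spark" in Haskell's par annotations.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Semi-explicit parallelism: we only mark *what* may run in parallel
# (the map over inputs); the executor decides the actual scheduling,
# much as GHC's runtime schedules sparks across cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fib, [20, 21, 22, 23]))
print(results)  # [6765, 10946, 17711, 28657]
```

The determinism comes from purity: because `fib` has no side effects, the result is identical however the executor interleaves the work.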

  3. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low-power and high-performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  4. Teleoperated position control of a PUMA robot

    NASA Technical Reports Server (NTRS)

    Austin, Edmund; Fong, Chung P.

    1987-01-01

    A laboratory distributed computer control teleoperator system is developed to support NASA's future space telerobotic operation. This teleoperator system uses a universal force-reflecting hand controller in the local site as the operator's input device. In the remote site, a PUMA controller receives the Cartesian position commands and implements PID control laws to position the PUMA robot. The local site uses two microprocessors while the remote site uses three. The processors communicate with each other through shared memory. The PUMA robot controller was interfaced through custom-made electronics to bypass VAL. The development status of this teleoperator system is reported. The execution time of each processor is analyzed, and the overall system throughput rate is reported. Methods to improve the efficiency and performance are discussed.

  5. 75 FR 13024 - Pacific Halibut Fisheries; Catch Sharing Plan

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-18

    ... system for guided charter vessels (75 FR 554) was also established January 5, 2010, for Areas 2C and 3A... resulting catch of which is sold or bartered; or is intended to be sold or bartered, other than (i) sport... fish processor; (t) ``VMS transmitter'' means a NMFS-approved vessel monitoring system transmitter that...

  6. Design, Implementation, and Evaluation of a Virtual Shared Memory System in a Multi-Transputer Network.

    DTIC Science & Technology

    1987-12-01

    Synchronization and Data Passing Mechanism... System Shut Down... high performance, fault tolerance, and extensibility. These features are attained by synchronizing and coordinating the distributed multicomputer... synchronizing all processors in the network. In a multitransputer network, processes that communicate with each other do so synchronously. This makes

  7. 77 FR 44216 - Fisheries of the Exclusive Economic Zone Off Alaska; Bering Sea and Aleutian Islands Crab...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-27

    ... of a zero (0) percent fee for cost recovery under the Bering Sea and Aleutian Islands Crab... Program includes a cost recovery provision to collect fees to recover the actual costs directly related to... processing sectors to each pay half the cost recovery fees. Catcher/processor quota share holders are...

  8. High Performance Active Database Management on a Shared-Nothing Parallel Processor

    DTIC Science & Technology

    1998-05-01

    either stored or virtual. A stored node is like a materialized view: it actually contains the specified tuples. A virtual node is like a real view...

  9. Gear Up Your Research Guides with the Emerging OPML Codes

    ERIC Educational Resources Information Center

    Wilcox, Kimberley

    2006-01-01

    Outline Processor Markup Language (OPML) is an emerging format that allows for the creation of customized research packages to push to patrons. It is a way to gather collections of Web resources (links, RSS feeds, multimedia files), organize them as outlines, and publish them in a format that others can share and even subscribe to. In this…

  10. Debugging Fortran on a shared memory machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, T.R.; Padua, D.A.

    1987-01-01

    Debugging on a parallel processor is more difficult than debugging on a serial machine because errors in a parallel program may introduce nondeterminism. The approach to parallel debugging presented here attempts to reduce the problem of debugging on a parallel machine to that of debugging on a serial machine by automatically detecting nondeterminism. 20 refs., 6 figs.
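Automatic detection of nondeterminism typically reduces to finding unsynchronized conflicting accesses in a program's memory-access behavior. A minimal trace-based sketch (hypothetical trace format, not the paper's tool):

```python
def find_conflicts(trace):
    """Flag pairs of accesses from different threads to the same variable
    where at least one access is a write and no common lock is held --
    a data race, i.e. a source of nondeterminism. Each trace entry is
    (thread, op, var, locks_held) with op in {'r', 'w'}."""
    conflicts = []
    for i in range(len(trace)):
        for j in range(i + 1, len(trace)):
            t1, op1, v1, l1 = trace[i]
            t2, op2, v2, l2 = trace[j]
            if (t1 != t2 and v1 == v2 and 'w' in (op1, op2)
                    and not set(l1) & set(l2)):
                conflicts.append((i, j))
    return conflicts

trace = [
    ("T1", "w", "x", []),     # unprotected write to x
    ("T2", "r", "x", []),     # unprotected read of x -> races with entry 0
    ("T1", "w", "y", ["L"]),  # both accesses to y hold lock L: no race
    ("T2", "w", "y", ["L"]),
]
print(find_conflicts(trace))  # [(0, 1)]
```

If no such conflict exists, the parallel run behaves deterministically and can be debugged with serial techniques, which is the reduction the abstract describes.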

  11. Exploring the temperament and character traits of rural and urban doctors.

    PubMed

    Eley, Diann; Young, Louise; Przybeck, Thomas R

    2009-01-01

    Australia shares many dilemmas with North America regarding shortages of doctors in rural and remote locations. This preliminary study contributes to the establishment of a psychobiological profile for rural doctors by comparing temperament and character traits with an urban cohort. The aim was to compare the individual levels and combinations of temperament (mildly heritable and stable) and character (developmental and modifiable) traits of rural and urban general practitioners (GPs). Rural (n = 120) and urban (n = 94) GPs completed a demographic questionnaire and the TCI-R 140 to identify levels of the 7 basic dimensions of temperament and character. These are Novelty Seeking (NS), Harm Avoidance (HA), Reward Dependence (RD), Persistence (PS), Self-Directedness (SD), Cooperativeness (CO), and Self-Transcendence (ST). Preliminary results show rural GPs were higher in the temperament traits of NS and lower in HA compared with the urban sample. All female GPs were higher in RD and CO compared with all males, and all older GPs (over 55 years) were lower in RD compared with all younger GPs. This preliminary work may be the precursor to a new approach for the recruitment and retention of rural doctors through a greater awareness of personality traits conducive to the rural workforce. Further work may help inform appropriate policies to attract and retain this workforce and be a useful adjunct to the counseling of students interested in rural medicine by providing a better understanding of "what it takes" to be a rural doctor.

  12. Out-of-Hours Care Collaboration between General Practitioners and Hospital Emergency Departments in the Netherlands.

    PubMed

    van Gils-van Rooij, Elisabeth Sybilla Johanna; Yzermans, Christoffel Joris; Broekman, Sjoerd Michael; Meijboom, Berthold Rudy; Welling, Gerben Paul; de Bakker, Dingenus Herman

    2015-01-01

    In the Netherlands, general practitioners (GPs) and emergency departments (EDs) collaborate increasingly in what is called an Urgent Care Collaboration (UCC). In UCCs, GPs and EDs share one combined entrance and joint triage. The objective of this study was to determine whether GPs treat a larger proportion of out-of-hours patients in the UCC system, and how this relates to patient characteristics. This observational study compared patients treated within UCCs with patients treated in the usual care setting, that is, GPs and EDs operating separately. Data on the characteristics of the patients, their consultations, and their health problems were derived from electronic medical records. We performed χ² tests, independent-sample t tests, and multiple logistic regression analyses. A significantly higher proportion of patients attended their on-call GP within the UCC system. The proportion of ED patients was 22% smaller in UCCs compared to the usual care setting. Controlled for patient and health problem characteristics, the difference remained statistically significant (OR = 0.69; CI 0.66-0.72), but there were substantial differences between regions. Patients with trauma, in particular, were more often treated by GPs. Controlled for case mix, patients in the largest UCC region were 1.2 times more likely to attend a GP than the reference group. When GPs and EDs collaborate, GPs take a substantially higher proportion of all out-of-hours patients. © Copyright 2015 by the American Board of Family Medicine.

  13. G-cueing microcontroller (a microprocessor application in simulators)

    NASA Technical Reports Server (NTRS)

    Horattas, C. G.

    1980-01-01

    A g-cueing microcontroller is described which consists of a tandem pair of microprocessors dedicated to the task of simulating pilot-sensed cues caused by gravity effects. This task includes execution of a g-cueing model which drives actuators that alter the configuration of the pilot's seat. The g-cueing microcontroller receives acceleration commands from the aerodynamics model in the main computer and creates the stimuli that produce physical acceleration effects of the aircraft seat on the pilot's anatomy. One of the two microprocessors is a fixed instruction processor that performs all control and interface functions. The other, a specially designed bipolar bit-slice microprocessor, is a microprogrammable processor dedicated to all arithmetic operations. The two processors communicate with each other by a shared memory. The g-cueing microcontroller contains its own dedicated I/O conversion modules for interfacing with the seat actuators and controls, and a DMA controller for interfacing with the simulation computer. Any application which can be microcoded within the available memory, real time, and I/O channels could be implemented on the same controller.

  14. 77 FR 38013 - Fisheries of the Exclusive Economic Zone Off Alaska; Groundfish of the Gulf of Alaska; Amendment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-26

    ... participants in the entry level trawl fishery may qualify for quota share (QS) under the Central Gulf of Alaska... landings to an entry level processor in 2007, 2008, or 2009. This clarification is administrative in nature and does not change the distribution of rockfish QS to entry level trawl participants. DATES...

  15. 76 FR 35781 - Fisheries of the Exclusive Economic Zone Off Alaska; Bering Sea and Aleutian Islands Crab...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-20

    ... operational costs. NMFS also issued processor quota share (PQS) under the Program. Each year, PQS yields an... requirements. The RIR/FRFA prepared for this action describes the costs and benefits of Amendment 37 (see... person or company that holds in excess of 20 percent of the West-designated WAG QS; (2) any person or...

  16. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; directories are mounted on all nodes, along with a file system dedicated to shared projects. Compute nodes have processors with 64 GB of memory.

  17. Why K-12 IT Managers and Administrators Are Embracing the Intel-Based Mac

    ERIC Educational Resources Information Center

    Technology & Learning, 2007

    2007-01-01

    Over the past year, Apple has dramatically increased its share of the school computer marketplace--especially in the category of notebook computers. A recent study conducted by Grunwald Associates and Rockman et al. reports that one of the major reasons for this growth is Apple's introduction of the Intel processor to the entire line of Mac…

  18. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
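The balance argument can be made quantitative with a roofline-style bound (illustrative numbers, not from the paper): when several processors share one memory bus, attainable throughput is capped by bus bandwidth times the algorithm's arithmetic intensity, so adding processors beyond that cap does not improve performance.

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline-style bound: performance is capped either by the
    processors themselves or by how fast shared memory can feed them."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Illustrative numbers for a node of 4 processors on one shared bus.
peak = 4 * 10.0   # 4 CPUs x 10 GFLOP/s each = 40 GFLOP/s compute peak
bus = 8.0         # shared memory bus: 8 GB/s
intensity = 0.5   # a bandwidth-hungry image filter: 0.5 FLOP per byte

print(attainable_gflops(peak, bus, intensity))  # 4.0
```

Here the bound is 4 GFLOP/s regardless of processor count: the system is bus-bound, which is exactly the "fails to scale as more processors are added" symptom the abstract describes.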

  19. Static analysis of the hull plate using the finite element method

    NASA Astrophysics Data System (ADS)

    Ion, A.

    2015-11-01

    This paper aims at presenting the static analysis for two levels of a container ship's construction as follows: the first level is at the girder / hull plate and the second level is conducted at the entire strength hull of the vessel. This article will describe the work for the static analysis of a hull plate. We shall use the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 CPU processors at 3.33 GHz, 32 GB memory installed. In terms of software, the shared memory parallel version of ANSYS refers to running ANSYS across multiple cores on a SMP system. The distributed memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP systems or DMP systems.

  20. VLSI 'smart' I/O module development

    NASA Astrophysics Data System (ADS)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  1. HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.

    PubMed

    Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D

    2002-08-07

    Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speed up reconstruction time we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C. Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for the dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors, and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.

  2. GPs' decision-making when prescribing medicines for breastfeeding women: Content analysis of a survey.

    PubMed

    Jayawickrama, Hiranya S; Amir, Lisa H; Pirotta, Marie V

    2010-03-23

    Many breastfeeding women seek medical care from general practitioners (GPs) for various health problems and GPs may consider prescribing medicines in these consultations. Prescribing medicines to a breastfeeding mother may lead to untimely cessation of breastfeeding or a breastfeeding mother may be denied medicines due to the possible risk to her infant, both of which may lead to unwanted consequences. Information on factors governing GPs' decision-making and their views in such situations is limited. GPs providing shared maternity care at the Royal Women's Hospital, Melbourne were surveyed using an anonymous postal survey to determine their knowledge, attitudes and practices on medicines and breastfeeding, in 2007/2008 (n = 640). Content analysis of their response to a question concerning decision-making about the use of medicine for a breastfeeding woman was conducted. A thematic network was constructed with basic, organising and global themes. 335 (52%) GPs responded to the survey, and 253 (76%) provided information on the last time they had to decide about the use of medicine for a breastfeeding woman. Conditions reported were mastitis (24%), other infections (24%) and depressive disorders (21%). The global theme that emerged was "complexity of managing risk in prescribing for breastfeeding women". The organising themes were: certainty around decision-making; uncertainty around decision-making; need for drug information to be available, consistent and reliable; joint decision-making; the vulnerable "third party" and infant feeding decision. Decision-making is a spectrum from a straightforward decision, such as treatment of mastitis, to a complicated one requiring multiple inputs and consideration. GPs use more information seeking and collaboration in decision-making when they perceive the problem to be more complex, for example, in postnatal depression. GPs feel that prescribing medicines for breastfeeding women is a contentious issue.
They manage the risk of prescribing by gathering information and assessing the possible effects on the breastfed infant. Without evidence-based information, they sometimes recommend cessation of breastfeeding unnecessarily.

  3. Operational GPS Imaging System at Multiple Scales for Earth Science and Monitoring of Geohazards

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Hammond, William; Kreemer, Corné

    2016-04-01

    Toward scientific targets that range from slow deep Earth processes to geohazard rapid response, our operational GPS data analysis system produces smooth, yet detailed maps of 3-dimensional land motion with respect to our Earth's center of mass at multiple spatio-temporal scales with various latencies. "GPS Imaging" is implemented operationally as a back-end processor to our GPS data processing facility, which uses JPL's GIPSY OASIS II software to produce positions from 14,000 GPS stations in ITRF every 5 minutes, with coordinate precision that gradually improves as latency increases upward from 1 hour to 2 weeks. Our GPS Imaging system then applies sophisticated signal processing and image filtering techniques to generate images of land motion covering our Earth's continents with high levels of robustness, accuracy, spatial resolution, and temporal resolution. Techniques employed by our GPS Imaging system include: (1) similarity transformation of polyhedron coordinates to ITRF with optional common-mode filtering to enhance local transient signal-to-noise ratio; (2) a comprehensive database of ~100,000 potential step events based on earthquake catalogs and equipment logs; (3) an automatic, robust, and accurate non-parametric estimator of station velocity that is insensitive to prevalent step discontinuities, outliers, seasonality, and heteroscedasticity; (4) a realistic estimator of velocity error bars based on subsampling statistics; (5) image processing to create a map of land motion that is based on median spatial filtering on the Delaunay triangulation, which is effective at despeckling the data while faithfully preserving edge features; (6) a velocity time series estimator to assist identification of transient behavior, such as unloading caused by drought; and (7) a method of integrating InSAR and GPS for fine-scale seamless imaging in ITRF.
Our system is being used to address three main scientific focus areas, including (1) deep Earth processes, (2) anthropogenic lithospheric processes, and (3) dynamic solid Earth events. Our prototype images show that the striking, first-order signal in North America and Europe is large scale uplift and subsidence from mantle flow driven by Glacial Isostatic Adjustment. At regional scales, the images reveal that anthropogenic lithospheric processes can dominate vertical land motion in extended regions, such as the rapid subsidence of California's Central Valley (CV) exacerbated by drought. The Earth's crust is observed to rebound elastically as evidenced by uplift of surrounding mountain ranges. Images also reveal natural uplift of mountains, mantle relaxation associated with earthquakes over the last century, and uplift at plate boundaries driven by interseismic locking. Using the high-rate positions at low latency, earthquake events can be rapidly imaged, modeled, and monitored for afterslip, potential aftershocks, and subsequent deeper relaxation. Thus we are imaging deep Earth processes with unprecedented scope, resolution and accuracy. In addition to supporting these scientific focus areas, the data products are also being used to support the global reference frame (ITRF), and show potential to enhance missions such as GRACE and NISAR by providing complementary information on Earth processes.
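The robust non-parametric velocity estimator in item (3) can be sketched in miniature: a hypothetical median-of-slopes estimator over sample pairs separated by about one year (in the spirit of the approach described above, not the authors' code). Annual differencing cancels seasonality, and the median resists outliers and step discontinuities.

```python
import numpy as np

def robust_velocity(t, y, pair_dt=1.0, tol=0.01):
    """Median of slopes from sample pairs separated by ~one year.
    Differencing over a full year cancels the seasonal cycle, and taking
    the median suppresses outliers and step offsets. t in years, y in mm."""
    t, y = np.asarray(t), np.asarray(y)
    slopes = []
    for i in range(len(t)):
        # Nearest sample to exactly one pair_dt later.
        j = int(np.argmin(np.abs(t - (t[i] + pair_dt))))
        if i != j and abs((t[j] - t[i]) - pair_dt) < tol:
            slopes.append((y[j] - y[i]) / (t[j] - t[i]))
    return float(np.median(slopes))

# Synthetic station: 2 mm/yr uplift + seasonal cycle + a 10 mm step midway.
t = np.arange(0, 5, 0.01)
y = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + 10.0 * (t > 2.5)
v = robust_velocity(t, y)
assert abs(v - 2.0) < 0.3   # recovers the trend despite step and seasonality
```

A least-squares fit to the same series would be biased by the step; the median-of-slopes estimate is not, which is the sense in which such estimators are "insensitive to prevalent step discontinuities."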

  4. NASA Tech Briefs, August 2006

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: Measurement and Controls Data Acquisition System; IMU/GPS System Provides Position and Attitude Data; Using Artificial Intelligence to Inform Pilots of Weather; Fast Lossless Compression of Multispectral-Image Data; Developing Signal-Pattern-Recognition Programs; Implementing Access to Data Distributed on Many Processors; Compact, Efficient Drive Circuit for a Piezoelectric Pump; Dual Common Planes for Time Multiplexing of Dual-Color QWIPs; MMIC Power Amplifier Puts Out 40 mW From 75 to 110 GHz; 2D/3D Visual Tracker for Rover Mast; Adding Hierarchical Objects to Relational Databases; General-Purpose XML-Based Information Management; Vaporizable Scaffolds for Fabricating Thermoelectric Modules; Producing Quantum Dots by Spray Pyrolysis; Mobile Robot for Exploring Cold Liquid/Solid Environments; System Would Acquire Core and Powder Samples of Rocks; Improved Fabrication of Lithium Films Having Micron Features; Manufacture of Regularly Shaped Sol-Gel Pellets; Regulating Glucose and pH, and Monitoring Oxygen in a Bioreactor; Satellite Multiangle Spectropolarimetric Imaging of Aerosols; Interferometric System for Measuring Thickness of Sea Ice; Microscale Regenerative Heat Exchanger; Protocols for Handling Messages Between Simulation Computers; Statistical Detection of Atypical Aircraft Flights; NASA's Aviation Safety and Modeling Project; Multimode-Guided-Wave Ultrasonic Scanning of Materials; Algorithms for Maneuvering Spacecraft Around Small Bodies; Improved Solar-Radiation-Pressure Models for GPS Satellites; Measuring Attitude of a Large, Flexible, Orbiting Structure.

  5. Student-Built High-Altitude Balloon Payload with Sensor Array and Flight Computer

    NASA Astrophysics Data System (ADS)

    Jeffery, Russell; Slaton, William

    A payload was designed for a high-altitude weather balloon. The flight controller consisted of a Raspberry Pi running a Python 3.4 program to collect and store data. The entire payload was designed to be versatile and easy to modify so that it could be repurposed for other projects: The code was written with the expectation that more sensors and other functionality would be added later, and a Raspberry Pi was chosen as the processor because of its versatility, its active support community, and its ability to interface easily with sensors, servos, and other such hardware. For this project, extensive use was made of the Python 3.4 libraries gps3, PiCamera, and RPi.GPIO to collect data from a GPS breakout board, a Raspberry Pi camera, a Geiger counter, two thermocouples, and a pressure sensor. The data collected clearly shows that pressure and temperature decrease as altitude increases, while β-radiation and γ-radiation increase as altitude increases. These trends in the data follow those predicted by theoretical calculations made for comparison. This payload was developed in such a way that future students could easily alter it to include additional sensors, biological experiments, and additional error monitoring and management. Arkansas Space Grant Consortium (ASGC) Workforce Development Grant.

  6. POD experiments using real and simulated time-sharing observations for GEO satellites in C-band transfer ranging system

    NASA Astrophysics Data System (ADS)

    Fen, Cao; XuHai, Yang; ZhiGang, Li; ChuGang, Feng

    2016-08-01

    The normal consecutive observing model in the Chinese Area Positioning System (CAPS) can only supply observations of one GEO satellite per day from one station. However, this cannot satisfy the project need for observing many GEO satellites in 1 day. In order to obtain observations of several GEO satellites in 1 day, as with GPS/GLONASS/Galileo/BeiDou, the time-sharing observing model for GEO satellites in CAPS needs research. The principle of the time-sharing observing model is illustrated, with subsequent Precise Orbit Determination (POD) experiments using simulated time-sharing observations in 2005 and real time-sharing observations in 2015. In time-sharing simulation experiments before 2014, observing 6 GEO satellites every 2 h achieves nearly the same orbit precision as the consecutive observing model. In POD experiments using the real time-sharing observations, POD precision for ZX12# and Yatai7# is about 3.234 m and 2.570 m, respectively, which indicates that the time-sharing observing model is appropriate for the CBTR system and can realize observing many GEO satellites in 1 day.

  7. Secure Embedded System Design Methodologies for Military Cryptographic Systems

    DTIC Science & Technology

    2016-03-31

    Fault-Tree Analysis (FTA); Built-In Self-Test (BIST) Introduction Secure access-control systems restrict operations to authorized users via methods...failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a...Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis

  8. Operating System Support for Shared Hardware Data Structures

    DTIC Science & Technology

    2013-01-31

    Carbon [73] uses hardware queues to improve fine-grained multitasking for Recognition, Mining, and Synthesis. Compared to software approaches...web transaction processing, data mining, and multimedia. Early work in database processors [114, 96, 79, 111] reduced the costs of relational database...assignment can be solved statically or dynamically. Static assignment determines offline which data structures are assigned to use HWDS resources and at

  9. Digital Collaboration Tools in the Military: Their Historical and Current Status

    DTIC Science & Technology

    2006-02-16

    Writer = online word processor that edits, stores and shares your documents from anywhere. Recent “Disruptive” Technologies: Cell...Webcasts, Wikis. Now Consider: Disruptive Technologies (1997) becomes Disruptive Innovations in 2003. Military Transformation: Drivers...from http://www.sims.berkeley.edu/how-much-info-2003 Schneiderman, R. (2005). Preparing for the Disruptive Technologies of Tomorrow. http

  10. Distributed Systems Technology Survey.

    DTIC Science & Technology

    1987-03-01

    and protocols. 2. Hardware Technology. Economic factors were a major reason for the proliferation of distributed systems. Processors, memory, and magnetic and optical...destined messages and perform the appropriate forwarding. There is general agreement that a lightweight process mechanism is essential to support commonly used...Xerox PARC environment [31]. Shared file servers, discussed below, are essential to the success of such a scheme. 11. Security. A distributed

  11. New Developments in Geodetic Data Management Systems for Fostering International Collaborations in the Geosciences

    NASA Astrophysics Data System (ADS)

    Meertens, Charles; Boler, Fran; Miller, M. Meghan

    2015-04-01

    UNAVCO community investigators are actively engaged in using space and terrestrial geodetic techniques to study earthquake processes, mantle properties, active magmatic systems, plate tectonics, plate boundary zone deformation, intraplate deformation, glacial isostatic adjustment, and hydrologic and atmospheric processes. The first GPS field projects were conducted over thirty years ago, and from the beginning these science investigations and the UNAVCO constituency as a whole have been international and collaborative in scope and participation. Collaborations were driven by the nature of the scientific problems being addressed, the capability of the technology to make precise measurements over global scales, and the inherent technical necessity of sharing GPS tracking data across national boundaries. The International GNSS Service (IGS) was formed twenty years ago as a voluntary federation to share GPS data from what are now hundreds of locations around the globe to facilitate realization of global reference frames, ties to regional surveys, precise orbits, and to establish and improve best practices in analysis and infrastructure. Recently, however, the number of regional stations has grown to the tens of thousands, often with data that are difficult to access. UNAVCO has been working to help remove technical barriers by providing open source tools such as the Geodetic Seamless Archive Centers software to facilitate cross-project data sharing and discovery and by developing Dataworks software to manage network data. Data web services also provide the framework for UNAVCO contributions to multi-technique, inter-disciplinary, and integrative activities such as CoopEUS, GEO Supersites, EarthScope, and EarthCube. Within the geodetic community, metadata standards and data exchange formats have been developed and evolved collaboratively through the efforts of global organizations such as the IGS.
A new generation of metadata and data exchange formats, as well as the software tools that utilize these formats and that support more efficient exchange of the highest quality data and metadata, are currently being developed and deployed through multiple international efforts.

  12. Ion propulsion cost effectivity

    NASA Technical Reports Server (NTRS)

    Zafran, S.; Biess, J. J.

    1978-01-01

    Ion propulsion modules employing 8-cm thrusters and 30-cm thrusters were studied for Multimission Modular Spacecraft (MMS) applications. Recurring and nonrecurring cost elements were generated for these modules. As a result, ion propulsion cost drivers were identified to be Shuttle charges, solar array, power processing, and thruster costs. Cost effective design approaches included short length module configurations, array power sharing, operation at reduced thruster input power, simplified power processing units, and power processor output switching. The MMS mission model employed indicated that nonrecurring costs have to be shared with other programs unless the mission model grows. Extended performance missions exhibited the greatest benefits when compared with monopropellant hydrazine propulsion.

  13. The complexity of managing COPD exacerbations: a grounded theory study of European general practice

    PubMed Central

    Risør, Mette Bech; Spigt, Mark; Iversen, R; Godycki-Cwirko, M; Francis, N; Altiner, A; Andreeva, E; Kung, K; Melbye, H

    2013-01-01

    Objectives To understand the concerns and challenges faced by general practitioners (GPs) and respiratory physicians about primary care management of acute exacerbations in patients with chronic obstructive pulmonary disease (COPD). Design 21 focus group discussions (FGDs) were performed in seven countries with a Grounded Theory approach. Each country performed three rounds of FGDs. Setting Primary and secondary care in Norway, Germany, Wales, Poland, Russia, The Netherlands, China (Hong Kong). Participants 142 GPs and respiratory physicians were chosen to include urban and rural GPs as well as hospital-based and out patient-clinic respiratory physicians. Results Management of acute COPD exacerbations is dealt with within a scope of concerns. These concerns range from ‘dealing with comorbidity’ through ‘having difficult patients’ to ‘confronting a hopeless disease’. The first concern displays medical uncertainty regarding diagnosis, medication and hospitalisation. These clinical processes become blurred by comorbidity and the social context of the patient. The second concern shows how patients receive the label ‘difficult’ exactly because they need complex attention, but even more because they are time consuming, do not take responsibility and are non-compliant. The third concern relates to the emotional reactions by the physicians when confronted with ‘a hopeless disease’ due to the fact that most of the patients do not improve and the treatment slows down the process at best. GPs and respiratory physicians balance these concerns with medical knowledge and practical, situational knowledge, trying to encompass the complexity of a medical condition. Conclusions Knowing the patient is essential when dealing with comorbidities as well as with difficult relations in the consultations on exacerbations. 
This study suggests that it is crucial to improve the collaboration between primary and secondary care, in terms of, for example, shared consultations and defined work tasks, which may enhance shared knowledge of patients, medical decision-making and improved management planning. PMID:24319274

  14. Medical communication and technology: a video-based process study of the use of decision aids in primary care consultations.

    PubMed

    Kaner, Eileen; Heaven, Ben; Rapley, Tim; Murtagh, Madeleine; Graham, Ruth; Thomson, Richard; May, Carl

    2007-01-10

    Much of the research on decision-making in health care has focused on consultation outcomes. Less is known about the process by which clinicians and patients come to a treatment decision. This study aimed to quantitatively describe the behaviour shown by doctors and patients during primary care consultations when three types of decision aids were used to promote treatment decision-making in a randomised controlled trial. A video-based study set in an efficacy trial which compared the use of paper-based guidelines (control) with two forms of computer-based decision aids (implicit and explicit versions of DARTS II). Treatment decision concerned warfarin anti-coagulation to reduce the risk of stroke in older patients with atrial fibrillation. Twenty-nine consultations were video-recorded. A ten-minute 'slice' of the consultation was sampled for detailed content analysis using existing interaction analysis protocols for verbal behaviour and ethological techniques for non-verbal behaviour. Median consultation times (quartiles) differed significantly depending on the technology used. Paper-based guidelines took 21 (19-26) minutes to work through compared to 31 (16-41) minutes for the implicit tool; and 44 (39-55) minutes for the explicit tool. In the ten minutes immediately preceding the decision point, GPs dominated the conversation, accounting for 64% (58-66%) of all utterances and this trend was similar across all three arms of the trial. Information-giving was the most frequent activity for both GPs and patients, although GPs did this at twice the rate compared to patients and at higher rates in consultations involving computerised decision aids. GPs' language was highly technically focused and just 7% of their conversation was socio-emotional in content; this was half the socio-emotional content shown by patients (15%).
    However, frequent head nodding and a close mirroring in the direction of eye-gaze suggested that both parties were active participants in the conversation. Irrespective of the arm of the trial, both patients' and GPs' behaviour showed that they were reciprocally engaged in these consultations. However, even in consultations aimed at promoting shared decision-making, GPs were verbally dominant, and they worked primarily as information providers for patients. In addition, computer-based decision aids significantly prolonged the consultations, particularly the later phases. These data suggest that decision aids may not lead to more 'sharing' in treatment decision-making and that, in their current form, they may take too long to negotiate for use in routine primary care.

  15. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
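
    The object-level and pixel-level parallelism described above can be sketched as two static partitioning rules. This is an illustrative decomposition under assumptions of my own (block distribution of objects, interleaved scanlines, both common choices for this style of renderer), not the paper's actual iPSC/860 implementation; the function names are hypothetical.

```python
def partition_objects(num_objects, num_procs):
    """Object-level parallelism: block-distribute object (e.g. triangle)
    indices across processors for geometry processing."""
    per = (num_objects + num_procs - 1) // num_procs   # ceiling division
    return [list(range(p * per, min((p + 1) * per, num_objects)))
            for p in range(num_procs)]

def partition_scanlines(height, num_procs):
    """Pixel-level parallelism: interleave image scanlines across
    processors so that locally dense regions of the scene are spread
    over all processors rather than landing on one of them."""
    return [list(range(p, height, num_procs)) for p in range(num_procs)]

# partition_scanlines(8, 4) -> [[0, 4], [1, 5], [2, 6], [3, 7]]
# partition_objects(10, 4)  -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

On a message-passing machine each processor would rasterize only its own scanlines, receiving transformed primitives from the owners of the objects; the communication between those two phases is what the abstract identifies as the scaling bottleneck.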

  16. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB memory and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  17. Performance prediction: A case study using a multi-ring KSR-1 machine

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhu, Jianping

    1995-01-01

    While computers with tens of thousands of processors have successfully delivered high performance power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted and the best algorithm-machine combination can be selected for a given application.

  18. Examining the spatial congruence between data obtained with a novel activity location questionnaire, continuous GPS tracking, and prompted recall surveys

    PubMed Central

    2013-01-01

    Background Place and health researchers are increasingly interested in integrating individuals’ mobility and the experience they have with multiple settings in their studies. In practice, however, few tools exist which allow for rapid and accurate gathering of detailed information on the geographic location of places where people regularly undertake activities. We describe the development and validation of a new activity location questionnaire which can be useful in accounting for multiple environmental influences in large population health investigations. Methods To develop the questionnaire, we relied on a literature review of similar data collection tools and on results of a pilot study wherein we explored content validity, test-retest reliability, and face validity. To estimate convergent validity, we used data from a study of users of a public bicycle share program conducted in Montreal, Canada in 2011. We examined the spatial congruence between questionnaire data and data from three other sources: 1) one-week GPS tracks; 2) activity locations extracted from the GPS tracks; and 3) a prompted recall survey of locations visited during the day. Proximity and convex hull measures were used to compare questionnaire-derived data and GPS and prompted recall survey data. Results In the sample, 75% of questionnaire-reported activity locations were located within 400 meters of an activity location recorded on the GPS track or through the prompted recall survey. Results from convex hull analyses suggested questionnaire activity locations were more concentrated in space than GPS or prompted-recall locations. Conclusions The new questionnaire has high convergent validity and can be used to accurately collect data on regular activity spaces in terms of locations regularly visited. The methods, measures, and findings presented provide new material to further study mobility in place and health research. PMID:24025119
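
    The 400-metre proximity measure reported above can be sketched as follows. This is a minimal illustration under my own assumptions (haversine great-circle distance on a spherical Earth, "hit" defined as being within the radius of at least one reference location); the helper names are hypothetical and the study's exact GIS procedure is not reproduced here.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points,
    assuming a spherical Earth of mean radius 6371 km."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def share_within(questionnaire_pts, reference_pts, radius_m=400.0):
    """Fraction of questionnaire-reported locations lying within
    radius_m of at least one reference (GPS track or prompted-recall)
    location."""
    hits = sum(1 for q in questionnaire_pts
               if any(haversine_m(*q, *g) <= radius_m for g in reference_pts))
    return hits / len(questionnaire_pts)
```

With the study's data, `share_within` would return roughly 0.75 for the 400 m threshold; the convex-hull comparison would need a separate computational-geometry step not shown here.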

  19. Caring for people living with, and beyond, cancer: an online survey of GPs in England.

    PubMed

    Walter, Fiona M; Usher-Smith, Juliet A; Yadlapalli, Suresh; Watson, Eila

    2015-11-01

    Increasing numbers of people are living with, and beyond, cancer. They are at risk of long-term morbidity and premature mortality due to the consequences of their disease and its treatment. Primary care can contribute to providing ongoing care. To determine the current practice and views of GPs in England regarding cancer survivorship care. Online survey of a sample of 500 GPs, stratified by NHS region in England. The survey included questions adapted from prior surveys assessing physician knowledge and attitudes regarding care of patients with cancer. In total, 500 GPs responded; approximately half reported often providing care to people living beyond cancer for treatment-related side effects (51%), psychological symptoms (65%), and lifestyle advice (55%). Only 29% felt very confident managing treatment-related side effects compared with 46% and 65% for psychological symptoms and lifestyle advice respectively. Half reported usually receiving cancer treatment summaries and survivorship care plans but most of the sample felt these would improve their ability to provide care (76%). Only 53% were convinced of the usefulness of cancer care reviews. Although most felt that primary and specialist care should share responsibility for managing bone (81%) and cardiovascular (77%) health consequences, fewer than half reported often taking previous history of cancer or cancer treatment into consideration when assessing bone health; only one-fifth did this in relation to cardiovascular health. Most responders were interested in receiving education to improve their knowledge and expertise. GPs have a potentially important role to play in caring for people following cancer treatment. This study has highlighted areas where further support and education are needed to enable GPs to optimise their role in cancer survivorship care. © British Journal of General Practice 2015.

  20. Realization of a single image haze removal system based on DaVinci DM6467T processor

    NASA Astrophysics Data System (ADS)

    Liu, Zhuang

    2014-10-01

    Video monitoring systems (VMS) have been extensively applied in domains of target recognition, traffic management, remote sensing, auto navigation and national defence. However, a VMS has a strong dependence on the weather; for instance, in foggy weather the quality of images received by the VMS is distinctly degraded and the effective range of the VMS is also decreased. In short, the VMS performs poorly in bad weather, so research on enhancing fog-degraded images has high theoretical and practical application value. A design scheme for a fog-degraded image enhancement system based on the TI DaVinci processor is presented in this paper. The main function of the system is to capture images from digital cameras and execute image enhancement processing to obtain a clear image. The processor used in this system is the dual-core TI DaVinci DM6467T (ARM@500 MHz + DSP@1 GHz). A MontaVista Linux operating system runs on the ARM subsystem, which handles I/O and application processing. The DSP handles signal processing, and the results are made available to the ARM subsystem in shared memory. The system benefits from the DaVinci processor in that, with lower power cost and smaller volume, it provides image processing capability equivalent to an x86 computer. The outcome shows that the system in this paper can process images at 25 frames per second at D1 resolution.

  1. Whose job is it anyway? Swedish general practitioners' perception of their responsibility for the patient's drug list.

    PubMed

    Rahmner, Pia Bastholm; Gustafsson, Lars L; Holmström, Inger; Rosenqvist, Urban; Tomson, Göran

    2010-01-01

    Information about the patient's current drug list is a prerequisite for safe drug prescribing. The aim of this study was to explore general practitioners' (GPs) understandings of who is responsible for the patient's drug list so that drugs prescribed by different physicians do not interact negatively or even cause harm. The study also sought to clarify how this responsibility was managed. We conducted a descriptive qualitative study among 20 Swedish physicians. We recruited the informants purposively and captured their view on responsibility by semistructured interviews. Data were analyzed using a phenomenographic approach. We found variation in understandings about who is responsible for the patient's drug list and, in particular, how the GPs use different strategies to manage this responsibility. Five categories emerged: (1) imposed responsibility, (2) responsible for own prescriptions, (3) responsible for all drugs, (4) different but shared responsibility, and (5) patient responsible for transferring drug information. The relation between categories is illustrated in an outcome space, which displays how the GPs reason in relation to managing drug lists. The understanding of the GP's responsibility for the patient's drug list varied, which may be a threat to safe patient care. We propose that GPs are made aware of variations in understanding responsibility so that health care quality can be improved.

  2. Collecting and registering sexual health information in the context of HIV risk in the electronic medical record of general practitioners: a qualitative exploration of the preference of general practitioners in urban communities in Flanders (Belgium).

    PubMed

    Vos, Jolien; Pype, Peter; Deblonde, Jessika; Van den Eynde, Sandra; Aelbrecht, Karolien; Deveugele, Myriam; Avonts, Dirk

    2016-07-01

    Background and aim Current health-care delivery requires increasingly proactive and inter-professional work. Therefore, collecting patient information and knowledge management is of paramount importance. General practitioners (GPs) are well placed to lead these evolving models of care delivery. However, it is unclear how they are handling these changes. To gain an insight into this matter, the HIV epidemic was chosen as a test case. Data were collected and analysed from 13 semi-structured interviews with GPs, working in urban communities in Flanders. Findings GPs use various types of patient information to estimate patients' risk of HIV. The way in which sexual health information is collected and registered, depends on the type of information under discussion. General patient information and medical history data are often automatically collected and registered. Proactively collecting sexual health information is uncommon. Moreover, the registration of the latter is not obvious, mostly owing to insufficient space in the electronic medical record (EMR). GPs seem willing to systematically collect and register sexual health information, in particular about HIV-risk factors. They expressed a need for guidance together with practical adjustments of the EMR to adequately capture and share this information.

  3. Suboptimal palliative sedation in primary care: an exploration.

    PubMed

    Pype, Peter; Teuwen, Inge; Mertens, Fien; Sercu, Marij; De Sutter, An

    2018-02-01

    Palliative sedation is a therapeutic option to control refractory symptoms in terminal palliative patients. This study aims at describing the occurrence and characteristics of suboptimal palliative sedations in primary care and at exploring the way general practitioners (GPs) experience suboptimal palliative sedation in their practice. We conducted a mixed methods study with a quantitative prospective survey in primary care and qualitative semi-structured interviews with GPs. The research team defined suboptimal palliative sedation as a time interval until deep sleep >1.5 h and/or >2 awakenings after the start of the unconsciousness. Descriptive statistics were calculated on the quantitative data. Thematic analysis was used to analyse interview transcripts. We registered 63 palliative sedations in 1181 home deaths; 27 forms were completed. Eleven palliative sedations were suboptimal: eight due to the long time span until deep sleep; three due to the number of unintended awakenings. GPs' interview analysis revealed two major themes: the shifting perception of failure and the burden of responsibility. Suboptimal palliative sedation occurs frequently in primary palliative care. Efficient communication towards family members is needed to prevent them from having unrealistic expectations and to prevent putting pressure on the GP to hasten the procedure. Sharing the burden of decision-making during the procedure with other health care professionals might diminish the heavy responsibility as perceived by GPs.

  4. Combating information overload: a six-month pilot evaluation of a knowledge management system in general practice.

    PubMed Central

    O'Brien, C; Cambouropoulos, P

    2000-01-01

    A six-month prospective study was conducted on the usefulness and usability of a representative electronic knowledge management tool, the WAX Active Library, for 19 general practitioners (GPs) evaluated using questionnaires and audit trail data. The number of pages accessed was highest in the final two months, when over half of the access trails were completed within 40 seconds. Most GPs rated the system as easy to learn, fast to use, and preferable to paper for providing information during consultations. Such tools could provide a medium for the activities of knowledge officers, help demand management, and promote sharing of information within primary care groups and across NHSnet or the Internet. PMID:10962792

  5. The control data "GIRAFFE" system for interactive graphic finite element analysis

    NASA Technical Reports Server (NTRS)

    Park, S.; Brandon, D. M., Jr.

    1975-01-01

    The Graphical Interface for Finite Elements (GIRAFFE) general purpose interactive graphics application package was described. This system may be used as a pre/post processor for structural analysis computer programs. It facilitates the operations of creating, editing, or reviewing all the structural input/output data on a graphics terminal in a time-sharing mode of operation. An application program for a simple three-dimensional plate problem was illustrated.

  6. Information Extraction Using Controlled English to Support Knowledge-Sharing and Decision-Making

    DTIC Science & Technology

    2012-06-01

    or language variants. CE-based information extraction will greatly facilitate the processes in the cognitive and social domains that enable forces...processor is run to turn the atomic CE into a more “stylistically felicitous” CE, using techniques such as: aggregating all information about an entity

  7. Towards Scalable 1024 Processor Shared Memory Systems

    NASA Technical Reports Server (NTRS)

    Ciotti, Robert B.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    Over the past 3 years, NASA Ames has been involved in a cooperative effort with SGI to develop the largest single system image systems available. Currently a 1024-processor Origin3000 is under development, with first boot expected later in the summer of 2001. This paper discusses some early results with a 512p Origin3000 system and some arcane IRIX system calls that can dramatically improve scaling performance.

  8. Parallel Programming Paradigms

    DTIC Science & Technology

    1987-07-01

    Distribution of this report is unlimited. ... 8416878 and by the Office of Naval Research Contracts No. N00014-86-K-0264 and No. N00014-85-K-0328. ... processors to fetch from the same memory cell (list head) and thus seems to favor a shared memory implementation [37]. In this dissertation, we

  9. Efficient Approximation Algorithms for Weighted $b$-Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.

    2016-01-01

    We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
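
    For illustration, here is the serial greedy half-approximation that the abstract says produces the same b-Matching as b-Suitor. This is the classical greedy baseline, not the concurrent proposal-based b-Suitor algorithm itself, and the function name and edge representation are assumptions of this sketch.

```python
def greedy_b_matching(edges, b):
    """Greedy half-approximation for maximum-weight b-Matching.

    edges -- iterable of (weight, u, v) tuples
    b     -- dict mapping each vertex v to its capacity b(v)
    Returns (total_weight, list_of_chosen_edges).
    """
    remaining = dict(b)          # spare capacity left at each vertex
    chosen, total = [], 0
    # Scan edges in order of decreasing weight; accept an edge whenever
    # both endpoints still have spare capacity, then charge one unit of
    # capacity at each endpoint.
    for w, u, v in sorted(edges, reverse=True):
        if u != v and remaining.get(u, 0) > 0 and remaining.get(v, 0) > 0:
            chosen.append((u, v))
            total += w
            remaining[u] -= 1
            remaining[v] -= 1
    return total, chosen
```

When b(v) = 1 for every vertex this reduces to the ordinary greedy matching heuristic; b-Suitor reaches the same output while exposing the concurrency the abstract measures on up to 240 threads.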

  10. MPF: A portable message passing facility for shared memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.; Mcguire, Patrick J.

    1987-01-01

    The design, implementation, and performance evaluation of a message passing facility (MPF) for shared memory multiprocessors are presented. The MPF is based on a message passing model conceptually similar to conversations. Participants (parallel processors) can enter or leave a conversation at any time. The message passing primitives for this model are implemented as a portable library of C function calls. The MPF is currently operational on a Sequent Balance 21000, and several parallel applications were developed and tested. Several simple benchmark programs are presented to establish interprocess communication performance for common patterns of interprocess communication. Finally, performance figures are presented for two parallel applications, linear systems solution, and iterative solution of partial differential equations.
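
    A toy analogue of the conversation model might look like the following. It uses Python threads and in-process queues rather than the original C library on the Sequent Balance, and the class and method names are illustrative, not MPF's actual primitives: participants may enter or leave at any time, and a message sent to the conversation is delivered to every current member's private mailbox.

```python
import queue
import threading

class Conversation:
    """Sketch of a conversation-style message-passing facility for a
    shared-memory machine: shared state guarded by a lock, one mailbox
    (queue) per participant."""

    def __init__(self):
        self._lock = threading.Lock()
        self._mailboxes = {}                 # participant name -> Queue

    def enter(self, name):
        """Join the conversation; later messages will be delivered here."""
        with self._lock:
            self._mailboxes[name] = queue.Queue()

    def leave(self, name):
        """Leave the conversation; pending messages are discarded."""
        with self._lock:
            self._mailboxes.pop(name, None)

    def send(self, sender, payload):
        """Deliver (sender, payload) to every other current member."""
        with self._lock:
            for name, box in self._mailboxes.items():
                if name != sender:
                    box.put((sender, payload))

    def receive(self, name, timeout=None):
        """Block until a message arrives in this participant's mailbox."""
        with self._lock:
            box = self._mailboxes[name]
        return box.get(timeout=timeout)
```

A late joiner only sees messages sent after it entered, mirroring the model's property that participants can come and go mid-conversation.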

  11. Reducing Interprocessor Dependence in Recoverable Distributed Shared Memory

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. Kent

    1994-01-01

    Checkpointing techniques in parallel systems use dependency tracking and/or message logging to ensure that a system rolls back to a consistent state. Traditional dependency tracking in distributed shared memory (DSM) systems is expensive because of high communication frequency. In this paper we show that, if designed correctly, a DSM system only needs to consider dependencies due to the transfer of blocks of data, resulting in reduced dependency tracking overhead and reduced potential for rollback propagation. We develop an ownership timestamp scheme to tolerate the loss of block state information and develop a passive server model of execution where interactions between processors are considered atomic. With our scheme, dependencies are significantly reduced compared to the traditional message-passing model.

  12. The management of acute adverse effects of breast cancer treatment in general practice: a video-vignette study.

    PubMed

    Jiwa, Moyez; Long, Anne; Shaw, Tim; Pagey, Georgina; Halkett, Georgia; Pillai, Vinita; Meng, Xingqiong

    2014-09-03

    There has been a focus recently on the use of the Internet and email to deliver education interventions to general practitioners (GPs). The treatment of breast cancer may include surgery, radiotherapy, chemotherapy, and/or hormone treatment. These treatments may have acute adverse effects. GPs need more information on the diagnosis and management of specific adverse effects encountered immediately after cancer treatment. The goal was to evaluate an Internet-based educational program developed for GPs to advise patients with acute adverse effects following breast cancer treatment. During phase 1, participants viewed 6 video vignettes of actor-patients reporting 1 of 6 acute symptoms following surgery and chemotherapy and/or radiotherapy treatment. GPs indicated their diagnosis and proposed management through an online survey program. They received feedback about each scenario in the form of a specialist clinic letter, as if the patient had been seen at a specialist clinic after they had attended the GP. This letter incorporated extracts from local guidelines on the management of the symptoms presented. This feedback was sent to the GPs electronically on the same survey platform. In phase 2, all GPs were invited to manage cases similar to those in phase 1. Their proposed management was compared to the guidelines. The McNemar test was used to compare data from phases 1 and 2, and logistic regression was used to explore the GP characteristics that were associated with inappropriate case management. A total of 50 GPs participated. Participants were younger and more likely to be female than other GPs in Australia. For 5 of 6 vignettes in phase 1, management was consistent with expert opinion in only a minority of cases (6%-46%). Participant demographic characteristics had a variable effect on different management decisions in phase 1. The variables modeled explained 15%-28% of the differences observed. Diagnosis and management improved significantly in phase 2, especially for the diarrhea, neutropenia, and seroma sample cases. The proportion of incorrect management responses was reduced to a minimum of 25.3%-49.3% in phase 2. There was evidence that providing expert feedback on specific cases had an impact on GPs' knowledge of how to appropriately manage acute treatment adverse effects. This educational intervention could be targeted to support the implementation of shared care during cancer treatment.

  13. On-board landmark navigation and attitude reference parallel processor system

    NASA Technical Reports Server (NTRS)

    Gilbert, L. E.; Mahajan, D. T.

    1978-01-01

    An approach to autonomous navigation and attitude reference for earth observing spacecraft is described along with the landmark identification technique based on a sequential similarity detection algorithm (SSDA). Laboratory experiments undertaken to determine if better than one pixel accuracy in registration can be achieved consistent with onboard processor timing and capacity constraints are included. The SSDA is implemented using a multi-microprocessor system including synchronization logic and chip library. The data is processed in parallel stages, effectively reducing the time to match the small known image within a larger image as seen by the onboard image system. Shared memory is incorporated in the system to help communicate intermediate results among microprocessors. The functions include finding mean values and summation of absolute differences over the image search area. The hardware is a low power, compact unit suitable to onboard application with the flexibility to provide for different parameters depending upon the environment.
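
    As a hedged illustration of the matching step, here is a scalar Python sketch of an SSDA-style search: the small known image is slid over the larger image, the sum of absolute differences (SAD) is accumulated at each offset, and accumulation is abandoned as soon as the partial sum exceeds the best score found so far, which is the "sequential" test that gives the algorithm its name. The system described above computes these sums in parallel stages across microprocessors; this sketch is sequential and purely illustrative.

```python
def ssda_match(image, template, threshold=float("inf")):
    """Locate `template` inside `image` (both lists of lists of grey
    levels) by minimising the sum of absolute differences, abandoning
    a candidate offset early once its partial sum exceeds the best
    score so far (or the fixed threshold)."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best_score, best_pos = threshold, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = 0
            for i in range(h):
                for j in range(w):
                    s += abs(image[r + i][c + j] - template[i][j])
                    if s >= best_score:      # early abandon
                        break
                else:
                    continue
                break
            if s < best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score

# A 2x2 patch embedded at row 1, column 2 of a 4x5 image.
img = [[0, 0, 0, 0, 0],
       [0, 0, 9, 8, 0],
       [0, 0, 7, 6, 0],
       [0, 0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 6]]
print(ssda_match(img, tpl))   # -> ((1, 2), 0)
```

    Sub-pixel (better than one pixel) registration, as targeted in the experiments, would refine this integer-offset result, for example by interpolating the SAD surface around the minimum.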

  14. Description and Simulation of a Fast Packet Switch Architecture for Communication Satellites

    NASA Technical Reports Server (NTRS)

    Quintana, Jorge A.; Lizanich, Paul J.

    1995-01-01

    The NASA Lewis Research Center has been developing the architecture for a multichannel communications signal processing satellite (MCSPS) as part of a flexible, low-cost meshed-VSAT (very small aperture terminal) network. The MCSPS architecture is based on a multifrequency, time-division-multiple-access (MF-TDMA) uplink and a time-division multiplex (TDM) downlink. There are eight uplink MF-TDMA beams, and eight downlink TDM beams, with eight downlink dwells per beam. The information-switching processor, which decodes, stores, and transmits each packet of user data to the appropriate downlink dwell onboard the satellite, has been fully described by using VHSIC (Very High Speed Integrated-Circuit) Hardware Description Language (VHDL). This VHDL code, which was developed in-house to simulate the information switching processor, showed that the architecture is both feasible and viable. This paper describes a shared-memory-per-beam architecture, its VHDL implementation, and the simulation efforts.

  15. Computer-aided design/computer-aided manufacturing skull base drill.

    PubMed

    Couldwell, William T; MacDonald, Joel D; Thomas, Charles L; Hansen, Bradley C; Lapalikar, Aniruddha; Thakkar, Bharat; Balaji, Alagar K

    2017-05-01

    The authors have developed a simple device for computer-aided design/computer-aided manufacturing (CAD-CAM) that uses an image-guided system to define a cutting tool path that is shared with a surgical machining system for drilling bone. Information from 2D images (obtained via CT and MRI) is transmitted to a processor that produces a 3D image. The processor generates code defining an optimized cutting tool path, which is sent to a surgical machining system that can drill the desired portion of bone. This tool has applications for bone removal in both cranial and spine neurosurgical approaches. Such applications have the potential to reduce surgical time and associated complications such as infection or blood loss. The device enables rapid removal of bone within 1 mm of vital structures. The validity of such a machining tool is exemplified in the rapid (< 3 minutes machining time) and accurate removal of bone for transtemporal (for example, translabyrinthine) approaches.

  16. Efficient quantum walk on a quantum processor

    PubMed Central

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471

  17. Automation of Data Traffic Control on DSM Architecture

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2001-01-01

    The design of distributed shared memory (DSM) computers liberates users from the duty to distribute data across processors and allows for the incremental development of parallel programs using, for example, OpenMP or Java threads. DSM architecture greatly simplifies the development of parallel programs having good performance on a few processors. However, achieving good program scalability on DSM computers requires that the user understand the data flow in the application and use various techniques to avoid data traffic congestion. In this paper we discuss a number of such techniques, including data blocking, data placement, data transposition, and page size control, and evaluate their efficiency on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks. We also present a tool which automates the detection of constructs causing data congestion in Fortran array-oriented codes and advises the user on code transformations for improving data traffic in the application.
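
    Of the techniques listed, data blocking is the easiest to sketch. The toy Python below transposes a matrix tile by tile; on a DSM machine the point of visiting the matrix in small blocks is that each tile stays resident in a single processor's cache (and memory page) while it is both read and written, whereas in pure Python the benefit is only illustrative.

```python
def blocked_transpose(a, block=2):
    """Transpose the square matrix `a` (list of lists) tile by tile.
    Visiting the matrix in block-sized tiles is the data-blocking
    technique: each tile is touched once, with good locality, instead
    of striding across whole rows and columns."""
    n = len(a)
    out = [[0] * n for _ in range(n)]
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            # Transpose one tile; min() handles edge tiles.
            for i in range(bi, min(bi + block, n)):
                for j in range(bj, min(bj + block, n)):
                    out[j][i] = a[i][j]
    return out

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(blocked_transpose(m))   # -> [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

    The block size is a tuning parameter: it would be chosen so that a tile of each array fits in cache, which is exactly the kind of machine-specific decision the paper's tool is meant to advise on.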

  18. Parallel programming with Easy Java Simulations

    NASA Astrophysics Data System (ADS)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  19. An Evaluation of Architectural Platforms for Parallel Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1996-01-01

    We study the computational, communication, and scalability characteristics of a computational fluid dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architecture platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  20. Parallelizing Navier-Stokes Computations on a Variety of Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1997-01-01

    We study the computational, communication, and scalability characteristics of a Computational Fluid Dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architectural platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  1. Molecular characterization of the full-length L and M RNAs of Tomato yellow ring virus, a member of the genus Tospovirus.

    PubMed

    Chen, Tsung-Chi; Li, Ju-Ting; Fan, Ya-Shu; Yeh, Yi-Chun; Yeh, Shyi-Dong; Kormelink, Richard

    2013-06-01

    Tomato yellow ring virus (TYRV), first isolated from tomato in Iran, was classified as a non-approved species of the genus Tospovirus based on the characterization of its genomic S RNA. In the current study, the complete sequences of the genomic L and M RNAs of TYRV were determined and analyzed. The L RNA has 8,877 nucleotides (nt) and codes in the viral complementary (vc) strand for the putative RNA-dependent RNA polymerase (RdRp) of 2,873 amino acids (aa) (331 kDa). The RdRp of TYRV shares the highest aa sequence identity (88.7 %) with that of Iris yellow spot virus (IYSV), and contains conserved motifs shared with those of the animal-infecting bunyaviruses. The M RNA contains 4,786 nt and codes in ambisense arrangement for the NSm protein of 308 aa (34.5 kDa) in viral sense, and the Gn/Gc glycoprotein precursor (GP) of 1,310 aa (128 kDa) in vc-sense. Phylogenetic analyses indicated that TYRV is closely clustered with IYSV and Polygonum ringspot virus (PolRSV). The NSm and GP of TYRV share the highest aa sequence identity with those of IYSV and PolRSV (89.9 and 80.2-86.5 %, respectively). Moreover, the GPs of TYRV, IYSV, and PolRSV share highly similar characteristics, among which an identical deduced N-terminal protease cleavage site that is distinct from all tospoviral GPs analyzed thus far. Taken together, the elucidation of the complete genome sequence and biological features of TYRV support a close ancestral relationship with IYSV and PolRSV.

  2. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation was measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop for communication between the processors and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.
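
    As a loose, stdlib-only analogue of this communication style, the Python sketch below passes one simulation variable between a "writer" and a "reader" through a named shared-memory block. The real system used a dedicated reflective shared-memory network between multi-vendor computers, not operating-system shared memory, so this is only an illustration of the programming model.

```python
import struct
from multiprocessing import shared_memory

# "Writer" partition: create the shared block and publish one
# simulation state variable (here, a hypothetical pressure in kPa).
shm_w = shared_memory.SharedMemory(create=True, size=8)
shm_w.buf[:8] = struct.pack("d", 101.325)

# "Reader" partition: attach to the same block by name, as a second
# partition of the distributed simulation would, and read it back.
shm_r = shared_memory.SharedMemory(name=shm_w.name)
value = struct.unpack("d", bytes(shm_r.buf[:8]))[0]
print(value)   # -> 101.325

shm_r.close()
shm_w.close()
shm_w.unlink()
```

    The appeal noted in the abstract carries over: communication is a plain memory write on one side and a plain memory read on the other, with no explicit message construction between the partitions.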

  3. The impact of socio-technical communication styles on the diversity and innovation potential of global science collaboratories

    DOE PAGES

    Ozmen, Ozgur; Yilmaz, Levent; Smith, Jeffrey

    2016-02-09

    Emerging cyber-infrastructure tools are enabling scientists to transparently co-develop, share, and communicate in real time about diverse forms of knowledge artifacts. In these environments, communication preferences of scientists are posited as an important factor affecting innovation capacity and robustness of social and knowledge network structures. Scientific knowledge creation in such communities is called global participatory science (GPS). Recently, using agent-based modeling and collective action theory as a basis, a complex adaptive social communication network model (CollectiveInnoSim) was implemented. This work leverages CollectiveInnoSim by implementing communication preferences of scientists. Social network metrics and knowledge production patterns are used as proxy metrics to infer the innovation potential of emergent knowledge and collaboration networks. The objective is to present the underlying communication dynamics of GPS in the form of a computational model and delineate the impacts of various communication preferences of scientists on the innovation potential of the collaboration network. Ultimately, the insight gained can help policy-makers to design GPS environments and promote innovation.

  5. Badgers prefer cattle pasture but avoid cattle: implications for bovine tuberculosis control.

    PubMed

    Woodroffe, Rosie; Donnelly, Christl A; Ham, Cally; Jackson, Seth Y B; Moyes, Kelly; Chapman, Kayna; Stratton, Naomi G; Cartwright, Samantha J

    2016-10-01

    Effective management of infectious disease relies upon understanding mechanisms of pathogen transmission. In particular, while models of disease dynamics usually assume transmission through direct contact, transmission through environmental contamination can cause different dynamics. We used Global Positioning System (GPS) collars and proximity-sensing contact-collars to explore opportunities for transmission of Mycobacterium bovis [causal agent of bovine tuberculosis] between cattle and badgers (Meles meles). Cattle pasture was badgers' most preferred habitat. Nevertheless, although collared cattle spent 2914 collar-nights in the home ranges of contact-collared badgers, and 5380 collar-nights in the home ranges of GPS-collared badgers, we detected no direct contacts between the two species. Simultaneous GPS-tracking revealed that badgers preferred land > 50 m from cattle. Very infrequent direct contact indicates that badger-to-cattle and cattle-to-badger M. bovis transmission may typically occur through contamination of the two species' shared environment. This information should help to inform tuberculosis control by guiding both modelling and farm management. © 2016 John Wiley & Sons Ltd/CNRS.

  6. What makes a good GP? An empirical perspective on virtue in general practice

    PubMed Central

    Braunack-Mayer, A

    2005-01-01

    This paper takes a virtuist approach to medical ethics to explore, from an empirical angle, ideas about settled ways of living a good life. Qualitative research methods were used to analyse the ways in which a group of 15 general practitioners (GPs) articulated notions of good doctoring and the virtues in their work. I argue that the GPs whose talk is analysed here defined good general practice in terms of the ideals of accessibility, comprehensiveness, and continuity. They regarded these ideals as significant both for the way they dealt with morally problematic situations and for how they conducted their professional lives more generally. In addition, I argue that the GPs who articulated these ideals most clearly were able to do so, in part, because they shared the experience of working in rural areas. This experience helped them to develop an understanding of the nature of general practice that their urban colleagues were less able to draw on. In that sense, the structural and organisational framework of general practice in rural areas provided the context for their understanding of ideals in general practice. PMID:15681671

  7. Enabling Spacecraft Formation Flying in Any Earth Orbit Through Spaceborne GPS and Enhanced Autonomy Technologies

    NASA Technical Reports Server (NTRS)

    Bauer, F. H.; Bristow, J. O.; Carpenter, J. R.; Garrison, J. L.; Hartman, K. R.; Lee, T.; Long, A. C.; Kelbel, D.; Lu, V.; How, J. P.; et al.

    2000-01-01

    Formation flying is quickly revolutionizing the way the space community conducts autonomous science missions around the Earth and in space. This technological revolution will provide new, innovative ways for this community to gather scientific information, share this information between space vehicles and the ground, and expedite the human exploration of space. Once fully matured, this technology will result in swarms of space vehicles flying as a virtual platform and gathering significantly more and better science data than is possible today. Formation flying will be enabled through the development and deployment of spaceborne differential Global Positioning System (GPS) technology and through innovative spacecraft autonomy techniques, This paper provides an overview of the current status of NASA/DoD/Industry/University partnership to bring formation flying technology to the forefront as quickly as possible, the hurdles that need to be overcome to achieve the formation flying vision, and the team's approach to transfer this technology to space. It will also describe some of the formation flying testbeds, such as Orion, that are being developed to demonstrate and validate these innovative GPS sensing and formation control technologies.

  8. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are at or out-pacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS) and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low cost and power efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  9. Identification of Air Force Emerging Technologies and Militarily Significant Emerging Technologies.

    DTIC Science & Technology

    1985-08-31

    taking an integrated approach to avionics and EU, the various sensors and receivers on the aircraft can time-share the use of common signal processors...functions mentioned above has required, in addition to a separate sensor or antenna, a totally independent electronics suite. Many of the advanced...Classification A3. IMAGING SENSOR AUTOPROCESSOR The Air Force has contracted with Rockwell International and Honeywell in this work. Rockwell’s work is

  10. Cooperative system and method using mobile robots for testing a cooperative search controller

    DOEpatents

    Byrne, Raymond H.; Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.

    2002-01-01

    A test system for testing a controller provides a way to use large numbers of miniature mobile robots to test a cooperative search controller in a test area, where each mobile robot has a sensor, a communication device, a processor, and a memory. A method of using a test system provides a way for testing a cooperative search controller using multiple robots sharing information and communicating over a communication network.

  11. Comparison of the gene expression profiles between gallstones and gallbladder polyps.

    PubMed

    Li, Quanfu; Ge, Xin; Xu, Xu; Zhong, Yonggang; Qie, Zengwang

    2014-01-01

    Gallstones and gallbladder polyps (GPs) are two major types of gallbladder diseases that share multiple common symptoms. However, their pathological mechanism remains largely unknown. The aim of our study is to identify gallstone- and GP-related genes and gain an insight into the underlying genetic basis of these diseases. We enrolled 7 patients with gallstones and 2 patients with GPs for RNA-Seq, and we conducted functional enrichment analysis and protein-protein interaction (PPI) network analysis for the identified differentially expressed genes (DEGs). RNA-Seq produced 41.7 million pairs in gallstones and 32.1 million pairs in GPs. A total of 147 DEGs was identified between gallstones and GPs. We found GO terms for molecular functions significantly enriched in antigen binding (GO:0003823, P=5.9E-11), while for biological processes, the enriched GO terms were immune response (GO:0006955, P=2.6E-15), and for cellular component, the enriched GO terms were extracellular region (GO:0005576, P=2.7E-15). To further evaluate the biological significance of the DEGs, we also performed KEGG pathway enrichment analysis. The most significant pathway in our KEGG analysis was cytokine-cytokine receptor interaction (P=7.5E-06). PPI network analysis indicated that the significant hub proteins included S100A9 (S100 calcium binding protein A9, Degree=94) and CR2 (complement component receptor 2, Degree=8). The present study suggests some promising genes and may provide a clue to the roles these genes play in the development of gallstones and GPs.

  12. General practitioners perceptions on advance care planning for patients living with dementia.

    PubMed

    Brazil, Kevin; Carter, Gillian; Galway, Karen; Watson, Max; van der Steen, Jenny T

    2015-04-23

    Advance care planning (ACP) facilitates communication and understanding of preferences; nevertheless, the use of ACP in primary care is low. The uncertain course of dementia and the inability to communicate with the patient living with dementia are significant challenges for GPs in initiating discussions on goals of care. A cross-sectional survey, using a purposive, cluster sample of GPs across Northern Ireland with registered dementia patients, was used. GPs at selected practices received the survey instrument, and up to four mail contacts were implemented. One hundred and thirty-three GPs (40.6%) participated in the survey, representing 60.9% of surveyed practices. While most respondents regarded dementia as a terminal disease (96.2%), only 37.6% felt that palliative care applied equally from the time of diagnosis to severe dementia. While most respondents thought that early discussions would facilitate decision-making during advanced dementia (61%), respondents were divided on whether ACP should be initiated at the time of diagnosis. While most respondents felt that GPs should take the initiative to introduce and encourage ACP, most survey participants acknowledged the need for improved knowledge to involve families in caring for patients with dementia at the end of life and for a standard format for ACP documentation. Optimal timing of ACP discussions should be determined by the readiness of the patient and family carer to face end of life. ACP discussions can be enhanced by educational strategies directed towards the patient and family carer that enable shared decision-making with their GP when considering options in future care.

  13. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep large numbers of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  14. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared memory machines and another for distributed memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
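
    The reproducibility device mentioned above (advanced seeds) can be illustrated with a toy Python model: every neutron history draws from its own generator, seeded deterministically from the history index, so the generation tally is bit-for-bit identical no matter how the histories are divided among workers. The "physics" below is a placeholder, and the partitioning is a round-robin stand-in; this is a sketch of the idea, not KENO-Va's actual scheme.

```python
import random

def track_generation(n_histories, n_workers):
    """Tally one generation of `n_histories` toy neutron histories,
    split round-robin across `n_workers` workers. Each history has its
    own deterministically seeded generator (the 'advanced seed'), so
    the integer tally does not depend on n_workers."""
    tally = 0
    for w in range(n_workers):                    # each "processor"
        for h in range(w, n_histories, n_workers):
            rng = random.Random(12345 + h)        # per-history seed
            tally += rng.randrange(1000)          # placeholder physics
    return tally

# The tally is identical for any worker count:
print(track_generation(1000, 1) == track_generation(1000, 4))   # -> True
```

    Without per-history seeding, each worker would consume an unpredictable slice of a single random stream, and the results would change with the number of processors, which is exactly what the advanced-seed scheme avoids.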

  15. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
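
    The image correlation kernel being benchmarked can be sketched in miniature (a toy direct implementation; the DSP-side advantage in the paper comes from evaluating the same correlation via the DFT instead of this nested loop):

```python
def correlate2d_valid(image, tmpl):
    """Direct 'valid-mode' 2-D cross-correlation of a template against an image."""
    ih, iw = len(image), len(image[0])
    th, tw = len(tmpl), len(tmpl[0])
    out = []
    for y in range(ih - th + 1):
        row = []
        for x in range(iw - tw + 1):
            # correlation score: elementwise product of the template with the patch
            row.append(sum(image[y + i][x + j] * tmpl[i][j]
                           for i in range(th) for j in range(tw)))
        out.append(row)
    return out

def best_match(image, tmpl):
    """Location (row, col) of the highest correlation score."""
    scores = correlate2d_valid(image, tmpl)
    return max(((v, (y, x)) for y, r in enumerate(scores)
                for x, v in enumerate(r)))[1]

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
tmpl = [[9, 8],
        [7, 9]]
print(best_match(image, tmpl))  # (1, 1)
```

    The direct form costs O(ih·iw·th·tw) multiplies; computing the same scores through the DFT reduces this dramatically for large templates, which is why offloading to the DSP pays off.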

  16. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system comprises a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  17. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since NAS runs both message-passing (MPI) codes and shared-memory, compiler-directive codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN 3b2, a beta version that is in the process of being released. NPB 2.3 and PBN 3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  18. Benchmark tests on the digital equipment corporation Alpha AXP 21164-based AlphaServer 8400, including a comparison of optimized vector and superscalar processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wasserman, H.J.

    1996-02-01

    The second generation of the Digital Equipment Corp. (DEC) DECchip Alpha AXP microprocessor is referred to as the 21164. From the viewpoint of numerically-intensive computing, the primary difference between it and its predecessor, the 21064, is that the 21164 has twice the multiply/add throughput per clock period (CP): a maximum of two floating-point operations (FLOPS) per CP vs. one for the 21064. The AlphaServer 8400 is a shared-memory multiprocessor server system that can accommodate up to 12 CPUs and up to 14 GB of memory. In this report we compare single-processor performance of the 8400 system with that of the International Business Machines Corp. (IBM) RISC System/6000 POWER-2 microprocessor running at 66 MHz, the Silicon Graphics, Inc. (SGI) MIPS R8000 microprocessor running at 75 MHz, and the Cray Research, Inc. CRAY J90. The performance comparison is based on a set of Fortran benchmark codes that represent a portion of the Los Alamos National Laboratory supercomputer workload. The advantage of using these codes is that they span a wide range of computational characteristics, such as vectorizability, problem size, and memory access pattern. The primary disadvantage of using them is that detailed, quantitative analysis of performance behavior of all codes on all machines is difficult. One important addition to the benchmark set appears for the first time in this report. Whereas the older version was written for a vector processor, the newer version is more optimized for microprocessor architectures. Therefore, we have, for the first time, an opportunity to measure performance on a single application using implementations that expose the respective strengths of vector and superscalar architecture. All results in this report are from single processors. A subsequent article will explore shared-memory multiprocessing performance of the 8400 system.
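
    The throughput difference stated above is simple arithmetic: peak floating-point rate is FLOPs per clock period times clock rate. The clock rates below are illustrative only, not figures taken from the report:

```python
def peak_mflops(clock_mhz, flops_per_cp):
    """Peak floating-point rate: operations per clock period times clock rate."""
    return clock_mhz * flops_per_cp

# Illustrative clock rates (assumed, not from the report):
print(peak_mflops(300, 2))  # 600  -- 21164-style: two FLOPs per CP
print(peak_mflops(300, 1))  # 300  -- 21064-style at the same clock
```

    Doubling FLOPs per CP doubles the peak; whether a code approaches that peak depends on the vectorizability and memory-access characteristics the benchmark set is designed to probe.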

  19. Autonomous Flight Safety System Road Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.; Zoemer, Roger D.; Forney, Chris S.

    2005-01-01

    On February 3, 2005, Kennedy Space Center (KSC) conducted the first Autonomous Flight Safety System (AFSS) test on a moving vehicle -- a van driven around the KSC industrial area. A subset of the Phase III design was used, consisting of a single computer, GPS receiver, and GPS antenna. The description and results of this road test are described in this report. AFSS is a joint KSC and Wallops Flight Facility project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations.

  20. Autonomous navigation using lunar beacons

    NASA Technical Reports Server (NTRS)

    Khatib, A. R.; Ellis, J.; French, J.; Null, G.; Yunck, T.; Wu, S.

    1983-01-01

    The concept of using lunar beacon signal transmission for on-board navigation for earth satellites and near-earth spacecraft is described. The system would require powerful transmitters on the earth-side of the moon's surface and black box receivers with antennae and microprocessors placed on board spacecraft for autonomous navigation. Spacecraft navigation requires three position and three velocity elements to establish location coordinates. Two beacons could be soft-landed on the lunar surface at the limits of allowable separation and each would transmit a wide-beam signal with cones reaching GEO heights and be strong enough to be received by small antennae in near-earth orbit. The black box processor would perform on-board computation with one-way Doppler/range data and dynamical models. Alternatively, GEO satellites such as the GPS or TDRSS spacecraft can be used with interferometric techniques to provide decimeter-level accuracy for aircraft navigation.

  1. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. This entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  2. Feasibility of through-time spiral generalized autocalibrating partial parallel acquisition for low latency accelerated real-time MRI of speech.

    PubMed

    Lingala, Sajan Goud; Zhu, Yinghua; Lim, Yongwan; Toutios, Asterios; Ji, Yunhua; Lo, Wei-Ching; Seiberlich, Nicole; Narayanan, Shrikanth; Nayak, Krishna S

    2017-12-01

    To evaluate the feasibility of through-time spiral generalized autocalibrating partial parallel acquisition (GRAPPA) for low-latency accelerated real-time MRI of speech. Through-time spiral GRAPPA (spiral GRAPPA), a fast linear reconstruction method, is applied to spiral (k-t) data acquired from an eight-channel custom upper-airway coil. Fully sampled data were retrospectively down-sampled to evaluate spiral GRAPPA at undersampling factors R = 2 to 6. Pseudo-golden-angle spiral acquisitions were used for prospective studies. Three subjects were imaged while performing a range of speech tasks that involved rapid articulator movements, including fluent speech and beat-boxing. Spiral GRAPPA was compared with view sharing, and a parallel imaging and compressed sensing (PI-CS) method. Spiral GRAPPA captured spatiotemporal dynamics of vocal tract articulators at undersampling factors ≤4. Spiral GRAPPA at 18 ms/frame and 2.4 mm²/pixel outperformed view sharing in depicting rapidly moving articulators. Spiral GRAPPA and PI-CS provided equivalent temporal fidelity. Reconstruction latency per frame was 14 ms for view sharing and 116 ms for spiral GRAPPA, using a single processor. Spiral GRAPPA kept up with the MRI data rate of 18 ms/frame with eight processors. PI-CS required 17 minutes to reconstruct 5 seconds of dynamic data. Spiral GRAPPA enabled 4-fold accelerated real-time MRI of speech with a low reconstruction latency. This approach is applicable to a wide range of speech RT-MRI experiments that benefit from real-time feedback while visualizing rapid articulator movement. Magn Reson Med 78:2275-2282, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
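
    The latency figures above admit a simple throughput check (our sketch, not the authors' code): a pool of n processors reconstructing frames in parallel keeps up with acquisition whenever the per-frame latency divided by n is at most the acquisition interval.

```python
def keeps_up(latency_ms_per_frame, n_processors, frame_interval_ms):
    """A pool of n processors sustains one finished frame per latency/n ms."""
    return latency_ms_per_frame / n_processors <= frame_interval_ms

# The abstract's numbers: 116 ms/frame reconstruction, 18 ms/frame acquisition.
print(keeps_up(116, 1, 18))  # False: a single processor falls behind
print(keeps_up(116, 8, 18))  # True: 116/8 = 14.5 ms/frame <= 18 ms/frame
```

    Per-frame latency stays at 116 ms either way; parallelism buys throughput, not latency, which is why the authors report both figures separately.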

  3. Software-Controlled Caches in the VMP Multiprocessor

    DTIC Science & Technology

    1986-03-01

    The programming system level is tuned for the VMP design, and we are interested in exploring how far software support can go: cache misses are handled in software, analogously to the handling of virtual-memory page faults, so management of the shared program state is familiar, with hardware support to ensure good behavior. Each cache miss results in bus traffic; Table 2 provides the bus cost for the "average" cache miss.

  4. Thread Migration in the Presence of Pointers

    NASA Technical Reports Server (NTRS)

    Cronk, David; Haines, Matthew; Mehrotra, Piyush

    1996-01-01

    Dynamic migration of lightweight threads supports both data locality and load balancing. However, migrating threads that contain pointers referencing data in both the stack and heap remains an open problem. In this paper we describe a technique by which threads with pointers referencing both stack and non-shared heap data can be migrated such that the pointers remain valid after migration. As a result, threads containing pointers can now be migrated between processors in a homogeneous distributed memory environment.
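
    One classic way to keep pointers valid across migration, sketched below as an assumption rather than the paper's exact mechanism, is to represent intra-thread pointers as (region, offset) pairs that are rebased against the destination node's addresses after the stack and heap blocks move:

```python
class MigratablePointer:
    """A pointer stored as a region name plus an offset, not a raw address."""
    def __init__(self, region, offset):
        self.region, self.offset = region, offset

    def resolve(self, bases):
        # A raw address is only materialized against the current node's bases,
        # so the pointer survives relocation of the stack/heap blocks.
        return bases[self.region] + self.offset

bases_old = {"stack": 0x1000, "heap": 0x8000}
bases_new = {"stack": 0x5000, "heap": 0xA000}  # after migration to another node

p = MigratablePointer("stack", 0x24)
print(hex(p.resolve(bases_old)))  # 0x1024
print(hex(p.resolve(bases_new)))  # 0x5024 -- still valid after migration
```

    In a real runtime the rebasing is done once at migration time by rewriting raw addresses, rather than on every dereference; the offset representation just makes that rewrite well-defined.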

  5. The Mark III Hypercube-Ensemble Computers

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe

    1988-01-01

    Mark III Hypercube concept applied in development of series of increasingly powerful computers. Processor of each node of Mark III Hypercube ensemble is specialized computer containing three subprocessors and shared main memory. Solves problem quickly by simultaneously processing part of problem at each such node and passing combined results to host computer. Disciplines benefitting from speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.
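
    The hypercube topology underlying the Mark III ensemble has a compact addressing rule (a general property of hypercubes, not specific to this machine): node i is wired to node i XOR 2^k along each dimension k, so a d-cube of 2^d nodes has d links per node.

```python
def hypercube_neighbors(node, dimension):
    """In a d-dimensional hypercube, node i links to i XOR 2**k for each bit k."""
    return [node ^ (1 << k) for k in range(dimension)]

# A 3-cube (8 nodes): node 5 (binary 101) neighbors 100, 111, 001:
print(hypercube_neighbors(5, 3))  # [4, 7, 1]
```

    Results from subproblems are combined along these links, so any node reaches any other in at most d hops (the number of differing address bits).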

  6. The force on the flex: Global parallelism and portability

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1986-01-01

    A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared-memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000-line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate: high enough to mask detailed architectural differences between multiprocessors but low enough to give the user control over performance. The system is being ported to a medium-scale multiprocessor, the Flex/32, which is a 20-processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were modified to directly produce the system calls which form the basis for ConCurrent C. The implementation of the Fortran-based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.

  7. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
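
    The shared-memory pattern described, one read-only copy of the model shared by all threads with thread-private partial results reduced at the end, can be sketched as follows (an illustrative toy, not MCBEND code; the kernel and data are invented, mirroring an `omp parallel for` with a `reduction` clause):

```python
from concurrent.futures import ThreadPoolExecutor

MODEL = {"absorption": 0.3}  # shared, read-only model data (one copy in memory)

def tally_chunk(histories):
    partial = 0.0                           # thread-private accumulator
    for h in histories:
        partial += MODEL["absorption"] * h  # stand-in for a transport kernel
    return partial

chunks = [range(0, 500), range(500, 1000)]
with ThreadPoolExecutor(max_workers=2) as pool:
    total = sum(pool.map(tally_chunk, chunks))  # final reduction step
print(round(total, 6))  # 149850.0 -- matches a serial sum over range(1000)
```

    The memory win over the process-parallel scheme is that all threads read the one MODEL copy, instead of each process holding its own instance of a large model.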

  8. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  9. Dynamic programming on a shared-memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Edmonds, Phil; Chu, Eleanor; George, Alan

    1993-01-01

    Three new algorithms for solving dynamic programming problems on a shared-memory parallel computer are described. All three algorithms attempt to balance work load, while keeping synchronization cost low. In particular, for a multiprocessor having p processors, an analysis of the best algorithm shows that the arithmetic cost is O(n³/6p) and that the synchronization cost is O(|log_C n|) if p ≪ n, where C = (2p-1)/(2p+1) and n is the size of the problem. The low synchronization cost is important for machines where synchronization is expensive. Analysis and experiments show that the best algorithm is effective in balancing the work load and producing high efficiency.
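
    The quoted cost terms can be evaluated directly to see why synchronization is negligible when p ≪ n (treating the big-O expressions as exact counts purely for illustration):

```python
import math

def dp_costs(n, p):
    """Leading-order cost terms quoted in the abstract (constants illustrative)."""
    arithmetic = n**3 / (6 * p)             # O(n^3 / 6p) operations per processor
    C = (2 * p - 1) / (2 * p + 1)           # C < 1, so log_C n is negative
    synchronization = abs(math.log(n, C))   # O(|log_C n|) synchronization steps
    return arithmetic, synchronization

arith, sync = dp_costs(1000, 8)
print(arith)           # 1000**3 / 48 operations per processor
print(round(sync, 1))  # only ~55 synchronization steps for n = 1000, p = 8
```

    With n = 1000 and p = 8, each processor does on the order of 2×10⁷ arithmetic operations against a few dozen synchronizations, which is why the scheme suits machines where synchronization is expensive.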

  10. 'On the surface': a qualitative study of GPs' and patients' perspectives on psoriasis.

    PubMed

    Nelson, Pauline A; Barker, Zoë; Griffiths, Christopher E M; Cordingley, Lis; Chew-Graham, Carolyn A

    2013-10-20

    Psoriasis is a chronic, inflammatory skin disease affecting approximately 2% of the UK population and is currently incurable. It produces profound effects on psychological wellbeing and social functioning and has significant associated co-morbidities. The majority of patients with psoriasis are managed in primary care; however, in-depth patient and GP perspectives about psoriasis management in this setting are absent from the literature. This article reports an in-depth study which compares and contrasts the perspectives of people with psoriasis and of GPs on the challenges of managing psoriasis in primary care. In-depth, qualitative semi-structured interviews were conducted with a diverse sample of 29 people with psoriasis and 14 GPs. Interviews were coded using principles of Framework Analysis to enable a comparison of patient and practitioner perspectives on key issues and concepts arising from the data. Patients perceived GPs to be lacking in confidence in the assessment and management of psoriasis, and both groups felt lacking in knowledge and understanding about the condition. While practitioners recognised that psoriasis has physical, emotional and social impact, they assumed patients had expertise in the condition and may not address these issues in consultations. This resulted in patient dissatisfaction and sub-optimal assessment of severity and impact of psoriasis by GPs. Patients and GPs recognised that psoriasis was not being managed as a complex long-term condition; however, this appeared less problematic for GPs than for patients, who desired a shared management with their GP incorporating appropriate monitoring and timely reviews. The research suggests that current routine practice for psoriasis management in primary care is mismatched with the expressed needs of patients.
To address these needs, psoriasis must be recognised as a complex long-term condition involving exacting physical, psychological and social demands, co-morbidity and the development of new treatments.General practitioners need to improve both their knowledge and skills in the assessment and management of psoriasis. This in turn will facilitate management of the condition in partnership with patients. Commissioning multi-disciplinary services, which focus on long-term impacts on wellbeing and quality of life, might address current deficits in care.

  11. The existential dimension in general practice: identifying understandings and experiences of general practitioners in Denmark

    PubMed Central

    Assing Hvidt, Elisabeth; Søndergaard, Jens; Ammentorp, Jette; Bjerrum, Lars; Gilså Hansen, Dorte; Olesen, Frede; Pedersen, Susanne S.; Timm, Helle; Timmermann, Connie; Hvidt, Niels Christian

    2016-01-01

    Objective The objective of this study is to identify points of agreement and disagreements among general practitioners (GPs) in Denmark concerning how the existential dimension is understood, and when and how it is integrated in the GP–patient encounter. Design A qualitative methodology with semi-structured focus group interviews was employed. Setting General practice setting in Denmark. Subjects Thirty-one GPs from two Danish regions between 38 and 68 years of age participated in seven focus group interviews. Results Although understood to involve broad life conditions such as present and future being and identity, connectedness to a society and to other people, the existential dimension was primarily reported integrated in connection with life-threatening diseases and death. Furthermore, integration of the existential dimension was characterized as unsystematic and intuitive. Communication about religious or spiritual questions was mostly avoided by GPs due to shyness and perceived lack of expertise. GPs also reported infrequent referrals of patients to chaplains. Conclusion GPs integrate issues related to the existential dimension in implicit and non-standardized ways and are hindered by cultural barriers. As a way to enhance a practice culture in which GPs pay more explicit attention to the patients’ multidimensional concerns, opportunities for professional development could be offered (courses or seminars) that focus on mutual sharing of existential reflections, ideas and communication competencies. 
    Key points: Although integration of the existential dimension is recommended for patient care in general practice, little is known about GPs' understanding and integration of this dimension in the GP–patient encounter. The existential dimension is understood to involve broad and universal life conditions having no explicit reference to spiritual or religious aspects. The integration of the existential dimension is delimited to patient cases where life-threatening diseases, life crises and unexplainable patient symptoms occur. Integration of the existential dimension happens in unsystematic and intuitive ways. Cultural barriers such as shyness and lack of existential self-awareness seem to hinder GPs in communicating about issues related to the existential dimension. Educational initiatives might be needed in order to lessen barriers and enhance a more natural integration of communication about existential issues. PMID:27804316

  12. The Management of Acute Adverse Effects of Breast Cancer Treatment in General Practice: A Video-Vignette Study

    PubMed Central

    Pagey, Georgina; Halkett, Georgia; Pillai, Vinita; Meng, Xingqiong

    2014-01-01

    Background There has been a focus recently on the use of the Internet and email to deliver education interventions to general practitioners (GPs). The treatment of breast cancer may include surgery, radiotherapy, chemotherapy, and/or hormone treatment. These treatments may have acute adverse effects. GPs need more information on the diagnosis and management of specific adverse effects encountered immediately after cancer treatment. Objective The goal was to evaluate an Internet-based educational program developed for GPs to advise patients with acute adverse effects following breast cancer treatment. Methods During phase 1, participants viewed 6 video vignettes of actor-patients reporting 1 of 6 acute symptoms following surgery and chemotherapy and/or radiotherapy treatment. GPs indicated their diagnosis and proposed management through an online survey program. They received feedback about each scenario in the form of a specialist clinic letter, as if the patient had been seen at a specialist clinic after they had attended the GP. This letter incorporated extracts from local guidelines on the management of the symptoms presented. This feedback was sent to the GPs electronically on the same survey platform. In phase 2, all GPs were invited to manage similar cases as phase 1. Their proposed management was compared to the guidelines. McNemar test was used to compare data from phases 1 and 2, and logistic regression was used to explore the GP characteristics that were associated with inappropriate case management. Results A total of 50 GPs participated. Participants were younger and more likely to be female than other GPs in Australia. For 5 of 6 vignettes in phase 1, management was consistent with expert opinion in the minority of cases (6%-46%). Participant demographic characteristics had a variable effect on different management decisions in phase 1. The variables modeled explained 15%-28% of the differences observed. 
Diagnosis and management improved significantly in phase 2, especially for the diarrhea, neutropenia, and seroma sample cases. The proportion of incorrect management responses was reduced to 25.3%-49.3% in phase 2. Conclusions There was evidence that providing feedback by experts on specific cases had an impact on GPs' knowledge about how to appropriately manage acute treatment adverse effects. This educational intervention could be targeted to support the implementation of shared care during cancer treatment. PMID:25274131

  13. General Practitioners' Barriers to Prescribe Physical Activity: The Dark Side of the Cluster Effects on the Physical Activity of Their Type 2 Diabetes Patients.

    PubMed

    Lanhers, Charlotte; Duclos, Martine; Guttmann, Aline; Coudeyre, Emmanuel; Pereira, Bruno; Ouchchane, Lemlih

    2015-01-01

    To describe barriers to physical activity (PA) in type 2 diabetes patients and their general practitioners (GPs), looking for the practitioner's influence on the PA practice of their patients. We conducted a cross-sectional study on GPs (n = 48) and their type 2 diabetes patients (n = 369), measuring respectively barriers to prescribing and practicing PA using a self-assessment questionnaire: barriers to physical activity in diabetes (BAPAD). Statistical analysis was performed accounting for the hierarchical data structure. Patients of the same practitioner were considered a cluster sharing common patterns. The higher the patient's BAPAD score, the higher the barriers to PA, and the higher the risk of declaring no PA practice (p<0.001), low frequency and low duration of PA (p<0.001). A high patient's BAPAD score was also associated with a higher risk of HbA1c ≥7% (53 mmol/mol) (p = 0.001). The intra-class correlation coefficient between type 2 diabetes patients and GPs was 34%, indicating a high cluster effect. A high GP's BAPAD score regarding PA prescription is predictive of a high BAPAD score in their patients regarding their practice (p = 0.03). Type 2 diabetes patients with a lower BAPAD score, thus lower barriers to physical activity, have a higher PA level and better glycemic control. An important and deleterious cluster effect between GPs and their patients is demonstrated: the higher the GP's BAPAD score, the higher the type 2 diabetes patients' BAPAD score. This important cluster effect might designate GPs as a relevant lever for future interventions regarding patient education towards PA and type 2 diabetes management.

  14. "It is not the fading candle that one expects": general practitioners' perspectives on life-preserving versus "letting go" decision-making in end-of-life home care.

    PubMed

    Sercu, Maria; Renterghem, Veerle Van; Pype, Peter; Aelbrecht, Karolien; Derese, Anselme; Deveugele, Myriam

    2015-01-01

    Many general practitioners (GPs) are willing to provide end-of-life (EoL) home care for their patients. International research on GPs' approach to care in patients' final weeks of life showed a combination of palliative measures with life-preserving actions. To explore the GP's perspective on life-preserving versus "letting go" decision-making in EoL home care. Qualitative analysis of semi-structured interviews with 52 Belgian GPs involved in EoL home care. Nearly all GPs adopted a palliative approach and an accepting attitude towards death. The erratic course of terminal illness can challenge this approach. Disruptive medical events threaten the prospect of a peaceful end-phase and death at home and force the GP either to maintain the patient's (quality of) life for the time being or to recognize the event as a step to life closure and "letting the patient go". Making the "right" decision was very difficult. Influencing factors included: the nature and time of the crisis, a patient's clinical condition at the event itself, a GP's level of determination in deciding and negotiating "letting go" and the patient's/family's wishes and preparedness regarding this death. Hospitalization was often a way out. GPs regard alternation between palliation and life-preservation as part of palliative care. They feel uncertain about their mandate in deciding and negotiating the final step to life closure. A shortage of knowledge of (acute) palliative medicine as one cause of difficulties in letting-go decisions may be underestimated. Sharing all these professional responsibilities with the specialist palliative home care teams would lighten a GP's burden considerably. Key points: A late transition from a life-preserving mindset to one of "letting go" has been reported as a reason why physicians resort to life-preserving actions in an end-of-life (EoL) context. We investigated GPs' perspectives on this matter. Not all GPs involved in EoL home care adopt a "letting go" mindset.
For those who do, this mindset is challenged by the erratic course of terminal illness. GPs prioritize the quality of the remaining life and the serenity of the dying process, which is threatened by disruptive medical events. Making the "right" decision is difficult. GPs feel uncertain about their own role and responsibility in deciding and negotiating the final step to life closure.

  15. Measurement of signal use and vehicle turns as indication of driver cognition.

    PubMed

    Wallace, Bruce; Goubran, Rafik; Knoefel, Frank

    2014-01-01

    This paper uses data analytics to measure a key driving task, turn signal usage, as a marker of drivers' automatic, over-learned cognitive function. It augments previously reported, more complex measures of executive function by proposing an algorithm that analyzes dashboard video to detect turn indicator use with 100% accuracy and no false positives. The paper also proposes two algorithms to determine the actual turns made on a trip. The first analyzes GPS location traces for the vehicle, locating 73% of the turns made with a very low false positive rate of 3%. The second uses GIS tools to retroactively create turn-by-turn directions; fusing the GIS and GPS information raises performance to 77%. The paper then presents the algorithm required to measure signal use for actual turns by realigning the 0.2 Hz GPS data, 30 fps video and GIS turn events. The result is a measure that can be tracked over time, so that changes in a driver's performance can trigger alerts to the driver, caregivers or clinicians as an indication of cognitive change; a lack of decline can also be shared as reassurance.
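    The abstract does not give implementation details for the GPS-based turn detector; as an illustrative sketch only (not the authors' algorithm), turns can be flagged in a low-rate GPS trace by thresholding the change in bearing between successive fixes:

    ```python
    import math

    def bearing(p1, p2):
        """Initial bearing in degrees from fix p1 to fix p2, each (lat, lon) in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
        dlon = lon2 - lon1
        x = math.sin(dlon) * math.cos(lat2)
        y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360

    def detect_turns(trace, threshold_deg=60.0):
        """Return indices of fixes where the heading changes by >= threshold_deg.

        trace: list of (lat, lon) fixes sampled at a fixed rate (e.g. 0.2 Hz).
        """
        turns = []
        for i in range(1, len(trace) - 1):
            h1 = bearing(trace[i - 1], trace[i])
            h2 = bearing(trace[i], trace[i + 1])
            delta = (h2 - h1 + 180) % 360 - 180  # signed smallest angle difference
            if abs(delta) >= threshold_deg:
                turns.append(i)
        return turns
    ```

    At 0.2 Hz sampling, consecutive fixes are 5 s apart, so a 60° threshold (a hypothetical value) corresponds roughly to an intersection turn rather than lane drift; the system described in the paper would further align such detections with the 30 fps video and the GIS turn events.
    
    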

  16. Design of a real-time wind turbine simulator using a custom parallel architecture

    NASA Technical Reports Server (NTRS)

    Hoffman, John A.; Gluck, R.; Sridhar, S.

    1995-01-01

    The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CUs) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CUs are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of a CU allows several tasks to be done in each cycle, including an I/O operation and a combined multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CUs are interfaced to each other and to other portions of the simulator using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CUs can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of a rigid bus architecture.

  17. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next-generation military aircraft demand high performance from the airborne management system. General-purpose modules, data integration, and a high-speed data bus are needed to share and manage the information of the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system and so on. The architecture is changing from an unattached or mixed one to an integrated one: the whole airborne system is managed as a single system, so the physical devices are distributed while the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration), and the sensors and signal processing functions are shared; this also lays a foundation for power sharing. We establish a distributed vehicle management system using a 1553B bus and distributed processors, which provides a validation platform for research on integrated management of airborne systems. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyzes the communication and fault-tolerance methods.

  18. A shared electronic health record: lessons from the coalface.

    PubMed

    Silvester, Brett V; Carr, Simon J

    2009-06-01

    A shared electronic health record system has been successfully implemented in Australia by a Division of General Practice in northern Brisbane. The system grew out of coordinated care trials that showed the critical need to share summary patient information, particularly for patients with complex conditions who require the services of a wide range of multisector, multidisciplinary health care professionals. As at 30 April 2008, connected users of the system included 239 GPs from 66 general practices, two major public hospitals, three large private hospitals, 11 allied health and community-based provider organisations and 1108 registered patients. Access data showed a patient's shared record was accessed an average of 15 times over a 12-month period. The success of the Brisbane implementation relied on seven key factors: connectivity, interoperability, change management, clinical leadership, targeted patient involvement, information at the point of care, and governance. The Australian Commission on Safety and Quality in Health Care is currently evaluating the system for its potential to reduce errors relating to inadequate information transfer during clinical handover.

  19. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    form of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9... processor (as we do for Level-A and -B tasks), but they did not consider MC systems. Altmeyer et al. [1] considered uniprocessor scheduling on a system with a...framework. We randomly generated task sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9

  20. Broca's area: a supramodal hierarchical processor?

    PubMed

    Tettamanti, Marco; Weniger, Dorothea

    2006-05-01

    Despite the presence of shared characteristics across the different domains modulating Broca's area activity (e.g., structural analogies, as between language and music, or representational homologies, as between action execution and action observation), the question of what exactly the common denominator of such diverse brain functions is, with respect to the function of Broca's area, remains a largely debated issue. Here, we suggest that an important computational role of Broca's area may be to process hierarchical structures in a wide range of functional domains.

  1. Multiple Microcomputer Control Algorithm.

    DTIC Science & Technology

    1979-09-01

    discrete and semaphore supervisor calls can be used with tasks in separate processors, in which case they are maintained in shared memory. Operations on ...the source or destination operand specifier of each mode in most cases. However, four of the 16 general register addressing modes and one of the 8 pro...instruction time is based on the specified usage factors and the best case and worst case execution times for the instruc...

  2. EndNote 7.0.

    PubMed

    Eapen, Bell Raj

    2006-01-01

    EndNote is useful software for online literature search and efficient bibliography management. It helps to format the bibliography according to the citation style of each journal. EndNote stores references in a library file, which can be shared with others. It can connect to online resources like PubMed and retrieve search results as per the search criteria. It can also integrate effortlessly with popular word processors like MS Word. The Indian Journal of Dermatology, Venereology and Leprology website has a provision to import references into EndNote.

  3. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 1: FTMP principles of operation

    NASA Technical Reports Server (NTRS)

    Smith, T. B., Jr.; Lala, J. H.

    1983-01-01

    The basic organization of the fault-tolerant multiprocessor (FTMP) is that of a general purpose homogeneous multiprocessor. Three processors operate on a shared system (memory and I/O) bus. Replication and tight synchronization of all elements and hardware voting are employed to detect and correct any single fault. Reconfiguration is then employed to repair a fault. Multiple faults may be tolerated as a sequence of single faults with repair between fault occurrences.

  4. Integration of the Plate Boundary Observatory and Existing GPS Networks in Southern California: A Multi Use Geodetic Network

    NASA Astrophysics Data System (ADS)

    Walls, C.; Blume, F.; Meertens, C.; Arnitz, E.; Lawrence, S.; Miller, S.; Bradley, W.; Jackson, M.; Feaux, K.

    2007-12-01

    The ultra-stable GPS monument design developed by the Southern California Integrated GPS Network (SCIGN) in the late 1990s demonstrates sub-millimeter errors on long time series where there is a high percentage of observations and low multipath. Following SCIGN, other networks such as PANGA and BARGEN have adopted the monument design for both deep drilled braced monuments (DDBM: 5 legs grouted 10.7 meters into bedrock/stratigraphy) and short drilled braced monuments (SDBM: 4 legs epoxied 2 meters into bedrock). A Plate Boundary Observatory (PBO) GPS station consists of a "SCIGN"-style monument, a state-of-the-art NetRS receiver and IP-based communications. Between 2003 and 2008, 875 permanent PBO GPS stations are being built throughout the United States. Concomitant with construction of the PBO, the majority of pre-existing GPS stations that meet stability specifications are being upgraded to PBO standards with Trimble NetRS receivers and IP-based communications under the EarthScope PBO Nucleus project. In 2008, with construction of the Plate Boundary Observatory complete, more than 1100 GPS stations will share common design specifications and identical receivers with common communications, making it the most homogeneous geodetic network in the world. Of the 875 total Plate Boundary Observatory GPS stations, 211 proposed sites are distributed throughout the Southern California region. As of August 2007 the production status is: 174 stations built (81 short braced monuments, 93 deep drilled braced monuments), 181 permits signed, 211 permits submitted and 211 station reconnaissance reports. The balance of 37 stations (19 SDBM and 18 DDBM) will be built over the next year from Long Valley to the Mexico border, in order of priority as recommended by the PBO Transform, Extension and Magmatic working groups. Fifteen-second data are archived for each station, and 1 Hz as well as 5 Hz data are buffered for triggered download in the event of an earthquake.
Communications equipment includes CDMA Proxicast modems, Hughes VSAT, Intuicom 900 MHz Ethernet bridge radios, and 2.4 GHz Wilan radios at several "real-time" sites. Ultimately, 125 of the existing former-SCIGN GPS stations will be integrated into the Southern California region of PBO, of which 25 have real-time data streams. At the time of this publication the combined Southern California region has over 40 stations streaming real-time data using both radios and CDMA modems. The real-time GPS sites provide specific benefits beyond those of a standard GPS station: they can provide a live correction for local surveyors and can be used to trigger an alarm if large displacements are recorded. The cross-fault spatial distribution of these 336 GPS stations in the seismically active southern California region has great potential for augmenting a strong-motion earthquake early warning system.

  5. Inter-organisation communication for end of life care.

    PubMed

    Thomas, Paul

    2009-01-01

    Background: Poor communication between in-hours and out-of-hours (OoH) general practitioners (GPs) causes unwanted admissions to hospital of patients who want to die at home. Setting: A GP OoH service in West London (London Central and West Unscheduled Care Service) used by 159 general practices from four primary care trusts. Question: What helps to avoid hospital admission of patients who want to die at home when a crisis occurs in the OoH period? Methods: Whole system participatory action research, with four stages: 1. engage stakeholders; 2. understand the initial situation; 3. re-design the system; 4. action for change. Results: The following help to avoid undesirable hospital admission of a dying person who has a crisis in the OoH period: 1. a register of vulnerable adults; 2. records at home; 3. key worker(s); 4. home interventions; 5. day-time practitioner communication; 6. a development and governance group; 7. speedy discharge from hospital; 8. decision support for OoH GPs. Discussion: This project revealed a useful set of policies to help avoid unnecessary OoH admission to hospital, especially improved communication between day-time GPs and OoH GPs. The approach combined whole system participatory action research with systems modelling, and this helped the issues to be revealed quickly and cheaply. Furthermore, including leaders from partner organisations at each stage of the inquiry has encouraged shared purpose and produced champions to move forward the project recommendations. Some changes have already happened.

  6. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are designed only for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
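    As a minimal sketch of the underlying idea (not the authors' implementation), a structured 1D grid can be split into contiguous slabs whose summed per-cell cost, rather than cell count, is balanced across ranks; cells near the gas-liquid interface would simply carry a higher cost:

    ```python
    def partition_weighted(costs, nproc):
        """Greedy contiguous partition of per-cell costs into nproc slabs.

        Keeps the structured-grid ordering (each rank owns a contiguous slab)
        while trying to equalize the summed cost per rank.
        Returns a list of (start, end) half-open index ranges, one per rank.
        """
        total = sum(costs)
        target = total / nproc
        bounds, acc, start = [], 0.0, 0
        for i, c in enumerate(costs):
            acc += c
            remaining_ranks = nproc - len(bounds) - 1
            # close the slab once it reaches the average target, leaving
            # at least one cell for each remaining rank
            if acc >= target and remaining_ranks > 0 and len(costs) - i - 1 >= remaining_ranks:
                bounds.append((start, i + 1))
                start, acc = i + 1, 0.0
        bounds.append((start, len(costs)))
        return bounds
    ```

    For example, with costs `[5, 5, 1, 1, 1, 1, 1, 1]` (two expensive interface cells) and two ranks, this yields slab loads of 10 and 6, versus 12 and 4 for a naive halving at the midpoint.
    
    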

  7. Advanced data management system architectures testbed

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1990-01-01

    The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.

  8. Concurrent computation of attribute filters on shared memory parallel machines.

    PubMed

    Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold

    2008-10-01

    Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-trees and Min-trees. The image or volume is first partitioned into multiple slices. We then compute the Max-tree of each slice using any sequential Max-tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.
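    The Max-tree merge itself is intricate; purely to illustrate the same partition/compute/merge pattern on a far simpler structure, here is a hedged sketch (not the authors' algorithm) that finds connected runs of a 1D boolean signal per slice in a thread pool, then stitches runs that meet at slice boundaries:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def runs(slice_data, offset):
        """Connected runs of True values within one slice (the local step)."""
        out, start = [], None
        for i, v in enumerate(slice_data):
            if v and start is None:
                start = i
            elif not v and start is not None:
                out.append((offset + start, offset + i))
                start = None
        if start is not None:
            out.append((offset + start, offset + len(slice_data)))
        return out

    def components(data, nslices=4):
        """Partition, process slices concurrently, then merge at boundaries."""
        step = (len(data) + nslices - 1) // nslices
        chunks = [(data[i:i + step], i) for i in range(0, len(data), step)]
        with ThreadPoolExecutor() as pool:
            partial = list(pool.map(lambda a: runs(*a), chunks))
        merged = []
        for part in partial:
            for s, e in part:
                if merged and merged[-1][1] == s:  # run continues across a slice cut
                    merged[-1] = (merged[-1][0], e)
                else:
                    merged.append((s, e))
        return merged
    ```

    The merge step only ever touches the slice boundaries, which is what makes the pattern scale; in the paper the analogous boundary work joins Max-tree nodes rather than run endpoints.
    
    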

  9. Digital Beamforming Scatterometer

    NASA Technical Reports Server (NTRS)

    Rincon, Rafael F.; Vega, Manuel; Kman, Luko; Buenfil, Manuel; Geist, Alessandro; Hillard, Larry; Racette, Paul

    2009-01-01

    This paper discusses scatterometer measurements collected with the multi-mode Digital Beamforming Synthetic Aperture Radar (DBSAR) during the SMAP-VEX 2008 campaign. The 2008 SMAP Validation Experiment was conducted to address a number of specific questions related to the soil moisture retrieval algorithms. SMAP-VEX 2008 consisted of a series of aircraft-based flights conducted on the Eastern Shore of Maryland and Delaware in the fall of 2008. Several other instruments participated in the campaign, including the Passive Active L-Band System (PALS), the Marshall Airborne Polarimetric Imaging Radiometer (MAPIR), and the Global Positioning System Reflectometer (GPSR). This campaign was the first SMAP Validation Experiment. DBSAR is a multimode radar system developed at NASA/Goddard Space Flight Center that combines state-of-the-art radar technologies, on-board processing, and advances in signal processing techniques in order to enable new remote sensing capabilities applicable to Earth science and planetary applications [1]. The instrument can be configured to operate in scatterometer, Synthetic Aperture Radar (SAR), or altimeter mode. The system builds upon the L-band Imaging Scatterometer (LIS) developed as part of the RadSTAR program. The radar is a phased array system designed to fly on the NASA P3 aircraft. The instrument consists of a programmable waveform generator, eight transmit/receive (T/R) channels, a microstrip antenna, and a reconfigurable data acquisition and processor system. Each transmit channel incorporates a digital attenuator and a digital phase shifter that enable amplitude and phase modulation on transmit. The attenuators, phase shifters, and calibration switches are digitally controlled by the radar control card (RCC) on a pulse-by-pulse basis. The antenna is a corporate-fed microstrip patch array centered at 1.26 GHz with a 20 MHz bandwidth.
Although only one feed is used with the present configuration, provision was made for separate corporate feeds for vertical and horizontal polarization. System upgrades to dual polarization are currently under way. The DBSAR processor is a reconfigurable data acquisition and processor system capable of real-time, high-speed data processing. DBSAR uses an FPGA-based architecture to implement digital down-conversion, in-phase and quadrature (I/Q) demodulation, and subsequent radar-specific algorithms. The core of the processor board consists of an analog-to-digital (A/D) section, three Altera Stratix field programmable gate arrays (FPGAs), an ARM microcontroller, several memory devices, and an Ethernet interface. The processor also interfaces with a navigation board consisting of a GPS and a MEMS gyro. The processor has been configured to operate in scatterometer, Synthetic Aperture Radar (SAR), and altimeter modes. All the modes are based on digital beamforming, a digital process that generates the far-field beam patterns at various scan angles from voltages sampled in the antenna array. This technique allows steering the received beam and controlling its beamwidth and side-lobe level. Several beamforming techniques can be implemented, each characterized by unique strengths and weaknesses, and each applicable to different measurement scenarios. In scatterometer mode, the radar is capable of generating a wide beam or scanning a narrow beam on transmit, and of steering the received beam in processing while controlling its beamwidth and side-lobe level. Table I lists some important radar characteristics.

  10. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.

  11. Integrated Payload Data Handling Systems Using Software Partitioning

    NASA Astrophysics Data System (ADS)

    Taylor, Alun; Hann, Mark; Wishart, Alex

    2015-09-01

    An integrated Payload Data Handling System (I-PDHS) is one in which multiple instruments share a central payload processor for their on-board data processing tasks. This offers a number of advantages over the conventional decentralised architecture. Savings in payload mass and power can be realised because the total processing resource is matched to the requirements, as opposed to the decentralised architecture, where the processing resource is in effect the sum of all the applications. Overall development cost can be reduced using a common processor. At individual instrument level the potential benefits include a standardised application development environment, and the opportunity to run the instrument data handling application on a fully redundant and more powerful processing platform [1]. This paper describes a joint program by SCISYS UK Limited, Airbus Defence and Space, Imperial College London and RAL Space to implement a realistic demonstration of an I-PDHS using engineering models of flight instruments (a magnetometer and camera) and a laboratory demonstrator of a central payload processor which is functionally representative of a flight design. The objective is to raise the Technology Readiness Level of the centralised data processing technique by addressing the key areas of task partitioning, to prevent fault propagation, and the use of a common development process for the instrument applications. The project is supported by a UK Space Agency grant awarded under the National Space Technology Program SpaceCITI scheme.

  12. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, a FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of the mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGAs used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
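    For reference, the operation being moved into hardware is simple; a plain-software sketch (not the FPGA design) of the multi-spectral Euclidean distance and its use in minimum-distance classification might look like:

    ```python
    import math

    def ms_euclidean(pixel, reference):
        """Euclidean distance between two spectra sampled over the same bands."""
        if len(pixel) != len(reference):
            raise ValueError("band counts differ")
        return math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel, reference)))

    def classify(pixel, references):
        """Minimum-distance classification: the nearest reference spectrum wins.

        references: dict mapping class name -> reference spectrum.
        """
        return min(references, key=lambda name: ms_euclidean(pixel, references[name]))
    ```

    The hardware versions pipeline the per-band subtract/square/accumulate; the bottleneck the paper addresses is moving each pixel between the embedded processor and that datapath, hence the memory sharing, NPI connections, and burst transfers.
    
    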

  13. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  14. A Parallel Saturation Algorithm on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Ezekiel, Jonathan; Siminiceanu

    2007-01-01

    Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor, dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.

  15. Discovering shared segments on the migration route of the bar-headed goose by time-based plane-sweeping trajectory clustering

    USGS Publications Warehouse

    Luo, Ze; Baoping, Yan; Takekawa, John Y.; Prosser, Diann J.

    2012-01-01

    We propose a new method to help ornithologists and ecologists discover shared segments on the migratory pathway of bar-headed geese by time-based plane-sweeping trajectory clustering. We present a density-based, time-parameterized line segment clustering algorithm, which extends comparable traditional clustering algorithms in the temporal and spatial dimensions. We present a time-based plane-sweeping trajectory clustering algorithm to reveal the dynamic evolution of spatial-temporal object clusters and discover common motion patterns of bar-headed geese in the process of migration. Experiments are performed on GPS-based satellite telemetry data from bar-headed geese, and the results demonstrate that our algorithms can correctly discover shared segments of the bar-headed goose migratory pathway. We also present findings on the migratory behavior of bar-headed geese determined from this new analytical approach.
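    As a much-simplified sketch of the time-based sweep idea (hypothetical parameters, not the authors' density-based line-segment algorithm), GPS fixes can be bucketed into time windows swept in order, grouped within each window by a distance threshold, and kept only when enough distinct birds travel together:

    ```python
    from collections import defaultdict

    def sweep_clusters(fixes, window, eps, min_pts):
        """Time-based sweep: bucket fixes into time windows, then group fixes
        within each window whose gap to some cluster member is at most eps.

        fixes: list of (t, x, y, bird_id). Returns {window_index: [clusters]},
        keeping only clusters containing at least min_pts distinct birds.
        """
        buckets = defaultdict(list)
        for t, x, y, bird in fixes:
            buckets[int(t // window)].append((x, y, bird))
        result = {}
        for w, pts in sorted(buckets.items()):  # sweep the windows in time order
            clusters = []
            for p in pts:
                placed = False
                for c in clusters:
                    if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2 for q in c):
                        c.append(p)
                        placed = True
                        break
                if not placed:
                    clusters.append([p])
            dense = [c for c in clusters if len({b for _, _, b in c}) >= min_pts]
            if dense:
                result[w] = dense
        return result
    ```

    A real density-based variant would merge clusters transitively and operate on line segments rather than points; this greedy grouping is only meant to show the sweep structure over time windows.
    
    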

  16. Shared decision-making in an intercultural context. Barriers in the interaction between physicians and immigrant patients.

    PubMed

    Suurmond, Jeanine; Seeleman, Conny

    2006-02-01

    The objective of this exploratory paper is to describe several barriers in shared decision-making in an intercultural context. Based on the prevailing literature on intercultural communication in medical settings, four conceptual barriers were described. When the conceptual barriers were described, they were compared with the results from semi-structured interviews with purposively selected physicians (n = 18) and immigrant patients (n = 13). Physicians differed in medical discipline (GPs, company doctors, an internist, a cardiologist, a gynaecologist, and an intern) and patients had different ethnic and immigration backgrounds. The following barriers were found: (1) physician and patient may not share the same linguistic background; (2) physician and patient may not share similar values about health and illness; (3) physician and patient may not have similar role expectations; and (4) physician and patient may have prejudices and may not speak to each other in an unbiased manner. We conclude that due to these barriers, the transfer of information, the formulation of the diagnosis, and the discussion of treatment options are at stake and the shared decision-making process is impeded. Improving physicians' skills to recognize the communication limitations during shared decision-making, as well as the skills to deal with the barriers, may help to ameliorate shared decision-making in an intercultural setting.

  17. Irrational Delay Revisited: Examining Five Procrastination Scales in a Global Sample

    PubMed Central

    Svartdal, Frode; Steel, Piers

    2017-01-01

Scales attempting to measure procrastination focus on different facets of the phenomenon, yet they share a common understanding of procrastination as an unnecessary, unwanted, and disadvantageous delay. The present paper examines in a global sample (N = 4,169) five different procrastination scales – Decisional Procrastination Scale (DPS), Irrational Procrastination Scale (IPS), Pure Procrastination Scale (PPS), Adult Inventory of Procrastination Scale (AIP), and General Procrastination Scale (GPS) – focusing on factor structures and item functioning using Confirmatory Factor Analysis and Item Response Theory. The results indicated that the PPS (12 items selected from the DPS, AIP, and GPS) measures different facets of procrastination even better than the three scales it is based on. An even shorter version of the PPS (5 items focusing on irrational delay) corresponds well to the nine-item IPS. Both scales demonstrate good psychometric properties and appear to be superior measures of core procrastination attributes compared with alternative procrastination scales. PMID:29163302

  18. Irrational Delay Revisited: Examining Five Procrastination Scales in a Global Sample.

    PubMed

    Svartdal, Frode; Steel, Piers

    2017-01-01

Scales attempting to measure procrastination focus on different facets of the phenomenon, yet they share a common understanding of procrastination as an unnecessary, unwanted, and disadvantageous delay. The present paper examines in a global sample (N = 4,169) five different procrastination scales - Decisional Procrastination Scale (DPS), Irrational Procrastination Scale (IPS), Pure Procrastination Scale (PPS), Adult Inventory of Procrastination Scale (AIP), and General Procrastination Scale (GPS) - focusing on factor structures and item functioning using Confirmatory Factor Analysis and Item Response Theory. The results indicated that the PPS (12 items selected from the DPS, AIP, and GPS) measures different facets of procrastination even better than the three scales it is based on. An even shorter version of the PPS (5 items focusing on irrational delay) corresponds well to the nine-item IPS. Both scales demonstrate good psychometric properties and appear to be superior measures of core procrastination attributes compared with alternative procrastination scales.

  19. Considerations for Future Climate Data Stewardship

    NASA Astrophysics Data System (ADS)

    Halem, M.; Nguyen, P. T.; Chapman, D. R.

    2009-12-01

In this talk, we will describe the lessons learned from processing and generating a decade of gridded AIRS and MODIS IR sounding data. We describe the challenges faced in accessing and sharing very large data sets, maintaining data provenance under evolving technologies, obtaining access to legacy calibration data, and permanently preserving Earth science data records for on-demand services. These lessons suggest that a new approach to data stewardship will be required for the next decade of hyperspectral instruments combined with cloud-resolving models. It will not be sufficient for stewards of future data centers to just provide the public with access to archived data; our experience indicates that data need to reside close to computers with ultra-large disc farms and tens of thousands of processors to deliver complex services on demand over very high speed networks, much like the offerings of search engines today. Over the first decade of the 21st century, petabyte data records were acquired from the AIRS instrument on Aqua and the MODIS instrument on Aqua and Terra. NOAA data centers also maintain petabytes of operational IR sounder data collected over the past four decades. The UMBC Multicore Computational Center (MC2) developed a Service Oriented Atmospheric Radiance gridding system (SOAR) to allow users to select IR sounding instruments from multiple archives and choose space-time-spectral periods of Level 1B data to download, grid, visualize and analyze on demand. Providing this service requires high-bandwidth access to the online disks at Goddard. After 10 years, cost-effective disk storage technology finally caught up with the MODIS data volume, making it possible for Level 1B MODIS data to be available online. However, 10 GbE fiber optic networks to access large volumes of data are still not available from GSFC to serve the broader community. Data transfer rates are well below 10 MB/s, limiting their usefulness for climate studies.
During this decade, processor performance hit a power wall, leading computer vendors to design multicore processor chips. High performance computer systems obtained petaflop performance by clustering tens of thousands of multicore processor chips. Thus, power consumption and autonomic recovery from processor and disc failures have become major cost and technical considerations for future data archives. To address these new architecture requirements, a transparent parallel programming paradigm, the Hadoop MapReduce cloud computing system, became available as an open-source software system. In addition, the Hadoop File System manages the distribution of data to these processors, as well as backing up the processing in the event of any processor or disc failure. However, to employ this paradigm, the data need to be stored on the computer system. We conclude this talk with a climate data preservation approach that addresses the scalability crisis of exabyte data requirements for the next decade, based on projections of processor, disc data density and bandwidth doubling rates.
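As a rough illustration of the MapReduce paradigm the talk refers to, a radiance-gridding job can be expressed as a map step that keys each sounding by its grid cell and a reduce step that averages the values per cell. The cell size, record layout, and function names below are illustrative assumptions, not the SOAR system's actual design:

```python
from collections import defaultdict

def map_phase(records, cell_size=1.0):
    """Map: for each (lat, lon, radiance) record, emit a
    (grid-cell, radiance) pair keyed by the lat/lon cell index."""
    for lat, lon, value in records:
        cell = (int(lat // cell_size), int(lon // cell_size))
        yield cell, value

def reduce_phase(pairs):
    """Reduce: average all values that share a grid cell.
    In Hadoop this runs in parallel, one reducer per key group."""
    acc = defaultdict(list)
    for cell, value in pairs:
        acc[cell].append(value)
    return {cell: sum(v) / len(v) for cell, v in acc.items()}
```

In a real Hadoop deployment the framework shuffles the mapper output by key and the Hadoop File System keeps the input blocks local to the worker nodes, which is exactly the "data needs to be stored on the computer system" constraint noted above.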

  20. Geodetic Seamless Archive Centers Modernization - Information Technology for Exploiting the Data Explosion

    NASA Astrophysics Data System (ADS)

    Boler, F. M.; Blewitt, G.; Kreemer, C. W.; Bock, Y.; Noll, C. E.; McWhirter, J.; Jamason, P.; Squibb, M. B.

    2010-12-01

Space geodetic science and other disciplines using geodetic products have benefited immensely from the open sharing of data and metadata from global and regional archives. Ten years ago the Scripps Orbit and Permanent Array Center (SOPAC), the NASA Crustal Dynamics Data Information System (CDDIS), UNAVCO and other archives collaborated to create the GPS Seamless Archive Centers (GSAC) in an effort to further enable research with the expanding collections of GPS data then becoming available. The GSAC partners share metadata to facilitate data discovery and mining across participating archives and the distribution of data to users. This effort was pioneering, but was built on technology that has now been rendered obsolete. As the number of geodetic observing technologies has expanded, the variety of data and data products has grown dramatically, exposing limitations in data product sharing. Through a NASA ROSES project, the three archives (CDDIS, SOPAC and UNAVCO) have been funded to expand the original GSAC capability to multiple geodetic observation types and to simultaneously modernize the underlying technology by implementing web services. The University of Nevada, Reno (UNR) will test the web services implementation by incorporating them into its daily GNSS data processing scheme. The effort will include new methods for quality control of current and legacy data that will be a product of the analysis/testing phase performed by UNR. The quality analysis by UNR will include a report on the stability of station coordinates over time that will enable data users to select sites suitable for their application, for example identifying stations with large seasonal effects. This effort will contribute to an enhanced ability for very large networks to obtain complete data sets for processing.

  1. Shared decision-making in antihypertensive therapy: a cluster randomised controlled trial

    PubMed Central

    2013-01-01

Background Hypertension is one of the key factors causing cardiovascular diseases. A substantial proportion of treated hypertensive patients do not reach recommended target blood pressure values. Shared decision making (SDM) is intended to enhance the active role of patients. As little information exists to date on the effects of SDM training in antihypertensive therapy, we tested the effect of an SDM training programme for general practitioners (GPs). Our hypotheses are that this SDM training (1) enhances the participation of patients and (2) leads to an enhanced decrease in blood pressure (BP) values, compared to patients receiving usual care without prior SDM training for GPs. Methods The study was conducted as a cluster randomised controlled trial (cRCT) with GP practices in Southwest Germany. Each GP practice included patients with treated but uncontrolled hypertension and/or with relevant comorbidity. After baseline assessment (T0), GP practices were randomly allocated into an intervention and a control arm. GPs of the intervention group took part in the SDM training; GPs of the control group treated their patients as usual. The intervention was blinded to the patients. Primary endpoints at the patient level were (1) change in patients' perceived participation (SDM-Q-9) and (2) change in systolic BP (24-h mean). Secondary endpoints were changes in (1) diastolic BP (24-h mean), (2) patients' knowledge about hypertension, (3) adherence (MARS-D), and (4) cardiovascular risk score (CVR). Results In total 1357 patients from 36 general practices were screened for blood pressure control by ambulatory blood pressure monitoring (ABPM). Of these, 1120 patients remained in the study because of uncontrolled (but treated) hypertension and/or a relevant comorbidity. At T0 the intervention group comprised 17 GP practices with 552 patients and the control group 19 GP practices with 568 patients.
The effectiveness analysis could not demonstrate a significant or relevant effect of the SDM training on any of the endpoints. Conclusion The study hypothesis that the SDM training enhanced patients’ perceived participation and lowered their BP could not be confirmed. Further research is needed to examine the impact of patient participation on the treatment of hypertension in primary care. Trial registration German Clinical Trials Register (DRKS): DRKS00000125 PMID:24024587

  2. Qualitative evaluation of a local coronary heart disease treatment pathway: practical implications and theoretical framework.

    PubMed

    Kramer, Lena; Schlößler, Kathrin; Träger, Susanne; Donner-Banzhoff, Norbert

    2012-05-14

Coronary heart disease (CHD) is a common medical problem in general practice. Due to its chronic character, shared care of the patient between the general practitioner (GP) and the cardiologist is required. In order to improve the cooperation between both medical specialists for patients with CHD, a local treatment pathway was developed. The objective of this study was first to evaluate GPs' opinions regarding the pathway and its practical implications, and secondly to suggest a theoretical framework for the findings by feeding the identified key factors influencing the pathway implementation into a multi-dimensional model. The evaluation of the pathway was conducted in a qualitative design on a sample of 12 pathway developers (8 GPs and 4 cardiologists) and 4 pathway users (GPs). Face-to-face interviews, aligned with previously conducted studies of the department and assumptions of the theory of planned behaviour (TPB), were performed following a semi-structured interview guideline. These were audio-taped, transcribed verbatim, coded, and analyzed according to the standards of qualitative content analysis. We identified 10 frequently mentioned key factors having an impact on the implementation success of the CHD treatment pathway. We thereby differentiated between pathway-related (pathway content, effort, individual flexibility, ownership), behaviour-related (previous behaviour, support), interaction-related (patient, shared care/colleagues), and system-related factors (context, health care system). The overall evaluation of the CHD pathway was positive, but did not automatically lead to a change in clinical behaviour, as some GPs felt they were already acting as the pathway recommends. By providing an account of our experience creating and implementing an intersectoral care pathway for CHD, this study contributes to our knowledge of factors that may influence physicians' decisions regarding the use of a local treatment pathway.
An improved adaptation of the pathway to daily practice might be best achieved by a combined implementation strategy addressing internal and external factors. A simple, direct adaptation concerns the design of the pathway material (e.g. layout, PC version), or the embedding of the pathway in another programme, such as a Disease Management Programme (DMP). In addition to these practical implications, we propose a theoretical framework for understanding the key factors' influence on the pathway implementation, with pathway-related factors at the microlevel, interaction-related factors at the mesolevel, and system-related factors at the macrolevel.

  3. Decomposing Oncogenic Transcriptional Signatures to Generate Maps of Divergent Cellular States

    Cancer.gov

    The systematic sequencing of the cancer genome has led to the identification of numerous genetic alterations in cancer. However, a deeper understanding of the functional consequences of these alterations is necessary to guide appropriate therapeutic strategies. Here, we describe Onco-GPS (OncoGenic Positioning System), a data-driven analysis framework to organize individual tumor samples with shared oncogenic alterations onto a reference map defined by their underlying cellular states.

  4. Exploring Potential of Crowdsourced Geographic Information in Studies of Active Travel and Health: Strava Data and Cycling Behaviour

    NASA Astrophysics Data System (ADS)

    Sun, Y.

    2017-09-01

In the development of sustainable transportation and green cities, policymakers encourage people to commute by cycling and walking instead of by motor vehicle. On the one hand, cycling and walking decrease air pollution emissions. On the other hand, cycling and walking offer health benefits by increasing people's physical activity. Earlier studies investigating spatial patterns of active travel (cycling and walking) were limited by a lack of spatially fine-grained data. In recent years, with the development of information and communications technology, GPS-enabled devices have become popular and portable. With smart phones or smart watches, people are able to record their cycling or walking GPS traces as they move. A large number of cyclists and pedestrians upload their GPS traces to sport social media to share their historical traces with other people. Such sport social media thus become a potential source of spatially fine-grained cycling and walking data. Very recently, Strava Metro began to offer aggregated cycling and walking data with high spatial granularity. Strava Metro aggregates a large number of cycling and walking GPS traces of Strava users to streets or intersections across a city. Accordingly, as a kind of crowdsourced geographic information, the aggregated data are useful for investigating spatial patterns of cycling and walking activities, and are thus of high potential for understanding cycling or walking behavior at a large spatial scale. This study is a first step in demonstrating the usefulness of Strava Metro data for exploring cycling or walking patterns at a large scale.

  5. Changed terms for drug payment influenced GPs' diagnoses and prescribing practice for inhaled corticosteroids.

    PubMed

    Dalbak, Lene G; Rognstad, Sture; Melbye, Hasse; Straand, Jørund

    2013-06-01

    Inhaled glucocorticosteroids (ICS) are first-line anti-inflammatory treatment in asthma, but not in chronic obstructive pulmonary disease (COPD). To restrict ICS use in COPD to cases of severe disease, new terms for reimbursement of drug costs were introduced in Norway in 2006, requiring a diagnosis of COPD to be verified by spirometry. To describe how GPs' diagnoses and treatment of patients who used ICS before 2006 changed after a reassessment of the patients that included spirometry. From the shared electronic patient record system in one group practice, patients ≥ 50 years prescribed ICS (including in combination with long-acting beta2-agonists) during the previous year were identified and invited to a tailored consultation including spirometry to assure the quality of diagnosis and treatment. GPs' diagnoses and ICS prescribing patterns after this reassessment were recorded, retrospectively. Of 164 patients identified, 112 were included. Post-bronchodilator spirometry showed airflow limitation indicating COPD in 55 patients. Of the 57 remaining patients, five had a positive reversibility test. The number of patients diagnosed with asthma increased (from 25 to 62) after the reassessment. A diagnosis of COPD was also more frequently used, whereas fewer patients had other pulmonary diagnoses. ICS was discontinued in 31 patients; 20 with mild to moderate COPD and 11 with normal spirometry. Altered reimbursement terms for ICS changed GPs' diagnostic practice in a way that made the diagnoses better fit with the treatment given, but over-diagnosis of asthma could not be excluded. Spirometry was useful for identifying ICS overuse.

  6. The health professional-patient-relationship in conventional versus complementary and alternative medicine. A qualitative study comparing the perceived use of medical shared decision-making between two different approaches of medicine.

    PubMed

    Berger, Stephanie; Braehler, Elmar; Ernst, Jochen

    2012-07-01

    To explore differences between conventional medicine (COM) and complementary and alternative medicine (CAM) regarding the attitude toward and the perceived use of shared decision-making (SDM) from the health professional perspective. Thirty guideline-based interviews with German GPs and nonmedical practitioners were conducted using qualitative analysis for interpretation. The health professional-patient-relationship in CAM differs from that in COM, as SDM is perceived more often. Reasons for this include external context variables (e.g., longer consultation time) and internal provider beliefs (e.g., attitude toward SDM). German health care policy was regarded as one of the most critical factors which affected the relationship between GPs and their patients and their practice of SDM. Differences between COM and CAM regarding the attitude toward and the perceived use of SDM are attributable to diverse concepts of medicine, practice context variables and internal provider factors. Therefore, the perceived feasibility of SDM depends on the complexity of different occupational socialization processes and thus, different value systems between COM and CAM. Implementation barriers such as insufficient communication skills, lacking SDM training or obedient patients should be reduced. Especially in COM, contextual variables such as political restrictions need to be eliminated to successfully implement SDM. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Multidisciplinary care planning in the primary care management of completed stroke: a systematic review

    PubMed Central

    Mitchell, Geoffrey K; Brown, Robyn M; Erikssen, Lars; Tieman, Jennifer J

    2008-01-01

Background Chronic disease management requires input from multiple health professionals, both specialist and primary care providers. This study sought to assess the impact of co-ordinated multidisciplinary care in primary care, represented by the delivery of formal care planning by primary care teams or shared across primary-secondary teams, on outcomes in stroke, relative to usual care. Methods A systematic review of Medline, EMBASE, CINAHL (all 1990–2006), the Cochrane Library (Issue 1 2006), and grey literature from web-based searching of web sites listed in the CCOHA Health Technology Assessment List. Analysis used narrative analysis of findings of randomised and non-randomised trials, and observational and qualitative studies, of patients with completed stroke in the primary care setting where care planning was undertaken (1) by a multidisciplinary primary care team or (2) through shared care by primary and secondary providers. Results One thousand and forty-five citations were retrieved. Eighteen papers were included for analysis. Most care planning took place in the context of multidisciplinary team care based in hospitals with outreach to community patients. Mortality rates were not affected by multidisciplinary care planning. Functional outcomes of the studies were inconsistent. It is uncertain whether the active engagement of GPs and other primary care professionals in the multidisciplinary care planning contributed to the outcomes in the studies showing a positive effect. There may be process benefits from multidisciplinary care planning that includes primary care professionals and GPs. Few studies actually described the tasks and roles GPs fulfilled and whether this matched what was presumed to be provided. Conclusion While multidisciplinary care planning may not unequivocally improve the care of patients with completed stroke, there may be process benefits such as improved task allocation between providers.
Further study on the impact of active GP involvement in multidisciplinary care planning is warranted. PMID:18681977

  8. Fast 2D FWI on a multi and many-cores workstation.

    NASA Astrophysics Data System (ADS)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard two-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is being able to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each supporting up to 4 threads, this many-core device can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can host several co-processors, making the workstation a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to get a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We chose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e., running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care in optimization and SIMD vectorization is taken to ensure optimal performance, and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI shot-gather distribution and OpenMP for domain decomposition within the co-processor.
Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep the wavefields in memory and compute the gradient by cross-correlation of the forward and back-propagated wavefields, as needed by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we will also review some simple methodologies for comparing expected with actual performance, in order to estimate the optimization effort before starting any major modification or rewriting of research codes. The key message is the ease of use and development of this hybrid configuration to reach not the absolute peak performance but the optimal one that ensures the best balance between geophysical and computer developments.
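The gradient step described above -- cross-correlating the stored forward wavefield with the back-propagated adjoint wavefield -- reduces, in its simplest zero-lag form, to the following sketch (plain Python for clarity; the abstract's actual code is Fortran 90 with MPI/OpenMP, and practical FWI gradients include further scaling terms):

```python
def fwi_gradient(forward, adjoint):
    """Zero-lag cross-correlation of forward and back-propagated
    wavefields: grad[x] = sum over time of forward[t][x] * adjoint[t][x].
    Both wavefields are assumed to fit in memory, as in the
    co-processor setup described in the abstract."""
    n_t = len(forward)
    n_x = len(forward[0])
    grad = [0.0] * n_x
    for t in range(n_t):
        for x in range(n_x):
            grad[x] += forward[t][x] * adjoint[t][x]
    return grad
```

Keeping both wavefields resident is what removes the I/O and PCIe traffic: without the 16 GB of on-card memory, the forward wavefield would have to be checkpointed to disk and re-read during back-propagation.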

  9. A Simplified Baseband Prefilter Model with Adaptive Kalman Filter for Ultra-Tight COMPASS/INS Integration

    PubMed Central

    Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

    2012-01-01

COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows its advantage over independent GPS receivers in many scenarios, the federated ultra-tight COMPASS/INS integration has been investigated in this paper, in particular by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only the carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as the measurement. Since the code tracking error related parameters were excluded from the state space of traditional prefilter models, the code/carrier divergence would destroy the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware sampling system for the COMPASS intermediate frequency (IF) signal and the INS accelerometer and gyroscope signals. Field and simulation test results showed almost identical tracking and navigation performance for both the traditional prefilter model and the proposed system; however, the latter greatly decreases the computational load. PMID:23012564
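The adaptation step mentioned in the abstract -- re-estimating the process noise from the state correction sequence -- can be illustrated with a scalar covariance-matching sketch (a simplified assumption on my part: the paper's filter tracks carrier phase, frequency and frequency rate, not a scalar state, and its tuning rule is not reproduced here):

```python
def adaptive_kf(measurements, r=1.0, q0=0.01, window=5):
    """1-D Kalman filter (random-walk state model) whose process noise
    q is re-estimated from the recent state-correction sequence: when
    corrections stay large, q is inflated so the filter keeps tracking."""
    x, p, q = 0.0, 1.0, q0
    corrections = []
    estimates = []
    for z in measurements:
        p = p + q                        # predict step
        k = p / (p + r)                  # Kalman gain
        dx = k * (z - x)                 # state correction
        x = x + dx
        p = (1.0 - k) * p
        corrections.append(dx)
        if len(corrections) >= window:   # covariance matching on the
            recent = corrections[-window:]  # last `window` corrections
            q = max(q0, sum(c * c for c in recent) / window)
        estimates.append(x)
    return estimates
```

With a mismatched fixed q, the gain would shrink too fast and the filter would diverge from a dynamic signal; matching q to the observed correction energy is a standard remedy for that failure mode.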

  10. Reducing antibiotic prescribing in Australian general practice: time for a national strategy.

    PubMed

    Del Mar, Christopher B; Scott, Anna Mae; Glasziou, Paul P; Hoffmann, Tammy; van Driel, Mieke L; Beller, Elaine; Phillips, Susan M; Dartnell, Jonathan

    2017-11-06

    In Australia, the antibiotic resistance crisis may be partly alleviated by reducing antibiotic use in general practice, which has relatively high prescribing rates - antibiotics are mostly prescribed for acute respiratory infections, for which they provide only minor benefits. Current surveillance is inadequate for monitoring community antibiotic resistance rates, prescribing rates by indication, and serious complications of acute respiratory infections (which antibiotic use earlier in the infection may have averted), making target setting difficult. Categories of interventions that may support general practitioners to reduce prescribing antibiotics are: regulatory (eg, changing the default to "no repeats" in electronic prescribing, changing the packaging of antibiotics to facilitate tailored amounts of antibiotics for the right indication and restricting access to prescribing selected antibiotics to conserve them), externally administered (eg, academic detailing and audit and feedback on total antibiotic use for individual GPs), interventions that GPs can individually implement (eg, delayed prescribing, shared decision making, public declarations in the practice about conserving antibiotics, and self-administered audit), supporting GPs' access to near-patient diagnostic testing, and public awareness campaigns. Many unanswered clinical research questions remain, including research into optimal implementation methods. Reducing antibiotic use in Australian general practice will require a range of approaches (with various intervention categories), a sustained effort over many years and a commitment of appropriate resources and support.

  11. Lectin Affinity Plasmapheresis for Middle East Respiratory Syndrome-Coronavirus and Marburg Virus Glycoprotein Elimination.

    PubMed

    Koch, Benjamin; Schult-Dietrich, Patricia; Büttner, Stefan; Dilmaghani, Bijan; Lohmann, Dario; Baer, Patrick C; Dietrich, Ursula; Geiger, Helmut

    2018-04-26

    Middle East respiratory syndrome coronavirus (MERS-CoV) and Marburg virus (MARV) are among the World Health Organization's top 8 emerging pathogens. Both zoonoses share nonspecific early symptoms, a high lethality rate, and a reduced number of specific treatment options. Therefore, we evaluated extracorporeal virus and glycoprotein (GP) elimination by lectin affinity plasmapheresis (LAP). For both MERS-CoV (pseudovirus) as well as MARV (GPs), 4 LAP devices (Mini Hemopurifiers, Aethlon Medical, San Diego, CA, USA) and 4 negative controls were tested. Samples were collected every 30 min and analyzed for reduction in virus infectivity by a flow cytometry-based infectivity assay (MERS-CoV) and in soluble GP content (MARV) by an immunoassay. The experiments show a time-dependent clearance of MERS-CoV of up to 80% within 3 h (pseudovirus). Up to 70% of MARV-soluble GPs were eliminated at the same time. Substantial saturation of the binding resins was detected within the first treatment hour. MERS-CoV (pseudovirus) and MARV soluble GPs are eliminated by LAP in vitro. Considering the high lethality and missing established treatment options, LAP should be evaluated in vivo. Especially early initiation, continuous therapy, and timed cartridge exchanges could be of importance. The Author(s). Published by S. Karger AG, Basel.

  12. Report On Fiducial Points At The Space Geodesy Based Cagliari Astronomical Observatory

    NASA Astrophysics Data System (ADS)

    Banni, A.; Buffa, F.; Falchi, E.; Sanna, G.

At the present time two research groups are engaged in space-geodesy activities in Sardinia: a staff belonging to the Stazione Astronomica of Cagliari (SAC) and the Topography Section of the Dipartimento di Ingegneria Strutturale (DIST) of the Cagliari University. The two groups take part in international campaigns and services. The local infrastructure consists of permanent satellite observation stations using both radio and laser techniques. In particular, a Satellite Laser Ranging system runs at the Cagliari Observatory with nearly daily low-, medium- and high-orbit satellite tracking capability (e.g. Topex, Ajisai, Lageos 1/2, Glonass); to date the Cagliari laser station has contributed to several international campaigns and organizations. In addition, at the Observatory's site a fixed GPS system, belonging to the Italian Space Agency GPS Network and to the IGS Network, and a GPS+GLONASS system, acquired by DIST and belonging to the IGLOS, are installed and managed. All the above stations are furnished with meteorological sensors and offer RINEX-format data dissemination. Moreover, a new 64-meter-dish radio telescope (Sardinian Radio Telescope), equipped for geodetic VLBI, is under construction not far from the Observatory. The poster shows the facilities and furnishes a complete report on the marker eccentricities, allowing co-location of the different space techniques operating in Sardinia.

  13. Vienna FORTRAN: A FORTRAN language extension for distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1991-01-01

    Exploiting the performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna FORTRAN is a language extension of FORTRAN which provides the user with a wide range of facilities for such mapping of data structures. However, programs in Vienna FORTRAN are written using global data references. Thus, the user has the advantage of a shared memory programming paradigm while explicitly controlling the placement of data. The basic features of Vienna FORTRAN are presented along with a set of examples illustrating the use of these features.
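The kind of data mapping Vienna FORTRAN makes explicit can be illustrated by the index arithmetic of a simple block distribution (a generic sketch, not Vienna FORTRAN syntax or its actual runtime): given a global array index, the owning processor and the local offset are computed as follows, with the first `n % p` processors holding one extra element.

```python
def block_owner(i, n, p):
    """Owner of global index i (0-based) for an n-element array
    block-distributed over p processors: the first n % p processors
    hold blocks of size (n // p) + 1, the rest hold n // p."""
    base, extra = divmod(n, p)
    threshold = (base + 1) * extra  # indices covered by the larger blocks
    if i < threshold:
        return i // (base + 1)
    return extra + (i - threshold) // base

def local_index(i, n, p):
    """Local offset of global index i on its owning processor."""
    base, extra = divmod(n, p)
    threshold = (base + 1) * extra
    if i < threshold:
        return i % (base + 1)
    return (i - threshold) % base
```

This is the translation a compiler for such a language performs behind every global data reference, which is why the programmer can keep a shared-memory view while the placement stays explicit.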

  14. Programming in Vienna Fortran

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1992-01-01

    Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. In contrast to current programming practice, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the data distribution. In this paper, we present the language features of Vienna Fortran for FORTRAN 77, together with examples illustrating the use of these features.

  15. Minimum energy information fusion in sensor networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapline, G

    1999-05-11

    In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition, we show that for networks consisting of a large number of identical sensors, Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimal description length explanations.
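    As a rough illustration of the self-organization idea, the following sketch trains a small 1-D Kohonen map whose prototype vectors summarize a batch of sensor readings; the map size, schedules, and data are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def train_som(readings, n_units=4, epochs=60, lr0=0.5, sigma0=1.0, seed=0):
    """Train a 1-D Kohonen map; prototypes self-organize to summarize readings."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, readings.shape[1]))   # prototype vectors
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                   # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3      # shrinking neighborhood
        for x in rng.permutation(readings):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            d = np.arange(n_units) - bmu                    # distance on the lattice
            h = np.exp(-d**2 / (2.0 * sigma**2))            # neighborhood kernel
            w += lr * h[:, None] * (x - w)                  # pull prototypes toward x
    return w

# Toy data: readings from two groups of identical sensors; the trained
# prototypes give a short, compressed description of the readings.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
protos = train_som(data)
```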

  16. Proceedings of the Seminar on the DoD Computer Security Initiative Program, National Bureau of Standards, Gaithersburg, Maryland, July 17-18, 1979.

    DTIC Science & Technology

    1979-01-01

    specifications have been prepared for a DoD communications processor on an IBM minicomputer, a minicomputer time-sharing system for the DEC PDP-11 and...the Honeywell Level 6, a virtual machine monitor for the IBM 370, and Multics [10] for the Honeywell Level 68. MECHANISMS FOR KERNEL IMPLEMENTATION... [figure residue: ITP theorems, proof evidence, KVM/370 formal design process, modular decomposition]

  17. New Media Analysis: The Effects of Peer Influence and Personality Characteristics Through the Stages of Trial, Adoption, and Continued Use of Video Sharing Websites

    DTIC Science & Technology

    2011-03-01

    Unfortunately, my family deserves more credit than I could possibly say here, but I must try… Mom, thanks for always supporting my dreams and believing in me...example, the use of a computer word-processor to type a lengthy document may facilitate the trial of a system if the alternative is to handwrite the...technologies are inherently a voluntary form of technological communications; therefore, it is conceivable to say that individuals are more likely to be

  18. Wide Area Recovery and Resiliency Program (WARRP) Knowledge Enhancement Events: Agricultural Waste Disposal Workshop After Action Report

    DTIC Science & Technology

    2012-07-17

    production of milk . Weld produces 57 percent of the milk in Colorado and has become the 17th largest dairy county in the U.S. in cow numbers (almost...engaged in the plan; everyone from the milk producer to the milk processor. 6 “In the event of an outbreak, everyone in this room would have a role...slaughter. Dr. McCarl illustrated the magnitude of the carcass disposal problem, sharing how the problem would be 9 cows wide and stretch the length

  19. Development of Universal Controller Architecture for SiC Based Power Electronic Building Blocks

    DTIC Science & Technology

    2017-10-30

    time control and control network routing and the other for non-real-time instrumentation and monitoring. The two subsystems are isolated and share...directly to the processor without any software intervention. We use a non-real-time 1 Gb/s Ethernet interface for monitoring and control of the module... [table residue: latency figures for 802.1w Spanning Tree Protocol and multipoint private-line configurations; N/A = not applicable]

  20. Marionette

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, M.; Anderson, D.P.

    1988-01-01

    Marionette is a system for distributed parallel programming in an environment of networked heterogeneous computer systems. It is based on a master/slave model. The master process can invoke worker operations (asynchronous remote procedure calls to single slaves) and context operations (updates to the state of all slaves). The master and slaves also interact through shared data structures that can be modified only by the master. The master and slave processes are programmed in a sequential language. The Marionette runtime system manages slave process creation, propagates shared data structures to slaves as needed, queues and dispatches worker and context operations, and manages recovery from slave processor failures. The Marionette system also includes tools for automated compilation of program binaries for multiple architectures, and for distributing binaries to remote file systems. A UNIX-based implementation of Marionette is described.

  1. Earthquake source parameters from GPS-measured static displacements with potential for real-time application

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2013-01-01

    We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location, which minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real-time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.
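    The iterative least-squares machinery described above can be sketched as follows; the point-source forward model below is a hypothetical stand-in for the authors' layered-elastic-medium synthetics, and all station geometry and parameter values are invented for illustration.

```python
import numpy as np

def gauss_newton(forward, jac, m0, d_obs, n_iter=12):
    """Iteratively update the model m to minimize ||d_obs - forward(m)||^2."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)                    # data residual
        J = jac(m)                                # sensitivity (Jacobian) matrix
        dm, *_ = np.linalg.lstsq(J, r, rcond=None)
        m += dm                                   # least-squares model update
    return m

# Hypothetical forward model: static displacement ~ amplitude / distance^2.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 7.0]])

def forward(m):                                   # m = [x, y, amplitude]
    d = np.linalg.norm(stations - m[:2], axis=1)
    return m[2] / d**2

def jac(m, eps=1e-6):                             # central finite differences
    J = np.empty((len(stations), m.size))
    for k in range(m.size):
        dm = np.zeros(m.size); dm[k] = eps
        J[:, k] = (forward(m + dm) - forward(m - dm)) / (2.0 * eps)
    return J

m_true = np.array([4.0, 5.0, 50.0])
d_obs = forward(m_true)                           # synthetic "observed" statics
m_est = gauss_newton(forward, jac, np.array([3.0, 4.0, 40.0]), d_obs)
```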

  2. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low cost magnetometers. The main advantage of this technique is its use of artificial intelligence, which does not require any error modeling or prior knowledge of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
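    A minimal PSO sketch of the bias/scale estimation idea (the swarm parameters, synthetic data, and cost function are illustrative assumptions, not the paper's implementation): the swarm searches for the bias and per-axis scale that make the calibrated field magnitude constant.

```python
import numpy as np

def pso_calibrate(raw, field=1.0, n_particles=60, iters=300, seed=0):
    """PSO search for per-axis bias b and scale s such that the norms of
    (raw - b) / s cluster around the known field magnitude."""
    rng = np.random.default_rng(seed)
    dim = 6                                            # 3 biases + 3 scale factors
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    pos[:, 3:] = rng.uniform(0.5, 2.0, (n_particles, 3))
    vel = np.zeros_like(pos)

    def cost(p):
        cal = (raw - p[:3]) / p[3:]
        return np.mean((np.linalg.norm(cal, axis=1) - field) ** 2)

    pbest = pos.copy()
    pbest_c = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_c)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pbest_c                           # update personal bests
        pbest[better], pbest_c[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pbest_c)].copy()       # update global best
    return gbest

# Synthetic magnetometer data: unit-norm true field corrupted by bias and scale.
rng = np.random.default_rng(2)
true = rng.normal(size=(300, 3))
true /= np.linalg.norm(true, axis=1, keepdims=True)
raw = true * np.array([1.2, 0.9, 1.1]) + np.array([0.3, -0.2, 0.1])
est = pso_calibrate(raw)                               # [b_x, b_y, b_z, s_x, s_y, s_z]
```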

  3. GPS and GPRS Based Telemonitoring System for Emergency Patient Transportation

    PubMed Central

    Satyanarayana, K.; Sarma, A. D.; Sravan, J.; Malini, M.; Venkateswarlu, G.

    2013-01-01

    Telemonitoring during the golden hour of patient transportation helps to improve medical care. Presently there are different physiological data acquisition and transmission systems using cellular network and radio communication links. Location monitoring systems and video transmission systems are also commercially available. Emergency patient transportation systems uniquely require transmission of data pertaining to the patient, vehicle, time of the call, physiological signals (like ECG, blood pressure, body temperature, and blood oxygen saturation), location information, a snapshot of the patient, and voice. These requirements are presently met by using separate communication systems for voice, physiological data, and location, resulting in considerable inconvenience to the technicians and maintenance-related issues, in addition to being expensive. This paper presents the design, development, and implementation of such a telemonitoring system for emergency patient transportation employing an ARM 9 processor module. This system is found to be very useful for the emergency patient transportation being undertaken by organizations like the Emergency Management Research Institute (EMRI). PMID:27019844
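    For the location component, a GPS receiver typically emits standard NMEA sentences; the sketch below extracts a latitude/longitude fix from a $GPGGA sentence. The sentence format is the standard one, but the helper names are illustrative and this is not the system's actual firmware.

```python
def parse_gpgga(sentence):
    """Extract (lat, lon) in decimal degrees from a NMEA $GPGGA sentence."""
    f = sentence.split(",")
    if not f[0].endswith("GGA") or not f[2]:
        return None                                  # not a GGA sentence, or no fix

    def to_deg(value, hemi, deg_width):
        deg = float(value[:deg_width])               # leading degree digits
        minutes = float(value[deg_width:])           # remaining minutes (mm.mmm)
        decimal = deg + minutes / 60.0
        return -decimal if hemi in ("S", "W") else decimal

    return to_deg(f[2], f[3], 2), to_deg(f[4], f[5], 3)

fix = parse_gpgga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```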

  4. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
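    Forming the normal matrix parallelizes naturally because it is a sum of per-batch contributions; a minimal sketch of that accumulation, with a toy overdetermined system standing in for the gradiometer/GPS observations (the batch split and all numbers are illustrative):

```python
import numpy as np

def accumulate_normals(design_blocks, obs_blocks):
    """Accumulate N = sum_i A_i^T A_i and b = sum_i A_i^T y_i over observation
    batches, as each processor would for its share of the data."""
    n = design_blocks[0].shape[1]
    N, b = np.zeros((n, n)), np.zeros(n)
    for A, y in zip(design_blocks, obs_blocks):
        N += A.T @ A                       # this batch's partial normal matrix
        b += A.T @ y
    return N, b

# Toy least-squares problem split across 3 "processors".
rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0, 0.5])
A = rng.normal(size=(90, 3))
y = A @ x_true
N, b = accumulate_normals(np.split(A, 3), np.split(y, 3))
x = np.linalg.solve(N, b)                  # invert the assembled normal matrix
```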

  5. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; intertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  6. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  7. Shared Electronic Health Record Systems: Key Legal and Security Challenges.

    PubMed

    Christiansen, Ellen K; Skipenes, Eva; Hausken, Marie F; Skeie, Svein; Østbye, Truls; Iversen, Marjolein M

    2017-11-01

    Use of shared electronic health records opens a whole range of new possibilities for flexible and fruitful cooperation among health personnel in different health institutions, to the benefit of the patients. There are, however, unsolved legal and security challenges. The overall aim of this article is to highlight legal and security challenges that should be considered before using shared electronic cooperation platforms and health record systems, to avoid legal and security "surprises" subsequent to the implementation. Practical lessons learned from the use of a web-based ulcer record system involving patients, community nurses, GPs, and hospital nurses and doctors in specialist health care are used to illustrate challenges we faced. Discussion of possible legal and security challenges is critical for successful implementation of shared electronic collaboration systems. Key challenges include (1) allocation of responsibility, (2) documentation routines, and (3) integrated or federated access control. We discuss and suggest how challenges of legal and security aspects can be handled. This discussion may be useful for both current and future users, as well as policy makers.

  8. Correlated-Data Fusion and Cooperative Aiding in GNSS-Stressed or Denied Environments

    NASA Astrophysics Data System (ADS)

    Mokhtarzadeh, Hamid

    A growing number of applications require continuous and reliable estimates of position, velocity, and orientation. Price requirements alone disqualify most traditional navigation or tactical-grade sensors and thus navigation systems based on automotive or consumer-grade sensors aided by Global Navigation Satellite Systems (GNSS), like the Global Positioning System (GPS), have gained popularity. The heavy reliance on GPS in these navigation systems is a point of concern and has created interest in alternative or back-up navigation systems to enable robust navigation through GPS-denied or stressed environments. This work takes advantage of current trends for increased sensing capabilities coupled with multilayer connectivity to propose a cooperative navigation-based aiding system as a means to limit dead reckoning error growth in the absence of absolute measurements like GPS. Each vehicle carries a dead reckoning navigation system which is aided by relative measurements, like range, to neighboring vehicles together with information sharing. Detailed architectures and concepts of operation are described for three specific applications: commercial aviation, Unmanned Aerial Vehicles (UAVs), and automotive applications. Both centralized and decentralized implementations of cooperative navigation-based aiding systems are described. The centralized system is based on a single Extended Kalman Filter (EKF). A decentralized implementation suited for applications with very limited communication bandwidth is discussed in detail. The presence of unknown correlation between the a priori state and measurement errors makes the standard Kalman filter unsuitable. Two existing estimators for handling this unknown correlation are Covariance Intersection (CI) and Bounded Covariance Inflation (BCInf) filters. A CI-based decentralized estimator suitable for decentralized cooperative navigation implementation is proposed. 
A unified derivation is presented for the Kalman filter, CI filter, and BCInf filter measurement update equations. Furthermore, characteristics important to the proper implementation of CI and BCInf in practice are discussed. A new covariance normalization step is proposed as necessary to properly apply CI or BCInf. Lastly, both centralized and decentralized implementations of cooperative aiding are analyzed and evaluated using experimental data in the three applications. In the commercial aviation study, aircraft are simulated to use their Automatic Dependent Surveillance - Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS) systems to cooperatively aid their on-board INS during a 60 min GPS outage in the national airspace. An availability study of cooperative navigation as proposed in this work around representative United States airports is performed. Availabilities between 70% and 100% were common at major airports like LGA and MSP in a 30 nmi radius around the airport during morning to evening hours. A GPS-denied navigation system for small UAVs based on cooperative information sharing is described. Experimentally collected flight data from 7 small UAV flights are played back to evaluate the performance of the navigation system. The results show that the most effective of the architectures can lead to 5+ minutes of navigation without GPS while maintaining position errors less than 200 m (1-sigma). The automotive case study considers 15 minutes of automotive traffic (2,000+ vehicles) driving through a half-mile stretch of highway without access to GPS. Automotive radar coupled with the Dedicated Short Range Communication (DSRC) protocol is used to implement cooperative aiding to a low-cost 2-D INS on board each vehicle. The centralized system achieves an order of magnitude reduction in uncertainty by aggressively aiding the INS on board each vehicle. The proposed CI-based decentralized estimator is demonstrated to be conservative and maintain consistency.
A quantitative analysis of bandwidth requirements shows that the proposed decentralized estimator falls comfortably within modern connectivity capabilities. A naive implementation of the high-performance centralized estimator is also achievable, but it was demonstrated to be burdensome, nearing the bandwidth limits.
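    The Covariance Intersection update discussed above can be sketched as follows. This is a textbook CI form with a grid search over the weight; the numbers are illustrative and this is not the thesis's estimator.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=201):
    """Fuse two estimates with unknown cross-correlation:
    P^{-1} = w Pa^{-1} + (1 - w) Pb^{-1}, with w chosen to minimize trace(P)."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * Pa_inv @ xa + (1.0 - w) * Pb_inv @ xb)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two 2-D position estimates whose error correlation is unknown.
xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([0.8, 0.2]), np.diag([4.0, 1.0])
x, P = covariance_intersection(xa, Pa, xb, Pb)
```

    Unlike a Kalman update, the fused covariance never claims more certainty than is justified: its trace is at most that of the better input, which is what makes CI consistent under unknown correlation.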

  9. The Brave New World of Real-time GPS for Hazards Mitigation

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.

    2015-12-01

    Over 600 continuously-operating, real-time telemetered GPS receivers operate throughout California, Oregon, Washington and Alaska. These receivers straddle active crustal faults, volcanoes and landslides, the magnitude-9 Cascadia and northeastern Alaskan subduction zones and their attendant tsunamigenic regions along the Pacific coast. Around the circum-Pacific, there are hundreds more and the number is growing steadily as real-time networks proliferate. Despite offering the potential for sub-cm positioning accuracy in real time useful for a broad array of hazards mitigation, these GPS stations are only now being incorporated into routine seismic, tsunami, volcanic, landslide, space-weather, or meteorological monitoring. We will discuss NASA's READI (Real-time Earthquake Analysis for DIsasters) initiative. This effort is focused on developing all aspects of real-time GPS for hazards mitigation, from establishing international data-sharing agreements to improving basic positioning algorithms. READI's long-term goal is to expand real-time GPS monitoring throughout the circum-Pacific as overseas data become freely available, so that it may be adopted by NOAA, USGS and other operational agencies responsible for natural hazards monitoring. Currently ~100 stations are being jointly processed by CWU and Scripps Inst. of Oceanography for algorithm comparison and downstream merging purposes. The resultant solution streams include point-position estimates in a global reference frame every second with centimeter accuracy, ionospheric total electron content and tropospheric zenith water content. These solutions are freely available to third-party agencies over several streaming protocols to enable their incorporation and use in hazards monitoring. This number will ramp up to ~400 stations over the next year. 
We will also discuss technical efforts underway to develop a variety of downstream applications of the real-time position streams, including the ability to broadcast solutions to thousands of users in real time, earthquake finite-fault and tsunami excitation estimations, and several user interfaces, both stand-alone client and browser-based, that allow interaction with both real-time position streams and their derived products.

  10. Development and Demonstration of a Self-Calibrating Pseudolite Array for Task Level Control of a Planetary Rover

    NASA Technical Reports Server (NTRS)

    Rock, Stephen M.; LeMaster, Edward A.

    2001-01-01

    Pseudolites can extend the availability of GPS-type positioning systems to a wide range of applications not possible with satellite-only GPS. One such application is Mars exploration, where the centimeter-level accuracy and high repeatability of CDGPS would make it attractive for rover positioning during autonomous exploration, sample collection, and habitat construction if it were available. Pseudolites distributed on the surface would allow multiple rovers and/or astronauts to share a common navigational reference. This would help enable cooperation for complicated science tasks, reducing the need for instructions from Earth and increasing the likelihood of mission success. Conventional GPS pseudolite arrays require that the devices be pre-calibrated through a survey of their locations, typically to sub-centimeter accuracy. This is a problematic task for robots on the surface of another planet. By using the GPS signals that the pseudolites broadcast, however, it is possible to have the array self-survey its own relative locations, creating a Self-Calibrating Pseudolite Array (SCPA). This requires the use of GPS transceivers instead of standard pseudolites. Surveying can be done at either carrier- or code-phase level. An overview of SCPA capabilities, system requirements, and self-calibration algorithms is presented in another work. The Aerospace Robotics Laboratory at Stanford has developed a fully operational prototype SCPA. The array is able to determine the range between any two transceivers with either code- or carrier-phase accuracy, and uses this inter-transceiver ranging to determine the array geometry. This paper presents results from field tests conducted at Stanford University demonstrating the accuracy of inter-transceiver ranging and its viability and utility for array localization, and shows how transceiver motion may be utilized to refine the array estimate by accurately determining carrier-phase integers and line biases. 
It also summarizes the overall system requirements and architecture, and describes the hardware and software used in the prototype system.
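    The self-survey idea, recovering relative array geometry from inter-transceiver ranges, can be illustrated with classical multidimensional scaling (a generic technique used here for illustration, not necessarily the authors' algorithm; the four-transceiver layout is invented):

```python
import numpy as np

def self_survey(D):
    """Recover relative 2-D positions from a matrix of pairwise ranges via
    classical multidimensional scaling (unique up to rotation/reflection)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering operator
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered positions
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:2]           # two dominant eigenpairs -> 2-D
    return vecs[:, idx] * np.sqrt(vals[idx])

# Toy array of four transceivers with exact inter-transceiver ranges.
pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [2.0, 6.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
recovered = self_survey(D)
D_rec = np.linalg.norm(recovered[:, None, :] - recovered[None, :, :], axis=-1)
```

    With exact Euclidean ranges the recovered geometry reproduces the range matrix exactly; noisy ranges would make this a least-squares shape estimate.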

  11. Development of a web-based pharmaceutical care plan to facilitate collaboration between healthcare providers and patients.

    PubMed

    Geurts, Marlies M E; Ivens, Martijn; van Gelder, Egbert; de Gier, Johan J

    2013-01-01

    In medication therapy management there is a need for a tool to document medication reviews and pharmaceutical care plans (PCPs), as well as to facilitate collaboration and the sharing of patient data between different healthcare providers. Currently, pharmacists and general practitioners (GPs) have their own computer systems and patient files. To facilitate collaboration between different healthcare providers and to exchange patient data, we developed a paper-based tool. As a result, the structured collection of all relevant information for a clinical medication review became more protocol driven. The tool also supported the planning of interventions and follow-up activities: the PCP. The PCP was piloted among three GPs and six community pharmacists. In interviews, all healthcare providers judged the PCP a very useful tool for collecting and sharing patient data. A disadvantage was the time needed to collect all information. We therefore developed our PCP into a web-based tool: the web-based PCP (W-PCP). The aim was to develop a W-PCP that would (1) provide healthcare providers with information from pharmacist and GP computer systems and (2) facilitate collaboration between healthcare providers and patients. Development and application: the W-PCP facilitates uploading and sharing of patient data among healthcare professionals and collaboration between professionals and patients on performing treatment plans. The W-PCP is a stand-alone application developed by cocreation using a generic software platform that provides developmental speed and flexibility. The W-PCP was used in three research lines, two in primary care and one in a hospital setting. Outcome measures were defined as satisfaction with efficiency and effectiveness during data sharing and documentation in providing care and conducting medication reviews using the W-PCP. First experiences concerning the use of the W-PCP in a primary care setting were collected by a questionnaire and interviews with pharmacists and GPs using the W-PCP. 
A questionnaire about first experiences with the W-PCP was sent to 38 healthcare providers; 17 returned it (response rate 44.7%). Participating healthcare providers reported positive experiences with the W-PCP. One expressed need is to have the W-PCP application integrated into the current pharmacy and GP computer systems. All experiences, needs, and ideas for improvement of the current application were collected, and the application will be developed further on that basis. The W-PCP application can potentially support successful collaboration between different healthcare providers and patients, which is important for medication therapy management.

  12. Application of neogeographic tools for geochemistry

    NASA Astrophysics Data System (ADS)

    Zhilin, Denis

    2010-05-01

    Neogeography is the use of geographical tools by non-expert users. It has developed rapidly over the last ten years, founded on (a) the availability of Global Positioning System (GPS) receivers, which allow very precise geographical positions to be obtained; (b) services that link geographical positions with satellite images, GoogleEarth for example; and (c) programs such as GPS Track Maker or OziExplorer that link geographical coordinates with other raster images (for example, maps). However, the possibilities of the neogeographic approach are much wider. It allows different parameters to be linked with geographical coordinates on the one hand and with a space image or map on the other. If a parameter is easy to measure, a large database can be collected in a very short time. The results can be presented in very different ways. One can plot a parameter versus the distance from a particular point (for example, a source of a substance), map the two-dimensional distribution of a parameter, or put the results onto a map or space image. In the case of chemical parameters this can help locate sources of pollution, trace the influence of pollution, and reveal geochemical processes and patterns. The main advantage of the neogeographic approach is the involvement of non-experts in collecting data. Non-experts can now easily measure the electrical conductivity and pH of natural waters, concentrations of different gases in the atmosphere, solar irradiation, radioactivity and so on. If the results are obtained (for example, by students of secondary schools) and shared, experts can process them and draw significant conclusions. An interface for sharing the results (http://maps.sch192.ru/) was elaborated by V. Ilyin. Within the interface a user can load a *.csv file with coordinates, the type of parameter, and the value of the parameter at a particular point. The points are marked on the GoogleEarth map with a color corresponding to the value of the parameter. 
The color scale can be edited manually. We would like to show some results of practical and scientific importance obtained by non-experts. In 2006 our secondary school students investigated the distribution of snow salinity around Kosygina Street in Moscow. One can conclude that the distribution of salinity is reproducible and that the street influences the snow up to 150 meters away. Another example obtained by our students is the distribution of the electrical conductivity of swamp water, showing the extreme irregularity of this parameter within a small area (about 0.5 x 0.5 km): the electrical conductivity varied from 22 to 77 uS with no regularity. This points to the key role of local processes in swamp water chemistry. A third example (maps of electrical conductivity and pH of water over a large area) can be seen at http://fenevo.narod.ru/maps/ec-maps.htm and http://fenevo.narod.ru/maps/ph-maps.htm. Based on these maps one can infer the mechanisms by which water mineralization forms in the area. The availability of GPS receivers and systems for easy measurement of chemical parameters can lead to a neogeochemical revolution, just as GPS receivers led to the neogeographical one. A great number of non-experts can share their geochemical results, forming a huge amount of available geochemical data. This will help to falsify and visualize concepts of geochemistry and environmental chemistry and, maybe, develop new ones. Geophysical and biological data could be shared as well, with the same advantages for the corresponding sciences.
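    The sharing interface described above ingests *.csv files of coordinates and parameter values; here is a hypothetical sketch of such a loader, with an assumed "lat,lon,value" row format and a simple blue-to-red color scale (the actual interface's format and palette are not specified in the abstract):

```python
import csv, io

def color_for(value, vmin, vmax):
    """Map a value onto a simple blue (low) to red (high) hex color."""
    t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
    return f"#{int(255 * t):02x}00{int(255 * (1 - t)):02x}"

def load_points(text):
    """Parse 'lat,lon,value' rows and attach a color to each point."""
    rows = [(float(lat), float(lon), float(v))
            for lat, lon, v in csv.reader(io.StringIO(text))]
    vals = [v for _, _, v in rows]
    vmin, vmax = min(vals), max(vals)
    return [(lat, lon, v, color_for(v, vmin, vmax)) for lat, lon, v in rows]

# Hypothetical student measurements: electrical conductivity at three points.
sample = "55.70,37.53,22\n55.71,37.54,77\n55.72,37.55,40\n"
points = load_points(sample)
```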

  13. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    NASA Astrophysics Data System (ADS)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, CPU and GPU processors are integrated on the same chip, which poses a new challenge for last-level cache management. In this architecture, CPU applications and GPU applications execute concurrently and access the last-level cache. CPU and GPU have different memory access characteristics and therefore differ in their sensitivity to last-level cache (LLC) capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. GPU applications, by contrast, can tolerate increased memory access latency when there is sufficient thread-level parallelism. Taking this memory-latency tolerance into account, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving its capacity to CPU applications; this improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache sensitive and the GPU application is cache insensitive, the overall performance of the system improves significantly.
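    The effect of letting streaming GPU traffic bypass a shared LLC can be illustrated with a toy LRU cache model; the workloads, cache size, and access counts are invented for illustration and do not model the paper's coherence-protocol mechanism.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache model; access() returns True on a hit."""
    def __init__(self, capacity):
        self.capacity, self.lines = capacity, OrderedDict()
    def access(self, addr):
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)        # refresh recency
        else:
            self.lines[addr] = True
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # evict least-recently-used line
        return hit

def cpu_hit_rate(gpu_bypasses_llc):
    """CPU loops over a small working set; GPU streams fresh addresses."""
    cache, hits, steps, gpu_addr = LRUCache(8), 0, 4000, 10_000
    for t in range(steps):
        hits += cache.access(t % 8)             # cache-friendly CPU reference
        if not gpu_bypasses_llc:
            cache.access(gpu_addr)              # streaming GPU reference pollutes LLC
            gpu_addr += 1
    return hits / steps

shared = cpu_hit_rate(False)    # GPU traffic evicts the CPU working set
bypass = cpu_hit_rate(True)     # GPU goes straight to memory
```

    In this toy model the CPU working set fits the cache only when the GPU bypasses it, reproducing the qualitative argument above.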

  14. New computing systems and their impact on structural analysis and design

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.
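
    The symmetric/antisymmetric splitting at the heart of the partitioning strategy can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the reflection operator here is plain index reversal, standing in for the mirror operator of an actual finite element model.

```python
def split_response(u):
    """Split a response vector u, sampled on a structure with a mirror
    plane, into symmetric and antisymmetric parts, u = u_s + u_a, where
    reflecting the structure leaves u_s unchanged and negates u_a.
    The reflection here simply reverses the DOF ordering (a stand-in
    for the mirror operator of a real finite element model)."""
    r = u[::-1]                                   # reflected response
    u_s = [(a + b) / 2 for a, b in zip(u, r)]     # mirror-even part
    u_a = [(a - b) / 2 for a, b in zip(u, r)]     # mirror-odd part
    return u_s, u_a

u = [3.0, 1.0, 4.0, 1.0, 5.0]
u_s, u_a = split_response(u)
# Each part is determined by half the DOFs (the other half follows by
# reflection), which is what allows the work to be split across processors.
print(u_s)
print(u_a)
```

Because each component is fixed by the values on one half of the model, each can be computed independently with a fraction of the degrees of freedom, which is the source of the parallelism described above.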

  15. Mass storage at NSA

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1993-01-01

    The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with its industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom products were architected, designed, and developed without vendor partners over the past two decades to field workable systems that could handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as the Braegen Automated Tape Libraries (ATLs), the IBM 3850, and the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared-disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions, using CDC Cyber computers as the control processors. It served us well and was only recently removed from production usage.

  16. Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.

    NASA Astrophysics Data System (ADS)

    Feldman, Michael Robert

    Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.

  17. Calibration of the Geosar Dual Frequency Interferometric SAR

    NASA Technical Reports Server (NTRS)

    Chapine, Elaine

    1999-01-01

    GeoSAR is an airborne, interferometric Synthetic Aperture Radar (InSAR) system for terrain mapping, currently under development by a consortium including NASA's Jet Propulsion Laboratory (JPL), Calgis, Inc., and the California Department of Conservation (CalDOC), with funding provided by the Topographic Engineering Center (TEC) of the U.S. Army Corps of Engineers and the Defense Advanced Research Projects Agency (DARPA). The radar simultaneously maps swaths on both sides of the aircraft at two frequencies, X-Band and P-Band. For the P-Band system, data are collected for two across-track interferometric baselines and at the crossed polarization. The aircraft position and attitude are measured using two Honeywell Embedded GPS Inertial Navigation Units (EGI) and an Ashtech Z12 GPS receiver. The mechanical orientation and position of the antennas are actively measured using a Laser Baseline Metrology System (LBMS). In the GeoSAR motion measurement software, these data are optimally combined with data from a nearby ground station, using Ashtech PNAV software, to produce the position, orientation, and baseline information used to process the dual-frequency radar data. Proper calibration of the GeoSAR system is essential to obtaining digital elevation models (DEMs) with the required sub-meter planimetric and vertical accuracies. Calibration begins with the determination of the yaw and pitch biases for the two EGI units. Common range delays are determined for each mode, along with differential time and phase delays between channels. Because the antennas are measured by the LBMS, baseline calibration consists primarily of measuring a constant offset between the mechanical center and the electrical phase center of the antennas. A phase screen, an offset to the interferometric phase difference as a function of absolute phase, is applied to the interferometric data to compensate for multipath and leakage. Calibration parameters are calculated for each of the ten processing modes, each of the operational bandwidths (80 and 160 MHz), and each aircraft altitude. In this talk we will discuss the layout of calibration sites, the synthesis of data from multiple flights to improve the calibration, methods for determining time and phase delays, and techniques for determining radiometric and polarimetric quantities. We will describe how calibration quantities are incorporated into the processor and pre-processor. We will demonstrate our techniques applied to GeoSAR data and assess the stability and accuracy of the calibration. This will be compared with the modeled performance determined from calibrating the output of a point-target simulator. The details of baseline determination and phase screen calculation are covered in related talks.

  18. Mahali: Space Weather Monitoring Using Multicore Mobile Devices

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Lind, F. D.; Coster, A. J.; Erickson, P. J.; Semeter, J. L.

    2013-12-01

    Analysis of Total Electron Content (TEC) measurements derived from Global Positioning System (GPS) signals has led to revolutionary new data products for space weather monitoring and ionospheric research. However, the current sensor network is sparse, especially over the oceans and in regions like Africa and Siberia, and the full potential of dense, global, real-time TEC monitoring remains to be realized. The Mahali project will prototype a revolutionary architecture that uses mobile devices, such as phones and tablets, to form a global space weather monitoring network. Mahali exploits the existing GPS infrastructure - more specifically, delays in multi-frequency GPS signals observed at the ground - to acquire a vast set of global TEC projections, with the goal of imaging multi-scale variability in the global ionosphere at unprecedented spatial and temporal resolution. With connectivity available worldwide, mobile devices are excellent candidates to establish crowd-sourced global relays that feed multi-frequency GPS sensor data into a cloud processing environment. Once the data are within the cloud, it is relatively straightforward to reconstruct the structure of the space environment and its dynamic changes. This vision is made possible by advances in multicore technology that have transformed mobile devices into parallel computers with several processors on a chip. For example, local data can be pre-processed, validated against other sensors nearby, and aggregated when transmission is temporarily unavailable. Intelligent devices can also autonomously decide the most practical way of transmitting data in any given context, e.g., over cell networks or Wi-Fi, depending on availability, bandwidth, cost, energy usage, and other constraints. In the long run, Mahali facilitates data collection from remote locations such as deserts or the oceans. For example, mobile devices on ships could collect time-tagged measurements that are transmitted later, when connectivity becomes available. Our concept of the overall Mahali system will employ both auto-tuning and machine learning techniques to cope with the opportunistic nature of data collection, computational load distribution on mobile devices and in the cloud, and fault tolerance in a dynamically changing network. "Kila mahali" means "everywhere" in Swahili. This project will follow that spirit by enabling space weather data collection even in the most remote places, dramatically reducing the observational gaps that exist in space weather research today. The dense network may enable the use of the entire ionosphere as a sensor to monitor geophysical events from earthquakes to tsunamis and other natural disasters.
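
    The core measurement Mahali relies on, TEC from multi-frequency GPS delays, follows the standard first-order ionospheric relation. The sketch below uses the published GPS L1/L2 frequencies; the pseudorange values are invented for illustration, and real processing must also remove inter-frequency biases and noise.

```python
def slant_tec(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """Estimate slant TEC (in TEC units, 1 TECU = 1e16 el/m^2) from the
    differential delay of dual-frequency GPS pseudoranges, using the
    standard first-order ionospheric relation
        TEC = (P2 - P1) / 40.3 * f1^2 * f2^2 / (f1^2 - f2^2).
    Defaults are the GPS L1 and L2 carrier frequencies."""
    geom = (f1_hz**2 * f2_hz**2) / (f1_hz**2 - f2_hz**2)
    tec_el_per_m2 = (p2_m - p1_m) / 40.3 * geom
    return tec_el_per_m2 / 1e16   # convert electrons/m^2 to TECU

# For GPS L1/L2, a differential delay of about 0.105 m corresponds to 1 TECU,
# so a 1.05 m difference is roughly 10 TECU:
print(f"{slant_tec(p1_m=0.0, p2_m=1.05):.1f} TECU")
```

This is the per-link quantity that, gathered from many receivers, feeds the tomographic imaging described above.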

  19. JPL/USC GAIM: Validating COSMIC and Ground-Based GPS Assimilation Results to Estimate Ionospheric Electron Densities

    NASA Astrophysics Data System (ADS)

    Komjathy, A.; Wilson, B.; Akopian, V.; Pi, X.; Mannucci, A.; Wang, C.

    2008-12-01

    We seem to be in the midst of a revolution in ionospheric remote sensing, driven by the abundance of ground- and space-based GPS receivers, new UV remote sensing satellites, and the advent of data assimilation techniques for space weather. In particular, the COSMIC 6-satellite constellation was launched in April 2006. COSMIC now provides unprecedented global coverage of GPS occultation measurements, each of which yields electron density information with ~1 km vertical resolution. Calibrated measurements of ionospheric delay (total electron content, or TEC) suitable for input into assimilation models are currently made available in near real time (NRT) from COSMIC with a latency of 30 to 120 minutes. The University of Southern California (USC) and the Jet Propulsion Laboratory (JPL) have jointly developed a real-time Global Assimilative Ionospheric Model (GAIM) to monitor space weather, study storm effects, and provide ionospheric calibration for DoD customers and NASA flight projects. JPL/USC GAIM is a physics-based 3D data assimilation model that uses both 4DVAR and Kalman filter techniques to solve for the ion and electron density state and key drivers such as equatorial electrodynamics, neutral winds, and production terms. Daily (delayed) GAIM runs can accept as input ground GPS TEC data from 1200+ sites, occultation links from CHAMP, SAC-C, and the COSMIC constellation, UV limb and nadir scans from the TIMED and DMSP satellites, and in situ data from a variety of satellites (DMSP and C/NOFS). Real-Time GAIM (RTGAIM) ingests multiple data sources in real time, updates the 3D electron density grid every 5 minutes, and solves for improved drivers every 1-2 hours. Since our forward physics model and the adjoint model were expressly designed for data assimilation and computational efficiency, all of this can be accomplished on a single dual-processor Unix workstation. Customers are currently evaluating the accuracy of JPL/USC GAIM 'nowcasts' for ray tracing applications and trans-ionospheric path delay calibration. In the presentation, we will discuss the expected impact of NRT COSMIC occultation and NRT ground-based measurements and present validation results for ingest of COSMIC data into GAIM using measurements from World Days. We will quality-check our COSMIC-derived products by comparing Abel profiles and JPL-processed results. Furthermore, we will validate GAIM assimilation results using incoherent scatter radar measurements from the Arecibo, Jicamarca, and Millstone Hill datasets. We will conclude by characterizing the improved electron density states using dual-frequency altimeter-derived Jason vertical TEC measurements.
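
    The Kalman-filter side of such assimilation can be illustrated with a minimal scalar measurement update. This is a generic textbook sketch, not GAIM's implementation, and the numbers are invented: the same blend of forecast and observation, weighted by their variances, is what a full scheme applies to a 3D electron-density state.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: blend a forecast x (variance P)
    with an observation z (variance R)."""
    K = P / (P + R)           # Kalman gain: trust split by relative variance
    x_new = x + K * (z - x)   # analysis state
    P_new = (1 - K) * P       # reduced analysis variance
    return x_new, P_new

# Forecast of 20 TECU (variance 9) assimilating an observation of 26 TECU
# (variance 1): the analysis is pulled strongly toward the observation.
x, P = kalman_update(x=20.0, P=9.0, z=26.0, R=1.0)
print(x, P)
```

With a forecast nine times more uncertain than the observation, the gain is 0.9, so the analysis lands at 25.4 TECU with variance 0.9.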

  20. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.

  1. Message Passing and Shared Address Space Parallelism on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.

  2. Project Aether Aurora: STEM outreach near the arctic circle

    NASA Astrophysics Data System (ADS)

    Longmier, B. W.; Bering, E. A.

    2012-12-01

    Project Aether is a program designed to immerse high-school through graduate students in field research in several STEM fields. The program leaders launch high-altitude weather balloons in collaboration with schools and students to teach physics concepts and experimental research skills and to make space exploration accessible to students. A weather balloon lifts a specially designed payload package composed of HD cameras, GPS tracking devices, and other science equipment. The payload is constructed and attached to the balloon by the students with low-cost materials. The balloon and payload are launched with FAA clearance from a site chosen based on wind patterns and predicted landing locations. The balloon ascends over 2 hours to a maximum altitude of 100,000 feet, where it bursts and allows the payload to descend slowly on a built-in parachute. The balloon's location is monitored during its flight by GPS satellite relay. Most of the science and video data are recorded on SD cards using an Arduino digitizer. The payload is located using the GPS device, and the science data are recovered from the payload and shared with the students. In April 2012, Project Aether leaders conducted a field campaign near Fairbanks, Alaska, sending several student-built experiments to an altitude of 30 km beneath several strong auroral displays. Auroral physics experiments that can be done on ultra-small balloons (5 cubic meters) include electric field and magnetic fluctuation observations, full-spectrum and narrow-band optical imaging, GPS monitoring of the total electron content of the ionosphere, x-ray detection, and infrared and UV spectroscopy. The actual undergraduate student experiments will be reviewed and some data presented. (Figure: balloon deployment beneath the aurora, Fairbanks, Alaska, 2012.)

  3. Collision-induced rotation in an arc-continent collision: Constrained by continuous GPS observations in Mindoro, Philippines

    NASA Astrophysics Data System (ADS)

    Rau, R.; Hung, H.; Yang, C.; Tsai, M.; Ching, K.; Bacolcol, T.; Solidum, R.; Chang, W.

    2012-12-01

    Mindoro Island, situated at the southern end of the Manila Trench, is a modern arc-continent collision. Seismic activity in Mindoro concentrates mainly in the northern segment of the island as part of the Manila subduction processes; in contrast, seismicity in the middle and southern parts of the island is rather diffuse. Although Mindoro Island has experienced intense seismic activity and is a type example of arc-continent collision, the modern mode of deformation of the Mindoro collision remains unclear. We have operated eight dual-frequency continuous GPS stations on the island since May 2010. The questions we want to address using continuous GPS observations are: (1) Is compression still occurring within the Mindoro collision? Has it ceased, as the diffuse seismicity suggests, or are the thrust faults locked? (2) What is the mode of deformation in the Mindoro collision, and what roles do thrust and strike-slip faults play in it? (3) How does the Mindoro collision compare with other collisions, such as the Taiwan orogen? Do they share similar characteristics in the subduction-collision transition zone? From the first two years of GPS measurements, taking the Sablayan site near the southern end of the Manila Trench as the reference station, a large counterclockwise rotation from south to north, with horizontal velocities of 1.9-31.1 mm/yr at azimuths of 165 to 277 degrees, is found in the island. The deformation of Mindoro is similar to the pattern of the collision-to-subduction transition zone in northeastern Taiwan. This result suggests that collision-induced rotation is occurring in Mindoro Island and that the Mindoro arc-continent collision is still active.

  4. Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.

    PubMed

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-10-29

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near one another. Each robot detects pedestrians in its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN) data association. The tracking data are broadcast to the other robots through intercommunication and combined using the covariance intersection (CI) method. For pedestrian tracking, each robot estimates its own pose using real-time kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, an individual robot can recognize pedestrians that it cannot see directly. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that conventional centralized architectures cannot achieve.
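
    The covariance intersection step can be sketched for the scalar (1-D) case. This is the generic CI formula, not the authors' code, and the fixed weight of 0.5 is a simplification; practical implementations choose the weight to minimize the fused covariance.

```python
def covariance_intersection(x1, p1, x2, p2, omega=0.5):
    """Fuse two scalar estimates (x1, p1) and (x2, p2) whose mutual
    correlation is unknown, via covariance intersection:
        1/p = w/p1 + (1 - w)/p2
        x   = p * (w*x1/p1 + (1 - w)*x2/p2)
    Unlike a naive Kalman fusion, CI remains consistent even when the
    two trackers have already shared information."""
    p = 1.0 / (omega / p1 + (1.0 - omega) / p2)
    x = p * (omega * x1 / p1 + (1.0 - omega) * x2 / p2)
    return x, p

# Two robots track the same pedestrian's 1-D position (meters); the fused
# estimate leans toward the lower-variance track.
x, p = covariance_intersection(x1=10.0, p1=4.0, x2=12.0, p2=1.0)
print(x, p)
```

This robustness to unknown cross-correlation is what makes CI suitable for the decentralized, serverless fusion described above, where each robot may indirectly reuse the others' data.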

  5. A Robust High-Performance GPS L1 Receiver with Single-Stage Quadrature Radio-Frequency Circuit

    NASA Astrophysics Data System (ADS)

    Liu, Jianghua; Xu, Weilin; Wan, Qinq; Liu, Tianci

    2018-03-01

    A low-power, current-reuse, single-stage quadrature radio-frequency part (SQRF) is proposed for a GPS L1 receiver in a 180 nm CMOS process. The proposed circuit consists of an LNA, a mixer, and a QVCO, and is called the QLMV cell. A two-block stacked topology is adopted in this design: the parallel QVCO and mixer placed on top form the upper stacked block, and the LNA placed on the bottom forms the other. The two blocks share the bias current and thereby achieve low power consumption. To improve stability, a floating current source is proposed; it isolates the local oscillator signal from the input RF signal, which makes the whole circuit robust. The results show a conversion gain of 34 dB, a noise figure of 3 dB, phase noise of -110 dBc/Hz at 1 MHz offset, and an IIP3 of -20 dBm. The proposed circuit dissipates 1.7 mW from a 1 V supply.

  6. CORS911: Real-Time Subsidence Monitoring of the Napoleonville Salt Dome Sinkhole Using GPS

    NASA Astrophysics Data System (ADS)

    Kent, J. D.

    2013-12-01

    The sinkhole associated with the Napoleonville salt dome in Assumption Parish, Louisiana, threatens the stability of Highway 70, a state-maintained route. To mitigate potential damage to the highway and address issues of public safety, a program of research and decision support has been implemented to provide long-term measurements of surface stability using continuously operating GPS reference stations (CORS). Four CORS sites were installed in the vicinity of the sinkhole to measure the horizontal and vertical motion of each site relative to the others and to a fixed location outside the study area. Differential motions measured by integrity monitoring software are summarized for response agencies tasked with ensuring public safety and the stability of the highway, a designated hurricane evacuation route. Implementation experience and intermediate findings will be shared and discussed, and strategies for monitoring random and systematic biases detected in the system are presented. (Figure: location of the CORS sites used to monitor surface stability along Highway 70 near the Bayou Corne sinkhole.)

  7. A Comparison of Three Programming Models for Adaptive Applications

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for the different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the use of explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.

  8. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  9. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
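
    A simplified relative of the chain-partitioning problems treated here can be sketched directly: split a chain of module weights into at most k contiguous blocks, one per processor, minimizing the bottleneck (maximum block load). The sketch uses binary search on the bottleneck value rather than the paper's Sum-Bottleneck path algorithm, and omits communication costs; the weights are invented.

```python
def partition_chain(weights, k):
    """Minimal bottleneck load when a chain of module weights is split
    into at most k contiguous blocks, one block per processor.
    Binary search on the candidate bottleneck; a greedy scan checks
    feasibility (communication costs omitted for simplicity)."""
    def blocks_needed(cap):
        """Fewest contiguous blocks with every block load <= cap."""
        blocks, load = 1, 0
        for w in weights:
            if load + w > cap:
                blocks, load = blocks + 1, 0   # start a new block
            load += w
        return blocks

    lo, hi = max(weights), sum(weights)        # bottleneck bounds
    while lo < hi:
        mid = (lo + hi) // 2
        if blocks_needed(mid) <= k:
            hi = mid                           # feasible: try tighter
        else:
            lo = mid + 1                       # infeasible: relax
    return lo

# Ten modules over three processors:
print(partition_chain([4, 7, 2, 6, 3, 8, 1, 5, 9, 2], k=3))
```

The greedy feasibility check works because blocks must be contiguous along the chain, which is the same structural constraint the path-based algorithms in the paper exploit.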

  10. Optimizing CMS build infrastructure via Apache Mesos

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad

    2015-12-01

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  11. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch-switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with a single-processor dynamic simulation solution. High-performance computing (HPC) based parallel computing is a promising technology for speeding up the computation and facilitating the simulation process. This paper presents two parallel implementations of power grid dynamic simulation, using Open Multi-Processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance for running parallel dynamic simulation is compared and demonstrated.
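
    The two styles can be mimicked with Python's standard library: threads sharing one address space stand in for OpenMP, and processes exchanging explicit messages stand in for MPI. This is a toy reduction, not a dynamic simulation, and the worker counts and data are invented.

```python
import threading
import multiprocessing as mp

def shared_memory_sum(values, n_workers=4):
    """OpenMP-style: worker threads share one address space and combine
    their partial sums into a single variable under a lock (the analogue
    of an OpenMP reduction clause)."""
    total, lock = [0.0], threading.Lock()
    def work(chunk):
        s = sum(chunk)            # local work, no communication needed
        with lock:
            total[0] += s         # shared-memory reduction
    chunks = [values[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

def _partial_sum(chunk, q):
    q.put(sum(chunk))             # explicit message back to the root

def message_passing_sum(values, n_workers=4):
    """MPI-style: each process owns its slice of the data and sends its
    partial sum as an explicit message; the parent plays the root of
    an MPI_Reduce."""
    q = mp.Queue()
    chunks = [values[i::n_workers] for i in range(n_workers)]
    procs = [mp.Process(target=_partial_sum, args=(c, q)) for c in chunks]
    for p in procs:
        p.start()
    total = sum(q.get() for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    data = list(range(1, 101))
    print(shared_memory_sum(data), message_passing_sum(data))
```

The contrast mirrors the paper's point: the shared-memory version communicates implicitly through a common variable, while the message-passing version must own, package, and send its data explicitly.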

  12. “It is not the fading candle that one expects”: general practitioners’ perspectives on life-preserving versus “letting go” decision-making in end-of-life home care

    PubMed Central

    Renterghem, Veerle Van; Pype, Peter; Aelbrecht, Karolien; Derese, Anselme; Deveugele, Myriam

    2015-01-01

    Background Many general practitioners (GPs) are willing to provide end-of-life (EoL) home care for their patients. International research on GPs’ approach to care in patients’ final weeks of life showed a combination of palliative measures with life-preserving actions. Aim To explore the GP’s perspective on life-preserving versus “letting go” decision-making in EoL home care. Design Qualitative analysis of semi-structured interviews with 52 Belgian GPs involved in EoL home care. Results Nearly all GPs adopted a palliative approach and an accepting attitude towards death. The erratic course of terminal illness can challenge this approach. Disruptive medical events threaten the prospect of a peaceful end-phase and death at home and force the GP either to maintain the patient’s (quality of) life for the time being or to recognize the event as a step to life closure and “letting the patient go”. Making the “right” decision was very difficult. Influencing factors included: the nature and time of the crisis, a patient’s clinical condition at the event itself, a GP’s level of determination in deciding and negotiating “letting go”, and the patient’s/family’s wishes and preparedness regarding this death. Hospitalization was often a way out. Conclusions GPs regard alternation between palliation and life-preservation as part of palliative care. They feel uncertain about their mandate in deciding and negotiating the final step to life closure. A shortage of knowledge of (acute) palliative medicine as one cause of difficulties in letting-go decisions may be underestimated. Sharing all these professional responsibilities with the specialist palliative home care teams would lighten a GP’s burden considerably. Key Points A late transition from a life-preserving mindset to one of “letting go” has been reported as a reason why physicians resort to life-preserving actions in an end-of-life (EoL) context; we investigated GPs’ perspectives on this matter. Not all GPs involved in EoL home care adopt a “letting go” mindset; for those who do, this mindset is challenged by the erratic course of terminal illness. GPs prioritize the quality of the remaining life and the serenity of the dying process, which is threatened by disruptive medical events. Making the “right” decision is difficult; GPs feel uncertain about their own role and responsibility in deciding and negotiating the final step to life closure. PMID:26654583

  13. Does sharing the electronic health record in the consultation enhance patient involvement? A mixed-methods study using multichannel video recording and in-depth interviews in primary care.

    PubMed

    Milne, Heather; Huby, Guro; Buckingham, Susan; Hayward, James; Sheikh, Aziz; Cresswell, Kathrin; Pinnock, Hilary

    2016-06-01

    Sharing the electronic health-care record (EHR) during consultations has the potential to facilitate patient involvement in their health care, but research about this practice is limited. We used multichannel video recordings to identify examples and examine the practice of screen-sharing within 114 primary care consultations. A subset of 16 consultations was viewed by the general practitioner and/or patient in 26 reflexive interviews. Screen-sharing emerged as a significant theme and was explored further in seven additional patient interviews. Final analysis involved refining themes from interviews and observation of videos to understand how screen-sharing occurred, and its significance to patients and professionals. Eighteen (16%) of 114 videoed consultations involved instances of screen-sharing. Screen-sharing occurred in six of the subset of 16 consultations with interviews and was a significant theme in 19 of 26 interviews. The screen was shared in three ways: 'convincing' the patient of a diagnosis or treatment; 'translating' between medical and lay understandings of disease/medication; and by patients 'verifying' the accuracy of the EHR. However, patients and most GPs perceived the screen as the doctor's domain, not to be routinely viewed by the patient. Screen-sharing can facilitate patient involvement in the consultation, depending on the way in which sharing comes about, but the perception that the record belongs to the doctor is a barrier. To exploit the potential of sharing the screen to promote patient involvement, there is a need to reconceptualise and redesign the EHR. © 2014 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  14. Efficiency of static core turn-off in a system-on-a-chip with variation

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-29

    A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.
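The comparison step in this claim can be sketched as follows (an illustrative Python fragment with hypothetical names; the patent publishes no code):

```python
def select_core_to_turn_off(design_stage_core, testing_stage_core):
    """Compare the design-stage (simulation) and testing-stage turn-off
    analyses. The third output is produced only when both analyses refer
    to the same core; otherwise no core is selected by this step."""
    if design_stage_core == testing_stage_core:
        return design_stage_core  # third output: the agreed-upon core
    return None  # the two analyses disagree
```

For example, if simulation at design time recommends turning off core 3 and post-fabrication testing also identifies core 3, the method outputs core 3.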

  15. Evaluation of a mental health training intervention for multidisciplinary teams in primary care in Brazil: a pre- and posttest study.

    PubMed

    Goncalves, Daniel A; Fortes, Sandra; Campos, Monica; Ballester, Dinarte; Portugal, Flávia Batista; Tófoli, Luis Fernando; Gask, Linda; Mari, Jair; Bower, Peter

    2013-01-01

The aim of this research was to investigate whether a training intervention to enhance collaboration between mental health and primary care professionals improved the detection and management of mental health problems in primary health care in four large cities in Brazil. The training intervention was a multifaceted program over 96 h focused on development of a shared care model. A quasi-experimental study design was undertaken with assessment of performance by nurses and general practitioners (GPs) pre- and postintervention. Rates of recognition of mental health disorders (compared with the General Health Questionnaire) were the primary outcome, while self-reports of patient-centered care, psychosocial interventions and referral were the secondary outcomes. Six to eight months postintervention, no changes were observed in terms of rate of recognition across the entire sample. Nurses significantly increased their recognition rates (from 23% to 39%, P=.05), while GPs demonstrated a significant decrease (from 42% to 30%, P=.04). There were significant increases in reports of patient-centered care, but no changes in other secondary outcomes. Training professionals in a shared care model was not associated with consistent improvements in the recognition or management of mental health problems. Although instabilities in the local context may have contributed to the lack of effects, wider changes in the system of care may be required to augment training and encourage reliable changes in behavior, and more specific educational models are necessary. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. The Data Processor of the JEM-EUSO pathfinders

    NASA Astrophysics Data System (ADS)

    Scotti, V.; Osteria, G.

    2014-06-01

JEM-EUSO is a wide-angle refractive UV telescope being proposed for attachment to the Japanese Experiment Module on the ISS. The main goal of the mission is to study Extreme Energy Cosmic Rays. Two pathfinder missions are now in progress: EUSO-TA and EUSO-Balloon. The EUSO-TA project foresees the installation of a telescope prototype at the Telescope Array site. The aim of this project is to calibrate the telescope with the TA fluorescence detector. An initial run of one year starting from 2013 is foreseen. EUSO-Balloon is a pathfinder mission in which a prototype telescope will be mounted on a stratospheric balloon. The main aim of this mission is to perform an end-to-end test of all the key technologies and instrumentation of JEM-EUSO detectors and to prove the global detection chain. EUSO-Balloon will measure the UV background, fundamental for the development of the simulations. EUSO-Balloon has the potential to detect Extensive Air Showers from above, paving the way for any future space-based EECR observatory. We will present the Data Processor of the pathfinders. The DP is the component of the Electronics System which performs data management and instrument control. The DP controls the front-end electronics, performs 2nd-level trigger filtering, tags events with arrival time and payload position through a GPS system, manages mass memory for data storage, measures live and dead time of the telescope, provides signals for time synchronization of the event, performs housekeeping monitoring and handles the interface to the telemetry system. We will describe the main components of the DP, the state-of-the-art and the results of the tests carried out.

  17. Calibrating thermal behavior of electronics

    DOEpatents

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2017-07-11

    A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
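The two-step procedure described above can be sketched as a least-squares calibration, assuming (for illustration only; the patent does not specify the model) a linear relationship between an indirect thermal signal and the measured temperature:

```python
def calibrate(indirect, measured):
    """Calibration step: least-squares line fit so that
    measured_temp ~ slope * indirect_value + intercept."""
    n = len(indirect)
    mx = sum(indirect) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in indirect)
    sxy = sum((x - mx) * (y - my) for x, y in zip(indirect, measured))
    slope = sxy / sxx
    return slope, my - slope * mx

def estimate_temperature(indirect_value, relationship):
    """Operation step: apply the calibrated relationship to indirect
    thermal data gathered during actual operation of the processor."""
    slope, intercept = relationship
    return slope * indirect_value + intercept
```

Once the relationship is established during calibration, only the indirect data need be collected at run time.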

  18. Calibrating thermal behavior of electronics

    DOEpatents

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2016-05-31

    A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.

  19. Calibrating thermal behavior of electronics

    DOEpatents

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2017-01-03

    A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.

  20. Experimenting Galileo on Board the International Space Station

    NASA Technical Reports Server (NTRS)

Fantinato, Samuele; Pozzobon, Oscar; Gamba, Giovanni; Chiara, Andrea Dalla; Montagner, Stefano; Giordano, Pietro; Crisci, Massimo; Enderle, Werner; Chelmins, David T.; Sands, Obed S.

    2016-01-01

The SCaN Testbed is an advanced integrated communications system and laboratory facility installed on the International Space Station (ISS) in 2012. The testbed incorporates a set of new-generation Software Defined Radio (SDR) technologies intended to allow researchers to develop, test, and demonstrate new communications, networking, and navigation capabilities in the actual environment of space. Qascom, in cooperation with ESA and NASA, is designing a Software Defined Radio GalileoGPS Receiver capable of providing accurate positioning and timing, to be installed on the ISS SCaN Testbed. The GalileoGPS waveform will be operated in the JPL SDR, which consists of several hardware components that can be used for experimentation in L-Band and S-Band. The JPL SDR includes an L-Band Dorne Margolin antenna mounted onto a choke ring. The antenna is connected to a radio front end capable of providing one-bit samples for the three GNSS frequencies (L1, L2 and L5) at 38 MHz, exploiting subharmonic sampling. The baseband processing is then performed by an ATMEL AT697 processor (100 MIPS) and two Virtex 2 FPGAs. The JPL SDR supports the STRS (Space Telecommunications Radio System) standard, which provides common waveform software interfaces, methods of instantiation, operation, and testing among different compliant hardware and software products. The standard foresees the development of applications that are modular, portable, reconfigurable, and reusable. The developed waveform uses the STRS infrastructure-provided application program interfaces (APIs) and services to load, verify, execute, change parameters, terminate, or unload an application. The project is divided into three main phases. 1) Design and development of the GalileoGPS waveform for the SCaN Testbed, starting from Qascom's existing GNSS SDR receiver.
The baseline design is limited to a single-frequency Galileo and GPS L1E1 receiver, although the activity will also assess the feasibility of a dual-frequency implementation (L1E1+L5E5a) on the same SDR platform. 2) Qualification and testing of the GalileoGPS waveform using ground systems available at the NASA Glenn Research Center. Experimenters can have access to two SCaN Testbed ground-based systems for development and verification: the Experimenter Development System (EDS), which is intended to provide an initial opportunity for software testing and basic functional validation, and the Ground Integration Unit (GIU), which is a high-fidelity version of the SCaN Testbed flight system and is therefore used for more controlled final development testing and verification testing. 3) In-orbit validation and experimentation: the experimentation phase will consist of collecting raw measurements (pseudorange, carrier phase, C/N0) in space, assessing the quality of the measurements and the receiver performance in terms of signal acquisition, tracking, etc., and finally computing positioning in space (position, velocity and time) and assessing its performance. (Complete abstract in attached document).

  1. JPL/USC GAIM: Using COSMIC Occultations in a Real-Time Global Ionospheric Data Assimilation Model

    NASA Astrophysics Data System (ADS)

    Mandrake, L.; Komjathy, A.; Wilson, B. D.; Pi, X.; Hajj, G.; Iijima, B.; Wang, C.

    2006-12-01

We are in the midst of a revolution in ionospheric remote sensing driven by the illuminating powers of ground and space-based GPS receivers, new UV remote sensing satellites, and the advent of data assimilation techniques for space weather. In particular, the COSMIC 6-satellite constellation launched in April 2006. COSMIC will provide unprecedented global coverage of GPS occultations (~5000 per day), each of which yields electron density information with unprecedented ~1 km vertical resolution. Calibrated measurements of ionospheric delay (total electron content or TEC) suitable for input into assimilation models will be available in near real-time (NRT) from the COSMIC project with a latency of 30 to 120 minutes. Similarly, NRT TEC data are available from two worldwide NRT networks of ground GPS receivers (~75 5-minute sites and ~125 more hourly sites, operated by JPL and others). The combined NRT ground and space-based GPS datasets provide a new opportunity to more accurately specify the 3-dimensional ionospheric density with a time lag of only 15 to 120 minutes. With the addition of the vertically-resolved NRT occultation data, the retrieved profile shapes will model the hour-to-hour ionospheric "weather" much more accurately. The University of Southern California (USC) and the Jet Propulsion Laboratory (JPL) have jointly developed a real-time Global Assimilative Ionospheric Model (GAIM) to monitor space weather, study storm effects, and provide ionospheric calibration for DoD customers and NASA flight projects. JPL/USC GAIM is a physics-based 3D data assimilation model that uses both 4DVAR and Kalman filter techniques to solve for the ion & electron density state and key drivers such as equatorial electrodynamics, neutral winds, and production terms.
Daily (delayed) GAIM runs can accept as input ground GPS TEC data from 1000+ sites, occultation links from CHAMP, SAC-C, and the COSMIC constellation, UV limb and nadir scans from the TIMED and DMSP satellites, and in situ data from a variety of satellites (DMSP and C/NOFS). RTGAIM ingests multiple data sources in real time, updates the 3D electron density grid every 5 minutes, and solves for improved drivers every 1-2 hours. Since our forward physics model and the adjoint model were expressly designed for data assimilation and computational efficiency, all of this can be accomplished on a single dual-processor Unix workstation. Customers are currently evaluating the accuracy of JPL/USC GAIM "nowcasts" for ray tracing applications and trans-ionospheric path delay calibration. In the talk, we will discuss the expected impact of COSMIC occultation data; show first results for ingest of COSMIC data using the GAIM Kalman filter; present validation of the GAIM electron density grid by comparisons to Abel profiles and independent datasets; describe recent improvements to the JPL/USC GAIM model; and describe our plans for NRT ingest of COSMIC data into RTGAIM.
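The Kalman-filter ingestion step mentioned above can be illustrated with a scalar measurement update (a textbook sketch, not GAIM's actual multivariate implementation; variable names are generic):

```python
def kalman_update(x_prior, p_prior, z, r, h=1.0):
    """One scalar Kalman measurement update.

    x_prior, p_prior: prior state estimate (e.g., electron density in one
                      grid cell) and its variance
    z, r:             observation (e.g., a TEC-derived value) and its
                      error variance
    h:                scalar observation operator mapping state to
                      observation space
    """
    gain = p_prior * h / (h * p_prior * h + r)      # Kalman gain
    x_post = x_prior + gain * (z - h * x_prior)     # corrected state
    p_post = (1.0 - gain * h) * p_prior             # reduced variance
    return x_post, p_post
```

Each 5-minute update cycle would apply many such corrections (in vector-matrix form) across the 3D density grid.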

  2. Present day crustal deformation of the Italian peninsula observed by permanent GPS stations

    NASA Astrophysics Data System (ADS)

    Devoti, Roberto; Esposito, Alessandra; Galvani, Alessandro; Pietrantonio, Grazia; Pisani, Anna Rita; Riguzzi, Federica; Sepe, Vincenzo

    2010-05-01

The Italian peninsula is a crucial area in the Mediterranean region for understanding the active deformation processes along the Nubia-Eurasia plate boundary. We present the velocity and strain rate fields of the Italian area derived from continuous GPS observations of more than 300 sites in the time span 1998-2009. The GPS networks were installed and managed by different institutions and for different purposes; altogether they cover the whole country with a mean inter-site distance of about 50 km and provide a valuable source of data to map the present-day kinematics of the region. The data processing is performed with the BERNESE software ver. 5.0, adopting a distributed session approach, with more than 10 clusters sharing common stations, each of them consisting of about 40 stations. Daily loosely constrained solutions are routinely produced for each cluster and then combined into a network daily loose solution. Subsequently, daily solutions are transformed into the chosen reference frame and the constrained time series are fitted using the complete covariance matrix, simultaneously estimating site velocities together with annual signals and sporadic offsets at epochs of instrumental changes. In this work we provide an updated detailed picture of the horizontal and vertical kinematics (velocity maps) and deformation pattern (strain rate maps) of the Italian area. The results show several crustal domains characterized by different velocity rates and styles of deformation.
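The time-series step — estimating a site velocity together with an annual signal — can be sketched as an ordinary least-squares fit (illustrative only; the authors fit constrained daily solutions using the complete covariance matrix within the BERNESE processing chain):

```python
import numpy as np

def fit_site_velocity(t_years, coord):
    """Fit coord(t) = a + v*t + c*cos(2*pi*t) + s*sin(2*pi*t) and return
    the linear-trend coefficient v (units of coord per year)."""
    A = np.column_stack([
        np.ones_like(t_years),         # constant offset
        t_years,                       # linear trend: site velocity
        np.cos(2 * np.pi * t_years),   # annual cosine term
        np.sin(2 * np.pi * t_years),   # annual sine term
    ])
    coeffs, *_ = np.linalg.lstsq(A, coord, rcond=None)
    return coeffs[1]
```

Instrumental offsets would add step functions at known epochs to the design matrix; they are omitted here for brevity.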

  3. Analysis of Plug-In hybrid Electric Vehicles' utility factors using GPS-based longitudinal travel data

    NASA Astrophysics Data System (ADS)

    Aviquzzaman, Md

The benefit of using a Plug-in Hybrid Electric Vehicle (PHEV) comes from its ability of substituting gasoline with electricity in operation. Defined as the share of distance traveled in the electric mode, the utility factor (UF) depends mostly on the battery capacity but also on many other factors, such as travel pattern and recharging pattern. Conventionally, the UFs are calculated from the daily vehicle miles traveled (DVMT) of vehicles by assuming motorists leave home in the morning with a full battery and return home in the evening. Such an assumption, however, ignores the impact of the heterogeneity in both travel and charging behavior. The main objective of the thesis is to compare the UF by using multiday GPS-based travel data with respect to charging decisions. This thesis employs global positioning system (GPS) based longitudinal travel data (covering 3-18 months) collected from 403 vehicles in the Seattle metropolitan area to investigate the impacts of such travel and charging behavior on UFs by analyzing the DVMT and home- and work-related tours. The UFs based on the DVMT are found close to those based on home-to-home tours. On the other hand, it is seen that workplace charging opportunities largely improve UFs if the battery capacity is no more than 50 miles. It is also found that the gasoline price does not have a significant impact on the UFs.
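Under the conventional assumption the thesis examines — a full overnight charge and no daytime charging — the UF computation from DVMT reduces to the following (illustrative Python, not code from the thesis):

```python
def utility_factor(daily_miles, electric_range):
    """Share of total distance driven electrically: each day the first
    `electric_range` miles deplete the battery, and any remaining miles
    are driven on gasoline."""
    electric = sum(min(d, electric_range) for d in daily_miles)
    return electric / sum(daily_miles)
```

For a 40-mile electric range and two days of 30 and 60 miles, UF = (30 + 40) / 90, about 0.78; workplace charging would raise this by resetting the battery mid-day.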

  4. Qualitative study to conceptualise a model of interprofessional collaboration between pharmacists and general practitioners to support patients' adherence to medication

    PubMed Central

    Rathbone, Adam P; Mansoor, Sarab M; Krass, Ines; Hamrosi, Kim; Aslani, Parisa

    2016-01-01

    Objectives Pharmacists and general practitioners (GPs) face an increasing expectation to collaborate interprofessionally on a number of healthcare issues, including medication non-adherence. This study aimed to propose a model of interprofessional collaboration within the context of identifying and improving medication non-adherence in primary care. Setting Primary care; Sydney, Australia. Participants 3 focus groups were conducted with pharmacists (n=23) and 3 with GPs (n=22) working in primary care. Primary and secondary outcome measures Qualitative investigation of GP and pharmacist interactions with each other, and specifically around supporting their patients’ medication adherence. Audio-recordings were transcribed verbatim and transcripts thematically analysed using a combination of manual and computer coding. Results 3 themes pertaining to interprofessional collaboration were identified (1) frequency, (2) co-collaborators and (3) nature of communication which included 2 subthemes (method of communication and type of communication). While the frequency of interactions was low, the majority were conducted by telephone. Interactions, especially those conducted face-to-face, were positive. Only a few related to patient non-adherence. The findings are positioned within contemporary collaborative theory and provide an accessible introduction to models of interprofessional collaboration. Conclusions This work highlighted that successful collaboration to improve medication adherence was underpinned by shared paradigmatic perspectives and trust, constructed through regular, face-to-face interactions between pharmacists and GPs. PMID:26983948

  5. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
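Hockney-style communication-cost models of the kind generalized here express transfer time as a startup latency plus a bandwidth term (a standard formulation, sketched with assumed parameter names):

```python
def transfer_time(n_words, t_startup, r_inf):
    """Hockney model t(n) = t0 + n / r_inf: time to move n words given
    startup cost t0 (seconds) and asymptotic bandwidth r_inf (words/s)."""
    return t_startup + n_words / r_inf

def n_half(t_startup, r_inf):
    """Message size at which the achieved bandwidth n / t(n) reaches
    half of the asymptotic bandwidth r_inf."""
    return t_startup * r_inf
```

The n_half metric makes the latency/bandwidth trade-off concrete: messages much smaller than n_half are latency-dominated, which is what drives the communication-complexity analysis of parallel numerical algorithms.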

  6. Implementations of BLAST for parallel computers.

    PubMed

    Jülich, A

    1995-02-01

The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799-residue protein query sequence and the protein database PIR were used.
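Database segmentation — the usual way BLAST-style searches are parallelized over a moderate number of processors — can be sketched as follows (an illustrative scheme, not the ported code itself):

```python
def partition_database(sequences, num_workers):
    """Deal database sequences round-robin into one chunk per worker.
    Each worker then scores the query against its own chunk, and the
    per-chunk hit lists are merged and re-ranked afterwards."""
    chunks = [[] for _ in range(num_workers)]
    for i, seq in enumerate(sequences):
        chunks[i % num_workers].append(seq)
    return chunks
```

Because each database sequence is scored independently, this decomposition needs almost no inter-processor communication until the final merge, which is why BLAST scales well up to a moderate processor count.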

  7. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  8. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  9. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    NASA Astrophysics Data System (ADS)

    Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.

    2011-12-01

Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with zero-overhead context switching, allowing low-level resource sharing (Intel Hyper-Threading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the Atlas event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores, by means of event-based parallelism, and final stage I/O synchronization. However, initial studies on 8- and 16-core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level, and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.

  10. Pneumocafé project: an inquiry on current COPD diagnosis and management among General Practitioners in Italy through a novel tool for professional education.

    PubMed

    Sanguinetti, Claudio M; De Benedetto, Fernando; Donner, Claudio F; Nardini, Stefano; Visconti, Alberto

    2014-01-01

Symptoms of COPD are frequently disregarded by patients and also by general practitioners (GPs) in early stages of the disease, which consequently is diagnosed when already at an advanced grade of severity. Underdiagnosis and undertreatment of COPD and scarce use of spirometry are widely recurrent, while a better knowledge of the disease and a wider use of spirometry would be critical to diagnose more of the still-neglected patients, do so at an earlier stage and properly treat established COPD. The aim of the Pneumocafé project is to improve, through an innovative approach, the diagnosis and management of COPD at primary care level, increasing the awareness of issues pertaining to early diagnosis, adequate prevention and correct treatment of the disease. Pneumocafé is based on informal meetings between GPs of various geographical zones of Italy and their reference respiratory specialist (RS), aimed at discussing the current practice in comparison to suggestions of official guidelines, analyzing the actual problems in diagnosing and managing COPD patients and sharing the possible solutions at the community level. In these meetings RSs faced many issues including patho-physiological mechanisms of bronchial obstruction, significance of clinical symptoms, patients' phenotyping, and clinical approach to diagnosis and long-term treatment, also reinforcing the importance of a timely diagnosis, proper long term treatment and the compliance to treatment. At the end of each meeting GPs had to fill in a questionnaire arranged by the scientific board of the project that included 18 multiple-choice questions concerning their approach to COPD management. The results of the analysis of these questionnaires are here presented. 1,964 questionnaires were returned from 49 RSs. 1,864 questionnaires out of those received (94.91% of the total) resulted properly compiled and form the object of the present analysis.
The 49 RSs, 37 males and 12 females, were distributed across Italy and practiced their profession both in public and private hospitals and in territorial sanitary facilities. GPs were 1,330 males (71.35%) and 534 females (28.64%), mean age 56.29 years (range 27-70 yrs). Mean duration of general practice was 25.56 years (range: 0.5-40 yrs) with a mean of 1,302.43 patients assisted by each GP and 2,427,741 patients assisted in all. The majority of GPs affirmed that in their patients COPD has a mean-to-great prevalence and a mean/high impact on their practice, preceded only by diabetes and heart failure. Three-quarters of GPs refer to COPD guidelines and most of them believe that a screening of their assisted patients at risk would enhance early diagnosis of COPD. Tobacco smoking is the main recognized cause of COPD but the actions carried out by GPs to help a patient give up smoking are still insufficient. The majority of GPs recognize spirometry as necessary for early COPD diagnosis, but the main obstacle pointed out to its wider use was the excessive time required for the spirometry to be performed. GPs' main reason for prescribing a bronchodilator is dyspnea, and the bronchodilators preferably prescribed are LABA and LAMA. Control of a patient's adherence to therapy is mainly carried out by GPs checking the number of drugs annually prescribed or asking the patient during a control visit. Finally, regarding how many COPD patients GPs believe are in their group of assisted patients, a mean range of 25-40 patients was reported, which is consistently below the forecast based on epidemiological data and the number of patients assisted by each GP. The results obtained with this project confirm the validity of this informal approach to professional education. Furthermore, this inquiry provided important insights about the general management of COPD and the process of integration between RS and GP activities on this disease condition in the long run.

  11. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different from the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.
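The failover pattern the patent describes can be sketched in a few lines. The class and function names below are illustrative only, not taken from the patent; this is a minimal software analogue of a spare processor being rapidly reconfigured to run a failed processor's application.

```python
# Hypothetical sketch of the reconfigurable/recoverable scheme described
# above: worker "processors" each run an application, and a spare is
# rapidly reconfigured to take over when one fails.

class Processor:
    def __init__(self, name, application=None):
        self.name = name
        self.application = application  # callable the processor executes
        self.healthy = True

    def execute(self, x):
        if not self.healthy:
            raise RuntimeError(f"{self.name} has failed")
        return self.application(x)

def rapid_recovery(spare, failed):
    """Reconfigure the spare to run the failed processor's application."""
    spare.application = failed.application
    return spare

# Two primary processors running different applications, plus a spare
# initially configured for a second application.
p1 = Processor("P1", application=lambda x: x * 2)      # first application
p2 = Processor("P2", application=lambda x: x + 100)    # second application
spare = Processor("SPARE", application=lambda x: x + 100)

p1.healthy = False                 # fault detected on P1
spare = rapid_recovery(spare, p1)  # spare now executes P1's application
print(spare.execute(21))
```

The point of the sketch is that reconfiguration is just a state transfer to an already-running spare, which is what makes the recovery "rapid" compared with restarting a processor from scratch.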

  12. Space Based Communications

    NASA Technical Reports Server (NTRS)

    Simpson, James; Denson, Erik; Valencia, Lisa; Birr, Richard

    2003-01-01

Current space lift launches on the Eastern and Western Range require extensive ground-based real-time tracking, communications and command/control systems. These are expensive to maintain and operate and cover only limited geographical areas. Future spaceports will require new technologies to provide greater launch and landing opportunities, support simultaneous missions, and offer enhanced decision support models and simulation capabilities. These ranges must also have lower costs and reduced complexity while continuing to provide unsurpassed safety to the public, flight crew, personnel, vehicles and facilities. Commercial and government space-based assets for tracking and communications offer many attractive possibilities to help achieve these goals. This paper describes two NASA proof-of-concept projects that seek to exploit the advantages of a space-based range: Iridium Flight Modem and Space-Based Telemetry and Range Safety (STARS). Iridium Flight Modem uses the commercial Iridium satellite system for extremely low cost, low rate two-way communications and has been successfully tested on four aircraft flights. A sister project at Goddard Space Flight Center's (GSFC) Wallops Flight Facility (WFF) using the Globalstar system has been tested on one rocket. The basic Iridium Flight Modem system consists of an L1 carrier Coarse/Acquisition (C/A)-Code Global Positioning System (GPS) receiver, an on-board computer, and a standard commercial satellite modem and antennas. STARS uses the much higher data rate NASA-owned Tracking and Data Relay Satellite System (TDRSS), a C/A-Code GPS receiver, an experimental low-power transceiver, a custom-built command and data handler processor, and digitized flight termination system (FTS) commands. STARS is scheduled to fly on an F-15 at Dryden Flight Research Center in the spring of 2003, with follow-on tests over the next several years.

  13. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

New version program summary. Program title: FLY 3.1. Catalogue identifier: ADSC_v2_0. Licensing provisions: yes. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. No. of lines in distributed program, including test data, etc.: 158 172. No. of bytes in distributed program, including test data, etc.: 4 719 953. Distribution format: tar.gz. Programming language: Fortran 90, C. Computer: Beowulf cluster, PC, MPP systems. Operating system: Linux, Aix. RAM: 100M words. Catalogue identifier of previous version: ADSC_v1_0. Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159. Does the new version supersede the previous version?: yes. Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed using the MPICH2 library on a PC Linux cluster. The performance of FLY now places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. 
The idea of building an interface between two codes with different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication schema was totally changed. The new version adopts the MPICH2 library, so FLY can now be executed on all Unix systems having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates the MPI window object for one-sided communication for all the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole moments, tree structure and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integrator schema, but this could be changed by the user. Unusual features: FLY uses the MPI-2 standard; the MPICH2 library was adopted on Linux systems. To run this version of FLY, the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca, with 512 nodes of 2 processors each and 2 GB RAM per processor, was used for performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN Card, "C" Version and "D" Version. Operating system: Linux SuSE SLES 8. 
The code was compiled using the mpif90 compiler version 8.1 with basic optimization options, in order to obtain performance figures that can be usefully compared with other generic clusters.
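The summary notes that FLY integrates with the leapfrog scheme. As a point of reference, here is a minimal kick-drift-kick leapfrog sketch in Python, demonstrated on a 1D harmonic oscillator rather than the N-body gravity FLY actually integrates; the symplectic property shows up as near-exact energy conservation over a full period.

```python
# Minimal leapfrog (kick-drift-kick) integrator sketch; the harmonic
# oscillator test problem is illustrative, not from FLY itself.

def leapfrog(x, v, accel, dt, steps):
    for _ in range(steps):
        v += 0.5 * dt * accel(x)   # half kick
        x += dt * v                # drift
        v += 0.5 * dt * accel(x)   # half kick
    return x, v

# Harmonic oscillator a(x) = -x, exact period 2*pi; integrate ~one period.
x, v = leapfrog(1.0, 0.0, lambda q: -q, dt=0.01, steps=628)
energy = 0.5 * (x * x + v * v)     # should stay very close to 0.5
print(x, v, energy)
```

Because leapfrog is symplectic, the energy error stays bounded instead of drifting, which is why it is the default choice in long N-body runs like those FLY performs.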

  14. Rectangular Array Of Digital Processors For Planning Paths

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.; Fossum, Eric R.; Nixon, Robert H.

    1993-01-01

Prototype 24 x 25 rectangular array of asynchronous parallel digital processors rapidly finds best path across two-dimensional field, which could be patch of terrain traversed by robotic or military vehicle. Implemented as single-chip very-large-scale integrated circuit. Excepting processors on edges, each processor communicates with four nearest neighbors along paths representing travel to north, south, east, and west. Each processor contains delay generator in form of 8-bit ripple counter, preset to 1 of 256 possible values. Operation begins with choice of processor representing starting point. It transmits signals to nearest-neighbor processors, which retransmit to other neighboring processors, and process repeats until signals propagate across entire field.
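The delay-counter wavefront above behaves like a hardware shortest-path search: the first signal to reach a cell has traveled the minimum-total-delay route. A software analogue of the same idea is sketched below; the function name and the small terrain grid are invented for illustration.

```python
# Software analogue of the wavefront chip: propagate a signal 4-way
# across a grid of per-cell delays; the earliest arrival time at a cell
# equals the cost of the best path to it.

import heapq

def first_arrival(delays, start, goal):
    """Earliest signal arrival time at goal, propagating N/S/E/W."""
    rows, cols = len(delays), len(delays[0])
    best = {start: delays[start[0]][start[1]]}
    heap = [(best[start], start)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return t
        if t > best.get((r, c), float("inf")):
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + delays[nr][nc]   # retransmission delay at neighbor
                if nt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return None

terrain = [[1, 9, 1],
           [1, 9, 1],
           [1, 1, 1]]
print(first_arrival(terrain, (0, 0), (0, 2)))  # cheapest detour around the 9s
```

The chip performs this expansion physically and in parallel; the priority queue here merely serializes the same first-arrival order.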

  15. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in a different way. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out; thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed; thus the hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions, with a brief description of both systems and a discussion of their differences, pros, and cons. PMID:22518097
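The operation both processor families accelerate is the 2D convolution itself. As a point of reference, here is a minimal pure-Python "valid" correlation; the 4x4 patch and the Sobel-style kernel are illustrative values, not data from either system.

```python
# Reference implementation of the 2D "valid" correlation that both
# frame-based and spike-based convolution processors accelerate.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge patch filtered with a 3x3 Sobel-style template.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
print(conv2d_valid(patch, sobel_x))  # strong response at the edge
```

A frame-based processor time-multiplexes the multiply-accumulate loop above over fixed hardware; a spike-based processor instead updates the output incrementally as each input event (pixel change) arrives.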

  16. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing.

    PubMed

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; Lecun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in a different way. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out; thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed; thus the hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions, with a brief description of both systems and a discussion of their differences, pros, and cons.

  17. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-10-01

We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE, used in GRAPE-tree mode, is approximately a factor 2 slower than that of VINE, used in host-only mode. 
Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE. Finally, we find that although parallel performance on small problems may reach a plateau beyond which more processors bring no additional speedup, performance never decreases, a factor important for running large simulations on many processors with individual time steps, where only a small fraction of the total particles require updates at any given moment.

  18. An Approach for Rapid Assessment of Seismic Hazards in Turkey by Continuous GPS Data

    PubMed Central

    Ozener, Haluk; Dogru, Asli; Unlutepe, Ahmet

    2009-01-01

The Earth is monitored every day by all kinds of sensors, which nowadays leads to an overflow of data in all branches of science, especially the Earth sciences. Data storage and data processing are problems to be solved by current technologies, as are access to and analysis of these large data sources. Once solutions have been created for collecting, storing and accessing data, the challenge becomes how to effectively share data, applications and processing resources across many locations. Global Positioning System (GPS) sensors are used as geodetic instruments to precisely detect crustal motion of the Earth's surface. Rapid access to data provided by GPS sensors is becoming increasingly important for deformation monitoring and rapid hazard assessments. Today, reliable and fast collection and distribution of data is a challenge, and advances in Internet technologies have made it easier to provide the needed data. This study describes a system that will be able to generate strain maps for seismic hazard analysis using data from continuous GPS stations. Strain rates are a key factor in seismic hazard analyses. Turkey is a country prone to earthquakes, with a long history of seismic hazards and disasters. This situation has led Earth scientists to focus their studies on Turkey in order to improve their understanding of the Earth's crustal structure and seismic hazards. Nevertheless, model construction, data access and analysis are often not as fast as expected; the combination of Internet technologies with continuous GPS sensors can be a solution to this problem. Such a system would have the potential to answer many questions important for assessing seismic hazards, such as how much stretching, squashing and shearing is taking place in different parts of Turkey, and how velocities change from place to place. Seismic hazard estimation is the most effective way to reduce earthquake losses. 
It is clear that reliability of data and on-line services will support the preparation of strategies for disaster management and planning to cope with hazards. PMID:22389619
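The core quantity such a system maps is the strain rate derived from station velocities. A hedged sketch of that step: given the components of the horizontal velocity-gradient tensor (which in practice would come from differencing GPS velocities between nearby stations), the 2D strain-rate and rotation-rate components follow directly. The function name and the numerical values below are invented for illustration, not Turkish GPS data.

```python
# Sketch: 2D strain-rate components from the horizontal velocity-gradient
# tensor of GPS station velocities. Inputs dvx_dx etc. are the partial
# derivatives of the east (vx) and north (vy) velocity components.

def strain_rates(dvx_dx, dvx_dy, dvy_dx, dvy_dy):
    exx = dvx_dx                      # normal strain rate along x (east)
    eyy = dvy_dy                      # normal strain rate along y (north)
    exy = 0.5 * (dvx_dy + dvy_dx)     # shear strain rate
    rot = 0.5 * (dvy_dx - dvx_dy)     # rigid-body rotation rate
    return exx, eyy, exy, rot

# Pure simple shear: eastward velocity grows northward; the decomposition
# shows shearing and rotation but no net stretching.
exx, eyy, exy, rot = strain_rates(dvx_dx=0.0, dvx_dy=2e-8,
                                  dvy_dx=0.0, dvy_dy=0.0)
print(exx, eyy, exy, rot)
```

Mapping these components over a grid of station triangles is what turns raw GPS velocities into the strain maps the abstract describes.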

  19. [Regional network for patients with dementia--carrying out Kumamoto model for dementia].

    PubMed

    Ikeda, Manabu

    2014-01-01

The Japanese government has tried to establish 150 Medical Centers for Dementia (MCDs) since 2008 to overcome the shortage of medical services for dementia. MCDs are required to provide specialized medical services for dementia and to connect with other community resources in order to contribute to building a comprehensive support network for patients with dementia. The main specific needs are as follows: 1) specialized medical consultation; 2) differential diagnosis and early intervention; 3) medical treatment for the acute stage of BPSD; 4) management of serious physical complications of dementia; 5) education for general physicians (GPs) and other community professionals. According to the population rate, two dementia medical centers were planned in Kumamoto Prefecture. However, this seemed too few to cover the vast Kumamoto area. Therefore, the local government and I proposed to the Japanese government that we build networks consisting of one core MCD in our university hospital and several regional MCDs in local mental hospitals. The local government selected seven (nine at present) centers according to area balance and the condition of their equipment. The Japanese government has recommended and funded such networks between core and regional centers since 2010. The main roles of the core center are as follows: 1) early diagnosis of conditions such as mild cognitive impairment, very mild Alzheimer's disease, dementia with Lewy bodies, and frontotemporal lobar degeneration, using comprehensive neuropsychological batteries and neuroimaging such as MRI and SPECT scans; 2) education for GPs; 3) training for young consultants. The core center holds case conferences at least every one to two months for all staff of the regional centers, to maintain the quality of all centers and to provide training opportunities in standardized international assessment scales. 
The main roles of the regional centers, meanwhile, are differential diagnosis, intervention for BPSD, and management of general medical problems through local networks with general hospitals and GPs, as well as organizing local networks for dementia with GPs and care staff. In short, the regional centers take responsibility for ordinary clinical work for dementia. To construct a more extensive network, each regional center must hold regional case conferences and lectures on dementia for care staff and GPs, sharing the knowledge and skills acquired from the case conferences held by the core center.

  20. New PBO GPS Station Construction: Eastern Region Network Enhancements and Multiple-Monument Stability Comparisons

    NASA Astrophysics Data System (ADS)

    Dittmann, S. T.; Austin, K. E.; Berglund, H. T.; Blume, F.; Feaux, K.; Mann, D.; Mattioli, G. S.; Walls, C. P.

    2013-12-01

    The Plate Boundary Observatory (PBO) network consists of 1100 continuously operating, permanent GPS stations throughout the United States. The majority of this network was constructed using NSF-MREFC funding as part of the EarthScope Project during FY2003-FY2008. Since FY2009, UNAVCO has operated and maintained PBO through a Cooperative Agreement (CA) with NSF. Construction of new, permanent GPS monuments in the PBO network was the result of two change orders to the original PBO O&M CA. Change Order 33 (CO33) allocated funds to construct additional GPS stations at six locations in the Eastern Region of PBO. Three of these locations were designed to replace poorly performing existing GPS monuments in Georgia, Texas and New York. The remaining three new locations were selected to fill in gaps in network coverage in Pennsylvania, Wisconsin and North Dakota. Construction of all six new sites was completed in September 2013. Important scientific goals for CO33 include improvement of the stable North American reference frame, measurement of the vertical signal associated with the Glacial Isostatic Adjustment, and improved constraints on surface deformation and possible earthquakes, which occur in the low-strain tectonic setting of the eastern North American Plate. Change Order 35 (CO35) allocated funds to construct two additional geodetic monuments at five existing PBO stations in order to test and compare the long-term stability of various monument designs under near-identical geologic conditions. Sites were chosen to yield a variety of geographic, hydrologic and geologic conditions, including both fine-grained alluvium and crystalline bedrock. At each location, three different monuments (deep drill braced, short drill braced/driven-braced, mast/pillar) were built with 10 meter spacing, with shared power systems and data telemetry infrastructure. Construction of these multi-monument test locations began in October 2012 and finished in September 2013. 
See G010- Berglund, H., Blume, F., et al... 'PBO Monument Stability Experiment Analysis' for the initial results of the data quality comparison from these locations.

  1. Self-calibrating pseudolite arrays: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Lemaster, Edward Alan

    Tasks envisioned for future-generation Mars rovers---sample collection, area survey, resource mining, habitat construction, etc.---will require greatly enhanced navigational capabilities over those possessed by the 1997 Mars Sojourner rover. Many of these tasks will involve cooperative efforts by multiple rovers and other agents, necessitating both high accuracy and the ability to share navigation information among different users. On Earth, satellite-based carrier-phase differential GPS provides a means of delivering centimeter-level, drift-free positioning to multiple users in contact with a reference base station. It would be highly desirable to have a similar navigational capability for use in Mars exploration. This research has originated a new local-area navigation system---a Self-Calibrating Pseudolite Array (SCPA)---that can provide centimeter-level localization to multiple rovers by utilizing GPS-based pseudolite transceivers deployed in a ground-based array. Such a system of localized beacons can replace or augment a system based on orbiting satellite transmitters. Previous pseudolite arrays have relied upon a priori information to survey the locations of the pseudolites, which must be accurately known to enable navigation within the array. In contrast, an SCPA does not rely upon other measurement sources to determine these pseudolite locations. This independence is a key requirement for autonomous deployment on Mars, and is accomplished through the use of GPS transceivers containing both transmit and receive components and through algorithms that utilize limited motion of a transceiver-bearing rover to determine the locations of the stationary transceivers. This dissertation describes the theory and operation of GPS transceivers, and how they can be used for navigation within a Self-Calibrating Pseudolite Array. 
It presents new algorithms that can be used to self-survey such arrays robustly using no a priori information, even under adverse conditions such as high-multipath environments. It then describes the experimental SCPA prototype developed at Stanford University and used in conjunction with the K9 Mars rover operated by NASA Ames Research Center. Using this experimental system, it provides experimental validation of both successful positioning using GPS transceivers and full calibration of an SCPA following deployment in an unknown configuration.
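The positioning problem inside a pseudolite array can be illustrated with a toy range-based fix: recover a receiver position from ranges to beacons at known locations via Gauss-Newton least squares. This is only a sketch of the geometry; the actual SCPA additionally self-surveys the beacon positions and works with carrier-phase measurements, and all coordinates below are invented.

```python
# Toy 2D positioning from ranges to surveyed beacons (Gauss-Newton).
# The SCPA's self-survey and carrier-phase processing are NOT shown;
# this assumes known beacon positions and unambiguous ranges.

import math

def locate(beacons, ranges, guess=(0.0, 0.0), iters=10):
    x, y = guess
    for _ in range(iters):
        # Accumulate normal equations (J^T J) d = J^T r for range residuals.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)
            jx, jy = (x - bx) / d, (y - by) / d   # unit line-of-sight
            res = r - d                            # measured minus predicted
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * res; b2 += jy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.hypot(truth[0] - bx, truth[1] - by) for bx, by in beacons]
x, y = locate(beacons, ranges, guess=(1.0, 1.0))
print(round(x, 3), round(y, 3))
```

The self-calibration problem the dissertation solves is the harder inverse of this: estimating the beacon coordinates themselves from receiver motion, with no a priori survey.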

  2. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOEpatents

    Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM

    2006-01-31

A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval, so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
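The strobe-interval exchange can be sketched in miniature: messages generated during a time interval stay in per-processor buffers, and at the strobe a global exchange tells every processor how many incoming messages to expect next interval. Processor count, message contents, and the function name are all invented for illustration.

```python
# Illustrative sketch of buffered coscheduling's strobe exchange:
# outgoing_buffers[i] holds the destinations of messages processor i
# queued during the interval; the strobe computes each processor's
# expected incoming count for the next interval.

def strobe_exchange(outgoing_buffers, n_procs):
    """Global exchange of control info: per-destination message counts."""
    incoming = [0] * n_procs
    for src, msgs in enumerate(outgoing_buffers):
        for dest in msgs:
            incoming[dest] += 1
    return incoming

# During one interval, P0 queued messages to P1 and P2; P1 queued one to P2.
buffers = [[1, 2], [2], []]
print(strobe_exchange(buffers, 3))  # incoming counts for P0, P1, P2
```

Knowing the incoming counts in advance is what lets each processor pre-schedule its receives for the next interval, and the periodic global exchange also gives a natural checkpoint for fault tolerance.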

  3. Flight design system level C requirements. Solid rocket booster and external tank impact prediction processors. [space transportation system

    NASA Technical Reports Server (NTRS)

    Seale, R. H.

    1979-01-01

The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor, which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much input required by these processors will be obtained from the FDS master data base.

  4. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

  5. Parallel approach to incorporating face image information into dialogue processing

    NASA Astrophysics Data System (ADS)

    Ren, Fuji

    2000-10-01

There are many kinds of so-called irregular expressions in natural dialogues. Even if the content of a conversation is the same in words, different meanings can be conveyed by a person's feelings or facial expression. For a good understanding of dialogues, a flexible dialogue processing system must infer the speaker's view properly. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach for dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we employ some of the speaker's facial information to estimate the speaker's view and resolve ambiguities in dialogues. The approach presented in this paper works efficiently because the independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing facial information in dialogue machine translation are discussed, and some results are included in this paper.
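The run-in-parallel, write-to-shared-memory scheme can be sketched loosely as follows. The subtasks, the mock "face" analysis, and the combination rule are all invented stand-ins for the paper's actual processors, kept only to show the parallel structure.

```python
# Loose sketch of the parallel scheme: independent workers each handle
# one simple subtask of an utterance, write partial results to a shared
# store, and a final step merges them with (mock) facial-expression
# information to choose an interpretation. Tasks and rules are invented.

from concurrent.futures import ThreadPoolExecutor

def analyze(task, utterance):
    if task == "literal":
        return "praise" if "great" in utterance else "neutral"
    if task == "face":                      # stand-in for image analysis
        return "frowning"
    return "unknown"

def interpret(utterance):
    shared = {}
    with ThreadPoolExecutor() as pool:      # independent parallel processors
        futures = {t: pool.submit(analyze, t, utterance)
                   for t in ("literal", "face")}
        for t, f in futures.items():
            shared[t] = f.result()          # partial results to shared memory
    # Combination step: a frown flips literal praise into sarcasm.
    if shared["literal"] == "praise" and shared["face"] == "frowning":
        return "sarcasm"
    return shared["literal"]

print(interpret("that was great"))
```

The point is the architecture, not the toy rules: each processor is independent, the shared memory collects partial results, and the combination step is where facial information disambiguates the words.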

  6. Advanced data management design for autonomous telerobotic systems in space using spaceborne symbolic processors

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    1987-01-01

The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. Three major issues that have an impact on the design of data management functions in a telerobot are covered, and a design concept that incorporates an intelligent systems manager (ISM) running on a spaceborne symbolic processor (SSP) is presented to address these issues. The first issue is the support of a system-wide control architecture or control philosophy; salient features of two candidates that impose constraints on data management design are presented. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reasoning. The third issue is hardware that supports symbolic processing in conjunction with standard data I/O and numeric processing. An SSP that is currently seen to be technologically feasible and is being developed is described and used as a baseline in the design concept.

  7. Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2

    PubMed Central

    Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.

    2014-01-01

    SUMMARY Image segmentation is a very important step in the computerized analysis of digital images. The maxflow/mincut approach has been used successfully to obtain minimum-energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB, and hardware synchronization for each 64-bit word. It is thus well suited to the parallelization of graph-theoretic algorithms, such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32000² pixels in size, well beyond the largest previously reported in the literature. PMID:25598745
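    The preflow-push scheme that the paper parallelizes can be illustrated serially. Below is a minimal, generic Python sketch of the Goldberg-Tarjan algorithm (function and variable names are ours, not from the paper's XMT-2 code); a production segmentation solver would add FIFO or highest-label vertex ordering and gap/global-relabel heuristics.

```python
from collections import defaultdict

def max_flow(cap, source, sink, n):
    """Serial Goldberg-Tarjan preflow-push on a graph with n nodes.

    cap: dict mapping directed edge (u, v) -> capacity.
    """
    flow = defaultdict(int)            # antisymmetric: flow[(v, u)] == -flow[(u, v)]
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v)
        adj[v].add(u)                  # reverse edges carry residual capacity
    height = [0] * n
    excess = [0] * n
    height[source] = n                 # source starts at height n
    for v in adj[source]:              # saturate every edge out of the source
        c = cap.get((source, v), 0)
        if c > 0:
            flow[(source, v)] = c
            flow[(v, source)] = -c
            excess[v] += c
    active = [u for u in range(n) if u not in (source, sink) and excess[u] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:           # discharge u completely
            pushed = False
            for v in adj[u]:
                r = cap.get((u, v), 0) - flow[(u, v)]   # residual capacity
                if r > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], r)               # push d units downhill
                    flow[(u, v)] += d
                    flow[(v, u)] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (source, sink) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if excess[u] == 0:
                break
            if not pushed:             # relabel: lift u above its lowest residual neighbor
                height[u] = 1 + min(height[v] for v in adj[u]
                                    if cap.get((u, v), 0) - flow[(u, v)] > 0)
    return sum(flow[(source, v)] for v in adj[source])
```

    On the XMT-2, the per-vertex push and relabel operations run in hardware threads, with the machine's word-level synchronization guarding the shared excess and height arrays.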

  8. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

    A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to symmetric multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads (POSIX threads, where "POSIX" signifies a portable operating system interface for UNIX). In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, i.e., proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.

  9. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of differing architecture, operating system, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load-balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computing environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA code, ADPAC, to demonstrate the developed tools for dynamic load balancing.

  10. Adaptive and mobile ground sensor array.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  11. Coding, testing and documentation of processors for the flight design system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The general functional design and implementation of processors for a space flight design system are briefly described. Discussions of a basetime initialization processor; conic, analytical, and precision coasting flight processors; and an orbit lifetime processor are included. The functions of several utility routines are also discussed.

  12. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).

  13. Highly parallel reconfigurable computer architecture for robotic computation having plural processor cells each having right and left ensembles of plural processors

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1994-01-01

    In a computer having a large number of single-instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.

  14. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOEpatents

    Karasick, Michael S.; Strip, David R.

    1996-01-01

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.
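    The directed-edge (d-edge) structure described above, which ties each edge to exactly one face so that every processor can work without inter-processor communication, can be sketched in miniature. This is an illustrative reconstruction with hypothetical names, not the patent's actual representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DEdge:
    """One undirected edge of the model, as seen from one incident face."""
    tail: int   # vertex id at the edge's origin
    head: int   # vertex id at the edge's destination
    face: int   # the single face this d-edge borders

def d_edges_of_triangle(face_id, v0, v1, v2):
    # Each face contributes one d-edge per boundary edge, oriented
    # counter-clockwise around the face; the twin d-edge (head->tail)
    # belongs to the neighboring face and may live on another processor.
    return [DEdge(v0, v1, face_id),
            DEdge(v1, v2, face_id),
            DEdge(v2, v0, face_id)]
```

    Because each d-edge references only local vertex and face data, a modelling command can be applied to every d-edge independently and in parallel.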

  15. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, D.B.

    1994-07-19

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.

  16. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, Dario B.

    1994-01-01

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.

  17. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    NASA Astrophysics Data System (ADS)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
    Program summary
    Program title: SWsolver
    Catalogue identifier: AEGY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL v3
    No. of lines in distributed program, including test data, etc.: 59 168
    No. of bytes in distributed program, including test data, etc.: 453 409
    Distribution format: tar.gz
    Programming language: C, CUDA
    Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
    Operating system: Linux
    Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
    RAM: Tested on problems requiring up to 4 GB per compute node.
    Classification: 12
    External routines: MPI, CUDA, IBM Cell SDK
    Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
    Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
    Additional comments: Sub-program numdiff is used for the test run.

  18. A qualitative exploration of opinions on the community pharmacists' role amongst the general public in Scotland.

    PubMed

    Gidman, Wendy; Cowley, Joseph

    2013-10-01

    To understand the opinions and experiences of members of the public regarding pharmacy services. This exploratory study employed qualitative methods. Five focus groups were conducted with 26 members of the public resident in Scotland in March 2010. The groups comprised those perceived to be users and non-users of community pharmacy. A topic guide was developed to prompt discussion. Each focus group was recorded, transcribed, anonymised and analysed using thematic analysis. Participants made positive comments about pharmacy services, although many preferred to see a general practitioner (GP). Participants discussed using pharmacies for convenience, often because they were unable to access GPs. Pharmacists were perceived principally to be suppliers of medicines, although there was some recognition of their roles in dealing with minor ailments and providing advice. For those with serious and long-standing health matters, GPs were usually the professional of choice for most health needs. Community pharmacy was seen to offer incomplete services which did not co-ordinate well with other primary-care services. The pharmacy environment and retail setting were not considered ideal for private healthcare consultations. This study suggests that, despite recent initiatives to extend the role of community pharmacists, many members of the general public continue to prefer a GP-led service. Importantly, GPs inspire public confidence as well as offering comprehensive services and private consultation facilities. Improved communication and information sharing between community pharmacists and general practice could support community pharmacist role expansion. © 2012 The Authors. IJPP © 2012 Royal Pharmaceutical Society.

  19. Future practice of graduates of the New Zealand Diploma of Obstetrics and Gynaecology or Certificate in Women's Health.

    PubMed

    Miller, Dawn; Roberts, Helen; Wilson, Don

    2008-09-22

    To determine why Diploma of Obstetrics (DipObs), Diploma of Obstetrics and Medical Gynaecology (DipOMG), or Certificate in Women's Health graduates enrolled; how useful the course was; and their subsequent practice. 588 University of Otago DipObs, DipOMG, and Certificate in Women's Health graduates (1992-2006), plus Auckland University graduates (1996-2006), were identified; all were doctors. Questionnaires were sent to the 477 with New Zealand medical registration and responses analysed. 334 of the 477 graduates returned completed questionnaires (a 70% response rate). 73% had worked as GPs, 10% at family planning clinics, and 6% at sexual health clinics; 13% specialised in obstetrics and gynaecology (O&G). 80% enrolled to further their knowledge in women's health, 20% in children's health, and 43% to practise GP obstetrics. Most respondents who enrolled in the 1990s intended to practise GP obstetrics, but by 2000 most did not. Of 137 New Zealand-based GP respondents who enrolled to practise GP obstetrics, only 5 (3.6%) currently practise intrapartum obstetric care. Twenty-three GPs still practise shared maternity care. Of 220 primary care practitioners, 90% provide early antenatal care. 93% described the course as useful to extremely useful. The DipObs, DipOMG and Certificate in Women's Health have continued to provide useful postgraduate training in women's health during a changing time in New Zealand pregnancy care. While many graduates of the 1990s enrolled to practise GP obstetrics, most recent graduates did not, and few GPs still practise intrapartum obstetrics.

  20. Key Technologies of Phone Storage Forensics Based on ARM Architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Jianghan; Che, Shengbing

    2018-03-01

    Smartphones mainly run three mobile operating systems: Android, iOS and Windows Phone. Android smartphones have the largest market share, and their processor chips almost all use the ARM architecture. The memory address mapping mechanism of the ARM architecture differs from that of the x86 architecture. To perform forensics on an Android smartphone, we need to understand three key technologies: memory data acquisition, the mechanism for converting virtual addresses to physical addresses, and locating the system's key data. This article presents a viable solution that does not rely on the operating system API and provides a complete answer to these three issues.

  1. Multiprogramming performance degradation - Case study on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Dimpsey, R. T.; Iyer, R. K.

    1989-01-01

    The performance degradation due to multiprogramming overhead is quantified for a parallel-processing machine. Measurements of real workloads were taken, and it was found that there is a moderate correlation between the completion time of a program and the amount of system overhead measured during program execution. Experiments in controlled environments were then conducted to calculate a lower bound on the performance degradation of parallel jobs caused by multiprogramming overhead. The results show that the multiprogramming overhead of parallel jobs consumes at least 4 percent of the processor time. When two or more serial jobs are introduced into the system, this amount increases to 5.3 percent.

  2. A mechanism for efficient debugging of parallel programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, B.P.; Choi, J.D.

    1988-01-01

    This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared-memory multiprocessors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes the flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the cooperating processes.

  3. Kalman filter tracking on parallel architectures

    NASA Astrophysics Data System (ADS)

    Cerati, G.; Elmer, P.; Krutelyov, S.; Lantz, S.; Lefebvre, M.; McDermott, K.; Riley, D.; Tadel, M.; Wittich, P.; Wurthwein, F.; Yagil, A.

    2017-10-01

    We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
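    The Kalman filter recursion at the core of such track fitting is compact, which is what makes it a good target for vectorization across many track candidates. As a minimal, hypothetical illustration (not the experiment's track-fitting code), here is one scalar predict/update step:

```python
def kf_step(x, p, z, q, r, f=1.0, h=1.0):
    """One scalar Kalman filter step.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances
    f, h : state-transition and measurement coefficients
    """
    # Predict: propagate the state and inflate the variance by process noise.
    x_pred = f * x
    p_pred = f * p * f + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new
```

    In the track-reconstruction setting, x and p become a state vector and covariance matrix, and the SIMD/SIMT challenge the paper describes is running many such independent updates in lockstep.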

  4. The Spectral Element Method for Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Taylor, Mark

    1998-11-01

    We will describe SEAM, a Spectral Element Atmospheric Model. SEAM solves the 3D primitive equations used in climate modeling and medium range forecasting. SEAM uses a spectral element discretization for the surface of the globe and finite differences in the vertical direction. The model is spectrally accurate, as demonstrated by a variety of test cases. It is well suited for modern distributed-shared memory computers, sustaining over 24 GFLOPS on a 240 processor HP Exemplar. This performance has allowed us to run several interesting simulations in full spherical geometry at high resolution (over 22 million grid points).

  5. A general model for memory interference in a multiprocessor system with memory hierarchy

    NASA Technical Reports Server (NTRS)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.

  6. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful, high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  7. Conditions for space invariance in optical data processors used with coherent or noncoherent light.

    PubMed

    Arsenault, H R

    1972-10-01

    The conditions for space invariance in coherent and noncoherent optical processors are considered. All linear optical processors are shown to belong to one of two types. The conditions for space invariance are more stringent for noncoherent processors than for coherent processors, so that a system that is linear in coherent light may be nonlinear in noncoherent light. However, any processor that is linear in noncoherent light is also linear in the coherent limit.

  8. Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots

    PubMed Central

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-01-01

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians in its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN) data association. The tracking data is broadcast to the other robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, an individual robot can always recognize pedestrians that are invisible to itself but visible to another robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
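    Covariance intersection fuses estimates from different robots without knowing their cross-correlation by taking a convex combination of their information forms. The scalar sketch below (our own illustration, with a caller-supplied weight ω rather than the paper's optimization) shows the rule; the multi-robot system applies the matrix version to full track states.

```python
def covariance_intersect(x_a, p_a, x_b, p_b, omega):
    """Fuse two scalar estimates (mean, variance) with unknown correlation.

    omega in [0, 1] weights estimate A; CI implementations typically
    choose omega to minimize the fused variance (or trace, in the
    matrix case). Here it is simply passed in.
    """
    info = omega / p_a + (1.0 - omega) / p_b        # fused information (1/variance)
    p = 1.0 / info
    x = p * (omega * x_a / p_a + (1.0 - omega) * x_b / p_b)
    return x, p
```

    Unlike a naive Kalman-style fusion, the CI result is guaranteed consistent (never overconfident) for any correlation between the two inputs, which is why it suits decentralized tracking where robots may indirectly share the same observations.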

  9. Decomposing Oncogenic Transcriptional Signatures to Generate Maps of Divergent Cellular States.

    PubMed

    Kim, Jong Wook; Abudayyeh, Omar O; Yeerna, Huwate; Yeang, Chen-Hsiang; Stewart, Michelle; Jenkins, Russell W; Kitajima, Shunsuke; Konieczkowski, David J; Medetgul-Ernar, Kate; Cavazos, Taylor; Mah, Clarence; Ting, Stephanie; Van Allen, Eliezer M; Cohen, Ofir; Mcdermott, John; Damato, Emily; Aguirre, Andrew J; Liang, Jonathan; Liberzon, Arthur; Alexe, Gabriella; Doench, John; Ghandi, Mahmoud; Vazquez, Francisca; Weir, Barbara A; Tsherniak, Aviad; Subramanian, Aravind; Meneses-Cime, Karina; Park, Jason; Clemons, Paul; Garraway, Levi A; Thomas, David; Boehm, Jesse S; Barbie, David A; Hahn, William C; Mesirov, Jill P; Tamayo, Pablo

    2017-08-23

    The systematic sequencing of the cancer genome has led to the identification of numerous genetic alterations in cancer. However, a deeper understanding of the functional consequences of these alterations is necessary to guide appropriate therapeutic strategies. Here, we describe Onco-GPS (OncoGenic Positioning System), a data-driven analysis framework to organize individual tumor samples with shared oncogenic alterations onto a reference map defined by their underlying cellular states. We applied the methodology to the RAS pathway and identified nine distinct components that reflect transcriptional activities downstream of RAS, and defined several functional states associated with patterns of transcriptional component activation that associate with genomic hallmarks and response to genetic and pharmacological perturbations. These results show that Onco-GPS is an effective approach to explore the complex landscape of oncogenic cellular states across cancers, and an analytic framework to summarize knowledge, establish relationships, and generate more effective disease models for research or as part of individualized precision medicine paradigms. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Pneumocafé project: an inquiry on current COPD diagnosis and management among General Practitioners in Italy through a novel tool for professional education

    PubMed Central

    2014-01-01

    Background Symptoms of COPD are frequently disregarded by patients and also by general practitioners (GPs) in early stages of the disease, which consequently is diagnosed when already at an advanced grade of severity. Underdiagnosis and undertreatment of COPD and scarce use of spirometry are widely recurrent, while a better knowledge of the disease and a wider use of spirometry would be critical to diagnose more patients still neglected, do it at an earlier stage and properly treat established COPD. The aim of the Pneumocafé project is to improve, through an innovative approach, the diagnosis and management of COPD at the primary care level, increasing awareness of issues pertaining to early diagnosis, adequate prevention and correct treatment of the disease. Methods Pneumocafé is based on informal meetings between GPs of various geographical zones of Italy and their reference respiratory specialist (RS), aimed at discussing current practice in comparison to the suggestions of official guidelines, analyzing the actual problems in diagnosing and managing COPD patients and sharing possible solutions at the community level. In these meetings RSs faced many issues including patho-physiological mechanisms of bronchial obstruction, significance of clinical symptoms, patients’ phenotyping, and the clinical approach to diagnosis and long-term treatment, also reinforcing the importance of a timely diagnosis, proper long-term treatment and compliance to treatment. At the end of each meeting GPs had to fill in a questionnaire arranged by the scientific board of the project that included 18 multiple-choice questions concerning their approach to COPD management. The results of the analysis of these questionnaires are here presented. Results 1,964 questionnaires were returned from 49 RSs. 1,864 of the questionnaires received (94.91% of the total) were properly compiled and form the object of the present analysis. 
The 49 RSs, 37 males and 12 females, were distributed across Italy and practiced in both public and private hospitals and in territorial sanitary facilities. GPs were 1,330 males (71.35%) and 534 females (28.64%), mean age 56.29 years (range 27-70 yrs). Mean duration of general practice was 25.56 years (range: 0.5-40 yrs), with a mean of 1,302.43 patients assisted by each GP and 2,427,741 patients assisted in all. The majority of GPs affirmed that in their patients COPD has a mean-to-great prevalence and a mean/high impact on their practice, preceded only by diabetes and heart failure. Three-quarters of GPs refer to COPD guidelines, and most of them believe that screening their assisted patients at risk would enhance early diagnosis of COPD. Tobacco smoking is the main recognized cause of COPD, but the actions carried out by GPs to help patients give up smoking remain insufficient. The majority of GPs recognize spirometry as necessary for early COPD diagnosis, but the main obstacle to its wider use was the time required for spirometry to be performed. GPs' main reason for prescribing a bronchodilator is dyspnea, and the bronchodilators preferably prescribed are LABA and LAMA. Control of patients' adherence to therapy is mainly carried out by GPs checking the number of drugs prescribed annually or asking the patient during a control visit. Finally, regarding how many COPD patients GPs believe are in their group of assisted patients, a mean range of 25-40 patients was reported, which is consistently below the forecast based on epidemiological data and the number of patients assisted by each GP. Conclusions The results obtained with this project confirm the validity of this informal approach to professional education. Furthermore, this inquiry provided important insights into the general management of COPD and the process of integration between RS and GP activities on this disease condition in the long run. PMID:24944787

  11. The Jet Propulsion Laboratory shared control architecture and implementation

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad

    1990-01-01

    A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force-reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface for designing a task and assigning the levels of teleoperated and autonomous control. The operator also sets up the system monitor, which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperation and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. At present the shared control environment supports single-arm task execution; work is underway to extend the environment to dual-arm control. 
Teleoperation during shared control is limited to Cartesian-space control, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.

  12. Broadcasting collective operation contributions throughout a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN]

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
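    The two-phase scheme this record describes (an intra-node exchange followed by a serial inter-node transmission sequence) can be sketched as a small plain-Python simulation. The function name and data layout below are illustrative only, not taken from the patent.

```python
# Hypothetical simulation of the two-phase broadcast described above.
# nodes[i][p] is processor p's collective-operation contribution on node i.

def broadcast_contributions(nodes):
    """Return, per processor, the set of contributions it holds at the end."""
    num_nodes = len(nodes)
    procs_per_node = len(nodes[0])

    # Phase 1: intra-node communications -- every processor shares its
    # contribution with the other processors on the same compute node.
    held = [[set(node) for _ in range(procs_per_node)] for node in nodes]

    # Phase 2: inter-node communications -- processors take turns according
    # to a serial processor transmission sequence, each sending its own
    # contribution on a designated link to the other compute nodes.
    for p in range(procs_per_node):          # serial transmission sequence
        for src in range(num_nodes):
            contribution = nodes[src][p]
            for dst in range(num_nodes):
                if dst != src:
                    for q in range(procs_per_node):
                        held[dst][q].add(contribution)
    return held

held = broadcast_contributions([["a0", "a1"], ["b0", "b1"]])
# After both phases, every processor holds all four contributions.
assert all(h == {"a0", "a1", "b0", "b1"} for node in held for h in node)
```

The simulation only tracks which contributions end up where; the patent's point is that the intra-node phase uses fast local communication while the serial sequence avoids contention on each node's network links.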

  13. LANDSAT-D flight segment operations manual. Appendix B: OBC software operations

    NASA Technical Reports Server (NTRS)

    Talipsky, R.

    1981-01-01

    The LANDSAT 4 satellite contains two NASA standard spacecraft computers and 65,536 words of memory. Onboard computer software is divided into flight executive and applications processors. Both applications processors and the flight executive use one or more of 67 system tables to obtain variables, constants, and software flags. Output from the software for monitoring operation is via 49 OBC telemetry reports subcommutated in the spacecraft telemetry. Information is provided about the flight software as it is used to control the various spacecraft operations and interpret operational OBC telemetry. Processor function descriptions, processor operation, software constraints, processor system tables, processor telemetry, and processor flow charts are presented.

  14. Managing Power Heterogeneity

    NASA Astrophysics Data System (ADS)

    Pruhs, Kirk

    A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run-time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core were required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.

  15. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies.

    Program summary. Program title: PNB.f90. Catalogue identifier: AEIK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3052. No. of bytes in distributed program, including test data, etc.: 68 600. Distribution format: tar.gz. Programming language: Fortran 90 and OpenMPI. Computer: all shared- or distributed-memory parallel processors. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: the code has been parallelized but has not been explicitly vectorized. RAM: dependent upon N. Classification: 4.3, 4.12, 6.5. Nature of problem: high-accuracy numerical evaluation of trajectories of N point masses, each subject to Newtonian gravitation. Solution method: parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.
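    The pairwise force evaluation at the core of any such N-body integration (not the adaptive power-series extrapolation of PNB.f90 itself, which is far more elaborate) can be sketched in a few lines; the function below is purely illustrative, using G = 1 units.

```python
# Minimal sketch of the O(N^2) Newtonian acceleration evaluation that an
# N-body integrator performs at each step (illustrative, not from PNB.f90).

def accelerations(masses, positions, G=1.0):
    """Gravitational acceleration on each point mass.
    masses: list of floats; positions: list of (x, y, z) tuples."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            inv_r3 = r2 ** -1.5
            for k in range(3):
                # a_i += G * m_j * (r_j - r_i) / |r_j - r_i|^3
                acc[i][k] += G * masses[j] * dx[k] * inv_r3
    return acc

# Two equal unit masses one unit apart: |a| = G*m/r^2 = 1, directed inward.
acc = accelerations([1.0, 1.0], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
assert abs(acc[0][0] - 1.0) < 1e-12 and abs(acc[1][0] + 1.0) < 1e-12
```

Because each body's acceleration depends on all others but is computed independently, the outer loop over i parallelizes naturally, which is the data parallelism the abstract refers to.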

  16. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  17. Simulink/PARS Integration Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vacaliuc, B.; Nakhaee, N.

    2013-12-18

    The state of the art for signal processor hardware has far outpaced the development tools for placing applications on that hardware. In addition, signal processors are available in a variety of architectures, each uniquely capable of handling specific types of signal processing efficiently. With these processors becoming smaller and demanding less power, it has become possible to group multiple processors, a heterogeneous set of processors, into single systems. Different portions of the desired problem set can be assigned to different processor types as appropriate. As software development tools do not keep pace with these processors, especially when multiple processors of different types are used, a method is needed to enable software code portability among multiple processors and multiple types of processors along with their respective software environments. Sundance DSP, Inc. has developed a software toolkit called “PARS”, whose objective is to provide a framework that uses suites of tools provided by different vendors, along with modeling tools and a real-time operating system, to build an application that spans different processor types. The software language used to express the behavior of the system is a very high level modeling language, “Simulink”, a MathWorks product. ORNL has used this toolkit to effectively implement several deliverables. This CRADA describes this collaboration between ORNL and Sundance DSP, Inc.

  18. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Optoelectronic processors with scanning CCD photodetectors

    NASA Astrophysics Data System (ADS)

    Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.

    1995-10-01

    Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e. in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency and the other is an adaptive acousto-optical signal processor with linear frequency modulation. The processor with the transparency performs multichannel discrete-analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If a light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A report is also given of a study of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth.

  19. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOEpatents

    Karasick, M.S.; Strip, D.R.

    1996-01-30

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
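    The directed-edge record described in the patent can be sketched as a small data structure; the class and field names below are hypothetical, chosen only to illustrate how each d-edge ties one edge of the solid to exactly one face, so a processor can operate on its own d-edges without inter-processor communication.

```python
from dataclasses import dataclass

# Illustrative sketch of a directed-edge (d-edge) record: vertex
# descriptions of the edge plus a description of the single face it bounds.
# Names (DEdge, tail, head, face_id) are assumptions, not from the patent.

@dataclass(frozen=True)
class DEdge:
    tail: tuple      # (x, y, z) of the edge's start vertex
    head: tuple      # (x, y, z) of the edge's end vertex
    face_id: int     # the one face this directed edge is related to

# One undirected edge shared by two faces yields two d-edges with
# opposite orientations, each owned by a different face (and possibly
# held by a different parallel processor).
e1 = DEdge((0, 0, 0), (1, 0, 0), face_id=0)
e2 = DEdge((1, 0, 0), (0, 0, 0), face_id=1)
assert {e1.tail, e1.head} == {e2.tail, e2.head} and e1.face_id != e2.face_id
```

Splitting each shared edge into two independently owned records is what lets every processor model its small component of the object in parallel, as the abstract states.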

  20. Exploring Patients' Views Toward Giving Web-Based Feedback and Ratings to General Practitioners in England: A Qualitative Descriptive Study.

    PubMed

    Patel, Salma; Cain, Rebecca; Neailey, Kevin; Hooberman, Lucy

    2016-08-05

    Patient feedback websites or doctor rating websites are increasingly being used by patients to give feedback about their health care experiences. Little is known about why patients in England may give Web-based feedback and what may motivate or dissuade them from giving it. The aim of this study was to explore patients' views toward giving Web-based feedback and ratings to general practitioners (GPs), within the context of other feedback methods available in primary care in England, and in particular, paper-based feedback cards. A descriptive exploratory qualitative approach using face-to-face semistructured interviews was used in this study. Purposive sampling was used to recruit 18 participants from different age groups in London and Coventry. Interviews were transcribed verbatim and analyzed using applied thematic analysis. Half of the participants in this study were not aware of the opportunity to leave feedback for GPs, and there was limited awareness about the methods available to leave feedback for a GP. The majority of participants were not convinced that formal patient feedback was needed by GPs or would be used by GPs for improvement, regardless of whether they gave it via a website or on paper. Some participants said or suggested that they may leave feedback on a website rather than on a paper-based feedback card for several reasons: because of the ability and ease of giving it remotely; because it would be shared with the public; and because it would be taken more seriously by GPs. Others, however, suggested that they would not use a website to leave feedback for the opposite reasons: because of accessibility issues; privacy and security concerns; and because they felt feedback left on a website may be ignored. Patient feedback and rating websites as they currently exist will not replace other mechanisms for patients in England to leave feedback for a GP. 
Rather, they may motivate a small number of patients who have more altruistic motives or wish to place collective pressure on a GP to give Web-based feedback. If the National Health Service or GP practices want more patients to leave Web-based feedback, we suggest they first make patients aware that they can leave anonymous feedback securely on a website for a GP. They can then convince them that their feedback is needed and wanted by GPs for improvement, and that the reviews they leave on the website will be of benefit to other patients to decide which GP to see or which GP practice to join.

  1. Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-Processor vs. Multi-Threaded Shared Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Marquez, Andres; Choudhury, Sutanay

    2012-09-01

    Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code’s data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
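    The brute-force enumeration that makes a naive triad census O(n^3), and that scalable algorithms must avoid, can be sketched as follows. Note this is a simplified, illustrative version: it groups each node triple only by how many directed edges it contains (0-6), whereas a full triad census distinguishes all 16 isomorphism classes of directed triads.

```python
from itertools import combinations

# Coarse, illustrative triad census: count node triples by number of
# directed edges present. A real census further classifies each triple
# into one of 16 directed-triad isomorphism classes.

def coarse_triad_census(nodes, edges):
    edges = set(edges)                      # directed (u, v) pairs
    census = {k: 0 for k in range(7)}
    for a, b, c in combinations(nodes, 3):
        count = sum((u, v) in edges
                    for u, v in [(a, b), (b, a), (a, c),
                                 (c, a), (b, c), (c, b)])
        census[count] += 1
    return census

# A directed 3-cycle among nodes 1-3, plus an extra node 4.
census = coarse_triad_census([1, 2, 3, 4], [(1, 2), (2, 3), (3, 1)])
assert census[3] == 1   # the triple (1, 2, 3) contains all three edges
assert census[1] == 3   # each triple containing node 4 has one edge
```

The enumeration over all C(n, 3) triples is what blows up at tens of millions of nodes; the paper's contribution is restructuring this computation for shared-memory parallelism.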

  2. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Song, Y T; Chao, Y

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.

  3. Optimizing CMS build infrastructure via Apache Mesos

    DOE PAGES

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...

    2015-12-23

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  4. Optimizing CMS build infrastructure via Apache Mesos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  5. A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction

    DOE PAGES

    Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...

    1995-01-01

    In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
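    The seven-multiplication recurrence that both the recursive and the tensor-product formulations share can be sketched as a plain recursive implementation for 2^n × 2^n matrices. This is only an illustration of the underlying algorithm; the article's contribution is a nonrecursive, vectorized program generated from tensor product formulas with reduced working storage.

```python
# Illustrative recursive Strassen multiplication for 2^n x 2^n matrices
# (plain Python lists); shows the seven-product recurrence only.

def add(A, B):  return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def sub(A, B):  return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # The seven block products (instead of the naive eight):
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])

assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

The naive recursion above allocates the seven M blocks at every level, which is exactly the O(7^n) working-storage behavior the article's modified formulation reduces to O(4^n).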

  6. Implementation of kernels on the Maestro processor

    NASA Astrophysics Data System (ADS)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

    Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 using 49 tiles.
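    One of the kernels named above, the FIR filter, is simple enough to sketch in a few lines of plain Python (direct-form convolution). This is only a reference definition of the kernel; the Maestro implementation distributes such loops across tiles.

```python
# Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k], with x treated as
# zero before its first sample (illustrative reference implementation).

def fir(x, h):
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# A 2-tap moving-sum filter over a short signal.
assert fir([1, 2, 3, 4], [1, 1]) == [1, 3, 5, 7]
```

Each output sample depends only on a sliding window of inputs, so disjoint output ranges can be computed on different tiles independently, which is why FIR filtering parallelizes well on a many-core chip.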

  7. Electrochemical sensing using voltage-current time differential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    2017-02-28

    A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.

  8. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  9. Developing resources to support the diagnosis and management of Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS/ME) in primary care: a qualitative study.

    PubMed

    Hannon, Kerin; Peters, Sarah; Fisher, Louise; Riste, Lisa; Wearden, Alison; Lovell, Karina; Turner, Pam; Leech, Yvonne; Chew-Graham, Carolyn

    2012-09-21

    NICE guidelines emphasise the need for a confident, early diagnosis of Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS/ME) in primary care with management tailored to the needs of the patient. Research suggests that GPs are reluctant to make the diagnosis and resources for management are currently inadequate. This study aimed to develop resources for practitioners and patients to support the diagnosis and management of CFS/ME in primary care. Semi-structured interviews were conducted with patients, carers, GPs, practice nurses and CFS/ME specialists in North West England. All interviews were audio-recorded, transcribed and analysed qualitatively using open explorative thematic coding. Two patient involvement groups were consulted at each stage of the development of the resources to ensure that the resources reflect everyday issues faced by people living with CFS/ME. Patients and carers stressed the importance of recognising CFS/ME as a legitimate condition, and the need to be believed by health care professionals. GPs and practice nurses stated that they do not always have the knowledge or skills to diagnose and manage the condition. They expressed a preference for an online training package. For patients, information on getting the most out of a consultation and the role of carers was thought to be important. Patients did not want to be overloaded with information at diagnosis, and suggested information should be given in steps. A DVD was suggested, to enable information sharing with carers and family, and also for those whose symptoms act as a barrier to reading. 
Rather than use a top-down approach to the development of training for health care practitioners and information for patients and carers, we have used data from key stakeholders to develop a patient DVD, patient leaflets to guide symptom management and a modular e-learning resource which should equip GPs to diagnose and manage CFS/ME effectively, meet NICE guidelines and give patients acceptable, evidence-based information.

  10. Presentation and outcome of clinical poor performance in one health district over a 5-year period: 2002-2007.

    PubMed

    Cox, Stephen J; Holden, John D

    2009-05-01

    The detection, assessment, and management of poor performance in primary care raise difficult issues for all those involved. Guidance has largely focused on managing the most serious cases, where patient safety is severely compromised. The management of primary care poor performance has become an increasingly important part of primary care trust (PCT) work, but its modes of presentation and prevalence are not well known. To report the prevalence, presentation modes, and management of primary care poor performance cases presenting to one PCT over a 5-year period. A retrospective review of primary care poor performance cases in one district. St Helens PCT administered 35 practices with 130 GPs on the performers list, caring for 190,110 patients in North West England, UK. Cases presenting during 2002-2007 were initially reviewed by the chair of the PCT clinical executive committee. Anonymised data were then jointly reviewed by the assessor and another experienced GP advisor. There were 102 individual presentations (20 per year, or one every 2-3 weeks) in which clinician performance raised significant cause for concern over the 5-year period. These concerns related to 37 individual clinicians, a range of 1-14 per clinician (mean 2.7). Whistleblowing by professional colleagues, on 43 occasions, was the most common form of presentation, of which 26 were from GPs about GPs. Patient complaints (18) were the second most common. Twenty-seven clinicians were GPs, of whom the General Medical Council (GMC) was notified or involved in 13 cases. Clinicians were supported locally; remediation was managed exclusively locally in 14 cases, and shared with an external organisation (such as the GMC or deanery) in another 12. Professional whistleblowing and patient complaints were the most common sources of presentation. Effective PCT teams are needed to manage clinicians whose performance gives cause for concern. 
Sufficient resources and both formal and informal ways of reporting concerns are essential.

  11. Facilitating professional liaison in collaborative care for depression in UK primary care; a qualitative study utilising normalisation process theory.

    PubMed

    Coupe, Nia; Anderson, Emma; Gask, Linda; Sykes, Paul; Richards, David A; Chew-Graham, Carolyn

    2014-05-01

    Collaborative care (CC) is an organisational framework which facilitates the delivery of a mental health intervention to patients by case managers in collaboration with more senior health professionals (supervisors and GPs), and is effective for the management of depression in primary care. However, there remains limited evidence on how to implement this collaborative approach successfully in UK primary care. This study aimed to explore to what extent CC impacts on professional working relationships, and whether CC for depression could be implemented as routine in the primary care setting. This qualitative study explored the perspectives of the 6 case managers (CMs), 5 supervisors (trial research team members) and 15 general practitioners (GPs) from practices participating in a randomised controlled trial of CC for depression. Interviews were transcribed verbatim and the data were analysed in two steps: an initial thematic analysis, followed by a secondary analysis using the Normalisation Process Theory concepts of coherence, cognitive participation, collective action and reflexive monitoring with respect to the implementation of CC in primary care. Supervisors and CMs demonstrated coherence in their understanding of CC, and consequently reported good levels of cognitive participation and collective action regarding delivering and supervising the intervention. The GPs interviewed showed limited understanding of the CC framework and reported limited collaboration with CMs; barriers to collaboration were identified. All participants identified the potential or experienced benefits of a collaborative approach to depression management and were able to discuss ways in which collaboration can be facilitated. Primary care professionals in this study valued the potential for collaboration, but GPs' understanding of CC and organisational barriers hindered opportunities for communication. Further work is needed to address these organisational barriers in order to facilitate collaboration around individual patients with depression, including shared IT systems, opportunities for informal discussion, and building formal collaboration into the CC framework. Trial registration: ISRCTN32829227 (30/9/2008).

  12. Comparative Analysis of Volcanic Inflation-Deflation Cycles

    NASA Astrophysics Data System (ADS)

    Walwer, D.; Ghil, M.; Calais, E.

    2016-12-01

    GPS geodetic data, together with InSAR images, are often used to formulate kinematic models of the sources of volcanic deformation. The increasing amount of data now available allows one to produce time series that are several years long and thus capture continuously the history of volcanic deformation, in particular its nonlinear behavior. This information is highly valuable in helping understand the dynamics of volcanic systems. Nonlinear deformation signals are, however, difficult to extract from the background noise inherent in GPS time series. It is also arduous to unravel the signal of interest from other nonlinear signals, such as the seasonal oscillations associated with mass variations in the atmosphere, the ocean, and hydrological reservoirs. Here we use Multichannel Singular Spectrum Analysis (M-SSA), an advanced, data-adaptive method for time series analysis that simultaneously exploits the temporal and spatial correlations of geophysical fields, to extract such deformation signals. We apply M-SSA to GPS data sets from four volcanoes: Akutan, Okmok, and Westdahl in Alaska, and Piton de la Fournaise on La Reunion. Our analyses show that all four volcanoes share similar features in their deformation history, suggesting similarities in the dynamics that generate the inflation-deflation cycles. In particular, all four volcanic systems exhibit sawtooth-shaped oscillations, with slow inflations followed by slower deflations, on time scales that vary from 6 months to 4 years. This dynamical similarity is further highlighted by reconstructing the phase portraits of the four systems in the plane of deformation vs. rate of deformation, as obtained from the deformation signals extracted from the GPS time series using M-SSA. The inflating phase of these oscillations is followed by eruptions at Okmok volcano and at Piton de la Fournaise. These results suggest that the inflation-deflation cycles are associated with the destabilization of a volcanic system and may lead to the identification of premonitory signals of an eruptive regime.
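    The abstract describes M-SSA as simultaneously exploiting the temporal and spatial correlations of multichannel time series. A minimal NumPy sketch of that idea (not the authors' code; the window length M, the block trajectory matrix, and the diagonal-averaging reconstruction are standard SSA/M-SSA choices assumed here for illustration):

    ```python
    import numpy as np

    def mssa_leading_modes(X, M, n_modes=2):
        """Minimal multichannel SSA. X has shape (N, L): N samples, L channels.
        Returns the reconstructed components, shape (n_modes, N, L)."""
        N, L = X.shape
        Np = N - M + 1
        # Block trajectory matrix: each channel contributes M lagged copies.
        A = np.zeros((Np, L * M))
        for ch in range(L):
            for m in range(M):
                A[:, ch * M + m] = X[m:m + Np, ch]
        # Eigendecomposition of the lag-covariance matrix, descending order.
        C = A.T @ A / Np
        w, E = np.linalg.eigh(C)
        E = E[:, np.argsort(w)[::-1]]
        PC = A @ E  # principal components
        R = np.zeros((n_modes, N, L))
        for k in range(n_modes):
            Y = np.outer(PC[:, k], E[:, k])  # rank-1 piece of A for mode k
            for ch in range(L):
                acc = np.zeros(N)
                cnt = np.zeros(N)
                for m in range(M):  # diagonal averaging back to a series
                    acc[m:m + Np] += Y[:, ch * M + m]
                    cnt[m:m + Np] += 1
                R[k, :, ch] = acc / cnt
        return R
    ```

    For a pure oscillation shared across channels, the leading pair of modes captures the signal; on noisy GPS series the same projection separates the slow deformation modes from background noise.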

  13. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs, which reflect the order in which processors execute graph nodes. The model also makes it possible to guarantee that hard real-time deadlines are met. When unfolded, the model statically identifies the processor schedule. The model is therefore useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
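    In a cyclo-static schedule, as described above, each node firing is assigned to a processor in a fixed pattern that repeats across graph iterations. An illustrative sketch of that assignment rule (hypothetical names; the RDFG formalism itself is richer than this simple round-robin pattern):

    ```python
    from itertools import cycle

    def cyclo_static_schedule(nodes, processors, iterations):
        """Assign dataflow-graph node firings to processors in a repeating
        (cyclo-static) pattern: firings are dealt to processors cyclically,
        so the processor-to-node assignment repeats periodically and can be
        determined statically, before run time."""
        proc = cycle(processors)
        schedule = []
        for it in range(iterations):
            for node in nodes:
                schedule.append((it, node, next(proc)))
        return schedule
    ```

    Because the pattern is fixed, throughput and latency follow from inspecting one period of the schedule rather than simulating the whole execution.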

  14. Parallel processor for real-time structural control

    NASA Astrophysics Data System (ADS)

    Tise, Bert L.

    1993-07-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, a 240 Mbyte/s synchronous backplane bus, a low-skew clock distribution circuit, a VME connection to the host computer, a parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
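    The per-sample work the abstract attributes to the floating-point modules is the multiply/accumulate evaluation of state-space equations. A generic sketch of one sample period, in the standard discrete-time state-space form (this is textbook notation, not code from the report):

    ```python
    import numpy as np

    def state_space_step(A, B, C, D, x, u):
        """One sample of a discrete-time state-space controller:
            x[k+1] = A x[k] + B u[k]
            y[k]   = C x[k] + D u[k]
        Each term is a matrix-vector multiply/accumulate, which is why such
        controllers map well onto parallel floating-point hardware."""
        y = C @ x + D @ u          # output sent to the D/A stage
        x_next = A @ x + B @ u     # state carried to the next sample
        return x_next, y
    ```

    At a 625 kHz sampling rate the whole update must finish within 1.6 microseconds, which motivates distributing the rows of these products across processors.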

  15. Testing and operating a multiprocessor chip with processor redundancy

    DOEpatents

    Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J

    2014-10-21

    A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
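    The override-bit selection between the two encoded mappings can be sketched in software terms as follows (the patent describes hardware multiplexer logic, not code; the list-based map layout and names here are hypothetical):

    ```python
    def select_mapping(onchip_map, external_map, override_bit):
        """Act as the multiplexer: choose the physical-to-logical processor-ID
        mapping from on-chip non-volatile memory (first test) unless the
        override bit is set, in which case use the mapping from the external
        storage device (second test)."""
        return external_map if override_bit else onchip_map

    def remap_cores(physical_cores, mapping):
        """Configure cores by logical ID; physical cores mapped to None are
        spares or cores disabled by a failed test."""
        return {mapping[p]: core for p, core in enumerate(physical_cores)
                if mapping[p] is not None}
    ```

    With one redundant core, a core that fails the later test is simply mapped out while the logical ID space stays contiguous, which is how the redundancy improves yield.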

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, D.A.; Grunwald, D.C.

    The spectrum of parallel processor designs can be divided into three sections according to the number and complexity of the processors. At one end there are simple, bit-serial processors. Any one of thee processors is of little value, but when it is coupled with many others, the aggregate computing power can be large. This approach to parallel processing can be likened to a colony of termites devouring a log. The most notable examples of this approach are the NASA/Goodyear Massively Parallel Processor, which has 16K one-bit processors, and the Thinking Machines Connection Machine, which has 64K one-bit processors. At themore » other end of the spectrum, a small number of processors, each built using the fastest available technology and the most sophisticated architecture, are combined. An example of this approach is the Cray X-MP. This type of parallel processing is akin to four woodmen attacking the log with chainsaws.« less

  17. Electrochemical sensing using comparison of voltage-current time differential values during waveform generation and detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    2018-01-02

    A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.
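    A software sketch of the compare-and-extract step the abstract describes: subtract the generated waveform from the detected one and report a scalar from a selected portion of the difference. The peak-absolute-difference metric and the index window are assumptions for illustration, not the patent's specific method:

    ```python
    import numpy as np

    def waveform_feature(original, affected, window):
        """Compare the detected (affected) waveform against the generated
        (original) one and return a value from a chosen 'unique' portion:
        here, the peak absolute difference inside a sample-index window."""
        diff = np.asarray(affected, float) - np.asarray(original, float)
        lo, hi = window
        return float(np.max(np.abs(diff[lo:hi])))
    ```

    In a sensing context the window would be placed over the part of the waveform known to respond to the species being measured, so the output value tracks that response alone.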

  18. Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications

    NASA Astrophysics Data System (ADS)

    Francés, J.; Otero, B.; Bleda, S.; Gallego, S.; Neipp, C.; Márquez, A.; Beléndez, A.

    2015-06-01

    The Finite-Difference Time-Domain (FDTD) method is applied to the analysis of vibroacoustic problems and to the study of the propagation of longitudinal and transversal waves in stratified media. The potential of the scheme and the relevance of each acceleration strategy for massive FDTD computations are demonstrated in this work. In this paper, we propose two new implementations of the two-dimensional FDTD scheme, one multi-CPU and one multi-GPU. In the first implementation, Open MPI has been used in order to fully exploit the resources of a biprocessor station with two Intel Xeon processors. Moreover, in the CPU code version, the Streaming SIMD Extensions (SSE) and the Advanced Vector Extensions (AVX) have been combined with shared-memory approaches that take advantage of multi-core platforms. The second implementation, the multi-GPU code version, is based on the peer-to-peer communications available in CUDA, running on two GPUs (NVIDIA GTX 670). The paper then presents an accurate analysis of the influence of the different code versions, including shared-memory approaches, vector instructions, and multiple processors (both CPU and GPU), and compares them in order to delimit the degree of improvement obtained with distributed solutions based on multi-CPU and multi-GPU. The performance of both approaches was analysed, and it is demonstrated that adding shared-memory schemes to CPU computing substantially improves the performance of vector instructions, enlarging the simulation sizes that use the cache memory of the CPUs efficiently. In this case GPU computing is roughly twice as fast as the fine-tuned CPU version for both one and two nodes. However, for massive computations explicit vector instructions are not worthwhile, since memory bandwidth is the limiting factor and performance tends to be the same as that of the sequential version with auto-vectorisation and a shared-memory approach. In this scenario GPU computing is the best option, since it provides homogeneous behaviour. More specifically, the speedup of GPU computing reaches an upper limit of 12 for both one and two GPUs, whereas performance reaches peak values of 80 GFlops and 146 GFlops for one GPU and two GPUs, respectively. Finally, the method is applied to an earth-crust profile in order to demonstrate the potential of our approach and the necessity of applying acceleration strategies in this type of application.
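    The 2-D FDTD scheme being accelerated can be illustrated with a minimal serial reference version: a standard staggered-grid acoustic (pressure-velocity) leapfrog update. This is not the authors' multi-GPU code; the grid size, impulsive source, rigid boundaries, and material constants are placeholders:

    ```python
    import numpy as np

    def fdtd2d_acoustic(steps, n=64, c=340.0, dx=1.0, src=(32, 32)):
        """Minimal 2-D acoustic FDTD: pressure p and particle velocities
        vx, vy on a staggered grid, leapfrog in time. dt is set just under
        the 2-D CFL stability limit dx / (c * sqrt(2))."""
        rho = 1.2                              # air density, kg/m^3
        dt = 0.99 * dx / (c * np.sqrt(2.0))
        p = np.zeros((n, n))
        vx = np.zeros((n, n - 1))              # staggered in x
        vy = np.zeros((n - 1, n))              # staggered in y
        kv = dt / (rho * dx)
        kp = rho * c * c * dt / dx
        for t in range(steps):
            if t == 0:
                p[src] += 1.0                  # impulsive point source
            # velocity update: dv/dt = -(1/rho) grad p
            vx -= kv * (p[:, 1:] - p[:, :-1])
            vy -= kv * (p[1:, :] - p[:-1, :])
            # pressure update: dp/dt = -rho c^2 div v (rigid outer walls)
            p[1:-1, 1:-1] -= kp * ((vx[1:-1, 1:] - vx[1:-1, :-1])
                                   + (vy[1:, 1:-1] - vy[:-1, 1:-1]))
        return p
    ```

    Every grid point is updated from its immediate neighbours each step, which is why the stencil vectorises with SSE/AVX, parallelises over shared-memory cores, and partitions naturally across GPUs with only halo exchanges between them.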

  19. Hybrid Electro-Optic Processor

    DTIC Science & Technology

    1991-07-01

    This report describes the design of a hybrid electro-optic processor to perform adaptive interference cancellation in radar systems. The processor is...modulator is reported. Included in this report is a discussion of the design, partial fabrication in the laboratory, and partial testing of the hybrid electro-optic processor. A follow-on effort is planned to complete the construction and testing of the processor. The work described in this report is the

  20. JPRS Report, Science & Technology, Europe.

    DTIC Science & Technology

    1991-04-30

    processor in collaboration with Intel. The processor, christened Touchstone, will be used as the core of a parallel computer with 2,000 processors. One of...ELECTRONIQUE HEBDO in French 24 Jan 91 pp 14-15 [Article by Claire Remy: "Everything Set for Neural Signal Processors"] first paragraph is ELECTRONIQUE...paving the way for neural signal processors in so doing. The principal advantage of this specific circuit over a neuromimetic software program is
