Sample records for parallel post-processing evaluation

  1. Evaluation of retention and fracture resistance of different fiber reinforced posts: An in vitro study.

    PubMed

    Pruthi, Varun; Talwar, Sangeeta; Nawal, Ruchika Roongta; Pruthi, Preeti Jain; Choudhary, Sarika; Yadav, Seema

    2018-01-01

    The aim of this study was to evaluate the retention and fracture resistance of different fibre posts. Ninety extracted human permanent maxillary central incisors were used in this study. For retention evaluation, after obturation, post space preparation was done in all root canals and posts were cemented under three groups. Later, the posts were grasped and pulled out from the roots with the help of a three-jaw chuck at a cross-head speed of 5 mm/min. The force required to dislodge each post was recorded in Newtons. To evaluate the fracture behavior of posts, artificial root canals were drilled into aluminium blocks and posts were cemented. The load required to fracture each post was recorded in Newtons. The results of the present study show that the mean retention values for the FibreKleer parallel post were significantly greater than those for the Synca double-tapered post and the Bioloren tapered post. The mean retention values of the double-tapered post and the tapered post were not statistically different. The Synca double-tapered post had the highest mean load to fracture, and this value was significantly higher than those of the FibreKleer parallel and Bioloren tapered posts. The mean fracture resistance values of the parallel and tapered posts were not statistically different. This study showed parallel posts to have better retention than tapered and double-tapered posts. Regarding fracture resistance, double-tapered posts were found to be better than parallel and tapered posts.

  2. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
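
    The subject-level parallelism these workflow tools exploit can be illustrated with a minimal Python sketch; the `run_pipeline` function and its step names are hypothetical stand-ins, not the API of any tool in the review.

    ```python
    # Sketch: subject-level parallelism as used by MRI post-processing
    # workflow tools. `run_pipeline` is a hypothetical stand-in for a
    # chain of real processing steps (e.g., skull stripping, registration).
    from multiprocessing import Pool

    def run_pipeline(subject_id: str) -> str:
        # Each step would normally invoke an external tool (FSL, SPM, ...);
        # here we just return a label to keep the sketch self-contained.
        steps = ["skull_strip", "register", "segment"]
        for step in steps:
            pass  # placeholder for the real call, e.g., subprocess.run(...)
        return f"{subject_id}: done ({' -> '.join(steps)})"

    if __name__ == "__main__":
        subjects = [f"sub-{i:02d}" for i in range(1, 9)]
        with Pool(processes=4) as pool:      # independent subjects in parallel
            for result in pool.map(run_pipeline, subjects):
                print(result)
    ```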

  3. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
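
    A minimal sketch of why the gradient step parallelizes so well: with forward differences, each partial derivative needs one independent objective evaluation, so each can go to its own process. The quadratic objective and the use of multiprocessing (rather than the SPMD/MPI setup used on the Origin2000) are illustrative assumptions.

    ```python
    # Sketch: the gradient of an objective at a design point costs one
    # function evaluation per design variable (forward differences), and
    # those evaluations are independent -- the parallelism POST3D exploits.
    # The quadratic objective here is a toy stand-in.
    from multiprocessing import Pool
    from functools import partial

    def objective(x):
        return sum((xi - 1.0) ** 2 for xi in x)

    def forward_diff(i, x, h=1e-6):
        xp = list(x)
        xp[i] += h
        return (objective(xp) - objective(x)) / h

    if __name__ == "__main__":
        x0 = [0.0, 2.0, -1.0, 3.0]
        with Pool(len(x0)) as pool:   # one worker per design variable
            grad = pool.map(partial(forward_diff, x=x0), range(len(x0)))
        print([round(g, 4) for g in grad])   # ~[-2.0, 2.0, -4.0, 4.0]
    ```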

  4. Electrophysiological evidence for parallel and serial processing during visual search.

    PubMed

    Luck, S J; Hillyard, S A

    1990-12-01

    Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.

  5. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
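
    The first step described above -- splitting one high-rate stream into parallel lower-rate streams by sub-sampling -- can be sketched in a few lines; the sample rate, stream count, and input waveform are illustrative assumptions, and the filtering stages are omitted.

    ```python
    # Sketch: splitting one high-rate sample stream into M parallel
    # lower-rate streams by commutator sub-sampling, the first step of the
    # parallel front end described above. Rates and M are illustrative.
    import numpy as np

    def polyphase_split(x: np.ndarray, M: int) -> np.ndarray:
        """Return an (M, len(x)//M) array; row k holds samples x[k::M]."""
        n = (len(x) // M) * M          # trim to a whole number of blocks
        return x[:n].reshape(-1, M).T  # each row is one parallel stream

    fs = 1e9                                  # 1 GS/s broadband input (assumed)
    t = np.arange(1024) / fs
    x = np.sin(2 * np.pi * 5e6 * t)           # stand-in for a PPM waveform
    streams = polyphase_split(x, M=16)        # 16 streams at 62.5 MS/s each
    print(streams.shape)                      # (16, 64)
    ```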

  6. Influence of post pattern and resin cement curing mode on the retention of glass fibre posts.

    PubMed

    Poskus, L T; Sgura, R; Paragó, F E M; Silva, E M; Guimarães, J G A

    2010-04-01

    To evaluate the influence of post design and roughness and cement system (dual- or self-cured) on the retention of glass fibre posts. Two tapered and smooth posts (Exacto Cônico No. 2 and White Post No. 1) and two parallel-sided and serrated posts (Fibrekor 1.25 mm and Reforpost No. 2) were adhesively luted with two different resin cements--a dual-cured (Rely-X ARC) and a self-cured (Cement Post)--in 40 single-rooted teeth. The teeth were divided into eight experimental groups (n = 5): PFD--Parallel-serrated-Fibrekor/dual-cured; PRD--Parallel-serrated-Reforpost/dual-cured; TED--Tapered-smooth-Exacto Cônico/dual-cured; TWD--Tapered-smooth-White Post/dual-cured; PFS--Parallel-serrated-Fibrekor/self-cured; PRS--Parallel-serrated-Reforpost/self-cured; TES--Tapered-smooth-Exacto Cônico/self-cured; TWS--Tapered-smooth-White Post/self-cured. The specimens were submitted to a pull-out test at a crosshead speed of 0.5 mm/min. Data were analysed using analysis of variance and Bonferroni's multiple comparison test (alpha = 0.05). Pull-out results (MPa) were: PFD = 8.13 (+/-1.71); PRD = 8.30 (+/-0.46); TED = 8.68 (+/-1.71); TWD = 9.35 (+/-1.99); PFS = 8.54 (+/-2.23); PRS = 7.09 (+/-1.96); TES = 8.27 (+/-3.92); TWS = 7.57 (+/-2.35). No statistically significant difference was detected for the post and cement factors or their interaction. The retention of glass fibre posts was not affected by post design or surface roughness, nor by resin cement curing mode. These results imply that the choice of serrated posts and self-cured cements is not related to an improvement in retention.
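
    A hedged re-creation of the reported analysis: a one-way ANOVA run on synthetic data drawn from the published group means and SDs (n = 5 per group), not on the original measurements.

    ```python
    # Sketch: the reported analysis (one-way ANOVA on pull-out strength,
    # alpha = 0.05) re-run on *synthetic* data generated from the published
    # group means and SDs (n = 5 per group); not the original measurements.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    groups = {  # mean, SD in MPa, from the abstract
        "PFD": (8.13, 1.71), "PRD": (8.30, 0.46), "TED": (8.68, 1.71),
        "TWD": (9.35, 1.99), "PFS": (8.54, 2.23), "PRS": (7.09, 1.96),
        "TES": (8.27, 3.92), "TWS": (7.57, 2.35),
    }
    samples = [rng.normal(m, sd, size=5) for m, sd in groups.values()]
    f, p = stats.f_oneway(*samples)
    print(f"F = {f:.2f}, p = {p:.3f}")  # expect p > 0.05, as in the study
    ```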

  7. Progress of the Swedish-Australian research collaboration on uncooled smart IR sensors

    NASA Astrophysics Data System (ADS)

    Liddiard, Kevin C.; Ringh, Ulf; Jansson, Christer; Reinhold, Olaf

    1998-10-01

    Progress is reported on the development of uncooled microbolometer IR focal plane detector arrays (IRFPDA) under a research collaboration between the Swedish Defence Research Establishment (FOA), and the Defence Science and Technology Organization (DSTO), Australia. The paper describes current focal plane detector arrays designed by Electro-optic Sensor Design (EOSD) for readout circuits developed by FOA. The readouts are fabricated in 0.8 micrometer CMOS, and have a novel signal conditioning and 16 bit parallel ADC design. The arrays are post-processed at DSTO on wafers supplied by FOA. During the past year array processing has been carried out at a new microengineering facility at DSTO, Salisbury, South Australia. A number of small format 16 X 16 arrays have been delivered to FOA for evaluation, and imaging has been demonstrated with these arrays. A 320 X 240 readout with 320 parallel 16 bit ADCs has been developed and IRFPDAs for this readout have been fabricated and are currently being evaluated.

  8. The use of reinforced composite resin cement as compensation for reduced post length.

    PubMed

    Nissan, J; Dmitry, Y; Assif, D

    2001-09-01

    Cements that yield high retentive values are believed to allow use of shorter posts. This study investigated the use of reinforced composite resin cement as compensation for reduced dowel length. The retention values of stainless steel posts (parallel-sided ParaPost and tapered Dentatus in 5-, 8-, and 10-mm lengths) luted with Flexi-Flow titanium-reinforced composite resin and zinc phosphate cements were evaluated. Single-rooted extracted human teeth with crowns (n = 120), removed at the cementoenamel junction, were randomly divided into 4 groups of 30 samples each. Different post lengths were luted with either Flexi-Flow or zinc phosphate. Each sample was placed into a specialized jig and on a tensile testing machine with a crosshead speed of 2 mm/min, applied until failure. The effect of different posts and cements on the force required to dislodge the dowels was evaluated with multiple analyses of variance (ANOVA). One-way ANOVA with Scheffé contrast was applied to determine the effect of different post lengths on the retentive failure of posts luted with the 2 agents. Flexi-Flow reinforced composite resin cement significantly increased retention of ParaPost and Dentatus dowels (P<.001) compared with zinc phosphate. One-way ANOVA revealed no statistically significant difference (P>.05) between mean retention of both dowels luted with Flexi-Flow for all post lengths used (5 mm = 8 mm = 10 mm). Mean retention values of the groups luted with zinc phosphate showed a statistically significant difference (P<.001) for the different post lengths (10 > 8 > 5 mm). Parallel-sided ParaPost dowels demonstrated a higher mean retention than tapered Dentatus dowels (P<.001). In this study, Flexi-Flow reinforced composite resin cement compensated for the reduced length of shorter parallel-sided ParaPost and tapered Dentatus dowels.

  9. Search asymmetries: parallel processing of uncertain sensory information.

    PubMed

    Vincent, Benjamin T

    2011-08-01

    What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
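
    A minimal simulation of the max-rule model discussed above: every item is processed in parallel, and "target present" is declared when the largest noisy response exceeds a criterion. The means, noise levels, and criterion are illustrative, not fitted values from the paper.

    ```python
    # Sketch: a one-stage parallel (max-rule) account of search asymmetry.
    # Each display item yields a noisy internal response; "target present"
    # is declared when the maximum response exceeds a criterion. Asymmetry
    # falls out when one stimulus carries more internal noise than the other.
    import numpy as np

    rng = np.random.default_rng(1)

    def max_rule_accuracy(sigma_target, sigma_distractor, n_items=8,
                          criterion=1.5, trials=20_000):
        correct = 0
        for present in (True, False):
            d = rng.normal(0.0, sigma_distractor,
                           size=(trials, n_items - present))
            if present:
                t = rng.normal(1.0, sigma_target, size=(trials, 1))
                resp = np.hstack([d, t]).max(axis=1) > criterion
            else:
                resp = d.max(axis=1) > criterion
            correct += np.sum(resp == present)
        return correct / (2 * trials)

    # Searching for the *noisier* stimulus among clean distractors is easier
    # than the reverse -- the classic asymmetry, with no serial stage needed.
    print(max_rule_accuracy(sigma_target=1.0, sigma_distractor=0.3))
    print(max_rule_accuracy(sigma_target=0.3, sigma_distractor=1.0))
    ```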

  10. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  11. Lexical and Post-Lexical Complexity Effects on Eye Movements in Reading

    PubMed Central

    Warren, Tessa; Reichle, Erik D.; Patson, Nikole D.

    2011-01-01

    The current study investigated how a post-lexical complexity manipulation followed by a lexical complexity manipulation affects eye movements during reading. Both manipulations caused disruption in all measures on the manipulated words, but the patterns of spill-over differed. Critically, the effects of the two kinds of manipulations did not interact, and there was no evidence that post-lexical processing difficulty delayed lexical processing on the next word (cf. Henderson & Ferreira, 1990). This suggests that post-lexical processing of one word and lexical processing of the next can proceed independently and likely in parallel. This finding is consistent with the assumptions of the E-Z Reader model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). PMID:21603125

  12. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real-time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real-time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
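
    The task-queue pattern described above can be sketched with Python's multiprocessing primitives; the "vision operation" is a placeholder, and the shared result queue stands in for the shared memory used on the array processor.

    ```python
    # Sketch: the record's task-queue pattern -- idle workers pull the next
    # (sub)task from a shared queue and post results, so load balances
    # itself without a centralized controller. The task body is a stand-in.
    from multiprocessing import Process, Queue

    def worker(tasks: Queue, results: Queue) -> None:
        while True:
            task = tasks.get()
            if task is None:                # sentinel: no more work
                break
            name, data = task
            results.put((name, sum(data)))  # placeholder for a vision op

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        procs = [Process(target=worker, args=(tasks, results))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for i in range(10):
            tasks.put((f"tile-{i}", list(range(i * 100))))
        for _ in procs:
            tasks.put(None)
        for _ in range(10):
            print(results.get())
        for p in procs:
            p.join()
    ```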

  13. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
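
    A minimal sketch of the kind of parallel post-processing proposed here: each contingency output is scanned for limit violations independently, so the records map cleanly onto a process pool. The record format, field names, and voltage limit are illustrative assumptions.

    ```python
    # Sketch: post-processing many contingency outputs in parallel. Each
    # record is scanned for limit violations independently, so the work
    # maps cleanly onto a process pool. Field names are illustrative.
    from multiprocessing import Pool

    LIMIT = 1.05  # per-unit voltage limit (assumed)

    def find_violations(case):
        name, bus_voltages = case
        bad = [b for b, v in bus_voltages.items() if v > LIMIT]
        return name, bad

    if __name__ == "__main__":
        cases = [(f"ctg-{i}", {"bus1": 1.0 + 0.01 * i, "bus2": 1.02})
                 for i in range(10)]
        with Pool(4) as pool:
            for name, bad in pool.map(find_violations, cases):
                if bad:
                    print(name, "violates at", bad)
    ```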

  14. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  15. Why caution is recommended with post-hoc individual patient matching for estimation of treatment effect in parallel-group randomized controlled trials: the case of acute stroke trials.

    PubMed

    Jafari, Nahid; Hearne, John; Churilov, Leonid

    2013-11-10

    A post-hoc individual patient matching procedure was recently proposed within the context of parallel group randomized clinical trials (RCTs) as a method for estimating treatment effect. In this paper, we consider a post-hoc individual patient matching problem within a parallel group RCT as a multi-objective decision-making problem focussing on the trade-off between the quality of individual matches and the overall percentage of matching. Using acute stroke trials as a context, we utilize exact optimization and simulation techniques to investigate a complex relationship between the overall percentage of individual post-hoc matching, the size of the respective RCT, and the quality of matching on variables highly prognostic for a good functional outcome after stroke, as well as the dispersion in these variables. It is empirically confirmed that a high percentage of individual post-hoc matching can only be achieved when the differences in prognostic baseline variables between individually matched subjects within the same pair are sufficiently large and that the unmatched subjects are qualitatively different to the matched ones. It is concluded that the post-hoc individual matching as a technique for treatment effect estimation in parallel-group RCTs should be exercised with caution because of its propensity to introduce significant bias and reduce validity. If used with appropriate caution and thorough evaluation, this approach can complement other viable alternative approaches for estimating the treatment effect. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Capacitive Micro Pressure Sensor Integrated with a Ring Oscillator Circuit on Chip

    PubMed Central

    Dai, Ching-Liang; Lu, Po-Wei; Chang, Chienliu; Liu, Cheng-Yang

    2009-01-01

    The study investigates a capacitive micro pressure sensor integrated with a ring oscillator circuit on a chip. The integrated capacitive pressure sensor is fabricated using the commercial CMOS (complementary metal oxide semiconductor) process and a post-process. The ring oscillator is employed to convert the capacitance of the pressure sensor into the frequency output. The pressure sensor consists of 16 sensing cells in parallel. Each sensing cell contains a top electrode and a lower electrode, and the top electrode is a sandwich membrane. The pressure sensor needs a post-CMOS process to release the membranes after completion of the CMOS process. The post-process uses etchants to etch the sacrificial layers, and to release the membranes. The advantages of the post-process include easy execution and low cost. Experimental results reveal that the pressure sensor has a high sensitivity of 7 Hz/Pa in the pressure range of 0–300 kPa. PMID:22303167

  17. Capacitive micro pressure sensor integrated with a ring oscillator circuit on chip.

    PubMed

    Dai, Ching-Liang; Lu, Po-Wei; Chang, Chienliu; Liu, Cheng-Yang

    2009-01-01

    The study investigates a capacitive micro pressure sensor integrated with a ring oscillator circuit on a chip. The integrated capacitive pressure sensor is fabricated using the commercial CMOS (complementary metal oxide semiconductor) process and a post-process. The ring oscillator is employed to convert the capacitance of the pressure sensor into the frequency output. The pressure sensor consists of 16 sensing cells in parallel. Each sensing cell contains a top electrode and a lower electrode, and the top electrode is a sandwich membrane. The pressure sensor needs a post-CMOS process to release the membranes after completion of the CMOS process. The post-process uses etchants to etch the sacrificial layers, and to release the membranes. The advantages of the post-process include easy execution and low cost. Experimental results reveal that the pressure sensor has a high sensitivity of 7 Hz/Pa in the pressure range of 0-300 kPa.

  18. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
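
    A small worked example of what the reported timings imply, using only numbers from the abstract:

    ```python
    # Worked example: parallel speedup and efficiency from the timings
    # reported above (2:12 on one core, 1:04 on four cores, h:mm).
    t1 = 2 * 60 + 12      # 132 min on 1 core
    t4 = 1 * 60 + 4       #  64 min on 4 cores
    speedup = t1 / t4
    efficiency = speedup / 4
    print(f"speedup = {speedup:.2f}x, efficiency = {efficiency:.0%}")
    # speedup = 2.06x, efficiency = 52% -- sub-linear, matching the note
    # that parallelism trades memory usage for runtime.
    ```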

  19. The Wang Landau parallel algorithm for the simple grids. Optimizing OpenMPI parallel implementation

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.

    2017-12-01

    The Wang-Landau Monte Carlo algorithm was implemented to calculate the density of states for different simple spin lattices. The energy space was split between the individual threads and balanced according to the expected runtime of the individual processes. A custom spin-clustering mechanism, necessary for overcoming the critical slowdown in certain energy subspaces, was devised. Stable reconstruction of the density of states was of primary importance. Some data post-processing techniques were applied to produce the expected smooth density of states.
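
    For reference, the serial core of the Wang-Landau method on a small 2D Ising lattice looks roughly like the sketch below; the lattice size and modification-factor schedule are illustrative, and the energy-space splitting and spin clustering of the record's parallel version are omitted.

    ```python
    # Sketch: serial Wang-Landau on a small 2D Ising lattice -- a random
    # walk in energy space that builds ln g(E) on the fly.
    import math, random

    L = 4                                     # small lattice keeps this fast
    N = L * L

    def site_energy(s, i, j):
        """Bond energy of site (i, j) with its four periodic neighbours."""
        return -s[i][j] * (s[(i + 1) % L][j] + s[i - 1][j]
                           + s[i][(j + 1) % L] + s[i][j - 1])

    def total_energy(s):
        # summing site terms counts every bond twice, hence the division
        return sum(site_energy(s, i, j) for i in range(L) for j in range(L)) // 2

    random.seed(0)
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    E = total_energy(s)
    log_g, hist = {}, {}                      # ln g(E) and visit histogram
    log_f = 1.0                               # ln of the modification factor
    while log_f > 1e-3:
        for _ in range(5_000 * N):
            i, j = random.randrange(L), random.randrange(L)
            dE = -2 * site_energy(s, i, j)    # energy change if s[i][j] flips
            new_E = E + dE
            # accept with probability min(1, g(E)/g(E')): flat-histogram rule
            if math.log(random.random()) < log_g.get(E, 0.0) - log_g.get(new_E, 0.0):
                s[i][j] *= -1
                E = new_E
            log_g[E] = log_g.get(E, 0.0) + log_f
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):
            hist = {}                         # histogram flat enough:
            log_f /= 2                        # halve ln f and keep refining
    print({e: round(v - min(log_g.values()), 1) for e, v in sorted(log_g.items())})
    ```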

  20. Blast traumatic brain injury induced cognitive deficits are attenuated by pre- or post-injury treatment with the glucagon-like peptide-1 receptor agonist, exendin-4

    PubMed Central

    Tweedie, David; Rachmany, Lital; Rubovitch, Vardit; Li, Yazhou; Holloway, Harold W.; Lehrmann, Elin; Zhang, Yongqing; Becker, Kevin G.; Perez, Evelyn; Hoffer, Barry J.; Pick, Chaim G.; Greig, Nigel H.

    2015-01-01

    Background Blast traumatic brain injury (B-TBI) affects military and civilian personnel. Presently there are no approved drugs for blast brain injury. Methods Exendin-4, administered subcutaneously, was evaluated as a pre-treatment (48 hours) and post-injury treatment (2 hours) on neurodegeneration, behaviors, and gene expression in a murine open field model of blast injury. Results B-TBI induced neurodegeneration and changes in cognition and in gene expression linked to dementia disorders. Exendin-4, administered pre- or post-injury, ameliorated B-TBI-induced neurodegeneration at 72 hours and memory deficits from days 7–14, and attenuated the genes regulated by blast at day 14 post-injury. Conclusions The present data suggest shared pathological processes between concussive TBI and B-TBI, with endpoints amenable to beneficial therapeutic manipulation by exendin-4. B-TBI-induced dementia-related gene pathways and cognitive deficits in mice somewhat parallel the epidemiological studies of Barnes and co-workers, who identified a greater risk of later-life dementia in US military veterans who experienced diverse TBIs. PMID:26327236

  1. Early brain injury alters the blood-brain barrier phenotype in parallel with β-amyloid and cognitive changes in adulthood.

    PubMed

    Pop, Viorela; Sorensen, Dane W; Kamper, Joel E; Ajao, David O; Murphy, M Paul; Head, Elizabeth; Hartman, Richard E; Badaut, Jérôme

    2013-02-01

    Clinical studies suggest that traumatic brain injury (TBI) hastens cognitive decline and development of neuropathology resembling brain aging. Blood-brain barrier (BBB) disruption following TBI may contribute to the aging process by deregulating substance exchange between the brain and blood. We evaluated the effect of juvenile TBI (jTBI) on these processes by examining long-term alterations of BBB proteins, β-amyloid (Aβ) neuropathology, and cognitive changes. A controlled cortical impact was delivered to the parietal cortex of male rats at postnatal day 17, with behavioral studies and brain tissue evaluation at 60 days post-injury (dpi). Immunoglobulin G extravasation was unchanged, and jTBI animals had higher levels of tight-junction protein claudin 5 versus shams, suggesting the absence of BBB disruption. However, decreased P-glycoprotein (P-gp) on cortical blood vessels indicates modifications of BBB properties. In parallel, we observed higher levels of endogenous rodent Aβ in several brain regions of the jTBI group versus shams. In addition at 60 dpi, jTBI animals displayed systematic search strategies rather than relying on spatial memory during the water maze. Together, these alterations to the BBB phenotype after jTBI may contribute to the accumulation of toxic products, which in turn may induce cognitive differences and ultimately accelerate brain aging.

  2. Spatio-Temporal Process Variability in Watershed Scale Wetland Restoration Planning

    NASA Astrophysics Data System (ADS)

    Evenson, G. R.

    2012-12-01

    Watershed scale restoration decision making processes are increasingly informed by quantitative methodologies providing site-specific restoration recommendations - sometimes referred to as "systematic planning." The more advanced of these methodologies are characterized by a coupling of search algorithms and ecological models to discover restoration plans that optimize environmental outcomes. Yet while these methods have exhibited clear utility as decision support toolsets, they may be critiqued for flawed evaluations of spatio-temporally variable processes fundamental to watershed scale restoration. Hydrologic and non-hydrologic mediated process connectivity along with post-restoration habitat dynamics, for example, are commonly ignored yet known to appreciably affect restoration outcomes. This talk will present a methodology to evaluate such spatio-temporally complex processes in the production of watershed scale wetland restoration plans. Using the Tuscarawas Watershed in Eastern Ohio as a case study, a genetic algorithm will be coupled with the Soil and Water Assessment Tool (SWAT) to reveal optimal wetland restoration plans as measured by their capacity to maximize nutrient reductions. Then, a so-called "graphical" representation of the optimization problem will be implemented in parallel to promote hydrologic and non-hydrologic mediated connectivity amongst existing wetlands and sites selected for restoration. Further, various search algorithm mechanisms will be discussed as a means of accounting for temporal complexities such as post-restoration habitat dynamics. Finally, generalized patterns of restoration plan optimality will be discussed as an alternative and possibly superior decision support toolset given the complexity and stochastic nature of spatio-temporal process variability.

  3. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performance compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  4. Parallel text rendering by a PostScript interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kritskii, S.P.; Zastavnoi, B.A.

    1994-11-01

    The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subroutines thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subroutines so that each may be colored independently of the others. The algorithm uses special estimates for estimating the correct partition so that the corresponding outlines are divided into horizontal strips. A method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
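
    The equal-area strip idea can be sketched independently of PostScript: approximate a glyph by its per-scanline filled widths, then cut the scanline range into k bands of roughly equal filled area. The width profile and greedy cut rule below are illustrative, not the paper's exact estimates.

    ```python
    # Sketch: split a glyph's scanline range into k horizontal strips of
    # roughly equal filled area, so each processor colors about the same
    # number of pixels. The glyph is reduced to a per-scanline width profile.
    def equal_area_strips(widths, k):
        """widths[y] = filled pixels on scanline y; return k (lo, hi) bands."""
        total = sum(widths)
        target = total / k
        strips, lo, acc = [], 0, 0.0
        for y, w in enumerate(widths):
            acc += w
            if acc >= target and len(strips) < k - 1:
                strips.append((lo, y + 1))
                lo, acc = y + 1, 0.0
        strips.append((lo, len(widths)))
        return strips

    # Triangle-ish glyph: narrow at the top, wide at the bottom.
    widths = [2 * y for y in range(1, 33)]
    for band in equal_area_strips(widths, 4):
        print(band)   # later bands are thinner because scanlines are wider
    ```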

  5. Stress distributions in maxillary central incisors restored with various types of post materials and designs.

    PubMed

    Madfa, A A; Kadir, M R Abdul; Kashani, J; Saidin, S; Sulaiman, E; Marhazlinda, J; Rahbari, R; Abdullah, B J J; Abdullah, H; Abu Kasim, N H

    2014-07-01

    Different dental post designs and materials affect the stability of restoration of a tooth. This study aimed to analyse and compare the stability of two shapes of dental posts (parallel-sided and tapered) made of five different materials (stainless steel, titanium, zirconia, carbon fibre and glass fibre) by investigating their stress transfer through the finite element (FE) method. Ten three-dimensional (3D) FE models of a maxillary central incisor restored with two different designs and five different materials were constructed. An oblique loading of 100 N was applied to each 3D model. Analyses along the centre of the post, the crown-cement/core and the post-cement/dentine interfaces were computed, and the means were calculated. One-way ANOVAs followed by post hoc tests were used to evaluate the effectiveness of the post materials and designs (p=0.05). For post designs, the tapered posts introduced significantly higher stress compared with the parallel-sided post (p<0.05), especially along the centre of the post. Of the materials, the highest level of stress was found for stainless steel, followed by zirconia, titanium, glass fibre and carbon fibre posts (p<0.05). The carbon and glass fibre posts reduced the stress distribution at the middle and apical part of the posts compared with the stainless steel, zirconia and titanium posts. The opposite results were observed at the crown-cement/core interface. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Interdisciplinary Research and Phenomenology as Parallel Processes of Consciousness

    ERIC Educational Resources Information Center

    Arvidson, P. Sven

    2016-01-01

    There are significant parallels between interdisciplinarity and phenomenology. Interdisciplinary conscious processes involve identifying relevant disciplines, evaluating each disciplinary insight, and creating common ground. In an analogous way, phenomenology involves conscious processes of epoché, reduction, and eidetic variation. Each stresses…

  7. Post retention and post/core shear bond strength of four post systems.

    PubMed

    Stockton, L W; Williams, P T; Clarke, C T

    2000-01-01

    As clinicians we continue to search for a post system which will give us maximum retention while maximizing resistance to root fracture. The introduction of several new post systems, with claims of high retention and resistance to root fracture, requires that independent studies be performed to evaluate these claims. This study tested the tensile and shear dislodgment forces of four post designs that were luted into roots 10 mm apical of the CEJ. The Para Post Plus (P1) is a parallel-sided, passive design; the Para Post XT (P2) is a combination active/passive design; the Flexi-Post (F1) and the Flexi-Flange (F2) are active post designs. All systems tested were stainless steel. This study compared the test results of the four post designs for tensile and shear dislodgment. All mounted samples were loaded in tension until failure occurred. The tensile load was applied parallel to the long axis of the root, while the shear load was applied at 45° to the long axis of the root. The Flexi-Post (F1) was significantly different from the other three in the tensile test; however, the Para Post XT (P2) was significantly different from the other three in the shear test and had a better probability of survival in the Kaplan-Meier survival function test. Based on the results of this study, our recommendation is for the Para Post XT (P2).

  8. Teaching and Learning: Highlighting the Parallels between Education and Participatory Evaluation.

    ERIC Educational Resources Information Center

    Vanden Berk, Eric J.; Cassata, Jennifer Coyne; Moye, Melinda J.; Yarbrough, Donald B.; Siddens, Stephanie K.

    As an evaluation team trained in educational psychology and committed to participatory evaluation and its evolution, the researchers have found the parallel between evaluator-stakeholder roles in the participatory evaluation process and educator-student roles in educational psychology theory to be important. One advantage then is that the theories…

  9. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running Particle-In-Cell (PIC) plasma simulation codes, both in parallel and serial, with automatic post-processing and generation of visual diagnostics.

  10. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  11. Improving operating room productivity via parallel anesthesia processing.

    PubMed

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and outside the OR in a parallel manner. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, what the predicted end-of-day overtime would be. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study. The average increase was 0.42 surgeries per day. Where it was assumed that three cases per day would be performed by all surgeons, the days going to overtime were reduced by 43 percent with the parallel block. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; many days may have both regional and general anesthesia. Also, as a case study, single-center research may limit generalizability. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources to do parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show practice change effects at a system-wide level.
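
    A back-of-the-envelope sketch of the question the simulation model answers; all durations below are illustrative assumptions, not the study's data.

    ```python
    # Sketch: how many cases fit in a 555-minute OR day if the regional
    # block (assumed 25 min) is done outside the OR, overlapping turnover,
    # instead of inside it. Durations are illustrative, not the study's.
    DAY = 555           # minutes of staffed OR time (from the abstract)
    BLOCK = 25          # regional anesthesia placement (assumed)
    SURGERY = 110       # incision to closure (assumed)
    TURNOVER = 30       # room cleanup between cases (assumed)

    def cases_per_day(parallel_block: bool) -> int:
        per_case_in_room = SURGERY + (0 if parallel_block else BLOCK)
        t, n = 0, 0
        while t + per_case_in_room <= DAY:
            t += per_case_in_room + TURNOVER  # block overlaps turnover if parallel
            n += 1
        return n

    print("serial block:  ", cases_per_day(False))  # 3 cases
    print("parallel block:", cases_per_day(True))   # 4 cases
    ```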

  12. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of SUN SPARCs with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
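
    The speedup metric, plus a toy overhead model of why coarse-grain parallelism wins on a network, can be written down directly; the latency constant and message counts below are illustrative.

    ```python
    # Sketch: the record's performance metric. Elapsed times reduce to
    # speedup S(p) = T(1)/T(p); a toy overhead term shows why frequent
    # communication (fine-grain parallelism) caps network-based speedup.
    def speedup(t1: float, tp: float) -> float:
        return t1 / tp

    def modeled_time(t1, p, msgs_per_task, latency=0.01):
        return t1 / p + msgs_per_task * latency * p   # compute + comm cost

    t1 = 100.0
    for p in (2, 4, 8, 16):
        coarse = modeled_time(t1, p, msgs_per_task=10)
        fine = modeled_time(t1, p, msgs_per_task=1000)
        print(p, round(speedup(t1, coarse), 1), round(speedup(t1, fine), 1))
    ```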

  13. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  14. Polymer Based Highly Parallel Nanoscopic Sensors for Rapid Detection of Chemical and Biological Threats

    DTIC Science & Technology

    2007-09-18

    Xuliang Han, PI of Brewer Science, Inc.; subcontract with the Center for Applied Science & Engineering, Missouri State University. Carbon nanotubes (CNTs) contain a wide range of impurities from the growth process. At Brewer Science an effective post-growth purification procedure was developed to reduce the amount of impurities, and several characterization techniques were…

  15. Cedar Project---Original goals and progress to date

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cybenko, G.; Kuck, D.; Padua, D.

    1990-11-28

    This work encompasses a broad attack on high speed parallel processing. Hardware, software, applications development, and performance evaluation and visualization as well as research topics are proposed. Our goal is to develop practical parallel processing for the 1990's.

  16. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between the CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  17. A finite element study of teeth restored with post and core: Effect of design, material, and ferrule.

    PubMed

    Upadhyaya, Viram; Bhargava, Akshay; Parkash, Hari; Chittaranjan, B; Kumar, Vivek

    2016-01-01

    Different post designs and materials are available; however, no consensus exists regarding superiority for stress distribution. The aim of this study was to evaluate the effect of post design and material, with or without a ferrule, on stress distribution using finite element analysis. A total of 12 three-dimensional (3D) axisymmetric models of post-retained central incisors were made: six with a ferrule design and six without it. Three of these six models had tapered posts, and three had parallel posts. The materials tested were a titanium post with a composite resin core, a nickel-chromium cast post and core, and a fiber-reinforced composite (FRC) post with a composite resin core. The stress analysis was done using ANSYS software. A load of 100 N at an angle of 45° was applied 2 mm cervical to the incisal edge on the palatal surface, and results were analyzed using the 3D von Mises criteria. The highest amount of stress was in the cervical region. Overall, the stress in the tapered post system was greater than in the parallel one. The FRC post and composite resin core recorded minimal stresses within the post, but the stresses transmitted to cervical dentin were greater as compared to the other systems. Minimal stresses in cervical dentine were observed where the remaining coronal dentin was strengthened by a ferrule. A rigid material with a high modulus of elasticity for the post and core system creates the most uniform stress distribution pattern. A ferrule provides uniform distribution of stresses and decreases the cervical stresses.

  18. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed, loop-level parallelism implemented with OpenMP.
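
    A sketch of the domain-decomposition strategy in Python rather than on the Tile64: split the image into horizontal subimages with a halo, run Canny on each in parallel, and stitch. It assumes OpenCV and NumPy are available; the thresholds and halo size are illustrative, and Canny's global hysteresis can still differ slightly at seams.

    ```python
    # Sketch: domain decomposition for Canny -- horizontal subimages with a
    # halo so the smoothing/gradient stencils see real neighbors, processed
    # in parallel and stitched back together. Requires OpenCV and NumPy.
    from multiprocessing import Pool
    import numpy as np
    import cv2

    HALO = 16  # larger than the filter radius (assumed)

    def canny_tile(args):
        tile, top_pad, bottom_pad = args
        edges = cv2.Canny(tile, 100, 200)                   # illustrative thresholds
        return edges[top_pad:edges.shape[0] - bottom_pad]   # drop the halo

    def parallel_canny(img, n_tiles=4):
        h = img.shape[0]
        bounds = [(i * h // n_tiles, (i + 1) * h // n_tiles)
                  for i in range(n_tiles)]
        jobs = []
        for lo, hi in bounds:
            lo_p, hi_p = max(0, lo - HALO), min(h, hi + HALO)
            jobs.append((img[lo_p:hi_p], lo - lo_p, hi_p - hi))
        with Pool(n_tiles) as pool:
            return np.vstack(pool.map(canny_tile, jobs))

    if __name__ == "__main__":
        img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
        print(parallel_canny(img).shape)  # (512, 512)
    ```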

  19. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  20. Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide

    NASA Astrophysics Data System (ADS)

    Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.

    Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

  1. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
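
    One plausible low-cost balancing rule of the kind the record describes (its exact strategy is not specified in the abstract): estimate each document's cost by length and greedily give the heaviest remaining item to the least-loaded worker, the classic LPT heuristic.

    ```python
    # Sketch: a simple load-balancing rule for parallel text mining --
    # estimate each input's cost (here, text length), then assign the
    # heaviest remaining item to the least-loaded worker (LPT heuristic).
    # Not paraBTM's exact strategy, which the abstract does not detail.
    import heapq

    def balance(texts, n_workers):
        heap = [(0, w, []) for w in range(n_workers)]   # (load, id, docs)
        heapq.heapify(heap)
        for i in sorted(range(len(texts)), key=lambda i: -len(texts[i])):
            load, w, docs = heapq.heappop(heap)
            docs.append(i)
            heapq.heappush(heap, (load + len(texts[i]), w, docs))
        return sorted(heap, key=lambda e: e[1])

    texts = ["x" * n for n in (900, 40, 850, 300, 120, 700, 60, 500)]
    for load, w, docs in balance(texts, 3):
        print(f"worker {w}: load={load}, docs={docs}")
    ```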

  2. The Influence of Post System Design and Material on the Biomechanical Behavior of Teeth with Little Remaining Coronal Structure.

    PubMed

    Pinto, Cristiano Lazzari; Bhering, Claudia Lopes Brilhante; de Oliveira, Gabriel Rodrigues; Maroli, Angélica; Reginato, Vagner Flávio; Caldas, Ricardo Armini; Bacchi, Atais

    2018-05-14

    To evaluate the influence of different post systems on the biomechanical behavior of teeth with a severe loss of remaining coronal structure. Fifty standardized bovine teeth (n = 10 per group) were restored with one of the following: cast post-and-core (CPC), prefabricated metallic post (PFM), parallel glass-fiber post (P-FP), conical glass-fiber post (C-FP), or composite core (no post, CC). The survival rate during thermomechanical challenging (TC), the fracture strength (FS), and failure patterns (FP) were evaluated. Finite element models evaluated the stress distribution after the application of 100 N. All specimens survived TC. Similar FS was observed among the post-containing groups. Groups P-FP and CC presented 100% repairable fractures. The von Mises analysis showed the maximum stresses within the root canal in the groups restored with metallic posts. Glass-fiber posts and CC presented the maximum stresses at the load contact point. The glass-fiber groups showed lower stresses in the analysis of maximal contact pressure; CPC led to the highest values of contact pressure. The modified von Mises (mvM) stress in dentin did not show differences among groups. Moreover, mvM values did not reach the dentin fracture limit for any group. The type of intracanal post had a relevant influence on the biomechanical behavior of teeth with little remaining coronal structure. © 2018 by the American College of Prosthodontists.

  3. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.

  4. Efficient in-situ visualization of unsteady flows in climate simulation

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2017-04-01

    The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. Typically, the visualization pipeline, consisting of the processes of data generation, visualization mapping, and rendering, is distributed into two parts over the network or separated via file transfer. Within most traditional post-processing scenarios the simulation is done on a supercomputer whereas the data analysis and visualization is done on a graphics workstation. That way, temporary data sets with huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. The solution to this issue is the avoidance of temporary storage, or at least a significant reduction of data complexity. Within the Climate Visualization Lab - as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ) - we develop and integrate an in-situ approach. Our software framework DSVR is based on the separation of the process chain between the mapping and the rendering processes. It couples the mapping process directly to the simulation by calling methods of a parallelized data extraction library, which create a time-based sequence of geometric 3D scenes. This sequence is stored on a special streaming server with an interactive post-filtering option and then played out asynchronously in a separate 3D viewer application. Since the rendering is part of this viewer application, the scenes can be navigated interactively. In contrast to other in-situ approaches where 2D images are created as part of the simulation or synchronous co-visualization takes place, our method supports interaction in 3D space and in time, as well as fixed frame rates. To integrate in-situ processing based on our DSVR framework and methods in the ICON climate model, we are continuously evolving the data structures and mapping algorithms of the framework to support the ICON model's native grid structures, since DSVR originally was designed for rectilinear grids only. We now have implemented a new output module to ICON to take advantage of the DSVR visualization. The visualization can be configured, like most output modules, by using a specific namelist and is exemplarily integrated within the non-hydrostatic atmospheric model time loop. With the integration of a DSVR-based in-situ pathline extraction within ICON, a further milestone is reached. The pathline algorithm as well as the grid data structures have been optimized for the domain decomposition used for the parallelization of ICON based on MPI and OpenMP. The software implementation and evaluation is done on the supercomputers at DKRZ. In principle, the data complexity is reduced from O(n³) to O(m), where n is the grid resolution and m is the number of supporting points of all pathlines. The stability and scalability evaluation is done using Atmospheric Model Intercomparison Project (AMIP) runs. We will give a short introduction to our software framework, as well as a short overview of the implementation and usage of DSVR within ICON. Furthermore, we will present visualization and evaluation results of sample applications.

  5. Overview of the NCC

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey

    2001-01-01

    A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designers' requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team comprises Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, the turbulence module, and the chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.

  6. GPU Based Software Correlators - Perspectives for VLBI2010

    NASA Technical Reports Server (NTRS)

    Hobiger, Thomas; Kimura, Moritaka; Takefuji, Kazuhiro; Oyama, Tomoaki; Koyama, Yasuhiro; Kondo, Tetsuro; Gotoh, Tadahiro; Amagai, Jun

    2010-01-01

    Owing to their historically separate development, and driven by the requirements of the PC gaming industry, Graphics Processing Units (GPUs) have evolved into massively parallel processing systems which have entered the area of non-graphics-related applications. Although a single processing core on the GPU is much slower and provides less functionality than its counterpart on the CPU, the huge number of these small processing entities outperforms the classical processors when the application can be parallelized. Thus, in recent years various radio astronomical projects have started to make use of this technology, either to realize the correlator on this platform or to establish the post-processing pipeline with GPUs. Therefore, the feasibility of GPUs as a choice for a VLBI correlator is being investigated, including pros and cons of this technology. Additionally, a GPU-based software correlator will be reviewed with respect to energy consumption per GFlop/s and cost per GFlop/s.
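
    As background for how such a software correlator spends its cycles, here is a minimal NumPy sketch of FX-style correlation (an assumed textbook formulation, not the reviewed correlator's code): each segment's FFT and cross-multiplication is independent, which is exactly the structure that maps well onto many small GPU cores.

    ```python
    # FX correlator core: FFT each station's stream ("F"), then multiply one
    # spectrum by the conjugate of the other and accumulate ("X").
    import numpy as np

    def fx_correlate(x1, x2, nchan=1024):
        """Cross-power spectrum of two sampled voltage streams, averaged
        over all full FFT segments."""
        nseg = min(len(x1), len(x2)) // nchan
        acc = np.zeros(nchan, dtype=complex)
        for k in range(nseg):                         # segments are independent,
            s1 = np.fft.fft(x1[k*nchan:(k+1)*nchan])  # hence trivially parallel
            s2 = np.fft.fft(x2[k*nchan:(k+1)*nchan])
            acc += s1 * np.conj(s2)
        return acc / nseg

    rng = np.random.default_rng(0)
    common = rng.normal(size=1 << 16)                 # shared sky signal
    x1 = common + 0.5 * rng.normal(size=common.size)
    x2 = np.roll(common, 3) + 0.5 * rng.normal(size=common.size)  # 3-sample lag
    spectrum = fx_correlate(x1, x2)
    # The phase slope of 'spectrum' across frequency encodes the 3-sample delay.
    ```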

  7. Attitude determination for small satellites using GPS signal-to-noise ratio

    NASA Astrophysics Data System (ADS)

    Peters, Daniel

    An embedded system for GPS-based attitude determination (AD) using signal-to-noise ratio (SNR) measurements was developed for CubeSat applications. The design serves as an evaluation testbed for conducting ground-based experiments using various computational methods and antenna types to determine the optimum AD accuracy. Raw GPS data are also stored to non-volatile memory for downloading and post-analysis. Two low-power microcontrollers are used for processing and to display information on a graphic screen for real-time performance evaluations. A new parallel inter-processor communication protocol was developed that is faster and uses less power than existing standard protocols. A shorted annular patch (SAP) antenna was fabricated for the initial ground-based AD experiments with the testbed. Static AD estimations with RMS errors in the range of 2.5° to 4.8° were achieved over a range of off-zenith attitudes.
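
    The record does not detail the SNR-to-attitude computation, so the following is only a toy illustration of the general principle: if SNR falls off with the angle between the antenna boresight and each satellite's line of sight (the cosine gain model and all constants below are assumptions), the boresight can be fit to the measured SNRs by least squares.

    ```python
    # Toy SNR-based boresight estimation; not the thesis testbed code.
    import numpy as np
    from scipy.optimize import minimize

    def predicted_snr(boresight, sat_dirs, snr_max=50.0, rolloff=20.0):
        """Simple cosine-law gain model; coefficients are assumptions."""
        cosang = sat_dirs @ boresight
        return snr_max - rolloff * (1.0 - np.clip(cosang, 0.0, 1.0))

    def estimate_boresight(sat_dirs, snr_meas):
        def cost(v):
            v = v / np.linalg.norm(v)
            return np.sum((predicted_snr(v, sat_dirs) - snr_meas) ** 2)
        res = minimize(cost, x0=np.array([0.0, 0.0, 1.0]))
        return res.x / np.linalg.norm(res.x)

    # Synthetic check: 6 satellites, true boresight tilted ~20 deg off zenith.
    rng = np.random.default_rng(1)
    sats = rng.normal(size=(6, 3))
    sats /= np.linalg.norm(sats, axis=1, keepdims=True)
    truth = np.array([0.0, np.sin(0.35), np.cos(0.35)])
    snrs = predicted_snr(truth, sats) + rng.normal(scale=0.5, size=6)
    est = estimate_boresight(sats, snrs)
    print(np.degrees(np.arccos(np.clip(est @ truth, -1.0, 1.0))))  # error, deg
    ```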

  8. Mouse-tracking evidence for parallel anticipatory option evaluation.

    PubMed

    Cranford, Edward A; Moss, Jarrod

    2017-12-23

    In fast-paced, dynamic tasks, the ability to anticipate the future outcome of a sequence of events is crucial to quickly selecting an appropriate course of action among multiple alternative options. There are two classes of theories that describe how anticipation occurs. Serial theories assume options are generated and evaluated one at a time, in order of quality, whereas parallel theories assume simultaneous generation and evaluation. The present research examined the option evaluation process during a task designed to be analogous to prior anticipation tasks, but within the domain of narrative text comprehension. Prior research has relied on indirect, off-line measurement of the option evaluation process during anticipation tasks. Because the movement of the hand can provide a window into underlying cognitive processes, online metrics such as continuous mouse tracking provide more fine-grained measurements of cognitive processing as it occurs in real time. In this study, participants listened to three-sentence stories and predicted the protagonists' final action by moving a mouse toward one of three possible options. Each story was presented with either one (control condition) or two (distractor condition) plausible ending options. Results seem most consistent with a parallel option evaluation process because initial mouse trajectories deviated further from the best option in the distractor condition compared to the control condition. It is difficult to completely rule out all possible serial processing accounts, although the results do place constraints on the time frame in which a serial processing explanation must operate.

  9. Impact of process improvements on measures of emergency department efficiency.

    PubMed

    Leung, Alexander K; Whatley, Shawn D; Gao, Dechang; Duic, Marko

    2017-03-01

    To study the operational impact of process improvements on emergency department (ED) patient flow. The changes did not require any increase in resources or expenditures. This was a 36-month pre- and post-intervention study to evaluate the effect of implementing process improvements at a community ED from January 2010 to December 2012. The intervention comprised streamlining triage by having patients accepted into internal waiting areas immediately after triage. Within the ED, parallel processes unfolded, and there was no restriction on when registration occurred or which health care provider a patient saw first. Flexible nursing ratios allowed nursing staff to redeploy and move to areas of highest demand. Last, demand-based physician scheduling was implemented. The main outcome was length of stay (LOS). Secondary outcomes included time to physician initial assessment (PIA), left-without-being-seen (LWBS) rates, and left-against-medical-advice (LAMA) rates. Segmented regression of interrupted time series analysis was performed to quantify the impact of the intervention, and whether it was sustained. Patients totalling 251,899 attended the ED during the study period. Daily patient volumes increased 17.3% during the post-intervention period. Post-intervention, mean LOS decreased by 0.64 hours (p<0.005). LOS for non-admitted Canadian Triage and Acuity Scale 2 (-0.58 hours, p<0.005), 3 (-0.75 hours, p<0.005), and 4 (-0.32 hours, p<0.005) patients also decreased. There were reductions in PIA (43.81 minutes, p<0.005), LWBS (35.2%, p<0.005), and LAMA (61.9%, p<0.005). A combination of process improvements in the ED was associated with clinically significant reductions in LOS, PIA, LWBS, and LAMA for non-resuscitative patients.

  10. Evaluation of Job Queuing/Scheduling Software: Phase I Report

    NASA Technical Reports Server (NTRS)

    Jones, James Patton

    1996-01-01

    The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase I: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.

  11. Cache write generate for parallel image processing on shared memory architectures.

    PubMed

    Wittenbrink, C M; Somani, A K; Chen, C H

    1996-01-01

    We investigate cache write generate, a cache write mode of our invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register-level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.
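
    For readers unfamiliar with the mode, a toy simulation of the idea (our own illustration; the line size and miss handling are simplified assumptions) shows why write generate suits image outputs, which are typically written in full before ever being read:

    ```python
    # On a write miss, "write generate" allocates the line and marks the
    # written words valid without first fetching the line from memory.
    LINE_WORDS = 8

    class Cache:
        def __init__(self, write_generate):
            self.write_generate = write_generate
            self.lines = {}          # line address -> set of valid word offsets
            self.mem_fetches = 0

        def write(self, addr):
            line, off = divmod(addr, LINE_WORDS)
            if line not in self.lines:
                if self.write_generate:
                    self.lines[line] = set()      # allocate, no fetch
                else:
                    self.mem_fetches += 1         # fetch-on-write-miss
                    self.lines[line] = set(range(LINE_WORDS))
            self.lines[line].add(off)

    # Writing a fresh 256x256 output image word by word:
    for policy in (False, True):
        c = Cache(write_generate=policy)
        for addr in range(256 * 256):
            c.write(addr)
        print("write-generate" if policy else "fetch-on-miss", c.mem_fetches)
    # fetch-on-miss performs 8192 useless line fetches; write-generate none.
    ```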

  12. Simultaneous chromatic and luminance human electroretinogram responses.

    PubMed

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-07-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing.

  13. Parallel processing of general and specific threat during early stages of perception

    PubMed Central

    2016-01-01

    Differential processing of threat can consummate as early as 100 ms post-stimulus. Moreover, early perception not only differentiates threat from non-threat stimuli but also distinguishes among discrete threat subtypes (e.g. fear, disgust and anger). Combining spatial-frequency-filtered images of fear, disgust and neutral scenes with high-density event-related potentials and intracranial source estimation, we investigated the neural underpinnings of general and specific threat processing in early stages of perception. Conveyed in low spatial frequencies, fear and disgust images evoked convergent visual responses with similarly enhanced N1 potentials and dorsal visual (middle temporal gyrus) cortical activity (relative to neutral cues; peaking at 156 ms). Nevertheless, conveyed in high spatial frequencies, fear and disgust elicited divergent visual responses, with fear enhancing and disgust suppressing P1 potentials and ventral visual (occipital fusiform) cortical activity (peaking at 121 ms). Therefore, general and specific threat processing operates in parallel in early perception, with the ventral visual pathway engaged in specific processing of discrete threats and the dorsal visual pathway in general threat processing. Furthermore, selectively tuned to distinctive spatial-frequency channels and visual pathways, these parallel processes underpin dimensional and categorical threat characterization, promoting efficient threat response. These findings thus lend support to hybrid models of emotion. PMID:26412811

  14. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
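
    As a concrete, hedged illustration of this class of algorithms (not the paper's simulator), the sketch below steps a toy linear system with an Adams-Bashforth-2 predictor and a trapezoidal corrector; in a parallel setting, the derivative evaluations are the work that would be split across processors:

    ```python
    # Predictor-corrector stepping of a toy linear "engine" model y' = A y.
    import numpy as np

    A = np.array([[-1.0, 0.5],
                  [0.0, -2.0]])         # assumed stable toy dynamics

    def f(y):
        return A @ y                    # each partition of this evaluation
                                        # could be assigned to one processor

    def pc_step(y, f_prev, h):
        fy = f(y)
        y_pred = y + h * (1.5 * fy - 0.5 * f_prev)   # Adams-Bashforth-2 predictor
        y_corr = y + 0.5 * h * (fy + f(y_pred))      # trapezoidal corrector
        return y_corr, fy

    h, y = 0.01, np.array([1.0, 1.0])
    f_prev = f(y)                       # startup: seed the history with f(y0)
    for _ in range(1000):
        y, f_prev = pc_step(y, f_prev, h)
    print(y)                            # decays toward 0, as exp(At) predicts
    ```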

  15. Time-Dependent Simulations of Turbopump Flows

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan; Chan, William; Williams, Robert

    2002-01-01

    Unsteady flow simulations of the RLV (Reusable Launch Vehicle) second-generation baseline turbopump for one and a half impeller rotations have been completed using a 34.3-million-grid-point model. MLP (Multi-Level Parallelism) shared-memory parallelism has been implemented in INS3D and benchmarked. Code optimization for cache-based platforms will be completed by the end of September 2001. Moving-boundary capability is obtained by using the DCF module. Scripting capability from CAD (computer-aided design) geometry to solution has been developed. Data compression is applied to reduce data size in post-processing. Fluid/structure coupling has been initiated.

  16. Parallelism Effects and Verb Activation: The Sustained Reactivation Hypothesis

    ERIC Educational Resources Information Center

    Callahan, Sarah M.; Shapiro, Lewis P.; Love, Tracy

    2010-01-01

    This study investigated the processes underlying parallelism by evaluating the activation of a parallel element (i.e., a verb) throughout "and"-coordinated sentences. Four points were tested: (1) approximately 1,600 ms after the verb in the first conjunct (PP1), (2) immediately following the conjunction (PP2), (3) approximately 1,100 ms after the…

  17. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series, whereas Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71), whereas Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. Level of evidence: II (systematic review).

  18. A massively asynchronous, parallel brain.

    PubMed

    Zeki, Semir

    2015-05-19

    Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.

  19. Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort in migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTools' ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, amount of user interaction required, limitations, and portability. Based on these results, a discussion on the feasibility of computer-aided parallelization of aerospace applications is presented, along with suggestions for future work.

  20. MRI Post-processing in Pre-surgical Evaluation

    PubMed Central

    Wang, Z. Irene; Alexopoulos, Andreas V.

    2016-01-01

    Purpose of Review Advanced MRI post-processing techniques are increasingly used to complement visual analysis and elucidate structural epileptogenic lesions. This review summarizes recent developments in MRI post-processing in the context of epilepsy pre-surgical evaluation, with the focus on patients with unremarkable MRI by visual analysis (i.e., “nonlesional” MRI). Recent Findings Various methods of MRI post-processing have been reported to show additional clinical value in the following areas: (1) lesion detection on an individual level; (2) lesion confirmation for reducing the risk of over-reading the MRI; (3) detection of sulcal/gyral morphologic changes that are particularly difficult for visual analysis; and (4) delineation of cortical abnormalities extending beyond the visible lesion. Future directions to improve the performance of MRI post-processing include using higher magnetic field strength for better signal- and contrast-to-noise ratios, adopting a multi-contrast framework, and integration with other noninvasive modalities. Summary MRI post-processing can provide essential value to increase the yield of structural MRI and should be included as part of the presurgical evaluation of nonlesional epilepsies. MRI post-processing allows for more accurate identification/delineation of cortical abnormalities, which should then be more confidently targeted and mapped. PMID:26900745

  1. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Mark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine-tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean-up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, we illustrate the parallelization process via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D, and other scientific applications. Based on these results, a discussion on the feasibility of constructing architecture-independent parallel applications is presented.

  2. The parallel algorithm for the 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of synchronization points at which data must be exchanged; consequently, these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluating on multi-core CPUs, we consistently outperform the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
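
    To make the synchronization-point argument concrete, here is a small sketch of one level of the separable CDF 5/3 lifting transform (a standard textbook formulation with periodic extension, not the authors' rearranged scheme): each predict/update step reads neighbouring samples, so each step is a data-exchange point when the signal is split across cores.

    ```python
    # One level of the CDF 5/3 DWT written as lifting steps.
    import numpy as np

    def dwt53_lifting(x):
        """CDF 5/3 lifting on an even-length 1D signal (periodic extension)."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        # Predict step: odd samples become detail coefficients.
        d = odd - 0.5 * (even + np.roll(even, -1))
        # Update step: even samples become the low-pass approximation.
        s = even + 0.25 * (d + np.roll(d, 1))
        return s, d

    signal = np.arange(16) % 5
    approx, detail = dwt53_lifting(signal)
    # A 2D transform applies this along rows, then columns; every lifting
    # step is a synchronization point when rows/columns are split over cores.
    ```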

  3. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (an auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not the auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e., a conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  4. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  5. Parallel approach in RDF query processing

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek; Parenica, Jan

    2017-07-01

    Parallel processing is nowadays a very cheap way to increase computational power, owing to the availability of multithreaded computational units. Such hardware has become a typical part of today's personal computers and notebooks and is widely spread. This contribution deals with experiments on how the evaluation of a computationally complex inference algorithm over RDF data can be parallelized on graphics cards to decrease computation time.
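
    As a rough illustration of the data parallelism being exploited (using CPU multiprocessing rather than the authors' GPU target), a triple-pattern match can be evaluated independently over chunks of a triple store; the wildcard pattern below is our own stand-in, not a real RDF engine API:

    ```python
    # Data-parallel evaluation of one triple pattern over a toy triple store.
    from multiprocessing import Pool

    triples = [
        ("alice", "knows", "bob"),
        ("bob", "knows", "carol"),
        ("carol", "type", "Person"),
        ("alice", "type", "Person"),
    ] * 100000

    def match_chunk(args):
        chunk, (s, p, o) = args          # None in the pattern is a wildcard
        return [t for t in chunk
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    if __name__ == "__main__":
        pattern = (None, "type", "Person")        # ?x type Person
        n = 4
        chunks = [triples[i::n] for i in range(n)]
        with Pool(n) as pool:                     # each chunk is independent
            parts = pool.map(match_chunk, [(c, pattern) for c in chunks])
        print(len([t for part in parts for t in part]))
    ```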

  6. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing, designed around a pipeline mode, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count reveals a positive relationship which agrees well with Amdahl's law, and the trend of speedup versus layer count also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the performance of the new parallel algorithm: compared with the serial slicing algorithm, it can make full use of multi-core CPU hardware and accelerate the slicing process, and compared with the data-parallel slicing algorithm, its pipeline parallel model achieves a much higher speedup ratio and efficiency.
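
    A minimal sketch of the pipeline idea follows (a hypothetical two-stage decomposition, not the paper's implementation): layers flow through an intersection stage and a contour-linking stage joined by a queue, so the stages overlap in time.

    ```python
    # Two pipeline stages joined by a bounded queue: layer k can be
    # contour-linked while layer k+1 is still being intersected.
    import threading, queue

    def intersect_stage(layers, q):
        for z in layers:
            segments = [("seg", z)]       # stand-in for triangle/plane cuts
            q.put((z, segments))
        q.put(None)                       # end-of-stream marker

    def contour_stage(q, out):
        while (item := q.get()) is not None:
            z, segments = item
            out.append((z, len(segments)))  # stand-in for linking into loops

    q, contours = queue.Queue(maxsize=8), []
    t1 = threading.Thread(target=intersect_stage, args=(range(100), q))
    t2 = threading.Thread(target=contour_stage, args=(q, contours))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(len(contours))                  # 100 layers processed in pipeline
    ```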

  7. Simultaneous chromatic and luminance human electroretinogram responses

    PubMed Central

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-01-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211

  8. Dual phase high-temperature membranes for CO2 separation - performance assessment in post- and pre-combustion processes.

    PubMed

    Anantharaman, Rahul; Peters, Thijs; Xing, Wen; Fontaine, Marie-Laure; Bredesen, Rune

    2016-10-20

    Dual phase membranes are highly CO2-selective membranes with an operating temperature above 400 °C. The focus of this work is to quantify the potential of dual phase membranes in pre- and post-combustion CO2 capture processes. The process evaluations show that the dual phase membranes integrated with an NGCC power plant for CO2 capture are not competitive with the MEA process for post-combustion capture. However, dual phase membrane concepts outperform the reference Selexol technology for pre-combustion CO2 capture in an IGCC process. The two processes evaluated in this work, post-combustion NGCC and pre-combustion IGCC, represent extremes in the CO2 partial pressure fed to the separation unit. Based on the evaluations it is expected that dual phase membranes could be competitive for post-combustion capture from a pulverized coal-fired power plant (PCC) and pre-combustion capture from an Integrated Reforming Combined Cycle (IRCC).

  9. Report to the High Order Language Working Group (HOLWG)

    DTIC Science & Technology

    1977-01-14

    as running, runnable, suspended or dormant, may be synchronized by semaphore variables, may be scheduled using clock and duration data types and may...Recursive and non-recursive routines G6. Parallel processes, synchronization, critical regions G7. User-defined parameterized exception handling G8...typed and lacks extensibility, parallel processing, synchronization and real-time features. Overall Evaluation IBM strongly recommended PL/I as a

  10. A novel processing platform for post tape out flows

    NASA Astrophysics Data System (ADS)

    Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.

    2018-03-01

    As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.

  11. Bin-Hash Indexing: A Parallel Method for Fast Query Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng

    2008-06-27

    This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for the emerging commodity of parallel processors, such as multi-cores, cell processors, and general purpose graphics processing units (GPU). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records are able to be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
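
    A simplified CPU-only sketch of the two-level idea (our own reconstruction from the abstract; a plain dict stands in for the perfect spatial hash table) shows how bin boundaries answer most of a range query while only the edge bins need value lookups:

    ```python
    # Bin-Hash style range query: count from bin boundaries where possible,
    # inspect stored values only in the (at most two) boundary bins.
    import numpy as np

    data = np.random.default_rng(2).uniform(0.0, 100.0, size=1_000_000)
    edges = np.linspace(0.0, 100.0, 65)              # 64 bins
    bin_of = np.digitize(data, edges) - 1
    bins = {b: data[bin_of == b] for b in range(64)} # per-bin value store

    def range_query(lo, hi):
        hits = 0
        for b in range(64):
            blo, bhi = edges[b], edges[b + 1]
            if lo <= blo and bhi <= hi:
                hits += bins[b].size                 # fully inside: count only
            elif bhi > lo and blo < hi:              # boundary bin: check values
                v = bins[b]
                hits += int(np.count_nonzero((v >= lo) & (v < hi)))
        return hits

    print(range_query(12.3, 47.9))
    # Every bin (and, on a GPU, every record in an edge bin) is evaluated
    # independently, which is the source of the index's concurrency.
    ```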

  12. Use of gamma ray radiation to parallel the plates of a Fabry-Perot interferometer

    NASA Technical Reports Server (NTRS)

    Skinner, Wilbert R.; Hays, Paul B.; Anderson, Sally M.

    1987-01-01

    The use of gamma radiation to parallel the plates of a Fabry-Perot etalon is examined. The method for determining the etalon parallelism and the procedure for irradiating the posts are described. Changes in the effective gap of the etalon over its surface are used to measure the parallelism of the Fabry-Perot etalon. An example is given in which this technique is applied to an etalon with fused silica plates, 132 mm in diameter and coated with zinc sulfide and cryolite, and Zerodur spacers 2 cm in length. The effect of the irradiation of the posts on the thermal performance of the etalon is investigated.

  13. PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A

    2012-01-01

    At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel LBM code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed using FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to compact and write the data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single relaxation time (SRT) and multi relaxation time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, an LES approach [5] is adopted, allowing large-scale eddies to be numerically resolved while modeling the smaller (subgrid) eddies. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
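
    The coupling described in the last sentence is compact enough to sketch. Below, the Smagorinsky eddy viscosity is folded into a locally varying BGK relaxation time via the standard lattice-units relation nu = c_s^2 (tau - 1/2); the constants and the strain-rate field are illustrative assumptions, not PRATHAM code:

    ```python
    # Local relaxation time from molecular viscosity plus Smagorinsky
    # eddy viscosity, as used in LES-LBM closures.
    import numpy as np

    def local_relaxation_time(nu0, strain_rate_mag, dx=1.0, c_s2=1.0/3.0,
                              smagorinsky_c=0.17):
        """tau(x) from molecular viscosity nu0 plus eddy viscosity nu_t."""
        nu_t = (smagorinsky_c * dx) ** 2 * strain_rate_mag  # Smagorinsky model
        nu = nu0 + nu_t
        return nu / c_s2 + 0.5       # BGK relation nu = c_s^2 (tau - 1/2)

    S = np.abs(np.random.default_rng(3).normal(size=(32, 32)))  # toy |S| field
    tau = local_relaxation_time(1e-3, S)
    # Each lattice site relaxes toward equilibrium at its own rate; the
    # update stays local, which keeps MPI domain decomposition cheap.
    ```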

  14. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decompressed on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallel part, and the last is a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallel part is optimized with OpenMP techniques. The data-parallel part gains its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.

  15. A massively asynchronous, parallel brain

    PubMed Central

    Zeki, Semir

    2015-01-01

    Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871

  16. Gas chromatography fractionation platform featuring parallel flame-ionization detection and continuous high-resolution analyte collection in 384-well plates.

    PubMed

    Jonker, Willem; Clarijs, Bas; de Witte, Susannah L; van Velzen, Martin; de Koning, Sjaak; Schaap, Jaap; Somsen, Govert W; Kool, Jeroen

    2016-09-02

    Gas chromatography (GC) is a superior separation technique for many compounds. However, fractionation of a GC eluate for analyte isolation and/or post-column off-line analysis is not straightforward, and existing platforms are limited in the number of fractions that can be collected. Moreover, aerosol formation may cause serious analyte losses. Previously, our group developed a platform that resolved these limitations of GC fractionation by post-column infusion of a trap solvent prior to continuous small-volume fraction collection in a 96-well plate (Pieke et al., 2013 [17]). Still, this GC fractionation set-up lacked a chemical detector for the on-line recording of chromatograms, and the introduction of trap solvent resulted in extensive peak broadening for late-eluting compounds. This paper reports advancements to the fractionation platform allowing flame ionization detection (FID) parallel to high-resolution collection of a full GC chromatogram in up to 384 nanofractions of 7 s each. To this end, a post-column split was incorporated which directs part of the eluate towards FID. Furthermore, a solvent heating device was developed for stable delivery of preheated/vaporized trap solvent, which significantly reduced the band broadening caused by post-column infusion. In order to achieve optimal analyte trapping, several solvents were tested at different flow rates. The repeatability of the optimized GC fraction collection process was assessed, demonstrating the possibility of up-concentrating isolated analytes by repetitive analyses of the same sample. The feasibility of the improved GC fractionation platform for bioactivity screening of toxic compounds was studied by the analysis of a mixture of test pesticides, which after fractionation were subjected to a post-column acetylcholinesterase (AChE) assay. Fractions showing AChE inhibition could be unambiguously correlated with peaks from the parallel-recorded FID chromatogram. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Combining microfluidics, optogenetics and calcium imaging to study neuronal communication in vitro.

    PubMed

    Renault, Renaud; Sukenik, Nirit; Descroix, Stéphanie; Malaquin, Laurent; Viovy, Jean-Louis; Peyrin, Jean-Michel; Bottani, Samuel; Monceau, Pascal; Moses, Elisha; Vignes, Maéva

    2015-01-01

    In this paper we report the combination of microfluidics, optogenetics and calcium imaging as a cheap and convenient platform to study synaptic communication between neuronal populations in vitro. We first show that the Calcium Orange indicator is compatible in vitro with a commonly used Channelrhodopsin-2 (ChR2) variant, as standard calcium imaging conditions did not significantly alter the activity of transduced cultures of rodent primary neurons. A fast, robust and scalable process for micro-chip fabrication was developed in parallel to build micro-compartmented cultures. Coupling optical fibers to each micro-compartment allowed for the independent control of ChR2 activation in the different populations without crosstalk. By analyzing the post-stimulus activity across the different populations, we finally show how this platform can be used to evaluate quantitatively the effective connectivity between connected neuronal populations.

  18. Voxel based parallel post processor for void nucleation and growth analysis of atomistic simulations of material fracture.

    PubMed

    Hemani, H; Warrier, M; Sakthivel, N; Chaturvedi, S

    2014-05-01

    Molecular dynamics (MD) simulations are used in the study of void nucleation and growth in crystals that are subjected to tensile deformation. These simulations typically run for several hundred thousand time steps, depending on the problem. We output the atom positions at a required frequency for post-processing to determine the void nucleation, growth and coalescence due to tensile deformation. The simulation volume is broken up into voxels of size equal to the unit cell size of the crystal. In this paper, we present the algorithm to identify the empty unit cells (voids), their connections (void size) and dynamic changes (growth and coalescence of voids) for MD simulations of large atomic systems (multi-million atoms). We discuss the parallel algorithms that were implemented and their relative applicability in terms of speedup and scalability. We also present results on the scalability of our algorithm when it is incorporated into the MD software LAMMPS. Copyright © 2014 Elsevier Inc. All rights reserved.
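
    A condensed serial sketch of the voxel analysis (our own illustration; the paper's contribution is parallelizing this for multi-million-atom systems) bins atoms into unit-cell voxels and labels face-connected empty voxels as voids:

    ```python
    # Serial voxel-based void detection for one MD snapshot.
    import numpy as np
    from scipy import ndimage

    def find_voids(positions, box, cell):
        """positions: (N,3) atom coordinates; box: domain size; cell: voxel edge."""
        shape = tuple(int(np.ceil(b / cell)) for b in box)
        occupied = np.zeros(shape, dtype=bool)
        idx = np.minimum((positions / cell).astype(int), np.array(shape) - 1)
        occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True
        labels, nvoids = ndimage.label(~occupied)    # face-connected components
        sizes = ndimage.sum(~occupied, labels, index=range(1, nvoids + 1))
        return nvoids, sizes

    rng = np.random.default_rng(4)
    atoms = rng.uniform(0.0, 50.0, size=(40000, 3))
    print(find_voids(atoms, box=(50.0, 50.0, 50.0), cell=2.0))
    # Tracking labels from frame to frame reveals void growth and coalescence.
    ```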

  19. New technique for real-time distortion-invariant multiobject recognition and classification

    NASA Astrophysics Data System (ADS)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant optical pattern recognition (OPR) system was established to perform 3D multi-object distortion-invariant automatic pattern recognition. Wavelet transform techniques were used for digital preprocessing of the input scene, to suppress the noisy background and enhance the object to be recognized. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multi-object distortion-invariant recognition and classification. The real-time processing ability of the C-80 and NOA, together with multithread programming technology, was used to perform high-speed parallel multitask processing and speed up the post-processing of regions of interest (ROIs). A reference filter library was constructed for the distorted versions of the 3D object model images, based on distortion parameter tolerance measures for rotation, azimuth and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, by using the preprocessing, the post-processing, the nonlinear algorithm of optimum filtering, the reference filter library construction technique and the multithread programming technology, a high probability of recognition and a high recognition rate were obtained for the real-time multi-object distortion-invariant OPR system. The recognition reliability and rate were greatly improved. These techniques are very useful for automatic target recognition.

  20. The relationship of post-event processing to self-evaluation of performance in social anxiety.

    PubMed

    Brozovich, Faith; Heimberg, Richard G

    2011-06-01

    Socially anxious and control participants engaged in a social interaction with a confederate and then wrote about themselves or the other person (i.e., self-focused post-event processing [SF-PEP] vs. other-focused post-event processing [OF-PEP]) and completed several questionnaires. One week later, participants completed measures concerning their evaluation of their performance in the social interaction and the degree to which they engaged in post-event processing (PEP) during the week. Socially anxious individuals evaluated their performance in the social interaction more poorly than control participants, both immediately after and 1 week later. Socially anxious individuals assigned to the SF-PEP condition displayed fewer positive feelings about their performance compared to the socially anxious individuals in the OF-PEP condition as well as controls in either condition. Also, the trait tendency to engage in PEP moderated the effect of social anxiety on participants' evaluation of their performance in the interaction, such that high socially anxious individuals with high trait PEP scores evaluated themselves in the interaction more negatively at the later assessment. These results suggest that PEP and other self-evaluative processes may perpetuate the cycle of social anxiety. Copyright © 2011. Published by Elsevier Ltd.

  1. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
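
    The decentralized-generation/centralized-coordination shape of the model can be sketched schematically (a deliberately crude stand-in, far simpler than GATE/Geant4): workers generate time-stamped singles independently, and one coordinator merges the streams in time order to do the coincidence processing.

    ```python
    # Workers generate singles in parallel; a central coordinator merges
    # the time-stamped streams and finds coincidences.
    import heapq
    from multiprocessing import Pool

    def generate_singles(args):
        seed, n = args
        import random
        rng = random.Random(seed)
        t, singles = 0.0, []
        for _ in range(n):
            t += rng.expovariate(1e5)          # decay arrivals, ~10 us apart
            singles.append((t, seed))          # (timestamp, detector id)
        return singles

    def coincidences(streams, window=1e-8):
        merged = list(heapq.merge(*streams))   # central time coordination
        return [(a, b) for a, b in zip(merged, merged[1:])
                if b[0] - a[0] < window and a[1] != b[1]]

    if __name__ == "__main__":
        with Pool(4) as pool:                  # decentralized generation
            streams = pool.map(generate_singles, [(d, 50000) for d in range(4)])
        print(len(coincidences(streams)))
    ```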

  2. Study design for the "effect of METOprolol in CARDioproteCtioN during an acute myocardial InfarCtion" (METOCARD-CNIC): a randomized, controlled parallel-group, observer-blinded clinical trial of early pre-reperfusion metoprolol administration in ST-segment elevation myocardial infarction.

    PubMed

    Ibanez, Borja; Fuster, Valentin; Macaya, Carlos; Sánchez-Brunete, Vicente; Pizarro, Gonzalo; López-Romero, Pedro; Mateos, Alonso; Jiménez-Borreguero, Jesús; Fernández-Ortiz, Antonio; Sanz, Ginés; Fernández-Friera, Leticia; Corral, Ervigio; Barreiro, Maria-Victoria; Ruiz-Mateos, Borja; Goicolea, Javier; Hernández-Antolín, Rosana; Acebal, Carlos; García-Rubira, Juan Carlos; Albarrán, Agustín; Zamorano, José Luis; Casado, Isabel; Valenciano, Juan; Fernández-Vázquez, Felipe; de la Torre, José María; Pérez de Prado, Armando; Iglesias-Vázquez, José Antonio; Martínez-Tenorio, Pedro; Iñiguez, Andrés

    2012-10-01

    Infarct size predicts post-infarction mortality. Oral β-blockade within 24 hours of a ST-segment elevation acute myocardial infarction (STEMI) is a class-IA indication, however early intravenous (IV) β-blockers initiation is not encouraged. In recent magnetic resonance imaging (MRI)-based experimental studies, the β(1)-blocker metoprolol has been shown to reduce infarct size only when administered before coronary reperfusion. To date, there is not a single trial comparing the pre- vs. post-reperfusion β-blocker initiation in STEMI. The METOCARD-CNIC trial is testing whether the early initiation of IV metoprolol before primary percutaneous coronary intervention (pPCI) could reduce infarct size and improve outcomes when compared to oral post-pPCI metoprolol initiation. The METOCARD-CNIC trial is a randomized parallel-group single-blind (to outcome evaluators) clinical effectiveness trial conducted in 5 Counties across Spain that will enroll 220 participants. Eligible are 18- to 80-year-old patients with anterior STEMI revascularized by pPCI ≤6 hours from symptom onset. Exclusion criteria are Killip-class ≥III, atrioventricular block or active treatment with β-blockers/bronchodilators. Primary end point is infarct size evaluated by MRI 5 to 7 days post-STEMI. Prespecified major secondary end points are salvage-index, left ventricular ejection fraction recovery (day 5-7 to 6 months), the composite of (death/malignant ventricular arrhythmias/reinfarction/admission due to heart failure), and myocardial perfusion. The METOCARD-CNIC trial is testing the hypothesis that the early initiation of IV metoprolol pre-reperfusion reduces infarct size in comparison to initiation of oral metoprolol post-reperfusion. Given the implications of infarct size reduction in STEMI, if positive, this trial might evidence that a refined use of an approved inexpensive drug can improve outcomes of patients with STEMI. Copyright © 2012 Mosby, Inc. All rights reserved.

  3. A Developmental Perspective on Peer Rejection, Deviant Peer Affiliation, and Conduct Problems Among Youth.

    PubMed

    Chen, Diane; Drabick, Deborah A G; Burgers, Darcy E

    2015-12-01

    Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed.

  4. A Developmental Perspective on Peer Rejection, Deviant Peer Affiliation, and Conduct Problems among Youth

    PubMed Central

    Chen, Diane; Drabick, Deborah A. G.; Burgers, Darcy E.

    2015-01-01

    Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed. PMID:25410430

  5. Heuristic and analytic processes in reasoning: an event-related potential study of belief bias.

    PubMed

    Banks, Adrian P; Hope, Christopher

    2014-03-01

    Human reasoning involves both heuristic and analytic processes. This study of belief bias in relational reasoning investigated whether the two processes occur serially or in parallel. Participants evaluated the validity of problems in which the conclusions were either logically valid or invalid and either believable or unbelievable. Problems in which the conclusions presented a conflict between the logically valid response and the believable response elicited a more positive P3 than problems in which there was no conflict. This shows that P3 is influenced by the interaction of belief and logic rather than either of these factors on its own. These findings indicate that belief and logic influence reasoning at the same time, supporting models in which belief-based and logical evaluations occur in parallel but not theories in which belief-based heuristic evaluations precede logical analysis.

  6. DCL System Research Using Advanced Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    recognition. Algorithm design and statistical analysis and feature analysis. Post-Doctoral Associate, Cornell University, Bioacoustics Research...short. The HPC-ADA was designed based on fielded systems [1-4, 6] that offer a variety of desirable attributes, specifically dynamic resource...The software package was designed to utilize parallel and distributed processing for running recognition and other advanced algorithms. DeLMA

  7. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  8. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM-2. A discussion of numerical methods includes asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar multi-cluster system. A section on differential equations includes a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  9. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.

  10. Long-term survival of endodontically treated, maxillary anterior teeth restored with either tapered or parallel-sided glass-fiber posts and full-ceramic crown coverage.

    PubMed

    Signore, Antonio; Benedicenti, Stefano; Kaitsas, Vassilios; Barone, Michele; Angiero, Francesca; Ravera, Giambattista

    2009-02-01

    This retrospective study investigated the clinical effectiveness, over up to 8 years, of parallel-sided and of tapered glass-fiber posts, in combination with either hybrid composite or dual-cure composite resin core material, in endodontically treated, maxillary anterior teeth covered with full-ceramic crowns. The study population comprised 192 patients and 526 endodontically treated teeth, with various degrees of hard-tissue loss, restored by the post-and-core technique. Four groups were defined based on post shape and core build-up materials, and within each group post-and-core restorations were assigned randomly with respect to root morphology. Inclusion criteria were symptom-free endodontic therapy, root-canal treatment with a minimum apical seal of 4 mm, application of rubber dam, need for a post-and-core complex because of coronal tooth loss, and a tooth with at least one residual coronal wall. Survival rate of the post-and-core restorations was determined using Kaplan-Meier statistical analysis. The restorations were examined clinically and radiologically; the mean observation period was 5.3 years. The overall survival rate of glass-fiber post-and-core restorations was 98.5%. The survival rate for parallel-sided posts was 98.6% and for tapered posts was 96.8%. Survival rates for core build-up materials were 100% for dual-cure composite and 96.8% for hybrid light-cure composite. For both glass-fiber post designs and for both core build-up materials, clinical performance was satisfactory. Survival was higher for teeth retaining four or three coronal walls.
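
    As a rough illustration of how such Kaplan-Meier survival rates are obtained, the sketch below fits survival curves for the two post designs using the third-party lifelines package. The follow-up times and failure indicators are synthetic placeholders, not the study's data.

    ```python
    # Hypothetical illustration: Kaplan-Meier survival estimates for two post
    # designs, in the style of the study above. Synthetic data; requires the
    # third-party `lifelines` package.
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(0)

    # Synthetic follow-up times (years) and failure indicators (1 = post failed).
    t_parallel = rng.uniform(0.5, 8.0, 200)
    e_parallel = (rng.random(200) < 0.015).astype(int)   # ~98.6% survival
    t_tapered = rng.uniform(0.5, 8.0, 120)
    e_tapered = (rng.random(120) < 0.032).astype(int)    # ~96.8% survival

    kmf = KaplanMeierFitter()
    kmf.fit(t_parallel, event_observed=e_parallel, label="parallel-sided")
    print(kmf.survival_function_.tail(1))

    kmf.fit(t_tapered, event_observed=e_tapered, label="tapered")
    print(kmf.survival_function_.tail(1))
    ```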

  11. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
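
    The two-stage idea described above can be sketched compactly: a first LP ignores the MU constraint, spots that fall below the minimum deliverable MU are then switched off or held at that minimum, and the surviving intensities are re-solved. The toy problem below (scipy.optimize.linprog, an L1 dose objective, a made-up dose-influence matrix and mu_min) illustrates the mechanics only; it is not the authors' DSIO model.

    ```python
    # Toy sketch of a two-stage LP with a deliverable-MU lower bound, in the
    # spirit of the DSIO approach described above (not the authors' model).
    # Stage 1: LP relaxation; Stage 2: spots below mu_min are dropped or raised
    # to mu_min, then the intensities are re-solved.
    import numpy as np
    from scipy.optimize import linprog

    def solve_l1(D, p, lb):
        """Minimize sum|D x - p| with per-spot bounds lb ((0, 0) fixes a spot off)."""
        m, n = D.shape
        # Variables: [x (n spots), t (m deviation aux)]; minimize sum(t),
        # subject to  D x - t <= p  and  -D x - t <= -p  (i.e., |D x - p| <= t).
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.block([[D, -np.eye(m)], [-D, -np.eye(m)]])
        b_ub = np.concatenate([p, -p])
        bounds = list(lb) + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]

    rng = np.random.default_rng(1)
    D = rng.random((40, 15))          # toy dose-influence matrix (voxels x spots)
    p = np.full(40, 2.0)              # prescribed dose per voxel
    mu_min = 0.05                     # minimum deliverable MU per active spot

    # Stage 1: relaxation without the MU constraint.
    x1 = solve_l1(D, p, [(0.0, None)] * 15)

    # Stage 2: spots under mu_min are switched off; survivors must honor mu_min.
    lb = [(0.0, 0.0) if xi < mu_min else (mu_min, None) for xi in x1]
    x2 = solve_l1(D, p, lb)
    print("active spots:", int((x2 > 0).sum()),
          "max |dose error|:", np.abs(D @ x2 - p).max().round(3))
    ```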

  12. Alternative pre-rigor foreshank positioning can improve beef shoulder muscle tenderness.

    PubMed

    Grayson, A L; Lawrence, T E

    2013-09-01

    Thirty beef carcasses were harvested and the foreshank of each side was independently positioned (cranial, natural, parallel, or caudal) 1 h post-mortem to determine the effect of foreshank angle at rigor mortis on the sarcomere length and tenderness of six beef shoulder muscles. The infraspinatus (IS), pectoralis profundus (PP), serratus ventralis (SV), supraspinatus (SS), teres major (TM) and triceps brachii (TB) were excised 48 h post-mortem for Warner-Bratzler shear force (WBSF) and sarcomere length evaluations. All muscles except the SS had altered (P<0.05) sarcomere lengths between positions; the cranial position resulted in the longest sarcomeres for the SV and TB muscles, whilst the natural position produced longer sarcomeres for the PP and TM muscles. The SV from the cranial position had lower (P<0.05) shear force than that from the caudal position, and the TB from the natural position had lower (P<0.05) shear force than that from the parallel or caudal positions. Sarcomere length was moderately correlated (r=-0.63; P<0.01) with shear force. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Examining the Relationships Among Self-Compassion, Social Anxiety, and Post-Event Processing.

    PubMed

    Blackie, Rebecca A; Kocovski, Nancy L

    2017-01-01

    Post-event processing refers to negative and repetitive thinking following anxiety-provoking social situations. Those who engage in post-event processing may lack self-compassion in relation to social situations. As such, the primary aim of this research was to evaluate whether those high in self-compassion are less likely to engage in post-event processing, and which specific self-compassion domains may be most protective. In study 1 (N = 156 undergraduate students) and study 2 (N = 150 individuals seeking help for social anxiety and shyness), participants completed a battery of questionnaires, recalled a social situation, and then rated state post-event processing. Self-compassion negatively correlated with post-event processing, with some differences depending on situation type. Even after controlling for self-esteem, self-compassion remained significantly correlated with state post-event processing. Given these findings, self-compassion may serve as a buffer against post-event processing. Future studies should experimentally examine whether increasing self-compassion leads to reduced post-event processing.

  14. Optimization of the coherence function estimation for multi-core central processing unit

    NASA Astrophysics Data System (ADS)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented as digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural, and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speedup of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing the high degree of parallelism of the constructed calculating functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
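
    For context, the quantity being optimized is the magnitude-squared coherence C_xy(f) = |P_xy(f)|² / (P_xx(f)·P_yy(f)). The sketch below, which is not the authors' software, estimates it for several sensor pairs in parallel on a multi-core CPU using scipy's Welch-based estimator; all signals are synthetic.

    ```python
    # Sketch: estimating magnitude-squared coherence for several sensor pairs
    # in parallel across CPU cores. Synthetic signals; scipy.signal.coherence
    # implements the Welch-style estimator.
    import numpy as np
    from multiprocessing import Pool
    from scipy.signal import coherence

    FS = 10_000  # sampling rate, Hz

    def pair_coherence(args):
        x, y = args
        f, cxy = coherence(x, y, fs=FS, nperseg=1024)
        return f[np.argmax(cxy)], cxy.max()   # dominant coherent frequency

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        common = np.sin(2 * np.pi * 350 * np.arange(FS) / FS)  # shared 350 Hz tone
        pairs = [(common + rng.normal(0, 1, FS), common + rng.normal(0, 1, FS))
                 for _ in range(8)]
        with Pool() as pool:
            for f_peak, c_peak in pool.map(pair_coherence, pairs):
                print(f"peak coherence {c_peak:.2f} at {f_peak:.0f} Hz")
    ```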

  15. Physical and sensory quality of Java Arabica green coffee beans

    NASA Astrophysics Data System (ADS)

    Sunarharum, W. B.; Yuwono, S. S.; Pangestu, N. B. S. W.; Nadhiroh, H.

    2018-03-01

    Demand for high-quality coffee for consumption is continually increasing, not only in the consuming countries (importers) but also in the producing countries (exporters). Coffee quality can be affected by several factors from farm to cup, including the post-harvest processing methods. This research aimed to investigate the influence of different post-harvest processing methods on the physical and sensory quality of Java Arabica green coffee beans. The factor being evaluated was three different post-harvest processing methods used to produce green coffee beans (natural/dry, semi-washed and fully-washed processing) under sun drying. Physical quality evaluation was based on the Indonesian National Standard (SNI 01-2907-2008), while sensory quality was evaluated by five expert judges. The results show that fewer defects were observed in wet-processed coffee than in dry-processed coffee. Mechanical drying was also shown to yield higher-quality green coffee beans and to minimise losses.

  16. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    PubMed Central

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310

  17. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.

    PubMed

    Trippas, Dries; Thompson, Valerie A; Handley, Simon J

    2017-05-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was valid on half the trials or to decide whether the conclusion was believable on the other half. When belief and logic conflict, the default-interventionist view predicts that it should take less time to respond on the basis of belief than logic, and that the believability of a conclusion should interfere with judgments of validity, but not the reverse. The parallel-processing view predicts that beliefs should interfere with logic judgments only if the processing required to evaluate the logical structure exceeds that required to evaluate the knowledge necessary to make a belief-based judgment, and vice versa otherwise. Consistent with this latter view, for the simplest reasoning problems (modus ponens), judgments of belief resulted in lower accuracy than judgments of validity, and believability interfered more with judgments of validity than the converse. For problems of moderate complexity (modus tollens and single-model syllogisms), the interference was symmetrical, in that validity interfered with belief judgments to the same degree that believability interfered with validity judgments. For the most complex (three-term multiple-model syllogisms), conclusion believability interfered more with judgments of validity than vice versa, in spite of the significant interference from conclusion validity on judgments of belief.

  18. Risk factors for failure of glass fiber-reinforced composite post restorations: a prospective observational clinical study.

    PubMed

    Naumann, Michael; Blankenstein, Felix; Kiessling, Saskia; Dietrich, Thomas

    2005-12-01

    Glass fiber-reinforced endodontic posts are considered to have favorable mechanical properties for the reconstruction of endodontically treated teeth. The aim of the present investigation was to evaluate the survival of two tapered and one parallel-sided glass fiber-reinforced endodontic post systems in teeth with different stages of hard tissue loss and to identify risk factors for restoration failure. One hundred and forty-nine glass fiber-reinforced endodontic posts in 122 patients were followed up for 5-56 months [mean +/- standard deviation (SD): 39 +/- 11 months]. The posts were adhesively luted and the cores were built up with a composite resin. Cox proportional hazards models were used to evaluate the association of clinical variables with the failure rate. Higher failure rates were found for restorations of anterior teeth compared with posterior teeth [hazard ratio (HR): 3.1; 95% confidence interval (CI): 1.3-7.4], for restorations in teeth with no proximal contacts compared with at least one proximal contact (HR: 3.0; 95% CI: 1.0-9.0), and for teeth restored with single crowns compared with fixed bridges (HR: 4.3; 95% CI: 1.1-16.2). Tooth type, type of final restoration and the presence of adjacent teeth were found to be significant predictors of failure rates in endodontically treated teeth restored with glass fiber-reinforced endodontic posts.
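
    A minimal sketch of the style of analysis reported above, fitting a Cox proportional hazards model with the third-party lifelines package; the data frame, covariate names, and effect sizes below are invented for illustration.

    ```python
    # Hypothetical sketch: a Cox proportional hazards model of post-restoration
    # failure, mirroring the analysis style above. Synthetic data; requires the
    # third-party `lifelines` package. Covariate names are illustrative only.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(7)
    n = 149
    df = pd.DataFrame({
        "months": rng.uniform(5, 56, n),                  # follow-up time
        "failed": (rng.random(n) < 0.15).astype(int),     # failure indicator
        "anterior": rng.integers(0, 2, n),                # anterior vs posterior
        "no_proximal_contact": rng.integers(0, 2, n),
        "single_crown": rng.integers(0, 2, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="failed")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
    ```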

  19. A Non-Equilibrium Sediment Transport Model for Coastal Inlets and Navigation Channels

    DTIC Science & Technology

    2011-01-01

    exchange of water, sediment, and nutrients between estuaries and the ocean. Because of the multiple interacting forces (waves, wind, tide, river...in parallel using OpenMP. The CMS takes advantage of the Surface-water Modeling System (SMS) interface for grid generation and model setup, as well as for plotting and post-processing (Zundel, 2000). The circulation model in the CMS (called CMS-Flow) computes the unsteady water level and

  20. Shared decision-making for psychiatric medication: A mixed-methods evaluation of a UK training programme for service users and clinicians.

    PubMed

    Ramon, Shulamit; Morant, Nicola; Stead, Ute; Perry, Ben

    2017-12-01

    Shared decision making (SDM) is recognised as a promising strategy to enhance good collaboration between clinicians and service users, yet it is not practised regularly in mental health. The aim was to develop and evaluate a novel training programme to enhance SDM in psychiatric medication management for service users, psychiatrists and care co-ordinators. The training programme design was informed by existing literature and consultations with local stakeholders. Parallel group-based training programmes on the SDM process were delivered to community mental health service users and providers. Evaluation consisted of quantitative measures at baseline and 12-month follow-up, post-programme participant feedback and qualitative interviews. Training was provided to 47 service users, 35 care co-ordinators and 12 psychiatrists. Participant feedback was generally positive. Statistically significant changes in service users' decisional conflict and perceptions of practitioners' interactional style in promoting SDM occurred at follow-up. Qualitative data suggested positive impacts on service users' and care co-ordinators' confidence to explore medication experience, and group-based training was valued. The programme was generally acceptable to service users and practitioners. This indicates the value of conducting a larger study and exploring application to non-medical decisions.

  1. Parallel Processing Systems for Passive Ranging During Helicopter Flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Bavavar; Suorsa, Raymond E.; Showman, Robert D. (Technical Monitor)

    1994-01-01

    The complexity of rotorcraft missions involving operations close to the ground results in high pilot workload. In order to allow the pilot time to perform mission-oriented tasks, sensor aiding and automation of some of the guidance and control functions are highly desirable. Images from an electro-optical sensor provide a covert way of detecting objects in the flight path of a low-flying helicopter. Passive ranging consists of processing a sequence of images using techniques based on optical flow computation and recursive estimation. The passive ranging algorithm has to extract obstacle information from imagery at rates varying from five to thirty or more frames per second, depending on the helicopter speed. We have implemented and tested the passive ranging algorithm off-line using helicopter-collected images. However, the real-time data and computation requirements of the algorithm are beyond the capability of any off-the-shelf microprocessor or digital signal processor. This paper describes the computational requirements of the algorithm and uses parallel processing technology to meet them. Various issues in the selection of a parallel processing architecture are discussed, and four different computer architectures are evaluated for their suitability to run the algorithm in real time. Based on this evaluation, we conclude that real-time passive ranging is a realistic goal and can be achieved within a short time.

  2. The Automated Instrumentation and Monitoring System (AIMS) reference manual

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Hontalas, Philip; Listgarten, Sherry

    1993-01-01

    Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor,' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g., Sun SPARC and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).

  3. Parallel Electric Field on Auroral Magnetic Field Lines.

    NASA Astrophysics Data System (ADS)

    Yeh, Huey-Ching Betty

    1982-03-01

    The interaction of Birkeland (magnetic-field-aligned) current carriers and the Earth's magnetic field results in electrostatic potential drops along magnetic field lines. The statistical distributions of the field-aligned potential difference φ∥ were determined from the energy spectra of electron inverted-V events observed at ionospheric altitude for different conditions of geomagnetic activity as indicated by the AE index. Data on 1270 electron inverted-V's were obtained from Low-Energy Electron measurements of the Atmosphere Explorer-C and -D satellites (despun mode) in the interval January 1974-April 1976. In general, φ∥ is largest in the dusk to pre-midnight sector, smaller in the post-midnight to dawn sector, and smallest in the near-noon sector during quiet and disturbed geomagnetic conditions; there is a steady dusk-dawn-noon asymmetry of the global φ∥ distribution. As the geomagnetic activity level increases, the φ∥ pattern expands to lower invariant latitudes, and the magnitude of φ∥ in the 13-24 magnetic local time sector increases significantly. The spatial structure and intensity variation of the global φ∥ distribution are statistically more variable, and the magnitudes of φ∥ have smaller correlation with the AE index, in the post-midnight to dawn sector. A strong correlation is found to exist between upward Birkeland current systems and global parallel potential drops, and between auroral electron precipitation patterns and parallel potential drops, regarding their morphology, their intensity, and their dependence on geomagnetic activity. An analysis of the fine-scale simultaneous current-voltage relationship for upward Birkeland currents in Region 1 shows that typical field-aligned potential drops are consistent with model predictions based on linear acceleration of the charge carriers through an electrostatic potential drop along convergent magnetic field lines to maintain current continuity. In a steady state, this model of simple electrostatic acceleration without anomalous resistivity also predicts observable relations between global parallel currents and parallel potential drops and between global energy deposition and parallel potential drops. The temperature, density, and species of the unaccelerated charge carriers are the relevant parameters of the model. The dusk-dawn-noon asymmetry of the global φ∥ distribution can be explained by the above steady-state φ∥ process if we associate the source regions of upward Birkeland current carriers in Region 1, Region 2, and the cusp region with the plasma sheet boundary layer, the near-Earth plasma sheet, and the magnetosheath, respectively. The results of this study provide observational information on the global distribution of parallel potential drops and the prevailing process of generating and maintaining potential gradients (parallel electric fields) along auroral magnetic field lines.

  4. Textural defect detection using a revised ant colony clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Chao; Xiao, Li; Wang, Bingwen

    2007-11-01

    We propose a novel method based on a revised ant colony clustering algorithm (ACCA) to explore the topic of textural defect detection. In this algorithm, our efforts are mainly directed at the definition of a local irregularity measurement and the implementation of the revised ACCA. The local irregularity measurement evaluates the local textural inconsistency of each pixel against its mini-environment. In our revised ACCA, the behavior of each ant is divided into two steps: release pheromone and act. The quantity of pheromone released is proportional to the irregularity measurement; the next actions of the ants are chosen independently of each other in a stochastic way according to some evaluated heuristic knowledge. The independence of the ants implies the inherently parallel computation architecture of this algorithm. We apply the proposed method to some typical textural images with defects. From the series of pheromone distribution maps (PDM), it can be clearly seen that the pheromone distribution approaches the textural defects gradually. With some post-processing, the final distribution of pheromone can demonstrate the shape and area of the defects well.
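
    A toy re-creation of the two-step ant behavior described above, assuming a synthetic texture, a 3×3 mini-environment for the irregularity measure, and simple pheromone evaporation; this is an illustration of the idea, not the authors' implementation.

    ```python
    # Rough sketch of the two-step ant behavior: pheromone is deposited in
    # proportion to a local irregularity measure, then each ant moves
    # stochastically toward higher-irregularity neighbors.
    import numpy as np

    rng = np.random.default_rng(3)
    img = rng.normal(0.5, 0.02, (64, 64))
    img[20:28, 30:44] += 0.3                      # synthetic textural "defect"

    # Local irregularity: deviation of each pixel from its mini-environment mean.
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[i:i+64, j:j+64] for i in range(3) for j in range(3)) / 9
    irregularity = np.abs(img - local_mean)

    pheromone = np.zeros_like(img)
    ants = rng.integers(0, 64, (200, 2))          # 200 ants at random pixels

    for _ in range(100):                          # iterations
        for k, (r, c) in enumerate(ants):
            pheromone[r, c] += irregularity[r, c]      # step 1: release
            # Step 2: act -- move to a random neighbor, biased by irregularity.
            nbrs = [(min(max(r + dr, 0), 63), min(max(c + dc, 0), 63))
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            w = np.array([irregularity[p] + 1e-6 for p in nbrs])
            ants[k] = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        pheromone *= 0.95                         # evaporation

    print("pheromone mass inside defect:", pheromone[20:28, 30:44].sum().round(1))
    ```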

  5. Post-analysis report on Chesapeake Bay data processing. [spectral analysis and recognition computer signature extension

    NASA Technical Reports Server (NTRS)

    Thomson, F.

    1972-01-01

    The additional processing performed on data collected over the Rhode River Test Site and Forestry Site in November 1970 is reported. The techniques and procedures used to obtain the processed results are described. Thermal data collected over three approximately parallel lines of the site were contoured, and the results color coded, for the purpose of delineating important scene constituents and to identify trees attacked by pine bark beetles. Contouring work and histogram preparation are reviewed and the important conclusions from the spectral analysis and recognition computer (SPARC) signature extension work are summarized. The SPARC setup and processing records are presented and recommendations are made for future data collection over the site.

  6. PaFlexPepDock: parallel ab-initio docking of peptides onto their receptors with full flexibility based on Rosetta.

    PubMed

    Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang

    2014-01-01

    Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. The computational docking of protein and peptide can supplement the structural information available on protein-peptide interactions explored by experimental means. The protein-peptide docking in this paper can be described as three processes that occur in parallel: ab-initio peptide folding, peptide docking with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way, and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for the predictions.

  7. Sup wit Eval Ext?

    ERIC Educational Resources Information Center

    Patton, Michael Quinn

    2008-01-01

    Extension and evaluation share some similar challenges, including working with diverse stakeholders, parallel processes for focusing priorities, meeting common standards of excellence, and adapting to globalization, new technologies, and changing times. Evaluations of extension programs have helped clarify how change occurs, especially the…

  8. Evaluating Statistical Targets for Assembling Parallel Mixed-Format Test Forms

    ERIC Educational Resources Information Center

    Debeer, Dries; Ali, Usama S.; van Rijn, Peter W.

    2017-01-01

    Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…

  9. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  10. Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke; Wu, Kesheng; Bethel, E. Wes

    2009-06-02

    The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU- and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
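
    The binning idea can be illustrated with a much-simplified, serial sketch: a range query accepts interior bins wholesale and candidate-checks only the two boundary bins' data clusters. DP-BIS itself evaluates records concurrently, one thread per record; the numpy version below shows only the index logic, with made-up sizes.

    ```python
    # Simplified illustration of bin-based indexing in the spirit of DP-BIS:
    # base data are binned, and for a range query only "boundary" bins fall
    # back to checking the bin's data cluster. (Serial numpy sketch.)
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random(1_000_000) * 100.0
    edges = np.linspace(0.0, 100.0, 33)            # 32 equal-width bins
    bin_id = np.digitize(data, edges) - 1

    # Per-bin "data clusters": record indices grouped by bin.
    order = np.argsort(bin_id, kind="stable")
    starts = np.searchsorted(bin_id[order], np.arange(33))

    def range_query(lo, hi):
        lo_bin, hi_bin = np.digitize([lo, hi], edges) - 1
        hits = []
        for b in range(lo_bin, hi_bin + 1):
            cluster = order[starts[b]:starts[b + 1]]
            if lo_bin < b < hi_bin:                # interior bin: all records hit
                hits.append(cluster)
            else:                                  # boundary bin: candidate check
                vals = data[cluster]
                hits.append(cluster[(vals >= lo) & (vals <= hi)])
        return np.concatenate(hits)

    res = range_query(12.5, 40.0)
    assert len(res) == ((data >= 12.5) & (data <= 40.0)).sum()
    print(len(res), "matching records")
    ```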

  11. Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model

    ERIC Educational Resources Information Center

    Duman, Serap Nur; Akbas, Oktay

    2017-01-01

    This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…

  12. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, and neighboring inner loops may exhibit different concurrency patterns (e.g., reduction vs. forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.

  13. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling.

    ERIC Educational Resources Information Center

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2003-01-01

    Investigated a method to evaluate mediational processes using latent growth curve modeling and tested it with empirical data from a longitudinal steroid use prevention program focusing on 1,506 high school football players over 4 years. Findings suggest the usefulness of the approach. (SLD)

  14. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

    Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides an IT supporting infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to meet the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with an experimental evaluation and case studies of its applications. The evaluation results demonstrate both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in applications of the telecommunication industry.

  15. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    PubMed Central

    Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation. PMID:26681933
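
    A minimal sketch of the map/reduce split for data-parallel training, with Python's multiprocessing standing in for a MapReduce cluster: the map phase computes a per-partition gradient and the reduce phase averages them. The model (a single logistic unit) and all sizes are invented for illustration, not taken from the paper.

    ```python
    # Sketch of MapReduce-style data-parallel training for a single logistic
    # unit: map = per-partition gradient on a slice of the training data,
    # reduce = average the gradients, then take a gradient step.
    import numpy as np
    from multiprocessing import Pool

    def map_gradient(args):
        """Map phase: gradient of the logistic loss on one data partition."""
        w, X, y = args
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        return X.T @ (p - y) / len(y)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40_000, 20))
        true_w = rng.normal(size=20)
        y = (X @ true_w + rng.normal(size=40_000) > 0).astype(float)

        w = np.zeros(20)
        parts = 8
        Xs, ys = np.array_split(X, parts), np.array_split(y, parts)

        with Pool(parts) as pool:
            for _ in range(200):                  # training iterations
                grads = pool.map(map_gradient,
                                 [(w, Xi, yi) for Xi, yi in zip(Xs, ys)])
                w -= 0.5 * np.mean(grads, axis=0) # reduce phase + update step

        acc = (((X @ w) > 0) == y.astype(bool)).mean()
        print(f"training accuracy: {acc:.3f}")
    ```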

  16. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-02-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special-purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.

  17. Evaluation of a parallel implementation of the learning portion of the backward error propagation neural network: experiments in artifact identification.

    PubMed Central

    Sittig, D. F.; Orr, J. A.

    1991-01-01

    Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification including expert systems, statistical signal processing techniques, and artificial neural networks (ANN). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators are required to train their networks on huge training data sets, requiring enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm. PMID:1807607
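
    The reported figures can be sanity-checked with Amdahl's law, which the abstract itself does not invoke; a speedup of 4.06 on six processors implies a serial fraction of roughly 10%.

    ```python
    # Back-of-the-envelope check of the reported result: under Amdahl's law,
    # S(N) = 1 / (s + (1 - s)/N), a speedup of 4.06 on N = 6 processors implies
    # a serial fraction s = (N/S - 1) / (N - 1) of roughly 10%.
    N, S = 6, 4.06
    s = (N / S - 1) / (N - 1)
    print(f"implied serial fraction: {s:.1%}")            # ~9.6%
    print(f"predicted ceiling as N -> inf: {1 / s:.1f}x") # ~10.5x
    ```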

  18. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Technical Reports Server (NTRS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-01-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special-purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.

  19. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    PubMed

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  20. Post-Occupancy Evaluation (POE) Methodologies for School Facilities: A Case Study of the V. Sue Cleveland High School Post Occupancy Evaluation

    ERIC Educational Resources Information Center

    Harmon, Marcel; Larroque, Andre; Maniktala, Nate

    2012-01-01

    The New Mexico Public School Facilities Authority (NMPSFA) is the agency responsible for administering state-funded capital projects for schools statewide. Post occupancy evaluation (POE) is the tool selected by NMPSFA for measuring project outcomes. The basic POE process for V. Sue Cleveland High School (VSCHS) consisted of a series of field…

  1. Evaluation of computed tomography post-processing images in postoperative assessment of Lisfranc injuries compared with plain radiographs.

    PubMed

    Li, Haobo; Chen, Yanxi; Qiang, Minfei; Zhang, Kun; Jiang, Yuchen; Zhang, Yijie; Jia, Xiaoyang

    2017-06-14

    The objective of this study was to evaluate the value of computed tomography (CT) post-processing images in the postoperative assessment of Lisfranc injuries compared with plain radiographs. A total of 79 cases with closed Lisfranc injuries that were treated with conventional open reduction and internal fixation from January 2010 to June 2016 were analyzed. Postoperative assessment was performed by two independent orthopedic surgeons with both plain radiographs and CT post-processing images. Inter- and intra-observer agreement were analyzed with kappa statistics, while the differences between the two postoperative imaging assessments were assessed using the χ² test (McNemar's test). Significance was assumed when p < 0.05. Inter- and intra-observer agreement for CT post-processing images was much higher than that for plain radiographs. Non-anatomic reduction was more easily identified in patients with injuries of Myerson classifications A, B1, B2, and C1 using CT post-processing images with overall groups (p < 0.05), and poor internal fixation was also more easily detected in patients with injuries of Myerson classifications A, B1, B2, and C2 using CT post-processing images with overall groups (p < 0.05). CT post-processing images can be more reliable than plain radiographs in the postoperative assessment of reduction and implant placement for Lisfranc injuries.
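
    As an illustration of the statistics named above, the sketch below computes Cohen's kappa for inter-observer agreement and McNemar's test on paired detections, using scikit-learn and statsmodels on synthetic ratings; all data are placeholders, not the study's.

    ```python
    # Sketch of the agreement statistics used above: Cohen's kappa for
    # inter-observer agreement and McNemar's test for paired detections from
    # the two imaging modalities. Synthetic placeholder ratings.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.contingency_tables import mcnemar

    rng = np.random.default_rng(0)
    n = 79
    truth = rng.integers(0, 2, n)                  # 1 = non-anatomic reduction

    # Two observers reading CT post-processing images (high agreement assumed).
    obs1 = np.where(rng.random(n) < 0.95, truth, 1 - truth)
    obs2 = np.where(rng.random(n) < 0.95, truth, 1 - truth)
    print("inter-observer kappa:", round(cohen_kappa_score(obs1, obs2), 2))

    # McNemar: paired 2x2 table of (radiograph detected?, CT detected?) counts.
    radiograph = np.where(rng.random(n) < 0.70, truth, 0)  # radiographs miss more
    ct = np.where(rng.random(n) < 0.95, truth, 0)
    table = np.array([[np.sum((radiograph == i) & (ct == j)) for j in (0, 1)]
                      for i in (0, 1)])
    print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
    ```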

  2. Post Occupancy Evaluation of Educational Buildings and Equipment.

    ERIC Educational Resources Information Center

    Watson, Chris

    1997-01-01

    Details the post occupancy evaluation (POE) process for public buildings. POEs are used to improve design and optimize educational building and equipment use. The evaluation participants, the method used, the results and recommendations, model schools, and classroom alterations using POE are described. (9 references.) (RE)

  3. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in this development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of the structure, material and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid and distributed-workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.

  4. Handwriting Identification, Matching, and Indexing in Noisy Document Images

    DTIC Science & Technology

    2006-01-01

    algorithm to detect all parallel lines simultaneously. Our method can detect 96.8% of the severely broken rule lines in the Arabic database we collected...in the database to guide later processing. It is widely used in banks, post offices, and tax offices where the types of forms are most often pre...be used for different fields), and output the recognition results to a database. Although special anchors may be available to facilitate form

  5. Near Real-Time Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Denker, C.; Yang, G.; Wang, H.

    2001-08-01

    In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer, which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
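
    The frame-selection stage lends itself to a compact illustration: score each short-exposure frame by a sharpness proxy and keep the best, with the scoring fanned out over CPU cores. The sketch below uses RMS contrast as the proxy on synthetic frames; the speckle-masking reconstruction itself is far more involved and is not shown.

    ```python
    # Sketch of the frame-selection step: score each short-exposure frame by
    # RMS contrast and keep the sharpest ones, scoring in parallel over cores.
    # All data here are synthetic stand-ins for solar granulation frames.
    import numpy as np
    from multiprocessing import Pool

    def rms_contrast(frame):
        """Sharpness proxy: std/mean of the frame intensity."""
        return frame.std() / frame.mean()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # 100 synthetic 256x256 frames; lower fluctuation stands in for worse seeing.
        frames = [1000 + rng.normal(0, rng.uniform(5, 50), (256, 256))
                  for _ in range(100)]

        with Pool() as pool:
            scores = pool.map(rms_contrast, frames)

        best = np.argsort(scores)[-10:]           # keep the 10 sharpest frames
        print("selected frames:", sorted(best.tolist()))
    ```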

  6. Description of the AILS Alerting Algorithm

    NASA Technical Reports Server (NTRS)

    Samanant, Paul; Jackson, Mike

    2000-01-01

    This document provides a complete description of the Airborne Information for Lateral Spacing (AILS) alerting algorithms. The purpose of AILS is to provide separation assurance between aircraft during simultaneous approaches to closely spaced parallel runways. AILS will allow independent approaches to be flown in situations where dependent approaches were previously required (typically under Instrument Meteorological Conditions (IMC)). This is achieved by providing multiple levels of alerting for pairs of aircraft that are in parallel approach situations. This document's scope is comprehensive and covers everything from general overviews, definitions, and concepts down to algorithmic elements and equations. The entire algorithm is presented in a complete and detailed pseudo-code format, which can be used by software programmers to implement AILS in a programming language. Additional supporting information is provided in the form of coordinate frame definitions, data requirements, and calling requirements, as well as all necessary pre-processing and post-processing requirements. This is important and required information for the implementation of AILS into an analysis, a simulation, or a real-time system.

  7. Coordinated Post-transcriptional Regulation of Hsp70.3 Gene Expression by MicroRNA and Alternative Polyadenylation*

    PubMed Central

    Tranter, Michael; Helsley, Robert N.; Paulding, Waltke R.; McGuinness, Michael; Brokamp, Cole; Haar, Lauren; Liu, Yong; Ren, Xiaoping; Jones, W. Keith

    2011-01-01

    Heat shock protein 70 (Hsp70) is well documented to possess general cytoprotective properties in protecting the cell against stressful and noxious stimuli. We have recently shown that expression of the stress-inducible Hsp70.3 gene in the myocardium in response to ischemic preconditioning is NF-κB-dependent and necessary for the resulting late phase cardioprotection against a subsequent ischemia/reperfusion injury. Here we show that the Hsp70.3 gene product is subject to post-transcriptional regulation through parallel regulatory processes involving microRNAs and alternative polyadenylation of the mRNA transcript. First, we show that cardiac ischemic preconditioning of the in vivo mouse heart results in decreased levels of two Hsp70.3-targeting microRNAs: miR-378* and miR-711. Furthermore, an ischemic or heat shock stimulus induces alternative polyadenylation of the expressed Hsp70.3 transcript that results in the accumulation of transcripts with a shortened 3′-UTR. This shortening of the 3′-UTR results in the loss of the binding site for the suppressive miR-378* and thus renders the alternatively polyadenylated transcript insusceptible to miR-378*-mediated suppression. Results also suggest that the alternative polyadenylation-mediated shortening of the Hsp70.3 3′-UTR relieves translational suppression observed in the long 3′-UTR variant, allowing for a more robust increase in protein expression. These results demonstrate alternative polyadenylation of Hsp70.3 in parallel with ischemic or heat shock-induced up-regulation of mRNA levels and implicate the importance of this process in post-transcriptional control of Hsp70.3 expression. PMID:21757701

  8. Optimizing SIEM Throughput on the Cloud Using Parallelization.

    PubMed

    Alam, Masoom; Ihsan, Asif; Khan, Muazzam A; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, Muhammad Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time for identifying security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework OSTROM built on the Esper complex event processing (CEP) engine under a parallel and non-parallel computational framework. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of events coming in rather than the queries being processed. The architecture where 1/4th of the total events are submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory and CPU usage.
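
    As an illustration of that best-performing layout, the sketch below shards an event stream evenly across several engine instances, each of which evaluates the full query set. The multiprocessing harness and all names are illustrative stand-ins for the Esper/OSTROM setup, not its actual code.

        import multiprocessing as mp

        def engine_instance(events, queries):
            # Each instance runs ALL queries over its 1/n share of the events,
            # mirroring the architecture described above. Queries must be
            # picklable, module-level callables that return True on a match.
            return [(q.__name__, e) for e in events for q in queries if q(e)]

        def run_sharded(events, queries, n_instances=4):
            shards = [events[i::n_instances] for i in range(n_instances)]
            with mp.Pool(n_instances) as pool:
                hits = pool.starmap(engine_instance, [(s, queries) for s in shards])
            return [alert for shard in hits for alert in shard]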

  9. Multibus-based parallel processor for simulation

    NASA Technical Reports Server (NTRS)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  10. Evaluation of the Intel iWarp parallel processor for space flight applications

    NASA Technical Reports Server (NTRS)

    Hine, Butler P., III; Fong, Terrence W.

    1993-01-01

    The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.

  11. Non-CAR resists and advanced materials for Massively Parallel E-Beam Direct Write process integration

    NASA Astrophysics Data System (ADS)

    Pourteau, Marie-Line; Servin, Isabelle; Lepinay, Kévin; Essomba, Cyrille; Dal'Zotto, Bernard; Pradelles, Jonathan; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco

    2016-03-01

    The emerging Massively Parallel Electron Beam Direct Write (MP-EBDW) is an attractive high-resolution, high-throughput lithography technology. As previously shown, Chemically Amplified Resists (CARs) meet process/integration specifications in terms of dose-to-size, resolution, contrast, and energy latitude. However, they are still limited by their line width roughness. To overcome this issue, we tested an advanced alternative non-CAR resist and showed that it brings a substantial gain in sensitivity compared to CAR. We also implemented and assessed in-line post-lithographic treatments for roughness mitigation. For outgassing-reduction purposes, a top-coat layer is added to the total process stack. A new-generation top-coat was tested and showed improved printing performance compared to the previous product, especially in avoiding dark erosion: SEM cross-sections showed a straight pattern profile. A spin-coatable charge dissipation layer based on conductive polyaniline has also been tested for conductivity and lithographic performance, and compatibility experiments revealed that the underlying resist type has to be chosen carefully when using this product. Finally, the Process Of Reference (POR) trilayer stack defined for 5 kV multi-e-beam lithography was successfully etched, with well-opened and straight patterns and no lithography-etch bias.

  12. Evaluation of parallel milliliter-scale stirred-tank bioreactors for the study of biphasic whole-cell biocatalysis with ionic liquids.

    PubMed

    Dennewald, Danielle; Hortsch, Ralf; Weuster-Botz, Dirk

    2012-01-01

    As clear structure-activity relationships are still rare for ionic liquids, preliminary experiments are necessary for the process development of biphasic whole-cell processes involving these solvents. To reduce the time investment and the material costs, the process development of such biphasic reaction systems would profit from a small-scale high-throughput platform. As an example, the reduction of 2-octanone to (R)-2-octanol by a recombinant Escherichia coli in a biphasic ionic liquid/water system was studied in a miniaturized stirred-tank bioreactor system allowing the parallel operation of up to 48 reactors at the mL scale. The results were compared to those obtained in a 20-fold larger stirred-tank reactor. The maximum local energy dissipation was evaluated at the larger scale and compared to the data available for the small-scale reactors, to verify whether similar mass transfer could be obtained at both scales. Thereafter, the reaction kinetics and final conversions reached in different reaction setups were analysed. The results were in good agreement between both scales for varying ionic liquids and for ionic liquid volume fractions up to 40%. The parallel bioreactor system can thus be used for the process development of the majority of biphasic reaction systems involving ionic liquids, reducing the time and resource investment during the process development of this type of application. Copyright © 2011. Published by Elsevier B.V.

  13. Changes in the midpalatal and pterygopalatine sutures induced by micro-implant-supported skeletal expander, analyzed with a novel 3D method based on CBCT imaging.

    PubMed

    Cantarella, Daniele; Dominguez-Mompell, Ramon; Mallya, Sanjay M; Moschik, Christoph; Pan, Hsin Chuan; Miller, Joseph; Moon, Won

    2017-11-01

    Mini-implant-assisted rapid palatal expansion (MARPE) appliances have been developed with the aim of enhancing the orthopedic effect induced by rapid maxillary expansion (RME). The Maxillary Skeletal Expander (MSE) is a particular type of MARPE appliance characterized by four mini-implants positioned in the posterior part of the palate with bi-cortical engagement. The aim of the present study is to evaluate the effects of MSE on the midpalatal and pterygopalatine sutures in late adolescents, using high-resolution CBCT. Specific aims are to define the magnitude and sagittal parallelism of midpalatal suture opening, to measure the extent of transverse asymmetry of the split, and to illustrate the possibility of splitting the pterygopalatine suture. Fifteen subjects (mean age, 17.2 years; range, 13.9-26.2 years) were treated with MSE. Pre- and post-treatment CBCT exams were taken and superimposed. A novel methodology based on three new reference planes was utilized to analyze the sutural changes. Parameters were compared from pre- to post-treatment and between genders non-parametrically using the Wilcoxon signed-rank test. For the frequency of openings in the lower part of the pterygopalatine suture, Fisher's exact test was used. Regarding the magnitude of midpalatal suture opening, the split at the anterior nasal spine (ANS) and at the posterior nasal spine (PNS) was 4.8 and 4.3 mm, respectively. The amount of split at PNS was 90% of that at ANS, showing that the opening of the midpalatal suture was almost perfectly parallel antero-posteriorly. On average, one half of the ANS moved 1.1 mm more than the contralateral one. Openings between the lateral and medial plates of the pterygoid process were detectable in 53% of the sutures (P < 0.05). No significant differences were found in the magnitude and frequency of suture opening between males and females. The correlation between age and suture opening was negligible (R² range, 0.3-4.2%). The midpalatal suture was successfully split by MSE in late adolescents, and the opening was almost perfectly parallel in the sagittal direction. Regarding the extent of transverse asymmetry of the split, on average one half of the ANS moved 1.1 mm more than the contralateral one. The pterygopalatine suture was split in its lower region by MSE, as the pyramidal process was pulled away from the pterygoid process. Patient gender and age had a negligible influence on suture opening for the age group considered in the study.

  14. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive, or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented, dramatically increasing the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. A parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes, exceeding several hundred million individuals in the largest cases, are successfully exercised to verify model scalability.

  15. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  16. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
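
    The abstract pins down each CB's two parameters from the small- and large-message limits; the exact functional form is not reproduced in this record, so the sketch below uses one plausible hyperbola with the stated limits (T(0) = latency, T(m) -> m/rate for large m), plus an assumed non-overlapped rule for reducing two CBs in series.

        import math

        def service_time(m, latency, rate):
            # One hyperbolic branch with the stated limits: T(0) = latency and
            # T(m) -> m/rate as m grows. The square-root form is an illustrative
            # assumption, not necessarily the paper's.
            return math.sqrt(latency ** 2 + (m / rate) ** 2)

        def series_reduce(l1, r1, l2, r2):
            # Reduce two CBs traversed sequentially to an equivalent (latency, rate)
            # pair that is exact in both limits, assuming non-overlapped stages:
            # latencies add, and per-byte costs add, so 1/r_eq = 1/r1 + 1/r2.
            return l1 + l2, 1.0 / (1.0 / r1 + 1.0 / r2)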

  17. KSC-08pd0431

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- Space shuttle Atlantis is towed into the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. Photo credit: NASA/Jack Pfaller

  18. KSC-08pd0430

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- Space shuttle Atlantis is towed toward the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. Photo credit: NASA/Jack Pfaller

  19. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.
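
    A minimal sketch of the inter-task idea: rather than vectorizing within one sequence, score a whole batch of equal-length sequences in lockstep, with numpy's array lanes standing in for SSE2 registers. The names, the start-state convention, and the score-only output are illustrative simplifications, not HMMER code.

        import numpy as np

        def batch_viterbi_scores(log_trans, log_emit, obs_batch):
            # log_trans: (S, S) log transition matrix; log_emit: (S, A) log
            # emission matrix; obs_batch: (B, T) integer observations.
            # Returns the (B,) best-path log scores, one per sequence "lane".
            B, T = obs_batch.shape
            v = np.full((B, log_trans.shape[0]), -np.inf)
            v[:, 0] = 0.0                            # assume all paths start in state 0
            for t in range(T):
                v = (v[:, :, None] + log_trans).max(axis=1)   # best predecessor per state
                v += log_emit[:, obs_batch[:, t]].T           # emission term per lane
            return v.max(axis=1)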

  20. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
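
    As a rough Python analogue of the GPU-CPU co-processing pattern above (the study itself used Matlab's GPU-enabled functions on Nvidia hardware), the sketch below ships an image stack to the GPU with the CuPy library, filters it in the Fourier domain, and copies the result back. CuPy, the filter, and all names are assumptions for illustration only.

        import numpy as np
        import cupy as cp   # assumes a CUDA-capable GPU and the CuPy package

        def gpu_filter(frames):
            # frames: (n, ny, nx) float array on the host.
            g = cp.asarray(frames)                    # host -> device
            spec = cp.fft.fft2(g, axes=(-2, -1))
            spec[..., :4, :4] = 0                     # crude low-frequency cut (illustrative)
            out = cp.fft.ifft2(spec, axes=(-2, -1)).real
            return cp.asnumpy(out)                    # device -> host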

  1. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  2. The mediating role of state maladaptive emotion regulation in the relation between social anxiety symptoms and self-evaluation bias.

    PubMed

    Sarfan, Laurel D; Cody, Meghan W; Clerkin, Elise M

    2018-03-16

    Although social anxiety symptoms are robustly linked to biased self-evaluations across time, the mechanisms of this relation remain unclear. The present study tested three maladaptive emotion regulation strategies - state post-event processing, state experiential avoidance, and state expressive suppression - as potential mediators of this relation. Undergraduate participants (N = 88; 61.4% Female) rated their social skill in an impromptu conversation task and then returned to the laboratory approximately two days later to evaluate their social skill in the conversation again. Consistent with expectations, state post-event processing and state experiential avoidance mediated the relation between social anxiety symptoms and worsening self-evaluations of social skill (controlling for research assistant evaluations), particularly for positive qualities (e.g. appeared confident, demonstrated social skill). State expressive suppression did not mediate the relation between social anxiety symptoms and changes in self-evaluation bias across time. These findings highlight the role that spontaneous, state experiential avoidance and state post-event processing may play in the relation between social anxiety symptoms and worsening self-evaluation biases of social skill across time.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, M; Choi, E; Chuong, M

    Purpose: To evaluate whether current radiobiological models can predict the normal liver complications of radioactive Yttrium-90 (90Y) selective internal radiation treatment (SIRT) for metastatic liver lesions based on post-infusion 90Y PET images. Methods: A total of 20 patients with metastatic liver tumors treated with SIRT who received a post-infusion 90Y PET/CT scan were analyzed in this work. The 3D activity distribution of the PET images was converted into a 3D dose distribution via a kernel convolution process. The physical dose distribution was converted into the equivalent dose delivered at 2 Gy (EQ2) based on the linear-quadratic (LQ) model, considering the dose-rate effect. The biological endpoint of this work was radiation-induced liver disease (RILD). NTCPs were calculated with four different repair half-times (T1/2 = 0, 0.5, 1.0, 2.0 hr), and three published NTCP models (Lyman external-RT, Lyman 90Y-HCC-SIRT, parallel model) were compared to the incidence of RILD among the recruited patients to evaluate their outcome-prediction ability. Results: The mean normal liver physical dose (avg. 51.9 Gy; range, 31.9–69.8 Gy) is higher than the suggested liver dose constraint for external beam treatment (∼30 Gy). However, none of the patients in our study developed RILD after SIRT. The estimated probability of 'no patient developing RILD' obtained from the two Lyman models is 46.3% to 48.3% (T1/2 = 0 hr) and <1% for all other repair times. For the parallel model, the estimated probability is 97.3% (0 hr), 51.7% (0.5 hr), 2.0% (1.0 hr), and <1% (2.0 hr). Conclusion: Molecular images providing the distribution of 90Y enable dose-volume-based dose/outcome analysis for SIRT. Current NTCP models fail to predict RILD complications in our patient population unless a very short repair time for the liver is assumed. The discrepancy between the outcomes predicted by the Lyman 90Y-HCC-SIRT model and those observed clinically further demonstrates the need for an NTCP model specific to metastatic liver SIRT.
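
    For reference, the LQ-model step named above maps a physical dose to its equivalent dose in 2 Gy fractions. A minimal sketch follows; it omits the dose-rate/repair-half-time correction the record applies, and the alpha/beta value is an assumed normal-liver figure, not taken from the record.

        import numpy as np

        def eqd2(total_dose, dose_per_fraction, alpha_beta=2.5):
            # Standard LQ conversion: EQD2 = D * (d + a/b) / (2 + a/b).
            # alpha_beta = 2.5 Gy is an assumption; dose-rate effects are omitted.
            d = np.asarray(dose_per_fraction, dtype=float)
            return np.asarray(total_dose, dtype=float) * (d + alpha_beta) / (2.0 + alpha_beta)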

  4. [Expert consensus post-marketing evaluation scheme to detect immunotoxicity of Chinese medicine in clinical populations (draft version for comments)].

    PubMed

    Xie, Yan-Ming; Zhao, Yu-Bin; Jiang, Jun-Jie; Chang, Yan-Peng; Zhang, Wen; Shen, Hao; Lu, Peng-Fei

    2013-09-01

    Through consensus, this article establishes a post-marketing scheme and the technical processes for evaluating a Chinese medicine's immunotoxicity in a population, as well as its beneficial influences on the immune system, and provides regulations on the collection, storage, and transportation of serum samples. The scheme applies to the post-marketing scientific evaluation of the immunotoxicity of parenterally administered Chinese medicine, as well as Chinese medicine administered by other routes.

  5. The Development and Evaluation of the Psychometric Properties of the Negative Beliefs about Post-Event Processing Scale.

    PubMed

    Rodriguez, Hayley; Kissell, Kellie; Lucas, Lloyd; Fisak, Brian

    2017-11-01

    Although negative beliefs have been found to be associated with worry symptoms and depressive rumination, negative beliefs have yet to be examined in relation to post-event processing and social anxiety symptoms. The purpose of the current study was to examine the psychometric properties of the Negative Beliefs about Post-Event Processing Questionnaire (NB-PEPQ). A large, non-referred undergraduate sample completed the NB-PEPQ along with validation measures, including a measure of post-event processing and social anxiety symptoms. Based on factor analysis, a single-factor model was obtained, and the NB-PEPQ was found to exhibit good validity, including positive associations with measures of post-event processing and social anxiety symptoms. These findings add to the literature on the metacognitive variables that may lead to the development and maintenance of post-event processing and social anxiety symptoms, and have relevant clinical applications.

  6. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.

  7. High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform

    PubMed Central

    Chan, Kenny K. H.; Tang, Shuo

    2010-01-01

    The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by more than 5 dB, concurrently with a 30-fold decrease in processing time, compared to the fast Fourier transform with cubic spline interpolation. NFFT can also improve the local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
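
    The baseline named above resamples the spectrum from its non-uniform wavenumber samples onto a uniform grid before the FFT; a sketch of that step follows. The NFFT route, indicated in the comments, transforms the non-uniform samples directly; the finufft package named there is an assumption about tooling, not the paper's implementation.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def aline_spline_fft(spectrum, k_nonuniform):
            # Baseline: cubic-spline resampling to a uniform wavenumber grid,
            # then an FFT to obtain the depth profile of one A-line.
            k_uniform = np.linspace(k_nonuniform.min(), k_nonuniform.max(), len(spectrum))
            resampled = CubicSpline(k_nonuniform, spectrum)(k_uniform)
            return np.abs(np.fft.fft(resampled))

        # The NFFT alternative skips the interpolation and transforms the
        # non-uniform samples directly, e.g. (assumed tooling):
        #     import finufft
        #     profile = finufft.nufft1d1(k_scaled, spectrum.astype(complex), n_depth)
        # with k_scaled rescaling k_nonuniform into [-pi, pi).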

  8. Non-Interfering Effects of Active Post-Encoding Tasks on Episodic Memory Consolidation in Humans

    PubMed Central

    Varma, Samarth; Takashima, Atsuko; Krewinkel, Sander; van Kooten, Maaike; Fu, Lily; Medendorp, W. Pieter; Kessels, Roy P. C.; Daselaar, Sander M.

    2017-01-01

    So far, studies that investigated interference effects of post-learning processes on episodic memory consolidation in humans have used tasks involving only complex and meaningful information. Such tasks require reallocation of general or encoding-specific resources away from consolidation-relevant activities. The possibility that interference can be elicited using a task that heavily taxes our limited brain resources, but has low semantic and hippocampal related long-term memory processing demands, has never been tested. We address this question by investigating whether consolidation could persist in parallel with an active, encoding-irrelevant, minimally semantic task, regardless of its high resource demands for cognitive processing. We distinguish the impact of such a task on consolidation based on whether it engages resources that are: (1) general/executive, or (2) specific/overlapping with the encoding modality. Our experiments compared subsequent memory performance across two post-encoding consolidation periods: quiet wakeful rest and a cognitively demanding n-Back task. Across six different experiments (total N = 176), we carefully manipulated the design of the n-Back task to target general or specific resources engaged in the ongoing consolidation process. In contrast to previous studies that employed interference tasks involving conceptual stimuli and complex processing demands, we did not find any differences between n-Back and rest conditions on memory performance at delayed test, using both recall and recognition tests. Our results indicate that: (1) quiet, wakeful rest is not a necessary prerequisite for episodic memory consolidation; and (2) post-encoding cognitive engagement does not interfere with memory consolidation when task-performance has minimal semantic and hippocampally-based episodic memory processing demands. We discuss our findings with reference to resource and reactivation-led interference theories. PMID:28424596

  9. Non-Interfering Effects of Active Post-Encoding Tasks on Episodic Memory Consolidation in Humans.

    PubMed

    Varma, Samarth; Takashima, Atsuko; Krewinkel, Sander; van Kooten, Maaike; Fu, Lily; Medendorp, W Pieter; Kessels, Roy P C; Daselaar, Sander M

    2017-01-01

    So far, studies that investigated interference effects of post-learning processes on episodic memory consolidation in humans have used tasks involving only complex and meaningful information. Such tasks require reallocation of general or encoding-specific resources away from consolidation-relevant activities. The possibility that interference can be elicited using a task that heavily taxes our limited brain resources, but has low semantic and hippocampal related long-term memory processing demands, has never been tested. We address this question by investigating whether consolidation could persist in parallel with an active, encoding-irrelevant, minimally semantic task, regardless of its high resource demands for cognitive processing. We distinguish the impact of such a task on consolidation based on whether it engages resources that are: (1) general/executive, or (2) specific/overlapping with the encoding modality. Our experiments compared subsequent memory performance across two post-encoding consolidation periods: quiet wakeful rest and a cognitively demanding n-Back task. Across six different experiments (total N = 176), we carefully manipulated the design of the n-Back task to target general or specific resources engaged in the ongoing consolidation process. In contrast to previous studies that employed interference tasks involving conceptual stimuli and complex processing demands, we did not find any differences between n-Back and rest conditions on memory performance at delayed test, using both recall and recognition tests. Our results indicate that: (1) quiet, wakeful rest is not a necessary prerequisite for episodic memory consolidation; and (2) post-encoding cognitive engagement does not interfere with memory consolidation when task-performance has minimal semantic and hippocampally-based episodic memory processing demands. We discuss our findings with reference to resource and reactivation-led interference theories.

  10. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on runs of up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example, in the presence of low volatility, high volatility, and a stock market price at a discount/premium to its net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least-squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.
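
    A minimal mpi4py sketch of one way such a search parallelizes: split the initial parameter vectors over MPI ranks, evaluate the nonlinear least-squares error locally, and keep the global best. The function names are illustrative, not the authors' code.

        from mpi4py import MPI

        def parallel_best_fit(residual_norm, initial_guesses):
            # residual_norm maps a parameter vector to its least-squares error.
            comm = MPI.COMM_WORLD
            rank, size = comm.Get_rank(), comm.Get_size()
            mine = initial_guesses[rank::size]             # static round-robin split
            my_best = min(mine, key=residual_norm, default=None)
            candidates = [p for p in comm.allgather(my_best) if p is not None]
            return min(candidates, key=residual_norm)      # same answer on every rank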

  11. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information, through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers and submits serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster shows how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1,200 hours for the full 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (1,200 hours reduced to 60 hours).

  12. A 2D MTF approach to evaluate and guide dynamic imaging developments.

    PubMed

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno

    2010-02-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.
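
    The sketch below illustrates the black-box probing idea behind such a two-dimensional modulation transfer function: feed the reconstruction one spatio-temporal frequency at a time and record the amplitude returned at that same frequency. The actual framework's perturbation algorithm is more involved; the harness and normalization here are illustrative only.

        import numpy as np

        def measure_2d_mtf(reconstruct, nx, nt):
            # reconstruct: black-box callable taking and returning an (nt, nx)
            # complex space-time array. Returns the (nt, nx) amplitude ratios.
            x, t = np.meshgrid(np.arange(nx), np.arange(nt))
            mtf = np.zeros((nt, nx))
            for ft in range(nt):
                for kx in range(nx):
                    probe = np.exp(2j * np.pi * (kx * x / nx + ft * t / nt))
                    out = reconstruct(probe)
                    # amplitude surviving at the probed (kx, ft) bin:
                    mtf[ft, kx] = np.abs(np.fft.fft2(out)[ft, kx]) / (nx * nt)
            return mtf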

  13. Non-invasive Brain Stimulation in the Treatment of Post-stroke and Neurodegenerative Aphasia: Parallels, Differences, and Lessons Learned

    PubMed Central

    Norise, Catherine; Hamilton, Roy H.

    2017-01-01

    Numerous studies over the span of more than a decade have shown that non-invasive brain stimulation (NIBS) techniques, namely transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), can facilitate language recovery for patients who have suffered from aphasia due to stroke. While stroke is the most common etiology of aphasia, neurodegenerative causes of language impairment—collectively termed primary progressive aphasia (PPA)—are increasingly being recognized as important clinical phenotypes in dementia. Very limited data now suggest that NIBS may have some benefit in treating PPAs. However, before applying the same approaches to patients with PPA as have previously been pursued in patients with post-stroke aphasia, it will be important for investigators to consider key similarities and differences between these aphasia etiologies that are likely to inform successful approaches to stimulation. While both post-stroke aphasia and the PPAs have clear overlaps in their clinical phenomenology, the mechanisms of injury and theorized neuroplastic changes associated with the two etiologies are notably different. Importantly, theories of plasticity in post-stroke aphasia are largely predicated on the notion that regions of the brain that had previously been uninvolved in language processing may take on new compensatory roles. PPAs, however, are characterized by slow distributed degeneration of cellular units within the language system; compensatory recruitment of brain regions to subserve language is not currently understood to be an important aspect of the condition. This review will survey differences in the mechanisms of language representation between the two etiologies of aphasia and evaluate properties that may define and limit the success of different neuromodulation approaches for these two disorders. PMID:28167904

  14. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously, based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
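
    All of these modalities derive from a phase map recovered by solving the TIE from through-focus intensity measurements. For reference, its standard form is shown below (sign conventions vary across the literature):

        % Transport of intensity equation: the axial intensity derivative acts as
        % the source term for transverse intensity flow along phase gradients.
        -k \frac{\partial I(\mathbf{r})}{\partial z}
            = \nabla_{\perp} \cdot \left[ I(\mathbf{r}) \, \nabla_{\perp} \phi(\mathbf{r}) \right],
        \qquad k = \frac{2\pi}{\lambda}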

  15. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
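
    A toy rendering of the block-structured pattern described above, with Python worker processes standing in for DIY2's processing elements; inter-block communication patterns and out-of-core block movement are omitted, and all names are illustrative.

        import multiprocessing as mp

        def process_block(block):
            # Per-block computation: in DIY2 terms, the body of one block iteration.
            return sum(block)

        def block_parallel(data, n_blocks, n_workers):
            # Decompose the data into blocks, then assign blocks to processing
            # elements; more blocks than workers is fine and aids load balance.
            size = -(-len(data) // n_blocks)               # ceiling division
            blocks = [data[i * size:(i + 1) * size] for i in range(n_blocks)]
            with mp.Pool(n_workers) as pool:
                return pool.map(process_block, blocks)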

  16. Preclinical immunogenicity and safety of a Group A streptococcal M protein-based vaccine candidate.

    PubMed

    Batzloff, Michael R; Fane, Anne; Gorton, Davina; Pandey, Manisha; Rivera-Hernandez, Tania; Calcutt, Ainslie; Yeung, Grace; Hartas, Jon; Johnson, Linda; Rush, Catherine M; McCarthy, James; Ketheesan, Natkunam; Good, Michael F

    2016-12-01

    Streptococcus pyogenes (group A streptococcus, GAS) causes a wide range of clinical manifestations ranging from mild self-limiting pyoderma to invasive diseases such as sepsis. Also of concern are the post-infectious immune-mediated diseases including rheumatic heart disease. The development of a vaccine against GAS would have a large health impact on populations at risk of these diseases. However, there is a lack of suitable models for the safety evaluation of vaccines with respect to post-infectious complications. We have utilized the Lewis Rat model for cardiac valvulitis to evaluate the safety of the J8-DT vaccine formulation in parallel with a rabbit toxicology study. These studies demonstrated that the vaccine did not induce abnormal pathology. We also show that in mice the vaccine is highly immunogenic but that 3 doses are required to induce protection from a GAS skin challenge even though 2 doses are sufficient to induce a high antibody titer.

  17. Preclinical immunogenicity and safety of a Group A streptococcal M protein-based vaccine candidate

    PubMed Central

    Batzloff, Michael R.; Fane, Anne; Gorton, Davina; Pandey, Manisha; Rivera-Hernandez, Tania; Calcutt, Ainslie; Yeung, Grace; Hartas, Jon; Johnson, Linda; Rush, Catherine M.; McCarthy, James; Ketheesan, Natkunam; Good, Michael F.

    2016-01-01

    Streptococcus pyogenes (group A streptococcus, GAS) causes a wide range of clinical manifestations ranging from mild self-limiting pyoderma to invasive diseases such as sepsis. Also of concern are the post-infectious immune-mediated diseases including rheumatic heart disease. The development of a vaccine against GAS would have a large health impact on populations at risk of these diseases. However, there is a lack of suitable models for the safety evaluation of vaccines with respect to post-infectious complications. We have utilized the Lewis Rat model for cardiac valvulitis to evaluate the safety of the J8-DT vaccine formulation in parallel with a rabbit toxicology study. These studies demonstrated that the vaccine did not induce abnormal pathology. We also show that in mice the vaccine is highly immunogenic but that 3 doses are required to induce protection from a GAS skin challenge even though 2 doses are sufficient to induce a high antibody titer. PMID:27541593

  18. Parallel approaches to composite production: interfaces that behave contrary to expectation.

    PubMed

    Frowd, Charlie D; Bruce, Vicki; Ness, Hayley; Bowie, Leslie; Paterson, Jenny; Thomson-Bogner, Claire; McIntyre, Alexander; Hancock, Peter J B

    2007-04-01

    This paper examines two facial composite systems that present multiple faces during construction to more closely resemble natural face processing. A 'parallel' version of PRO-fit was evaluated, which presents facial features in sets of six or twelve, and EvoFIT, a system in development, which contains a holistic face model and an evolutionary interface. The PRO-fit parallel interface turned out to be not quite as good as the 'serial' version, as it appeared to interfere with holistic face processing. Composites from EvoFIT were named almost three times better than those from PRO-fit, but a benefit emerged under feature encoding, suggesting that recall has a greater role for EvoFIT than was previously thought. In general, an advantage was found for feature encoding, replicating a previous finding in this area, and also for a novel 'holistic' interview.

  19. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538

  20. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
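
    A minimal generator-based rendering of the dataflow idea: re-usable components coupled by data-pipes, with items streaming through lazily. This is a linear chain evaluated serially, purely for illustration; PaPy's actual graphs are DAGs evaluated on pooled local or remote resources, and all names here are invented.

        def parse(lines):
            for line in lines:          # one re-usable component per transformation
                yield line.strip()

        def transform(records):
            for rec in records:
                yield rec.upper()

        def pipeline(source, *stages):
            # Compose data-coupled components into a dataflow; laziness keeps
            # memory use low, the trade-off mentioned in the abstract.
            flow = source
            for stage in stages:
                flow = stage(flow)
            return flow

        # usage: for item in pipeline(open('input.txt'), parse, transform): ...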

  1. Optimizing SIEM Throughput on the Cloud Using Parallelization

    PubMed Central

    Alam, Masoom; Ihsan, Asif; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, M Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time for identifying security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework OSTROM built on the Esper complex event processing (CEP) engine under a parallel and non-parallel computational framework. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of events coming in rather than the queries being processed. The architecture where 1/4th of the total events are submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory and CPU usage. PMID:27851762

  2. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826
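
    The inter-task idea, vectorizing one decoding step across many sequences at once rather than within a single sequence, can be illustrated in NumPy as below. This is an analogue of the SIMD scheme, not the SSE2 code of COPS or HMMER3, and restricting the batch to equal-length sequences is an assumption made for brevity.

        import numpy as np

        def viterbi_batch(obs, log_A, log_B, log_pi):
            # Viterbi decoding for a batch of equal-length observation
            # sequences, vectorized across the batch (inter-task) dimension.
            #   obs    : (n_seq, T) int array of observation symbols
            #   log_A  : (S, S) log transition matrix, log_A[i, j] = log P(j|i)
            #   log_B  : (S, V) log emission matrix
            #   log_pi : (S,) log initial-state distribution
            n_seq, T = obs.shape
            delta = log_pi[None, :] + log_B[:, obs[:, 0]].T        # (n_seq, S)
            psi = np.zeros((n_seq, T, log_pi.size), dtype=np.int64)
            for t in range(1, T):
                # scores[n, i, j] = delta[n, i] + log_A[i, j], for all n at once
                scores = delta[:, :, None] + log_A[None, :, :]
                psi[:, t] = scores.argmax(axis=1)
                delta = scores.max(axis=1) + log_B[:, obs[:, t]].T
            # Backtrace the best paths for all sequences simultaneously.
            states = np.empty((n_seq, T), dtype=np.int64)
            states[:, -1] = delta.argmax(axis=1)
            for t in range(T - 2, -1, -1):
                states[:, t] = psi[np.arange(n_seq), t + 1, states[:, t + 1]]
            return states, delta.max(axis=1)

        rng = np.random.default_rng(1)
        obs = rng.integers(0, 4, size=(8, 20))        # 8 sequences, 20 symbols
        logA = np.log(np.full((3, 3), 1 / 3))
        logB = np.log(np.full((3, 4), 1 / 4))
        logpi = np.log(np.full(3, 1 / 3))
        paths, scores = viterbi_batch(obs, logA, logB, logpi)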

  3. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    NASA Astrophysics Data System (ADS)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

    Multi-Block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid CPU/GPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
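
    One plausible reading of the meta-block idea, sketched under assumed details: oversized structured blocks are split into virtual sub-blocks, which are then assigned greedily to the least-loaded rank. The function names and splitting heuristic are illustrative, not NUMECA's implementation.

        import heapq

        def partition_blocks(block_sizes, n_ranks):
            # Virtual partitioning: split any block larger than the target
            # per-rank load into roughly equal virtual sub-blocks.
            target = sum(block_sizes) / n_ranks
            virtual = []
            for b, size in enumerate(block_sizes):
                n_sub = max(1, round(size / target))
                virtual += [(size / n_sub, b)] * n_sub
            # Greedy load balancing: largest sub-block to least-loaded rank.
            loads = [(0.0, r, []) for r in range(n_ranks)]
            heapq.heapify(loads)
            for size, b in sorted(virtual, reverse=True):
                load, r, assigned = heapq.heappop(loads)
                heapq.heappush(loads, (load + size, r, assigned + [b]))
            return sorted(loads, key=lambda entry: entry[1])

        # An 800-cell block is split across 2 ranks instead of hogging one.
        for load, rank, blocks in partition_blocks([800, 100, 100, 50], 2):
            print(f"rank {rank}: load={load:.0f}, owns blocks {blocks}")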

  4. KSC-08pd0428

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- Space shuttle Atlantis is towed along a two-mile tow-way to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  5. Post-Adoption Depression: Clinical Windows on an Emerging Concept

    ERIC Educational Resources Information Center

    Speilman, Eda

    2011-01-01

    In recent years, the concept of post-adoption depression--with both parallels and differences from postpartum depression--has emerged as a salient descriptor of the experience of a significant minority of newly adoptive parents. This article offers a clinical perspective on post-adoption depression through the stories of several families seen in…

  6. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over a large spatial extent. Such data are important for Earth and ecological sciences and for natural-disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references on developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
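
    A minimal sketch of the tile-based decomposition (assumed details, not the authors' code): points are binned into fixed-size square tiles, each of which becomes one independent work unit that a Hadoop-style framework could map over.

        import collections

        def tile_index(points, tile_size=100.0):
            # Bin (x, y, z) LiDAR returns into square tiles keyed by
            # (row, col); each tile is one independent parallel work unit.
            tiles = collections.defaultdict(list)
            for x, y, z in points:
                key = (int(y // tile_size), int(x // tile_size))
                tiles[key].append((x, y, z))
            return tiles

        pts = [(12.0, 8.0, 1.2), (150.3, 40.0, 2.8), (160.1, 45.9, 3.0)]
        for key, members in sorted(tile_index(pts).items()):
            print(key, len(members), "points")

    For neighborhood-dependent tasks (the second decomposition type), each tile would additionally carry a buffer of points borrowed from adjacent tiles.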

  7. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori) that include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations with using the subfiling feature.
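
    The spirit of subfiling can be emulated by hand with mpi4py and a parallel-HDF5 build of h5py, as in the hedged sketch below: ranks are grouped into sub-communicators, and each group writes a shared file of its own, the compromise between one file per process and a single shared file. This illustrates the idea only and is not the HDF5 subfiling API evaluated in the paper.

        # Assumes mpi4py and h5py built against parallel HDF5 are available.
        from mpi4py import MPI
        import h5py
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        RANKS_PER_SUBFILE = 4                      # tuning knob, like the subfile count
        color = rank // RANKS_PER_SUBFILE
        sub = comm.Split(color=color, key=rank)    # one communicator per subfile

        data = np.full(1000, rank, dtype=np.float64)          # this rank's slab
        with h5py.File(f"out.{color}.h5", "w", driver="mpio", comm=sub) as f:
            dset = f.create_dataset("x", (sub.Get_size() * data.size,), dtype="f8")
            start = sub.Get_rank() * data.size
            dset[start:start + data.size] = data              # contiguous slab write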

  8. Impact of automatization in temperature series in Spain and comparison with the POST-AWS dataset

    NASA Astrophysics Data System (ADS)

    Aguilar, Enric; López-Díaz, José Antonio; Prohom Duran, Marc; Gilabert, Alba; Luna Rico, Yolanda; Venema, Victor; Auchmann, Renate; Stepanek, Petr; Brandsma, Theo

    2016-04-01

    Climate data records are most of the time affected by inhomogeneities. In particular, inhomogeneities introducing network-wide biases are sometimes related to changes happening almost simultaneously in an entire network. Relative homogenization is difficult in these cases, especially at the daily scale. A good example of this is the substitution of manual observations (MAN) by automatic weather stations (AWS). Parallel measurements (i.e. records taken at the same time with the old (MAN) and new (AWS) sensors) can provide an idea of the bias introduced and help to evaluate the suitability of different correction approaches. We present here a quality-controlled dataset compiled under the DAAMEC Project, comprising 46 stations across Spain and over 85,000 parallel measurements (AWS-MAN) of daily maximum and minimum temperature. We study the differences between both sensors and compare them with the available metadata to account for internal inhomogeneities. The differences between both systems vary considerably across stations, with patterns more related to their particular settings than to climatic/geographical reasons. The typical median biases (AWS-MAN) by station (spanning the interquartile range) lie between -0.2°C and 0.4°C in daily maximum temperature and between -0.4°C and 0.2°C in daily minimum temperature. These and other results are compared with a larger dataset from the Parallel Observations Scientific Team, a working group of the International Surface Temperature Initiative (ISTI-POST), which comprises our stations as well as others from different countries in America, Asia and Europe.
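
    The per-station statistic quoted above is straightforward to reproduce on any table of paired daily observations; a small pandas sketch with invented numbers:

        import pandas as pd

        def station_median_bias(df):
            # df: one row per day with paired AWS and manual maxima (deg C);
            # returns the median AWS-MAN bias per station.
            return (df["tmax_aws"] - df["tmax_man"]).groupby(df["station"]).median()

        obs = pd.DataFrame({
            "station":  ["A", "A", "B", "B"],
            "tmax_aws": [25.1, 30.2, 18.0, 21.5],
            "tmax_man": [24.9, 30.1, 18.3, 21.9],
        })
        print(station_median_bias(obs))   # A: ~ +0.15, B: ~ -0.35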

  9. Faculty-Specific Factors of Degree of HE Internationalization: An Evaluation of Four Faculties of a Post-1992 University in the United Kingdom

    ERIC Educational Resources Information Center

    Jiang, Nan; Carpenter, Victoria

    2013-01-01

    Purpose: The purpose of this paper is to investigate the difference in the process of higher education (HE) internationalization across faculties in a post-1992 university and to identify faculty-specific factors through evaluating the four faculties in the case study. Design/methodology/approach: A qualitative research is conducted in a post-1992…

  10. Micropollutant degradation, bacterial inactivation and regrowth risk in wastewater effluents: Influence of the secondary (pre)treatment on the efficiency of Advanced Oxidation Processes.

    PubMed

    Giannakis, Stefanos; Voumard, Margaux; Grandjean, Dominique; Magnet, Anoys; De Alencastro, Luiz Felippe; Pulgarin, César

    2016-10-01

    In this work, disinfection by 5 Advanced Oxidation Processes (AOPs) was preceded by 3 different secondary treatment systems present in the wastewater treatment plant of Vidy, Lausanne (Switzerland). The 5 AOPs were tested at laboratory scale after two biological treatment methods (conventional activated sludge and moving bed bioreactor) and a physicochemical process (coagulation-flocculation). The dependence of AOP efficiency on the secondary (pre)treatment was estimated by following the bacterial concentration i) before secondary treatment, ii) after the different secondary treatment methods and iii) after the various AOPs. Disinfection and post-treatment bacterial regrowth were the evaluation indicators. The order of efficiency was Moving Bed Bioreactor > Activated Sludge > Coagulation-Flocculation > Primary Treatment. As far as the different AOPs are concerned, the disinfection kinetics were: UVC/H2O2 > UVC and solar photo-Fenton > Fenton or solar light. The contextualization and parallel study of microorganisms and micropollutants in the effluents revealed that, for the UV-based processes, longer exposure times were necessary for complete micropollutant degradation than for bacterial inactivation, with the opposite holding for the Fenton-related processes. Nevertheless, in the Fenton-related systems, the nominal 80% removal of micropollutants deriving from Swiss legislation often took place before the elimination of bacterial regrowth risk. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Partitioning in parallel processing of production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
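
    Production-level parallelism of this kind reduces, at its simplest, to matching disjoint rule partitions against the same working memory in parallel and merging the resulting conflict sets; the toy productions below are invented for illustration and are not OPS5 syntax.

        from multiprocessing import Pool

        # Toy productions: (name, condition over working-memory elements).
        PRODUCTIONS = [
            ("p1", lambda wm: "goal" in wm and "a" in wm),
            ("p2", lambda wm: "b" in wm),
            ("p3", lambda wm: "c" in wm and "d" in wm),
        ]

        def match_partition(args):
            # Match one partition (a list of production indices) against
            # working memory; returns that partition's conflict set.
            indices, wm = args
            return [PRODUCTIONS[i][0] for i in indices if PRODUCTIONS[i][1](wm)]

        if __name__ == "__main__":
            wm = {"goal", "a", "b"}
            partitions = [[0], [1], [2]]       # three production sets
            with Pool(3) as pool:
                hits = pool.map(match_partition, [(p, wm) for p in partitions])
            print(sum(hits, []))               # merged conflict set: ['p1', 'p2']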

  12. GWM-VI: groundwater management with parallel processing for multiple MODFLOW versions

    USGS Publications Warehouse

    Banta, Edward R.; Ahlfeld, David P.

    2013-01-01

    Groundwater Management–Version Independent (GWM–VI) is a new version of the Groundwater Management Process of MODFLOW. The Groundwater Management Process couples groundwater-flow simulation with a capability to optimize stresses on the simulated aquifer based on an objective function and constraints imposed on stresses and aquifer state. GWM–VI extends prior versions of Groundwater Management in two significant ways—(1) it can be used with any version of MODFLOW that meets certain requirements on input and output, and (2) it is structured to allow parallel processing of the repeated runs of the MODFLOW model that are required to solve the optimization problem. GWM–VI uses the same input structure for files that describe the management problem as that used by prior versions of Groundwater Management. GWM–VI requires only minor changes to the input files used by the MODFLOW model. GWM–VI uses the Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER-API) to implement both version independence and parallel processing. GWM–VI communicates with the MODFLOW model by manipulating certain input files and interpreting results from the MODFLOW listing file and binary output files. Nearly all capabilities of prior versions of Groundwater Management are available in GWM–VI. GWM–VI has been tested with MODFLOW-2005, MODFLOW-NWT (a Newton formulation for MODFLOW-2005), MF2005-FMP2 (the Farm Process for MODFLOW-2005), SEAWAT, and CFP (Conduit Flow Process for MODFLOW-2005). This report provides sample problems that demonstrate a range of applications of GWM–VI and the directory structure and input information required to use the parallel-processing capability.
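
    The parallel-processing opportunity here is the classic one for derivative-based optimization: the repeated model runs behind a finite-difference gradient are mutually independent. A schematic Python sketch, with a trivial stand-in for a MODFLOW run (the real coupling goes through input files and the listing file):

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def head_at_well(stresses):
            # Stand-in for one full MODFLOW run returning a simulated state.
            return 100.0 - 0.04 * stresses[0] - 0.02 * stresses[1]

        def gradient(f, x, h=1e-3, workers=4):
            # Forward-difference gradient; the n+1 model runs are independent,
            # so they can execute in parallel, as GWM-VI schedules them.
            xs = [x] + [x + h * e for e in np.eye(len(x))]
            with ProcessPoolExecutor(max_workers=workers) as ex:
                f0, *fs = list(ex.map(f, xs))
            return np.array([(fi - f0) / h for fi in fs])

        if __name__ == "__main__":
            print(gradient(head_at_well, np.array([500.0, 800.0])))  # ~[-0.04, -0.02]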

  13. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
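
    Category (2), parallelism by image region, is essentially a map-reduce: the sketch below splits an image into horizontal strips, maps a per-region detector over them in parallel, and reduces by summing counts. The trivial detector is a placeholder, not a method from the paper.

        from multiprocessing import Pool
        import numpy as np

        def region_hits(region):
            # Placeholder per-region task (e.g., face detection on one tile).
            return int(region.mean() > 128)

        def map_reduce_regions(image, n_strips=4, workers=4):
            strips = np.array_split(image, n_strips, axis=0)   # map ...
            with Pool(workers) as pool:
                return sum(pool.map(region_hits, strips))      # ... reduce

        if __name__ == "__main__":
            img = np.vstack([np.full((2, 8), 200.0), np.full((6, 8), 50.0)])
            print(map_reduce_regions(img))     # 1 hit, in the bright strip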

  14. StrAuto: automation and parallelization of STRUCTURE analysis.

    PubMed

    Chhatre, Vikram E; Emerson, Kevin J

    2017-03-24

    Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational overload of this analysis, it does not fully automate the use of replicate STRUCTURE analysis runs required for downstream inference of optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto, to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach in addition to implementing parallel computation, a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and available to download from http://strauto.popgen.org .
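
    The downstream inference step that StrAuto automates, the Evanno ΔK statistic over replicate runs, is a short computation once the replicate likelihoods are collected; a sketch from the published formula (ΔK = mean|L''(K)| / s.d.[L(K)]), with invented inputs:

        import numpy as np

        def evanno_delta_k(mean_lnp, sd_lnp):
            # mean_lnp[k], sd_lnp[k]: mean and s.d. of ln P(X|K) over the
            # replicate STRUCTURE runs for successive K = 1..n.
            mean_lnp, sd_lnp = np.asarray(mean_lnp), np.asarray(sd_lnp)
            second_diff = np.abs(mean_lnp[2:] - 2 * mean_lnp[1:-1] + mean_lnp[:-2])
            return second_diff / sd_lnp[1:-1]      # defined for K = 2..n-1

        mean_lnp = [-4200.0, -3650.0, -3600.0, -3590.0]   # K = 1..4 (invented)
        sd_lnp = [5.0, 8.0, 9.0, 12.0]
        print(evanno_delta_k(mean_lnp, sd_lnp))   # clear peak at K = 2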

  15. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  16. Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.

    PubMed

    Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus

    2009-02-01

    While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.

  17. Web Service Model for Plasma Simulations with Automatic Post Processing and Generation of Visual Diagnostics*

    NASA Astrophysics Data System (ADS)

    Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.

    2003-10-01

    We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. ** Also University of Colorado.

  18. The effects of sleep deprivation on emotional empathy.

    PubMed

    Guadagni, Veronica; Burles, Ford; Ferrara, Michele; Iaria, Giuseppe

    2014-12-01

    Previous studies have shown that sleep loss has a detrimental effect on the ability of the individuals to process emotional information. In this study, we tested the hypothesis that this negative effect extends to the ability of experiencing emotions while observing other individuals, i.e. emotional empathy. To test this hypothesis, we assessed emotional empathy in 37 healthy volunteers who were assigned randomly to one of three experimental groups: one group was tested before and after a night of total sleep deprivation (sleep deprivation group), a second group was tested before and after a usual night of sleep spent at home (sleep group) and the third group was tested twice during the same day (day group). Emotional empathy was assessed by using two parallel versions of a computerized test measuring direct (i.e. explicit evaluation of empathic concern) and indirect (i.e. the observer's reported physiological arousal) emotional empathy. The results revealed that the post measurements of both direct and indirect emotional empathy of participants in the sleep deprivation group were significantly lower than those of the sleep and day groups; post measurement scores of participants in the day and sleep groups did not differ significantly for either direct or indirect emotional empathy. These data are consistent with previous studies showing the negative effect of sleep deprivation on the processing of emotional information, and extend these effects to emotional empathy. The findings reported in our study are relevant to healthy individuals with poor sleep habits, as well as clinical populations suffering from sleep disturbances. © 2014 European Sleep Research Society.

  19. The performance of silk scaffolds in a rat model of augmentation cystoplasty.

    PubMed

    Seth, Abhishek; Chung, Yeun Goo; Gil, Eun Seok; Tu, Duong; Franck, Debra; Di Vizio, Dolores; Adam, Rosalyn M; Kaplan, David L; Estrada, Carlos R; Mauney, Joshua R

    2013-07-01

    The diverse processing plasticity of silk-based biomaterials offers a versatile platform for understanding the impact of structural and mechanical matrix properties on bladder regenerative processes. Three distinct groups of 3-D matrices were fabricated from aqueous solutions of Bombyx mori silk fibroin either by a gel spinning technique (GS1 and GS2 groups) or a solvent-casting/salt-leaching method in combination with silk film casting (FF group). SEM analyses revealed that GS1 matrices consisted of smooth, compact multi-laminates of parallel-oriented silk fibers while GS2 scaffolds were composed of porous (pore size range, 5-50 μm) lamellar-like sheets buttressed by a dense outer layer. Bi-layer FF scaffolds were comprised of porous foams (pore size, ~400 μm) fused on their external face with a homogenous, nonporous silk film. Silk groups and small intestinal submucosa (SIS) matrices were evaluated in a rat model of augmentation cystoplasty for 10 weeks of implantation and compared to cystotomy controls. Gross tissue evaluations revealed the presence of intra-luminal stones in all experimental groups. The incidence and size of urinary calculi were the highest in animals implanted with gel spun silk matrices and SIS, with frequencies ≥57% and stone diameters of 3-4 mm. In contrast, rats augmented with FF scaffolds displayed substantially lower rates (20%) and stone size (2 mm), similar to the levels observed in controls (13%, 2 mm). Histological (hematoxylin and eosin, Masson's trichrome) and immunohistochemical (IHC) analyses showed comparable extents of smooth muscle regeneration and contractile protein (α-smooth muscle actin and SM22α) expression within defect sites supported by all matrix groups, similar to controls. Parallel evaluations demonstrated the formation of a transitional, multi-layered urothelium with prominent uroplakin and p63 protein expression in all experimental groups. De novo innervation and vascularization processes were evident in all regenerated tissues, indicated by Fox3-positive neuronal cells and vessels lined with CD31-expressing endothelial cells. In comparison to other biomaterial groups, cystometric analyses at 10 weeks post-op revealed that animals implanted with the FF matrix configuration displayed superior urodynamic characteristics including compliance, functional capacity, as well as spontaneous non-voiding contractions consistent with control levels. Our data demonstrate that variations in scaffold processing techniques can influence the in vivo functional performance of silk matrices in bladder reconstructive procedures. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
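
    The key ordering trick (estimate the noise and form its covariance before the image covariance) is visible even in a plain NumPy/SciPy version of the MNF transform. The sketch below uses horizontal neighbor differences for noise estimation and is a CPU illustration of the algorithm, not the CUDA kernel structure described above.

        import numpy as np
        from scipy.linalg import eigh

        def mnf(cube):
            # cube: (rows, cols, bands) hyperspectral image.
            r, c, b = cube.shape
            X = cube.reshape(-1, b).astype(np.float64)
            # Noise estimated from horizontal neighbor differences, with its
            # covariance formed *first*, as in the optimized processing flow.
            noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b) / np.sqrt(2)
            cov_noise = np.cov(noise, rowvar=False)
            cov_image = np.cov(X, rowvar=False)
            # Generalized eigenproblem: large eigenvalues = high-SNR components.
            vals, vecs = eigh(cov_image, cov_noise)
            order = np.argsort(vals)[::-1]
            return (X - X.mean(axis=0)) @ vecs[:, order], vals[order]

        cube = np.random.default_rng(0).normal(size=(32, 32, 8))
        components, snr = mnf(cube)
        print(components.shape, snr[:3])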

  1. Test and evaluation procedures for Sandia's Teraflops Operating System (TOS) on Janus.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel Wayne

    This report describes the test and evaluation methods by which the Teraflops Operating System, or TOS, that resides on Sandia's massively-parallel computer Janus is verified for production release. Also discussed are methods used to build TOS before testing and evaluating, miscellaneous utility scripts, a sample test plan, and a proposed post-test method for quickly examining the large number of test results. The purpose of the report is threefold: (1) to provide a guide to T&E procedures, (2) to aid and guide others who will run T&E procedures on the new ASCI Red Storm machine, and (3) to document some of the history of evaluation and testing of TOS. This report is not intended to serve as an exhaustive manual for testers to conduct T&E procedures.

  2. FOOD PROCESSING TECHNOLOGY, A SUGGESTED 2-YEAR POST HIGH SCHOOL CURRICULUM.

    ERIC Educational Resources Information Center

    KNOEBEL, ROBERT M.; AND OTHERS

    ADMINISTRATORS, ADVISORY COMMITTEES, SUPERVISORS, AND TEACHERS MAY USE THIS GUIDE IN PLANNING AND DEVELOPING NEW PROGRAMS OR EVALUATING EXISTING PROGRAMS IN POST-HIGH SCHOOL FOOD PROCESSING TECHNOLOGY. BASIC MATERIALS WERE PREPARED BY THE STATE UNIVERSITY OF NEW YORK AGRICULTURAL AND TECHNICAL COLLEGE AT MORRISVILLE AND FINAL PREPARATION WAS…

  3. Evaluating the effectiveness of agricultural mulches for reducing post-wildfire wind erosion

    USDA-ARS?s Scientific Manuscript database

    Post-wildfire soil erosion can be caused by water or aeolian processes yet most erosion research has focused on predominantly water-driven erosion. This study investigates the effectiveness of three agricultural mulches, with and without a tackifier, on aeolian sediment transport processes. A wind t...

  4. Evaluation of automatic video summarization systems

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.

    2006-01-01

    Compact representations of video data, or video summaries, greatly enhance efficient video browsing. However, rigorous evaluation of video summaries generated by automatic summarization systems is a complicated process. In this paper we examine the summary evaluation problem. Text summarization is the oldest and most successful summarization domain. We show some parallels between these two domains and introduce methods and terminology. Finally, we present results for a comprehensive summary evaluation that we have performed.
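
    Borrowing precision/recall-style scoring from text summarization, a frame-level match between an automatic summary and a human reference can be computed as below; the tolerance window and keyframe lists are illustrative assumptions, not a method from the paper.

        def summary_f1(auto_frames, ref_frames, tolerance=15):
            # Frames match if within `tolerance` frames of a reference keyframe.
            matched = {a for a in auto_frames
                       if any(abs(a - r) <= tolerance for r in ref_frames)}
            precision = len(matched) / len(auto_frames) if auto_frames else 0.0
            recall = len(matched) / len(ref_frames) if ref_frames else 0.0
            if precision + recall == 0:
                return 0.0
            return 2 * precision * recall / (precision + recall)

        print(summary_f1([120, 450, 910], [115, 460, 700]))   # ~0.67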

  5. A Survey of Selected Document Processing Systems.

    ERIC Educational Resources Information Center

    Fong, Elizabeth

    In addition to reviewing the characteristics of document processing systems, this paper pays considerable attention to the description of a system via a feature list approach. The purpose of this report is to present features of the systems in parallel fashion to facilitate comparison so that a potential user may have a basis for evaluation in…

  6. Learning through Action: Parallel Learning Processes in Children and Adults

    ERIC Educational Resources Information Center

    Ethridge, Elizabeth A.; Branscomb, Kathryn R.

    2009-01-01

    Experiential learning has become an essential part of many educational settings from infancy through adulthood. While the effectiveness of active learning has been evaluated in youth and adult settings, few known studies have compared the learning processes of children and adults within the same project. This article contrasts the active learning…

  7. Post Detection Target State Estimation Using Heuristic Information Processing - A Preliminary Investigation

    DTIC Science & Technology

    1977-09-01

    (Abstract garbled in the source record; the recoverable fragments mention an interpolation algorithm applicable when transition boundaries are defined close together and parallel to one another, variable kernel estimates, and a goodness-of-fit criterion for sample sets.)

  8. Comparative performance evaluation of full-scale anaerobic and aerobic wastewater treatment processes in Brazil.

    PubMed

    von Sperling, M; Oliveira, S C

    2009-01-01

    This article evaluates and compares the actual behavior of 166 full-scale anaerobic and aerobic wastewater treatment plants in operation in Brazil, providing information on the performance of the processes in terms of the quality of the generated effluent and the removal efficiency achieved. The observed results of effluent concentrations and removal efficiencies of the constituents BOD, COD, TSS (total suspended solids), TN (total nitrogen), TP (total phosphorus) and FC (faecal or thermotolerant coliforms) have been compared with the typical expected performance reported in the literature. The treatment technologies selected for study were: (a) predominantly anaerobic: (i) septic tank + anaerobic filter (ST + AF), (ii) UASB reactor without post-treatment (UASB) and (iii) UASB reactor followed by several post-treatment processes (UASB + POST); (b) predominantly aerobic: (iv) facultative pond (FP), (v) anaerobic pond followed by facultative pond (AP + FP) and (vi) activated sludge (AS). The results, confirmed by statistical tests, showed that, in general, the best performance was achieved by AS, but closely followed by UASB reactor, when operating with any kind of post-treatment. The effluent quality of the anaerobic processes ST + AF and UASB reactor without post-treatment was very similar to the one presented by facultative pond, a simpler aerobic process, regarding organic matter.

  9. Breaking the Cycle: A Phenomenological Approach to Broadening Access to Post-Secondary Education

    ERIC Educational Resources Information Center

    Cefai, Carmel; Downes, Paul; Cavioni, Valeria

    2016-01-01

    Over the past decades, there has been a substantial increase in post-secondary education participation in most Organisation for Economic Co-operation and Development (OECD) and European Union countries. This increase, however, does not necessarily reflect a parallel equitable growth in post-secondary education, and early school leaving is still an…

  10. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make the solution of large-scale PSM problems practical.
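
    The "inherently parallel" claim is concrete for sampling-based PSM: a Monte Carlo estimate of failure probability splits into independent chunks with no communication. A toy sketch for a member with random resistance and load (the distributions are invented):

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def failures_in_chunk(args):
            # Simulate one chunk; failure when resistance - load < 0.
            n, seed = args
            rng = np.random.default_rng(seed)
            resistance = rng.normal(300.0, 30.0, n)   # random material property
            load = rng.normal(200.0, 40.0, n)         # random load effect
            return int(np.sum(resistance - load < 0.0))

        def failure_probability(n_total=1_000_000, workers=4):
            chunk = n_total // workers
            with ProcessPoolExecutor(max_workers=workers) as ex:
                fails = sum(ex.map(failures_in_chunk,
                                   [(chunk, seed) for seed in range(workers)]))
            return fails / (chunk * workers)

        if __name__ == "__main__":
            print(failure_probability())   # ~0.023 for these distributions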

  11. Simulation framework for intelligent transportation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, T.; Doss, E.; Hanebutte, U.

    1996-10-01

    A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed are based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scaleable to take advantage of emerging massively parallel processor (MPP) systems.
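
    The "vehicles as autonomous message-passing processes" design can be miniaturized with multiprocessing queues, as in this hedged sketch; the advisory logic and message format are invented for illustration.

        from multiprocessing import Process, Queue

        def vehicle(vid, inbox, tmc_inbox):
            # Autonomous vehicle process: reports position, reacts to advisories.
            tmc_inbox.put(("position", vid, 0.0))
            advisory = inbox.get()                 # blocking read, like a probe link
            speed = 80.0 if advisory == "clear" else 40.0
            tmc_inbox.put(("speed", vid, speed))

        def tmc(n_vehicles, inboxes, tmc_inbox):
            # Traffic Management Center: tracks probes, broadcasts advisories.
            for _ in range(n_vehicles):
                print("probe:", tmc_inbox.get())   # probe-vehicle tracking
            for q in inboxes:
                q.put("congestion ahead")          # two-way advisory channel
            for _ in range(n_vehicles):
                print("update:", tmc_inbox.get())

        if __name__ == "__main__":
            tmc_q = Queue()
            inboxes = [Queue() for _ in range(3)]
            procs = [Process(target=vehicle, args=(i, q, tmc_q))
                     for i, q in enumerate(inboxes)]
            for p in procs:
                p.start()
            tmc(3, inboxes, tmc_q)
            for p in procs:
                p.join()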

  12. Minimally invasive strabismus surgery versus paralimbal approach: a randomized, parallel design study. Is minimally invasive strabismus surgery worth the effort?

    PubMed Central

    Sharma, Richa; Amitava, Abadan K; Bani, Sadat AO

    2014-01-01

    Introduction: Minimal access surgery is common in all fields of medicine. We compared a new minimally invasive strabismus surgery (MISS) approach with a standard paralimbal strabismus surgery (SPSS) approach in terms of post-operative course. Materials and Methods: This parallel design study was done on 28 eyes of 14 patients, in which one eye was randomized to MISS and the other to SPSS. MISS was performed by giving two conjunctival incisions parallel to the horizontal rectus muscles; performing recession or resection below the conjunctival strip so obtained. We compared post-operative redness, congestion, chemosis, foreign body sensation (FBS), and drop intolerance (DI) on a graded scale of 0 to 3 on post-operative day 1, at 2-3 weeks, and 6 weeks. In addition, all scores were added to obtain a total inflammatory score (TIS). Statistical Analysis: Inflammatory scores were analyzed using Wilcoxon's signed rank test. Results: On the first post-operative day, only FBS (P = 0.01) and TIS (P = 0.04) showed a significant difference favoring MISS. At 2-3 weeks, redness (P = 0.04), congestion (P = 0.04), FBS (P = 0.02), and TIS (P = 0.04) were significantly less in the MISS eye. At 6 weeks, only redness (P = 0.04) and TIS (P = 0.05) were significantly less. Conclusion: MISS is more comfortable in the immediate post-operative period and provides better cosmesis in the intermediate period. PMID:24088635

  13. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capability and much higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described concisely as mapping data elements to parallel processing threads. The rapid growth of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require heavy computation, an interest reflected in the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet inexpensive hardware, so they have become an alternative to conventional processors; graphics chips, once fixed-function application hardware, have been transformed into modern, powerful and programmable processors. The main obstacle is that graphics processing units use programming models that differ from conventional methods: efficient GPU programming requires re-coding the existing algorithm around the limitations and structure of the graphics hardware, and event-driven procedural programming cannot be used for these many-core processors. GPUs are especially effective when the same computing steps are repeated over many data elements and high accuracy is needed, performing such computations quickly and accurately, whereas CPUs, which follow flow control and perform one computation at a time, are slower for these workloads. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were processed and the results evaluated. The GPGPU method is particularly effective for repeating the same computations over highly dense data, yielding solutions quickly and accurately.
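
    The data-parallel structure of projective rectification (one independent computation per output pixel, the pattern that maps onto one GPU thread per pixel) can be shown with a vectorized NumPy inverse mapping; this illustrates the parallel pattern on the CPU and is not the study's CUDA code.

        import numpy as np

        def rectify(image, H, out_shape):
            # Inverse mapping with nearest-neighbor sampling: every output
            # pixel is back-projected through H**-1 independently.
            h, w = out_shape
            ys, xs = np.mgrid[0:h, 0:w]
            pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=0)
            src = np.linalg.inv(H) @ pts                 # all pixels at once
            sx = (src[0] / src[2]).round().astype(int)
            sy = (src[1] / src[2]).round().astype(int)
            ok = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
            out = np.zeros(out_shape, dtype=image.dtype)
            out.ravel()[ok] = image[sy[ok], sx[ok]]
            return out

        img = np.arange(16, dtype=float).reshape(4, 4)
        H = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        print(rectify(img, H, (4, 4)))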

  14. Low and high-frequency TENS in post-episiotomy pain relief: a randomized, double-blind clinical trial.

    PubMed

    Pitangui, Ana C R; Araújo, Rodrigo C; Bezerra, Michelle J S; Ribeiro, Camila O; Nakano, Ana M S

    2014-01-01

    To evaluate the effectiveness of low-frequency TENS (LFT) and high-frequency TENS (HFT) in post-episiotomy pain relief. A randomized, controlled, double-blind clinical trial with placebo, composed of 33 puerperae with post-episiotomy pain. TENS was applied for 30 minutes to three groups: HFT (100 Hz; 100 µs), LFT (5 Hz; 100 µs), and placebo (PT). Four electrodes were placed in parallel near the episiotomy and four pain evaluations were performed with the numeric rating scale. The first and second evaluations took place before TENS application and immediately after its removal and were done in the resting position and in the activities of sitting and ambulating. The third and fourth evaluations took place 30 and 60 minutes after TENS removal, only in the resting position. Intragroup differences were verified using the Friedman and Wilcoxon tests, and the intergroup analysis employed the Kruskal-Wallis test. In the intragroup analysis, there was no significant difference in the PT during rest, sitting, and ambulation (P>0.05). In the HFT and LFT, a significant difference was observed in all activities (P<0.001). In the intergroup analysis, there was a significant difference in the resting position in the HFT and LFT (P<0.001). In the sitting activity, a significant difference was verified in the second evaluation in the HFT and LFT (P<0.008). No significant difference was verified among the groups in ambulation (P<0.20). LFT and HFT are an effective resource that may be included in the routine of maternity wards.

  15. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. A SA can be very compute demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
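
    Two of the ingredients above, parallel evaluation of parameter combinations and reuse of shared intermediate results, can be sketched with a process pool plus per-worker memoization of an expensive first stage. In the actual system reuse is coordinated across nodes, so the lru_cache here is only a single-process stand-in, and both stages are placeholders.

        from concurrent.futures import ProcessPoolExecutor
        from functools import lru_cache
        import itertools

        @lru_cache(maxsize=None)
        def segment(threshold):
            # Expensive stage 1; cached so runs sharing a threshold reuse it
            # (within one worker process).
            return sum(i % threshold for i in range(2_000_000))   # stand-in

        def classify(seg_result, min_area):
            return seg_result % min_area                          # stand-in stage 2

        def run(params):
            threshold, min_area = params
            return params, classify(segment(threshold), min_area)

        if __name__ == "__main__":
            grid = list(itertools.product([3, 5, 7], [11, 13]))   # parameter study
            with ProcessPoolExecutor(max_workers=4) as ex:
                for params, out in ex.map(run, grid):
                    print(params, out)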

  16. Psychological Outcomes following a nurse-led Preventative Psychological Intervention for critically ill patients (POPPI): protocol for a cluster-randomised clinical trial of a complex intervention

    PubMed Central

    Richards-Belle, Alvin; Mouncey, Paul R; Wade, Dorothy; Brewin, Chris R; Emerson, Lydia M; Grieve, Richard; Harrison, David A; Harvey, Sheila; Howell, David; Mythen, Monty; Sadique, Zia; Smyth, Deborah; Weinman, John; Welch, John; Rowan, Kathryn M

    2018-01-01

    Introduction Acute psychological stress, as well as unusual experiences including hallucinations and delusions, are common in critical care unit patients and have been linked to post-critical care psychological morbidity such as post-traumatic stress disorder (PTSD), depression and anxiety. Little high-quality research has been conducted to evaluate psychological interventions that could alleviate longer-term psychological morbidity in the critical care unit setting. Our research team developed and piloted a nurse-led psychological intervention, aimed at reducing patient-reported PTSD symptom severity and other adverse psychological outcomes at 6 months, for evaluation in the POPPI trial. Methods and analysis This is a multicentre, parallel group, cluster-randomised clinical trial with a staggered roll-out of the intervention. The trial is being carried out at 24 (12 intervention, 12 control) NHS adult, general, critical care units in the UK and is evaluating the clinical effectiveness and cost-effectiveness of a nurse-led preventative psychological intervention in reducing patient-reported PTSD symptom severity and other psychological morbidity at 6 months. All sites deliver usual care for 5 months (baseline period). Intervention group sites are then trained to carry out the POPPI intervention, and transition to delivering the intervention for the rest of the recruitment period. Control group sites deliver usual care for the duration of the recruitment period. The trial also includes a process evaluation conducted independently of the trial team. Ethics and dissemination This protocol was reviewed and approved by the National Research Ethics Service South Central - Oxford B Research Ethics Committee (reference: 15/SC/0287). The first patient was recruited in September 2015 and results will be disseminated in 2018. The results will be presented at national and international conferences and published in peer reviewed medical journals. Trial registration number ISRCTN53448131; Pre-results. PMID:29439083

  17. Obstructive Airways Disease With Air Trapping Among Firefighters Exposed to World Trade Center Dust

    PubMed Central

    Weiden, Michael D.; Ferrier, Natalia; Nolan, Anna; Rom, William N.; Comfort, Ashley; Gustave, Jackson; Zeig-Owens, Rachel; Zheng, Shugi; Goldring, Roberta M.; Berger, Kenneth I.; Cosenza, Kaitlyn; Lee, Roy; Webber, Mayris P.; Kelly, Kerry J.; Aldrich, Thomas K.

    2010-01-01

    Background: The World Trade Center (WTC) collapse produced a massive exposure to respirable particulates in New York City Fire Department (FDNY) rescue workers. This group had spirometry examinations pre-September 11, 2001, and post-September 11, 2001, demonstrating declines in lung function with parallel declines in FEV1 and FVC. To date, the underlying pathophysiologic cause for this has been open to question. Methods: Of 13,234 participants in the FDNY-WTC Monitoring Program, 1,720 (13%) were referred for pulmonary subspecialty evaluation at a single institution. Evaluation included 919 full pulmonary function tests, 1,219 methacholine challenge tests, and 982 high-resolution chest CT scans. Results: At pulmonary evaluation (median 34 months post-September 11, 2001), median values were FEV1 93% predicted (interquartile range [IQR], 83%-101%), FVC 98% predicted (IQR, 89%-106%), and FEV1/FVC 0.78 (IQR, 0.72-0.82). The residual volume (RV) was 123% predicted (IQR, 106%-147%) with nearly all participants having normal total lung capacity, functional residual capacity, and diffusing capacity of carbon monoxide. Also, 1,051/1,720 (59%) had obstructive airways disease based on at least one of the following: FEV1/FVC, bronchodilator responsiveness, hyperreactivity, or elevated RV. After adjusting for age, gender, race, height and weight, and tobacco use, the decline in FEV1 post-September 11, 2001, was significantly correlated with increased RV percent predicted (P < .0001), increased bronchodilator responsiveness (P < .0001), and increased hyperreactivity (P = .0056). CT scans demonstrated bronchial wall thickening that was significantly associated with the decline in FEV1 post-September 11, 2001 (P = .024), increases in hyperreactivity (P < .0001), and increases in RV (P < .0001). Few had evidence for interstitial disease. Conclusions: Airways obstruction was the predominant physiologic finding underlying the reduction in lung function post-September 11, 2001, in FDNY WTC rescue workers presenting for pulmonary evaluation. PMID:19820077

  18. A GPU-Accelerated Parameter Interpolation Thermodynamic Integration Free Energy Method.

    PubMed

    Giese, Timothy J; York, Darrin M

    2018-03-13

    There has been a resurgence of interest in free energy methods motivated by the performance enhancements offered by molecular dynamics (MD) software written for specialized hardware, such as graphics processing units (GPUs). In this work, we exploit the properties of a parameter-interpolated thermodynamic integration (PI-TI) method to connect states by their molecular mechanical (MM) parameter values. This pathway is shown to be better behaved for Mg2+ → Ca2+ transformations than traditional linear alchemical pathways (with and without soft-core potentials). The PI-TI method has the practical advantage that no modification of the MD code is required to propagate the dynamics, and unlike with linear alchemical mixing, only one electrostatic evaluation is needed (e.g., single call to particle-mesh Ewald) leading to better performance. In the case of AMBER, this enables all the performance benefits of GPU-acceleration to be realized, in addition to unlocking the full spectrum of features available within the MD software, such as Hamiltonian replica exchange (HREM). The TI derivative evaluation can be accomplished efficiently in a post-processing step by reanalyzing the statistically independent trajectory frames in parallel for high throughput. We also show how one can evaluate the particle mesh Ewald contribution to the TI derivative evaluation without needing to perform two reciprocal space calculations. We apply the PI-TI method with HREM on GPUs in AMBER to predict pKa values in double stranded RNA molecules and make comparison with experiments. Convergence to under 0.25 units for these systems required 100 ns or more of sampling per window and coupling of windows with HREM. We find that MM charges derived from ab initio QM/MM fragment calculations improve the agreement between calculation and experimental results.
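
    The post-processing step reduces to numerical quadrature: with parameter interpolation U(λ) = (1 - λ)U_A + λU_B, the TI derivative is ∂U/∂λ = U_B - U_A on each reanalyzed frame, and ΔG = ∫₀¹ ⟨∂U/∂λ⟩ dλ. A sketch with invented per-window averages:

        import numpy as np

        lam = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # lambda windows
        dudl = np.array([12.4, 9.1, 6.3, 4.0, 2.2])      # <dU/dlambda> per window (invented)

        # Trapezoidal-rule thermodynamic integration over the windows.
        dG = np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam))
        print(f"dG ~ {dG:.2f} kcal/mol")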

  19. Compensating Atmospheric Turbulence Effects at High Zenith Angles with Adaptive Optics Using Advanced Phase Reconstructors

    NASA Astrophysics Data System (ADS)

    Roggemann, M.; Soehnel, G.; Archer, G.

    Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
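
    For reference, the least-squares reconstructor named above is, in its simplest form, the pseudoinverse of the geometry matrix G that maps phase values to Hartmann-sensor slopes (s = Gφ); the toy 1-D geometry below is an invented example, not the paper's system.

        import numpy as np

        def least_squares_reconstructor(G):
            # R = (G^T G)^+ G^T, so that phi_hat = R s minimizes ||s - G phi||^2.
            return np.linalg.pinv(G)

        # Toy 1-D geometry: slopes are finite differences of 4 phase points.
        G = np.array([[-1.0, 1.0, 0.0, 0.0],
                      [0.0, -1.0, 1.0, 0.0],
                      [0.0, 0.0, -1.0, 1.0]])
        R = least_squares_reconstructor(G)
        phi = np.array([0.0, 0.2, 0.5, 0.4])
        print(R @ (G @ phi))   # recovers phi up to an unobserved piston offset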

  20. Innovative Sensory Methods to Access Acceptability of Mixed Polymer Semisoft Ovules for Microbicide Applications

    PubMed Central

    Zaveri, Toral; Running, Cordelia A; Surapaneni, Lahari; Ziegler, Gregory R; Hayes, John E

    2016-01-01

    Vaginal microbicides are a promising means to prevent the transmission of HIV, empowering women by putting protection under their control. We have been using gel technology to develop microbicides in the intermediate texture space to overcome shortcomings of current solid and liquid forms. We recently formulated semisoft ovules from mixed polymer combinations of carrageenan and Carbopol 940P to overcome some of the flaws with our previous generation of formulations based solely on carrageenan. To determine the user acceptability of the reformulated gels, women first evaluated intact semisoft ovules before evaluating ovules that had been subjected to mechanical crushing to simulate samples that represent post-use discharge. Women then evaluated combinations of intact and discharge samples to understand how ovule textures correlated with texture of the resulting discharge samples. Carbopol concentration directly and inversely correlated with willingness to try for discharge samples and intact samples, respectively. When evaluating intact samples, women focused on the ease of inserting the product and preferred firmer samples; conversely, when evaluating discharge samples, softer samples that resulted in a smooth paste were preferred. Significant differences between samples were lost when evaluating pairs as women made varying tradeoffs between their preference for ease of inserting intact ovules and acceptability of discharge appearance. Evaluating samples that represent different stages of the use cycle reveals a more holistic measure of product acceptability. Studying sensory acceptability in parallel with biophysical performance enables an iterative design process that considers what women prefer in terms of insertion as well as possibility of leakage. PMID:27357703

  1. Innovative sensory methods to access acceptability of mixed polymer semisoft ovules for microbicide applications.

    PubMed

    Zaveri, Toral; Running, Cordelia A; Surapaneni, Lahari; Ziegler, Gregory R; Hayes, John E

    2016-10-01

    Vaginal microbicides are a promising means to prevent the transmission of HIV, empowering women by putting protection under their control. We have been using gel technology to develop microbicides in the intermediate texture space to overcome shortcomings of current solid and liquid forms. We recently formulated semisoft ovules from mixed polymer combinations of carrageenan and Carbopol 940P to overcome some of the flaws with our previous generation of formulations based solely on carrageenan. To determine the user acceptability of the reformulated gels, women first evaluated intact semisoft ovules before evaluating ovules that had been subjected to mechanical crushing to simulate samples that represent post-use discharge. Women then evaluated combinations of intact and discharge samples to understand how ovule textures correlated with texture of the resulting discharge samples. Carbopol concentration directly and inversely correlated with willingness to try for discharge samples and intact samples, respectively. When evaluating intact samples, women focused on the ease of inserting the product and preferred firmer samples; conversely, when evaluating discharge samples, softer samples that resulted in a smooth paste were preferred. Significant differences between samples were lost when evaluating pairs as women made varying trade-offs between their preference for ease of inserting intact ovules and acceptability of discharge appearance. Evaluating samples that represent different stages of the use cycle reveals a more holistic measure of product acceptability. Studying sensory acceptability in parallel with biophysical performance enables an iterative design process that considers what women prefer in terms of insertion as well as possibility of leakage.

  2. NDL-v2.0: A new version of the numerical differentiation library for parallel architectures

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.

    2014-07-01

    We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first- and second-order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library is based on the lightweight OpenMP tasking model, allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes.
    Catalog identifier: AEDG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 63036
    No. of bytes in distributed program, including test data, etc.: 801872
    Distribution format: tar.gz
    Programming language: ANSI Fortran-77, ANSI C, Python.
    Computer: Distributed systems (clusters), shared-memory systems.
    Operating system: Linux, Unix.
    Has the code been vectorized or parallelized?: Yes.
    RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N²) internal storage for Hessian calculations if a task-throttling factor has not been set by the user.
    Classification: 4.9, 4.14, 6.5.
    Catalog identifier of previous version: AEDG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1404
    Does the new version supersede the previous version?: Yes
    Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large-scale and computationally expensive problems.
    Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
    Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1] by incorporating higher-order derivative information, as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters, as in [1, 4]. The runtime of the N-body-type problem changes considerably with the introduction of a longer cut-off between the bodies.
    In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library, as multiple concurrent calls require nested-parallelism support from the OpenMP environment: either their function evaluations will be serialized, or processor oversubscription is likely to occur due to the increased number of OpenMP threads. In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and therefore performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Owing to the code restructuring, the MPI-parallel implementation (and likewise the OpenMP-parallel one) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation was that the library subroutines were collective, synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronization, similar to the BARRIER and TASKWAIT directives of OpenMP. The new MPI implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared- and distributed-memory systems.
    Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added.
    Restrictions: The library uses only double-precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system.
    Specifically, the processes of a single MPI application must have identical address spaces, so that a user function resides at the same virtual address in every process. In addition, address-space layout randomization should not be used for the application.
    Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives; given the level of the desired accuracy, the proper formula is automatically employed.
    Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for OpenMP with 2 threads, and 53 ms and 1.01 s for the MPI-parallel distribution using 2 threads and 2 processes, respectively, with the yield-time for idle workers set to 10 ms.
    References: [1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I.E. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I.E. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
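
    As a concrete illustration of the solution method summarized above (finite differencing with a step chosen to balance truncation against round-off, and one parallel task per function call), a minimal central-difference gradient might look as follows. The step rule and the Rosenbrock test function are illustrative; this is not the NDL/PNDL interface itself.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def rosenbrock(x):
          # Standard test function standing in for an expensive simulation.
          return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

      def grad_central(f, x, pool):
          # O(h^2) central differences with a step h ~ eps^(1/3) that balances
          # truncation against round-off error, in the spirit of NDL's
          # automatic step selection. All 2N evaluations are independent,
          # so they are submitted as parallel tasks.
          x = np.asarray(x, dtype=float)
          h = np.cbrt(np.finfo(float).eps) * np.maximum(np.abs(x), 1.0)
          points = []
          for i in range(x.size):
              for s in (+1.0, -1.0):
                  xp = x.copy()
                  xp[i] += s * h[i]
                  points.append(xp)
          vals = list(pool.map(f, points))       # one task per function call
          return np.array([(vals[2 * i] - vals[2 * i + 1]) / (2.0 * h[i])
                           for i in range(x.size)])

      if __name__ == "__main__":
          with ProcessPoolExecutor() as pool:
              print(grad_central(rosenbrock, [1.2, 1.0, 0.8], pool))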

  3. Reducing door-to-needle times using Toyota's lean manufacturing principles and value stream analysis.

    PubMed

    Ford, Andria L; Williams, Jennifer A; Spencer, Mary; McCammon, Craig; Khoury, Naim; Sampson, Tomoko R; Panagos, Peter; Lee, Jin-Moo

    2012-12-01

    Earlier tissue-type plasminogen activator (tPA) treatment for acute ischemic stroke increases efficacy, prompting national efforts to reduce door-to-needle times. We used lean process improvement methodology to develop a streamlined intravenous tPA protocol. In early 2011, a multidisciplinary team analyzed the steps required to treat patients with acute ischemic stroke with intravenous tPA using value stream analysis (VSA). We directly compared the tPA-treated patients in the "pre-VSA" epoch with the "post-VSA" epoch with regard to baseline characteristics, protocol metrics, and clinical outcomes. The VSA revealed several tPA protocol inefficiencies: routing of patients to room, then to CT, then back to the room; serial processing of workflow; and delays in waiting for laboratory results. On March 1, 2011, a new protocol incorporated changes to minimize delays: routing patients directly to head CT before the patient room, using parallel process workflow, and implementing point-of-care laboratories. In the pre and post-VSA epochs, 132 and 87 patients were treated with intravenous tPA, respectively. Compared with pre-VSA, door-to-needle times and percent of patients treated ≤60 minutes from hospital arrival were improved in the post-VSA epoch: 60 minutes versus 39 minutes (P<0.0001) and 52% versus 78% (P<0.0001), respectively, with no change in symptomatic hemorrhage rate. Lean process improvement methodology can expedite time-dependent stroke care without compromising safety.

  4. Reducing Door-to-Needle Times using Toyota’s Lean Manufacturing Principles and Value Stream Analysis

    PubMed Central

    Ford, Andria L.; Williams, Jennifer A.; Spencer, Mary; McCammon, Craig; Khoury, Naim; Sampson, Tomoko; Panagos, Peter; Lee, Jin-Moo

    2012-01-01

    Background Earlier tPA treatment for acute ischemic stroke increases efficacy, prompting national efforts to reduce door-to-needle times (DNTs). We utilized lean process improvement methodology to develop a streamlined IV tPA protocol. Methods In early 2011, a multi-disciplinary team analyzed the steps required to treat acute ischemic stroke patients with IV tPA, utilizing value stream analysis (VSA). We directly compared the tPA-treated patients in the “pre-VSA” epoch to the “post-VSA” epoch with regard to baseline characteristics, protocol metrics, and clinical outcomes. Results The VSA revealed several tPA protocol inefficiencies: routing of patients to room, then to CT, then back to room; serial processing of work flow; and delays in waiting for lab results. On 3/1/2011, a new protocol incorporated changes to minimize delays: routing patients directly to head CT prior to patient room, utilizing parallel process work-flow, and implementing point-of-care labs. In the pre- and post-VSA epochs, 132 and 87 patients were treated with IV tPA, respectively. Compared to pre-VSA, DNTs and percent of patients treated ≤60 minutes from hospital arrival were improved in the post-VSA epoch: 60 min vs. 39 min (p<0.0001) and 52% vs. 78% (p<0.0001), respectively, with no change in symptomatic hemorrhage rate. Conclusions Lean process improvement methodology can expedite time-dependent stroke care, without compromising safety. PMID:23138440

  5. Effects of implant angulation, impression material, and variation in arch curvature width on implant transfer model accuracy.

    PubMed

    Akalin, Zerrin Fidan; Ozkan, Yasemin Kulak; Ekerim, Ahmet

    2013-01-01

    The effects of implant angulation, impression material, and variation in width of the arch curvature on transfer models were evaluated. Three edentulous maxillary epoxy resin models were fabricated, and six internal-connection implant analogs were placed in different locations and different angulations in each model. In the first model, implants were positioned in the canine, first premolar, and first molar regions, and all analogs were positioned parallel to each other and perpendicular to the horizontal crestal plane (parallel model). In the second model, analogs were positioned in same regions (canine, first premolar, and first molar), but three of them were positioned with 10-degree buccal angulations (versus the horizontal crestal plane) (angular model). In the third model, analogs were inserted in the lateral incisor, canine, and second molar regions and parallel to each other (wide-arch model). Eighteen impressions of each model were made with each of the three materials--condensation silicone, polyvinyl siloxane, and polyether--and impressions were poured and kept at room temperature for 24 hours. They were then observed under a toolmaker's microscope, with epoxy resin models of each group used as references. Distance deformations between implants in each model in the x- and y-axes were recorded separately. Implant angulation deformations were recorded in the x-z plane. Statistical evaluations were performed with analysis of variance and the least significant difference post hoc test. Angular model measurements showed the greatest deformation values (P < .05). All impression materials showed deformation, and the polyether impression models showed statistically significantly less deformation in angular measurements (P < .05). The models with implants placed parallel to each other exhibited greater accuracy than a model with implants placed at angles to each other.

  6. KSC-08pd0424

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  7. KSC-08pd0426

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  8. KSC-08pd0429

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed toward the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  9. KSC-08pd0427

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  10. KSC-08pd0425

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  11. Some thoughts about parallel process and psychotherapy supervision: when is a parallel just a parallel?

    PubMed

    Watkins, C Edward

    2012-09-01

    In a way not done before, Tracey, Bludworth, and Glidden-Tracey ("Are there parallel processes in psychotherapy supervision: An empirical examination," Psychotherapy, 2011, advance online publication, doi:10.1037/a0026246) have shown us that parallel process in psychotherapy supervision can indeed be rigorously and meaningfully researched, and their groundbreaking investigation provides a nice prototype for future supervision studies to emulate. In what follows, I offer a brief complementary comment to Tracey et al., addressing one matter that seems to be a potentially important conceptual and empirical parallel process consideration: When is a parallel just a parallel? PsycINFO Database Record (c) 2012 APA, all rights reserved.

  12. [Opportunity and challenge of post-marketing evaluation of traditional Chinese medicine].

    PubMed

    Du, Xiao-Xi; Song, Hai-Bo; Ren, Jing-Tian; Yang, Le; Guo, Xiao-Xin; Pang, Yu

    2014-09-01

    Post-marketing evaluation is a process that comprehensively and systematically assesses the risks and benefits of a drug in clinical use. Scientific and systematic post-marketing evaluation results not only provide data to support the clinical application of traditional Chinese medicine, but also give the supervision department a reliable basis for developing risk-control measures. With the increasing demand for the treatment and prevention of disease, traditional Chinese medicine has been widely used, and safety issues have also been exposed. How to detect risk signals of traditional Chinese medicine at an early stage, carry out targeted evaluation work, and control risk in a timely manner has become a challenge in the development of the traditional Chinese medicine industry.

  13. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
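
    A minimal stand-in for the idea, using only the Python standard library: one machine serves a shared task queue and idle workstations connect over TCP to pull work. The host name, port, authkey, worker count, and the squaring task are all hypothetical placeholders for a real node-selection and work-distribution scheme.

      import queue
      import sys
      from multiprocessing.managers import BaseManager

      class QueueManager(BaseManager):
          pass

      def serve(port=50000, n_tasks=20, n_workers=4):
          # Runs on one machine: exposes a task queue and a result queue
          # to every workstation on the network.
          tasks, results = queue.Queue(), queue.Queue()
          for n in range(n_tasks):
              tasks.put(n)                      # work items, e.g. frames to reduce
          for _ in range(n_workers):
              tasks.put(None)                   # one shutdown sentinel per worker
          QueueManager.register("tasks", callable=lambda: tasks)
          QueueManager.register("results", callable=lambda: results)
          QueueManager(address=("", port), authkey=b"demo").get_server().serve_forever()

      def work(host, port=50000):
          # Runs on each idle workstation: pull tasks until a sentinel arrives.
          QueueManager.register("tasks")
          QueueManager.register("results")
          m = QueueManager(address=(host, port), authkey=b"demo")
          m.connect()
          tasks, results = m.tasks(), m.results()
          while True:
              n = tasks.get()                   # blocks until the server has work
              if n is None:
                  break
              results.put((n, n * n))           # stand-in for the real computation

      if __name__ == "__main__":
          # "python nodes.py serve" on one host; "python nodes.py work HOST" on the rest.
          if sys.argv[1] == "serve":
              serve()
          else:
              work(sys.argv[2])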

  14. Improving college science teaching through peer coaching and classroom assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sode, J.R.

    Peer coaching involves the observation of one teacher by another. This observation is accompanied by open and honest reflective discussion. The three main components of peer coaching are the pre-conference (for setting observation guidelines and building trust), the observation (the systematic collection of classroom data), and the post-conference (a non-evaluative examination and discussion of the classroom). The non-evaluative post-conference involves an examination of the teaching/learning process that occurred during the observation phase. In effective assessment, information on what and how well students are learning is used to make decisions about overall program improvement and to implement continuous classroom improvement. During peer coaching and assessment, neither the instructor nor the students are formally evaluated. This session presents a sequential process in which the peer coaching steps of pre-conference, observation, and post-conference are combined with assessment to provide instructional guidance. An actual case study, using the student complaint "Lectures are boring and useless," is used to demonstrate the process.

  15. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  16. Parent-Child Parallel-Group Intervention for Childhood Aggression in Hong Kong

    ERIC Educational Resources Information Center

    Fung, Annis L. C.; Tsang, Sandra H. K. M.

    2006-01-01

    This article reports the original evidence-based outcome study on parent-child parallel group-designed Anger Coping Training (ACT) program for children aged 8-10 with reactive aggression and their parents in Hong Kong. This research program involved experimental and control groups with pre- and post-comparison. Quantitative data collection…

  17. Instrumentation, performance visualization, and debugging tools for multiprocessors

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.

    1991-01-01

    The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.

  18. Comprehending and Using Text Ideas: The Order of Processing as Affected by Reader Background and Style.

    ERIC Educational Resources Information Center

    Steinley, Gary

    1989-01-01

    Examines the processing order between the comprehension of a text and the use of comprehended ideas for such thinking tasks as comparing, evaluating, and problem solving. Finds that readers with limited background knowledge read in a more linear fashion than those with extensive background, who read in a parallel manner. (RS)

  19. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    PubMed

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
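
    The two-fold decomposition described above can be emulated in a few lines of software, with Gaussian blurs standing in for the resistive network (whole-image parallel filtering) and per-band processing standing in for the FPGA pipeline. The thresholds and sizes below are illustrative only.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_bands(img, sigma0=1.6, k=np.sqrt(2.0), n_bands=4):
          # Difference-of-Gaussians bands: the frequency decomposition at the
          # heart of SIFT detection. Each band is independent of the others,
          # which is what makes frequency-band parallel processing possible.
          blurred = [gaussian_filter(img, sigma0 * k ** i) for i in range(n_bands + 1)]
          return [blurred[i + 1] - blurred[i] for i in range(n_bands)]

      def candidate_points(band, thresh=0.02):
          # Crude per-band extrema test (scale-adjacent comparisons omitted).
          return np.argwhere(np.abs(band) > thresh)

      img = np.random.default_rng(1).random((64, 64))
      for i, band in enumerate(dog_bands(img)):
          print(f"band {i}: {len(candidate_points(band))} candidate points")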

  20. A Pilot Study for Evaluation of Digital Systems as an Adjunct to Sphygmomanometry for Undergraduate Teaching

    PubMed Central

    Sharma, Renuka; Kapoor, Raj

    2016-01-01

    Objectives: Blood pressure estimation is a key skill for medical practitioners. It is routinely taught to undergraduate medical students using an aneroid sphygmomanometer. However, the conceptual understanding in the practical remains limited. We conducted the following study to evaluate the efficacy of digital data acquisition systems as an adjunct to the sphygmomanometer to teach blood pressure. Methods: Fifty-seven first-year medical students participated in the study. An MCQ test of 15 questions, consisting of 10 conceptual and five factual questions, was administered twice – pre- and post-demonstration of blood pressure measurement using a digital data acquisition system. In addition, qualitative feedback was also obtained. Results: Median scores were 7 (6 - 8) and 3 (1.5 - 4) in pre-test sessions for conceptual and factual questions, respectively. Post-test scores showed a significant improvement in both categories (10 (9 - 10) and 4 (4 - 4.5), respectively, Mann-Whitney U test, p < 0.0001). Student feedback also indicated that the digital system enhanced learning and student participation. Conclusions: Student feedback regarding the demonstrations was uniformly positive, which was also reflected in significantly improved post-test scores. We conclude that parallel demonstration on digital systems and the sphygmomanometer will enhance student engagement and understanding of blood pressure measurement. PMID:27660735

  1. Post-traumatic stress disorder: the neurobiological impact of psychological trauma

    PubMed Central

    Sherin, Jonathan E.; Nemeroff, Charles B.

    2011-01-01

    The classic fight-or-flight response to perceived threat is a reflexive nervous phenomenon that has obvious survival advantages in evolutionary terms. However, the systems that organize the constellation of reflexive survival behaviors following exposure to perceived threat can under some circumstances become dysregulated in the process. Chronic dysregulation of these systems can lead to functional impairment in certain individuals who become “psychologically traumatized” and suffer from post-traumatic stress disorder (PTSD). A body of data accumulated over several decades has demonstrated neurobiological abnormalities in PTSD patients. Some of these findings offer insight into the pathophysiology of PTSD as well as the biological vulnerability of certain populations to develop PTSD. Several pathological features found in PTSD patients overlap with features found in patients with traumatic brain injury, paralleling the shared signs and symptoms of these clinical syndromes. PMID:22034143

  2. How product trial changes quality perception of four new processed beef products.

    PubMed

    Saeed, Faiza; Grunert, Klaus G; Therkildsen, Margrethe

    2013-01-01

    The purpose of this paper is the quantitative analysis of the change in quality perception of four new processed beef products from pre to post trial phases. Based on the Total Food Quality Model, differences in pre and post-trial phases were measured using repeated measures technique for cue evaluation, quality evaluation and purchase motive fulfillment. For two of the tested products, trial resulted in a decline of the evaluation of cues, quality and purchase motive fulfillment compared to pre-trial expectations. For these products, positive expectations were created by giving information about ingredients and ways of processing, which were not confirmed during trial. For the other two products, evaluations on key sensory dimensions based on trial exceeded expectations, whereas the other evaluations remained unchanged. Several demographic factors influenced the pattern of results, notably age and gender, which may be due to underlying differences in previous experience. The study gives useful insights for testing of new processed meat products before market introduction. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132

  4. A comparison of two treatments for childhood apraxia of speech: methods and treatment protocol for a parallel group randomised control trial

    PubMed Central

    2012-01-01

    Background Childhood Apraxia of Speech is an impairment of speech motor planning that manifests as difficulty producing the sounds (articulation) and melody (prosody) of speech. These difficulties may persist through life and are detrimental to academic, social, and vocational development. A number of published single subject and case series studies of speech treatments are available. There are currently no randomised control trials or other well designed group trials available to guide clinical practice. Methods/Design A parallel group, fixed size randomised control trial will be conducted in Sydney, Australia to determine the efficacy of two treatments for Childhood Apraxia of Speech: 1) Rapid Syllable Transition Treatment and the 2) Nuffield Dyspraxia Programme – Third edition. Eligible children will be English speaking, aged 4–12 years with a diagnosis of suspected CAS, normal or adjusted hearing and vision, and no comprehension difficulties or other developmental diagnoses. At least 20 children will be randomised to receive one of the two treatments in parallel. Treatments will be delivered by trained and supervised speech pathology clinicians using operationalised manuals. Treatment will be administered in 1-hour sessions, 4 times per week for 3 weeks. The primary outcomes are speech sound and prosodic accuracy on a customised 292 item probe and the Diagnostic Evaluation of Articulation and Phonology inconsistency subtest administered prior to treatment and 1 week, 1 month and 4 months post-treatment. All post assessments will be completed by blinded assessors. Our hypotheses are: 1) treatment effects at 1 week post will be similar for both treatments, 2) maintenance of treatment effects at 1 and 4 months post will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment, and 3) generalisation of treatment effects to untrained related speech behaviours will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment. This protocol was approved by the Human Research Ethics Committee, University of Sydney (#12924). Discussion This will be the first randomised control trial to test treatment for CAS. It will be valuable for clinical decision-making and providing evidence-based services for children with CAS. Trial Registration Australian New Zealand Clinical Trials Registry: ACTRN12612000744853 PMID:22863021

  5. Evaluating the performance of the particle finite element method in parallel architectures

    NASA Astrophysics Data System (ADS)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high performance implementation for the particle-mesh based method called particle finite element method two (PFEM-2). It consists of a material derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the goodness of the method with large simulations, it is imperative to make use of parallel environments. Parallel strategies for the Finite Element Method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed considering the parallel strategy selected. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems with only one computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, using domain decomposition the memory is automatically isolated, thus avoiding race conditions; however new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particle and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses running over multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.
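
    As a toy illustration of the domain-decomposition idea described above, the sketch below assigns particles to the ranks that own their slab of the domain, so mesh and particles stay co-located; the 1-D slab layout and all sizes are illustrative simplifications of PFEM-2's actual particle-and-mesh partitioning.

      import numpy as np

      def slab_partition(positions, n_ranks, xmin=0.0, xmax=1.0):
          # 1-D slab decomposition: each rank owns one interval of the domain,
          # and every particle is assigned to the rank owning the cell that
          # contains it, so streamline integration needs no remote data until
          # a particle crosses a slab boundary.
          edges = np.linspace(xmin, xmax, n_ranks + 1)
          owner = np.clip(np.searchsorted(edges, positions[:, 0], side="right") - 1,
                          0, n_ranks - 1)
          return [np.flatnonzero(owner == r) for r in range(n_ranks)]

      positions = np.random.default_rng(2).random((1000, 2))   # mock particle cloud
      for rank, idx in enumerate(slab_partition(positions, 4)):
          print(f"rank {rank}: {idx.size} particles")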

  6. Simulation of Powder Layer Deposition in Additive Manufacturing Processes Using the Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbold, E. B.; Walton, O.; Homel, M. A.

    2015-10-26

    This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges, with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screen geometries.

  7. A Stream Tiling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    NASA Astrophysics Data System (ADS)

    Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua

    2017-12-01

    Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, the input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to the limited physical memory resources and the very slow disk transfer rate. In this paper, we propose a stream tiling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process is broken. Then, we realize a streaming framework for the scheduling of the I/O processes and computing units. Herein, each computing unit encapsulates an identical copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, the performed experiment demonstrates that our stream tiling estimation can efficiently alleviate the heavy pressure from the I/O-bound work, and the measured speedups after optimization greatly outperform the directly parallel versions in shared-memory systems with multi-core processors.
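
    A minimal threaded sketch of the streaming idea follows, with a bounded tile queue decoupling the I/O stage from the computing units; the slope-based area formula, tile sizes, and worker count are stand-ins for the paper's estimator and scheduling.

      import queue
      import threading
      import numpy as np

      def produce(tiles, n_tiles, rng):
          # I/O stage: stream tiles (with their halo, the "topological
          # expansion") one at a time so memory use stays bounded; random
          # arrays stand in for reads from disk.
          for tid in range(n_tiles):
              tiles.put((tid, rng.random((256, 256))))
          tiles.put(None)                          # end-of-stream sentinel

      def tile_surface_area(tile, cell=1.0):
          # Per-tile estimate from local slopes (a simple stand-in for the
          # paper's estimator); tiles are independent, so computing units
          # can run in parallel without touching the input stream.
          gy, gx = np.gradient(tile, cell)
          return float(np.sum(cell * cell * np.sqrt(1.0 + gx ** 2 + gy ** 2)))

      def consume(tiles, total, lock):
          while True:
              item = tiles.get()
              if item is None:
                  tiles.put(None)                  # let sibling workers stop too
                  return
              _, tile = item
              area = tile_surface_area(tile)
              with lock:
                  total[0] += area

      if __name__ == "__main__":
          tiles = queue.Queue(maxsize=4)           # bounded: decouples I/O from compute
          total, lock = [0.0], threading.Lock()
          workers = [threading.Thread(target=consume, args=(tiles, total, lock))
                     for _ in range(4)]
          for w in workers:
              w.start()
          produce(tiles, 32, np.random.default_rng(3))
          for w in workers:
              w.join()
          print("estimated surface area:", round(total[0], 1))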

  8. Work stealing for GPU-accelerated parallel programs in a global address space framework: WORK STEALING ON GPU-ACCELERATED SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
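
    For intuition, the work-stealing discipline itself can be reduced to a few lines. The sketch below is a toy shared-memory, CPU-only analogue; the paper's system additionally weighs CPU/GPU execution rates, task distribution, and data-movement costs when scheduling.

      import collections
      import random
      import threading
      import time

      class Worker(threading.Thread):
          # Each worker owns a deque: it pops its own tasks from one end and
          # idle workers steal from the opposite end -- the classic
          # work-stealing discipline.
          def __init__(self, wid, everyone):
              super().__init__()
              self.wid, self.everyone = wid, everyone
              self.deque = collections.deque()
              self.done = 0

          def run(self):
              while True:
                  try:
                      task = self.deque.pop()        # own work: LIFO end
                  except IndexError:
                      task = self.steal()            # idle: rob another worker
                      if task is None:
                          return                     # nothing left anywhere
                  time.sleep(task)                   # stand-in for real computation
                  self.done += 1

          def steal(self):
              victims = [w for w in self.everyone if w is not self]
              random.shuffle(victims)                # pick victims at random
              for victim in victims:
                  try:
                      return victim.deque.popleft()  # steal from the FIFO end
                  except IndexError:
                      continue
              return None

      workers = []
      for i in range(4):
          workers.append(Worker(i, workers))
      for _ in range(200):
          workers[0].deque.append(0.001)             # deliberately skewed load
      for w in workers:
          w.start()
      for w in workers:
          w.join()
      print([w.done for w in workers])               # roughly even despite the skew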

  9. Work stealing for GPU-accelerated parallel programs in a global address space framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.

  10. An application of analyzing the trajectories of two disorders: A parallel piecewise growth model of substance use and attention-deficit/hyperactivity disorder.

    PubMed

    Mamey, Mary Rose; Barbosa-Leiker, Celestina; McPherson, Sterling; Burns, G Leonard; Parks, Craig; Roll, John

    2015-12-01

    Researchers often want to examine 2 comorbid conditions simultaneously. One strategy to do so is through the use of parallel latent growth curve modeling (LGCM). This statistical technique allows for the simultaneous evaluation of 2 disorders to determine the explanations and predictors of change over time. Additionally, a piecewise model can help identify whether there are more than 2 growth processes within each disorder (e.g., during a clinical trial). A parallel piecewise LGCM was applied to self-reported attention-deficit/hyperactivity disorder (ADHD) and self-reported substance use symptoms in 303 adolescents enrolled in cognitive-behavioral therapy treatment for a substance use disorder and receiving either oral-methylphenidate or placebo for ADHD across 16 weeks. Assessing these 2 disorders concurrently allowed us to determine whether elevated levels of 1 disorder predicted elevated levels or increased risk of the other disorder. First, a piecewise growth model measured ADHD and substance use separately. Next, a parallel piecewise LGCM was used to estimate the regressions across disorders to determine whether higher scores at baseline of the disorders (i.e., ADHD or substance use disorder) predicted rates of change in the related disorder. Finally, treatment was added to the model to predict change. While the analyses revealed no significant relationships across disorders, this study explains and applies a parallel piecewise growth model to examine the developmental processes of comorbid conditions over the course of a clinical trial. Strengths of piecewise and parallel LGCMs for other addictions researchers interested in examining dual processes over time are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
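
    The piecewise part of such a model comes down to how time is coded. The sketch below shows one common loading scheme for a two-piece linear growth model; the assessment schedule and knot placement are hypothetical, and in the parallel model the same coding is applied to both processes, whose latent intercepts and slopes are then allowed to covary and predict one another across disorders.

      import numpy as np

      weeks = np.arange(0, 17, 4)             # assessments at weeks 0, 4, 8, 12, 16
      knot = 8                                # hypothetical change point mid-trial

      # Factor loadings for a two-piece (piecewise) linear growth model:
      # an intercept, growth before the knot, and growth after the knot.
      loadings = np.column_stack([
          np.ones_like(weeks),                # intercept:      1, 1, 1, 1, 1
          np.minimum(weeks, knot),            # piece-1 slope:  0, 4, 8, 8, 8
          np.maximum(weeks - knot, 0),        # piece-2 slope:  0, 0, 0, 4, 8
      ])
      print(loadings)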

  11. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of a radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC based parallel computer, called the transputer, is used to investigate issues in real time concurrent processing of radar signals. A transputer network is made up of an array of single instruction stream processors that can be networked in a variety of ways. They are easily reconfigured and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modelling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken on this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis function and other optimizing techniques, an algorithm has been developed that is sufficient for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum based on the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
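
    Of the algorithms named above, the pulse-pair estimator is the most compact, which is what makes it attractive for real time. The sketch below is the textbook form applied to a synthetic return; the pulse repetition time, wavelength, and sign convention are illustrative.

      import numpy as np

      def pulse_pair_velocity(iq, prt, wavelength):
          # Pulse-pair estimate: mean Doppler velocity from the phase of the
          # lag-1 autocorrelation of the complex (I/Q) return -- far cheaper
          # per range gate than forming a full FFT spectrum.
          r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
          return -wavelength / (4.0 * np.pi * prt) * float(np.angle(r1))

      # Synthetic return from a single target at 12 m/s, plus receiver noise.
      prt, wavelength, true_v = 1e-3, 0.05, 12.0
      n = np.arange(256)
      rng = np.random.default_rng(4)
      iq = (np.exp(-4j * np.pi * true_v * prt * n / wavelength)
            + 0.1 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size)))
      print(round(pulse_pair_velocity(iq, prt, wavelength), 2), "m/s")  # ~12.0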

  12. Parallelism Effects and Verb Activation: The Sustained Reactivation Hypothesis

    PubMed Central

    Shapiro, Lewis P.; Love, Tracy

    2010-01-01

    This study investigated the processes underlying parallelism by evaluating the activation of a parallel element (i.e., a verb) throughout and-coordinated sentences. Four points were tested: (1) approximately 1,600ms after the verb in the first conjunct (PP1), (2) immediately following the conjunction (PP2), (3) approximately 1,100ms after the conjunction (PP3), (4) at the end of the second conjunct (PP4). The results revealed no activation at PP1, suggesting activation related to the initial presentation had decayed by this point; however, activation was observed at PP2, PP3, and PP4, suggesting the conjunction elicits reactivation that is sustained throughout the second conjunct. These findings support a specific hypothesis about parallelism, the sustained reactivation hypothesis. This hypothesis claims that, in conjoined structures, a cue that is associated with parallelism elicits the reactivation of material from the first conjunct and that this activation is sustained until integration with the second conjunct can be completed. PMID:19774464

  13. Parallelism effects and verb activation: the sustained reactivation hypothesis.

    PubMed

    Callahan, Sarah M; Shapiro, Lewis P; Love, Tracy

    2010-04-01

    This study investigated the processes underlying parallelism by evaluating the activation of a parallel element (i.e., a verb) throughout and-coordinated sentences. Four points were tested: (1) approximately 1,600 ms after the verb in the first conjunct (PP1), (2) immediately following the conjunction (PP2), (3) approximately 1,100 ms after the conjunction (PP3), (4) at the end of the second conjunct (PP4). The results revealed no activation at PP1, suggesting activation related to the initial presentation had decayed by this point; however, activation was observed at PP2, PP3, and PP4, suggesting the conjunction elicits reactivation that is sustained throughout the second conjunct. These findings support a specific hypothesis about parallelism, the sustained reactivation hypothesis. This hypothesis claims that, in conjoined structures, a cue that is associated with parallelism elicits the reactivation of material from the first conjunct and that this activation is sustained until integration with the second conjunct can be completed.

  14. A Parallel Numerical Micromagnetic Code Using FEniCS

    NASA Astrophysics Data System (ADS)

    Nagy, L.; Williams, W.; Mitchell, L.

    2013-12-01

    Many problems in the geosciences depend on understanding the ability of magnetic minerals to provide stable paleomagnetic recordings. Numerical micromagnetic modelling allows us to calculate the domain structures found in naturally occurring magnetic materials. However, the computational cost rises exceedingly quickly with respect to the size and complexity of the geometries that we wish to model. This problem is compounded by the fact that modern processor design no longer focuses on the speed at which calculations are performed, but rather on the number of computational units amongst which we may distribute our calculations. Consequently, to better exploit modern computational resources, our micromagnetic simulations must "go parallel". We present a parallel and scalable micromagnetics code written using FEniCS. FEniCS is a multinational collaboration involving several institutions (University of Cambridge, University of Chicago, The Simula Research Laboratory, etc.) that aims to provide a set of tools for writing scientific software; in particular software that employs the finite element method. The advantages of this approach are the leveraging of pre-existing projects from the world of scientific computing (PETSc, Trilinos, Metis/Parmetis, etc.) and exposing these so that researchers may pose problems in a manner closer to the mathematical language of their domain. Our code provides a scriptable interface (in Python) that allows users to not only run micromagnetic models in parallel, but also to perform pre- and post-processing of data.
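
    In the spirit of the scriptable interface described above, a minimal legacy-FEniCS (dolfin) script might solve a Poisson problem for the magnetostatic scalar potential of a uniformly magnetized body, with a zero Dirichlet condition as a crude far-field truncation. This is an illustrative sketch, not the authors' code; run under mpirun, FEniCS partitions the mesh and solves in parallel without changes to the script.

      from dolfin import (Constant, DirichletBC, Function, FunctionSpace,
                          TestFunction, TrialFunction, UnitSquareMesh,
                          dot, grad, dx, solve)

      mesh = UnitSquareMesh(64, 64)                      # stand-in for a grain geometry
      V = FunctionSpace(mesh, "Lagrange", 1)

      M = Constant((1.0, 0.0))                           # uniform unit magnetization
      u, v = TrialFunction(V), TestFunction(V)
      a = dot(grad(u), grad(v)) * dx                     # weak form of the Laplacian
      L = dot(M, grad(v)) * dx                           # source term from div(M)
      bc = DirichletBC(V, Constant(0.0), "on_boundary")  # crude far-field truncation

      phi = Function(V)
      solve(a == L, phi, bc)                             # assembled and solved in parallel under MPI
      print("potential range:", phi.vector().min(), phi.vector().max())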

  15. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
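
    The key premise, that parallel and sequential images coincide under linearity, can be checked in a toy delay-and-sum sketch. Below, both images are formed from the same simulated full matrix of a single point scatterer, so the subtracted image is identically zero; with measured data, any residual maps the nonlinearity. The geometry, wave speed, and waveform are all illustrative.

      import numpy as np

      def focused_image(signals, t, elems, grid, c=6.0):
          # Delay-and-sum focusing of a transmit-receive matrix at each pixel.
          # For the sequential image, `signals` is the measured full matrix;
          # for the parallel image, it is the (hypothetical) matrix recorded
          # while all elements fire together with the same focal law.
          img = np.zeros(len(grid))
          for p, (x, z) in enumerate(grid):
              d = np.sqrt((elems - x) ** 2 + z ** 2) / c    # one-way delays
              for i in range(elems.size):
                  for j in range(elems.size):
                      img[p] += np.interp(d[i] + d[j], t, signals[i, j])
          return img

      # Mock data: 8 elements, one point scatterer; in a linear medium the
      # two matrices coincide, so the subtracted image vanishes.
      elems = np.linspace(0.0, 7.0, 8)                      # element x-positions (mm)
      t = np.linspace(0.0, 10.0, 2000)                      # time base (us)
      sx, sz, c = 3.5, 5.0, 6.0                             # scatterer and wave speed
      dd = np.sqrt((elems - sx) ** 2 + sz ** 2) / c
      fmc = np.array([[np.sinc(20.0 * (t - di - dj)) for dj in dd] for di in dd])
      grid = [(x, 5.0) for x in np.linspace(0.0, 7.0, 15)]
      seq = focused_image(fmc, t, elems, grid)
      par = focused_image(fmc, t, elems, grid)              # identical here (linear)
      print("max |parallel - sequential|:", np.abs(par - seq).max())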

  16. The source of dual-task limitations: Serial or parallel processing of multiple response selections?

    PubMed Central

    Marois, René

    2014-01-01

    Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266

  17. MDCT evaluation of potential living renal donor, prior to laparoscopic donor nephrectomy: What the transplant surgeon wants to know?

    PubMed Central

    Ghonge, Nitin P; Gadanayak, Satyabrat; Rajakumari, Vijaya

    2014-01-01

    As Laparoscopic Donor Nephrectomy (LDN) offers several advantages for the donor, such as less post-operative pain, fewer cosmetic concerns and a faster recovery time, there is a growing global trend towards LDN as compared with open nephrectomy. Comprehensive pre-LDN donor evaluation includes assessment of renal morphology, including the pelvi-calyceal and vascular systems. Apart from donor selection, evaluation of the regional anatomy allows precise surgical planning. Due to limited visualization during laparoscopic renal harvesting, detailed pre-transplant evaluation of regional anatomy, including the renal venous anatomy, is of utmost importance. MDCT is the modality of choice for pre-LDN evaluation of potential renal donors. Apart from an appropriate scan protocol and post-processing methods, a detailed understanding of surgical techniques is essential for the radiologist for accurate image interpretation during pre-LDN MDCT evaluation of potential renal donors. This review article describes MDCT evaluation of the potential living renal donor prior to LDN, with emphasis on scan protocol, post-processing methods and image interpretation. The article lays special emphasis on the surgical perspectives of pre-LDN MDCT evaluation and addresses important points that transplant surgeons want to know. PMID:25489130

  18. High-frequency TENS in post-episiotomy pain relief in primiparous puerpere: a randomized, controlled trial.

    PubMed

    Pitangui, Ana Carolina Rodarti; de Sousa, Ligia; Gomes, Flávia Azevedo; Ferreira, Cristine Homsi Jorge; Nakano, Ana Márcia Spanó

    2012-07-01

    We evaluated the effectiveness of high-frequency transcutaneous electrical nerve stimulation (TENS) as a pain relief resource for primiparous puerpere who had experienced natural childbirth with an episiotomy. A controlled, randomized clinical study was conducted in a Brazilian maternity ward. Forty puerpere were randomly divided into two groups: high-frequency TENS and a no-treatment control group. Post-episiotomy pain was assessed in the resting and sitting positions and during ambulation. An 11-point numeric rating scale was administered in three separate evaluations (at the beginning of the study, after 60 min and after 120 min). The McGill pain questionnaire was employed at the beginning and 60 min later. TENS with 100 Hz frequency and 75 µs pulse width for 60 min was employed without causing any pain. Four electrodes were placed in parallel near the episiotomy site, in the area of the pudendal and genitofemoral nerves. The 11-point numeric rating scale and McGill pain questionnaire showed a statistically significant difference in pain reduction in the TENS group, while the control group showed no alteration in the level of discomfort. Hence, high-frequency TENS treatment significantly reduced pain intensity immediately after its use and 60 min later. TENS is a safe and viable non-pharmacological analgesic resource to be employed for pain relief post-episiotomy. The routine use of TENS post-episiotomy is recommended. © 2012 The Authors. Journal of Obstetrics and Gynaecology Research © 2012 Japan Society of Obstetrics and Gynecology.

  19. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
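
    The decomposition idea can be illustrated with a toy point-source Fresnel CGH in which independent tiles of the hologram plane are computed by separate workers. This is a hedged sketch of one layer of parallelism only (the paper distributes work across cluster nodes and GPUs); the wavelength, pixel pitch and object points are made-up values.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    WAVELENGTH = 532e-9              # assumed: green laser
    K = 2.0 * np.pi / WAVELENGTH
    PITCH = 8e-6                     # assumed hologram pixel pitch (m)

    def tile_field(job):
        """Complex field of one horizontal tile of the hologram plane."""
        y0, y1, nx, points = job     # points: rows of (x, y, z, amplitude)
        ys = (np.arange(y0, y1) * PITCH)[:, None]
        xs = (np.arange(nx) * PITCH)[None, :]
        field = np.zeros((y1 - y0, nx), dtype=complex)
        for px, py, pz, amp in points:
            r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
            field += amp * np.exp(1j * K * r) / r      # spherical wavelet
        return field

    if __name__ == "__main__":
        # Two made-up object points; the paper handles full 3D object data.
        pts = np.array([[0.0, 0.0, 0.1, 1.0],
                        [100e-6, 50e-6, 0.12, 1.0]])
        ny = nx = 512                # toy size; the paper scales to gigapixels
        edges = np.linspace(0, ny, 9, dtype=int)
        jobs = [(edges[i], edges[i + 1], nx, pts) for i in range(8)]
        with ProcessPoolExecutor(max_workers=8) as pool:
            hologram = np.vstack(list(pool.map(tile_field, jobs)))
        # A phase-only CGH would quantize np.angle(hologram) for display.
    ```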

  20. 76 FR 20750 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-13

    ... on the Exchange's Internet Web site at http://www.directedge.com . \\3\\ A Member is any registered... strategy to the ROUD/ROUE routing strategies is Parallel D or Parallel 2D with the DRT (Dark routing... one method. The Commission will post all comments on the Commission's Internet Web site ( http://www...

  1. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  2. Recent advances in applying mass spectrometry and systems biology to determine brain dynamics.

    PubMed

    Scifo, Enzo; Calza, Giulio; Fuhrmann, Martin; Soliymani, Rabah; Baumann, Marc; Lalowski, Maciej

    2017-06-01

    Neurological disorders encompass various pathologies which disrupt normal brain physiology and function. Poor understanding of their underlying molecular mechanisms and their societal burden argues for the necessity of novel prevention strategies, early diagnostic techniques and alternative treatment options to reduce the scale of their expected increase. Areas covered: This review scrutinizes mass spectrometry based approaches used to investigate brain dynamics in various conditions, including neurodegenerative and neuropsychiatric disorders. Different proteomics workflows for the isolation/enrichment of specific cell populations or brain regions and for sample processing; mass spectrometry technologies for differential proteome quantitation; analysis of post-translational modifications; and imaging approaches in the brain are critically deliberated. Future directions, including analysis of cellular sub-compartments, targeted MS platforms (selected/parallel reaction monitoring) and use of mass cytometry are also discussed. Expert commentary: Here, we summarize and evaluate current mass spectrometry based approaches for determining brain dynamics in health and disease states, with a focus on neurological disorders. Furthermore, we provide insight on current trends and new MS technologies with potential to improve this analysis.

  3. Design and process evaluation of an informative website tailored to breast cancer survivors' and intimate partners' post-treatment care needs.

    PubMed

    Pauwels, Evelyn; Van Hoof, Elke; Charlier, Caroline; Lechner, Lilian; De Bourdeaudhuij, Ilse

    2012-10-03

    On-line provision of information during the transition phase after treatment carries great promise in meeting shortcomings in post-treatment care for breast cancer survivors and their partners. The objectives of this study are to describe the development and process evaluation of a tailored informative website and to assess which characteristics of survivors and partners, participating in the feasibility study, are related to visiting the website. The development process included quantitative and qualitative assessments of survivors' and partners' care needs and preferences. Participants' use and evaluation of the website were explored by conducting baseline and post-measurements. During the intervening 10-12 weeks 57 survivors and 28 partners were granted access to the website. Fifty-seven percent (n=21) of survivors who took part in the post-measurement indicated that they had visited the website. Compared to non-visitors (n=16), they were more likely to have a partner and a higher income, reported higher levels of self-esteem and had completed treatment for a longer period of time. Partners who consulted the on-line information (42%, n=8) were younger and reported lower levels of social support compared to partners who did not visit the website (n=11). Visitors generally evaluated the content and lay-out positively, yet some believed the information was incomplete and impersonal. The website reached only about half of survivors and partners, yet was mostly well-received. Besides other ways of providing information and support, a website containing clear-cut and tailored information could be a useful tool in post-treatment care provision.

  4. Variations in disaster evacuation behavior: public responses versus private sector executive decision-making processes.

    PubMed

    Drabek, T E

    1992-06-01

    Data obtained from 65 executives working for tourism firms in three sample communities permitted comparison with the public warning response literature regarding three topics: disaster evacuation planning, initial warning responses, and disaster evacuation behavior. Disaster evacuation planning was reported by nearly all of these business executives, although it was highly variable in content, completeness, and formality. Managerial responses to post-disaster warnings paralleled the type of complex social processes that have been documented within the public response literature, except that warning sources and confirmation behavior were significantly affected by contact with authorities. Five key areas of difference were discovered in disaster evacuation behavior pertaining to: influence of planning, firm versus family priorities, shelter selection, looting concerns, and media contacts.

  5. Time Dependent Simulation of Turbopump Flows

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William; Williams, Robert

    2001-01-01

    The objective of this viewgraph presentation is to enhance incompressible flow simulation capability for developing aerospace vehicle components, especially unsteady flow phenomena associated with high-speed turbopumps. Unsteady Space Shuttle Main Engine (SSME) rig-1 simulations covering 1 1/2 rotations were completed for the 34.3-million-grid-point model. The moving boundary capability is obtained by using the DCF module. MLP shared-memory parallelism has been implemented and benchmarked in INS3D. A scripting capability from CAD geometry to solution has been developed. Data compression is applied to reduce data size in post-processing, and fluid/structure coupling has been initiated.

  6. Evaluating post-wildfire hydrologic recovery using ParFlow in southern California

    NASA Astrophysics Data System (ADS)

    Lopez, S. R.; Kinoshita, A. M.; Atchley, A. L.

    2016-12-01

    Wildfires are naturally occurring hazards that can have catastrophic impacts. They can alter the natural processes within a watershed, such as surface runoff and subsurface water storage. Generally, post-fire hydrologic models are either one-dimensional, empirically-based models, or two-dimensional, conceptually-based models with lumped parameter distributions. These models are useful in providing runoff measurements at the watershed outlet; however, they do not provide distributed hydrologic simulation at each point within the watershed. This research demonstrates how ParFlow, a three-dimensional, distributed hydrologic model, can simulate post-fire hydrologic processes by representing soil burn severity (via hydrophobicity) and vegetation recovery as they vary both spatially and temporally. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This model is initially developed for a hillslope in Devil Canyon, burned in 2003 by the Old Fire in southern California (USA). The domain uses a 2 m cell-size resolution over a 25 m by 25 m lateral extent. The subsurface reaches 2 m and is assigned a variable cell thickness, allowing an explicit consideration of the soil burn severity throughout the stages of recovery and vegetation regrowth. Vegetation regrowth is represented by satellite-based Enhanced Vegetation Index (EVI) products. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated and will be used as a basis for developing a watershed-scale model. Long-term continuous simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management.

  7. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for image targeting, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide-FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing.

  8. Improvements in Low-cost Ultrasonic Measurements of Blood Flow in "by-passes" Using Narrow & Broad Band Transit-time Procedures

    NASA Astrophysics Data System (ADS)

    Ramos, A.; Calas, H.; Diez, L.; Moreno, E.; Prohías, J.; Villar, A.; Carrillo, E.; Jiménez, A.; Pereira, W. C. A.; Von Krüger, M. A.

    Ischemic cardiopathology is an important cause of death, but the re-vascularization of coronary arteries (by-pass operation) is a useful solution that reduces associated morbidity and improves quality of life in patients. During these surgeries, the flow in coronary vessels must be measured using non-invasive ultrasonic methods, known as transit-time flow measurements (TTFM), which are currently the most accurate option. TTFM is a common intra-operative tool, in conjunction with classic Doppler velocimetry, for checking the quality of these surgical procedures for implanting grafts in parallel with the coronary arteries. This work shows important improvements in flow-metering achieved in our research laboratories (CSIC, ICIMAF, COPPE) and tested under real surgical conditions in Cardiocentro-HHA, for both narrowband (NB) and broadband (BB) regimes, by applying results of a CYTED multinational project (Ultrasonic & computational systems for cardiovascular diagnostics). Mathematical models and phantoms were created to evaluate flow measurements accurately, under laboratory conditions, before our new electronic designs and low-cost implementations, which improve on previous TTFM systems and include analog detection, acquisition & post-processing, and a portable PC. Both regimes (NB and BB), with complementary performances for different conditions, were considered. Finally, specific software was developed to offer facilities to surgeons in their interventions.
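
    The transit-time principle behind TTFM can be stated compactly: for an acoustic path of length L inclined at angle θ to the vessel axis, the upstream and downstream transit times differ in proportion to the mean flow velocity. The sketch below encodes the textbook relation, not the group's instrument code, and the numbers are illustrative only.

    ```python
    import math

    def ttfm_velocity(t_up: float, t_down: float, path_len: float,
                      theta_rad: float) -> float:
        """Mean flow velocity (m/s) from up/downstream transit times (s):
        v = L * (t_up - t_down) / (2 * cos(theta) * t_up * t_down)."""
        return path_len * (t_up - t_down) / (2.0 * math.cos(theta_rad)
                                             * t_up * t_down)

    def ttfm_flow(velocity: float, diameter: float) -> float:
        """Volumetric flow (m^3/s), assuming a circular vessel cross-section."""
        return velocity * math.pi * (diameter / 2.0) ** 2

    # Illustrative numbers only: 4 mm graft, 1 cm acoustic path at 60 degrees.
    v = ttfm_velocity(6.6700e-6, 6.6695e-6, 0.01, math.radians(60.0))
    q_ml_min = ttfm_flow(v, 0.004) * 1e6 * 60.0    # roughly 85 mL/min here
    ```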

  9. Essential issues in multiprocessor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.D.; Peir, J.K.

    1985-06-01

    During the past several years, a great number of proposals have been made with the objective to increase supercomputer performance by an order of magnitude on the basis of a utilization of new computer architectures. The present paper is concerned with a suitable classification scheme for comparing these architectures. It is pointed out that there are basically four schools of thought as to the most important factor for an enhancement of computer performance. According to one school, the development of faster circuits will make it possible to retain present architectures, except, possibly, for a mechanism providing synchronization of parallel processes. A second school assigns priority to the optimization and vectorization of compilers, which will detect parallelism and help users to write better parallel programs. A third school believes in the predominant importance of new parallel algorithms, while the fourth school supports new models of computation. The merits of the four approaches are critically evaluated. 50 references.

  10. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  11. Design of experiments confirms optimization of lithium administration parameters for enhanced fracture healing.

    PubMed

    Vachhani, Kathak; Pagotto, Andrea; Wang, Yufa; Whyne, Cari; Nam, Diane

    2018-01-03

    Fracture healing is a lengthy process which fails in 5-10% of cases. Lithium, a low-cost therapeutic used in psychiatric medicine, up-regulates the canonical Wingless pathway crucial for osteoblastic mineralization in fracture healing. A design-of-experiments (DOE) methodology was used to optimize lithium administration parameters (dose, onset time and treatment duration) to enhance healing in a rat femoral fracture model. In the previously completed first stage (screening), onset time was found to significantly impact healing, with later (day 7 vs. day 3 post-fracture) treatment yielding improved maximum yield torque. The greatest strength was found in healing femurs treated at day 7 post-fracture with a low lithium dose (20 mg/kg) for a 2-week duration. This paper describes the findings of the second (optimization) and third (verification) stages of the DOE investigation. Closed traumatic diaphyseal femur fractures were induced in 3-month-old rats. Healing was evaluated on day 28 post-fracture by CT-based morphometry and torsional loading. In optimization, later onset times of day 10 and 14 did not perform as well as day 7 onset. As such, efficacy of the best regimen (20 mg/kg dose given at day 7 onset for a 2-week duration) was reassessed in a distinct cohort of animals to complete the DOE verification. A significant 44% higher maximum yield torque (primary outcome) was seen with optimized lithium treatment vs. controls, which paralleled the 46% improvement seen in the screening stage. Successful completion of this robustly designed preclinical DOE study delineates the optimal lithium regimen for enhancing preclinical long-bone fracture healing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables the efficient combination of parallel storage access routines and sequential image-processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  13. Range-wide parallel climate-associated genomic clines in Atlantic salmon

    PubMed Central

    Stanley, Ryan R. E.; Wringe, Brendan F.; Guijarro-Sabaniel, Javier; Bourret, Vincent; Bernatchez, Louis; Bentzen, Paul; Beiko, Robert G.; Gilbey, John; Clément, Marie; Bradbury, Ian R.

    2017-01-01

    Clinal variation across replicated environmental gradients can reveal evidence of local adaptation, providing insight into the demographic and evolutionary processes that shape intraspecific diversity. Using 1773 genome-wide single nucleotide polymorphisms we evaluated latitudinal variation in allele frequency for 134 populations of North American and European Atlantic salmon (Salmo salar). We detected 84 (4.74%) and 195 (11%) loci showing clinal patterns in North America and Europe, respectively, with 12 clinal loci in common between continents. Clinal single nucleotide polymorphisms were evenly distributed across the salmon genome and logistic regression revealed significant associations with latitude and seasonal temperatures, particularly average spring temperature in both continents. Loci displaying parallel clines were associated with several metabolic and immune functions, suggesting a potential basis for climate-associated adaptive differentiation. These climate-based clines collectively suggest evidence of large-scale environmental associated differences on either side of the North Atlantic. Our results support patterns of parallel evolution on both sides of the North Atlantic, with evidence of both similar and divergent underlying genetic architecture. The identification of climate-associated genomic clines illuminates the role of selection and demographic processes on intraspecific diversity in this species and provides a context in which to evaluate the impacts of climate change. PMID:29291123

  14. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low-energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
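
    The matrix chain being accelerated is the three-phase relation i = V T D s. A minimal NumPy sketch of that chain is given below; the shapes follow the common Klems/Reinhart basis sizes but are otherwise illustrative, and the paper's OpenCL CPU/GPU implementation is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_window, n_sky, n_steps = 100, 145, 146, 8760

    V = rng.random((n_sensors, n_window))   # view matrix: sensors <- window
    T = rng.random((n_window, n_window))    # BSDF transmission matrix
    D = rng.random((n_window, n_sky))       # daylight matrix: window <- sky
    S = rng.random((n_sky, n_steps))        # one sky vector per timestep

    # Associativity lets the small factors be folded once and reused for every
    # timestep, the kind of reordering that matters before GPU offloading.
    VTD = V @ T @ D
    illuminance = VTD @ S                   # (n_sensors, n_steps) result
    ```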

  15. Quantum supercharger library: hyper-parallelism of the Hartree-Fock method.

    PubMed

    Fernandes, Kyle D; Renison, C Alicia; Naidoo, Kevin J

    2015-07-05

    We present here a set of algorithms that completely rewrites the Hartree-Fock (HF) computations common to many legacy electronic structure packages (such as GAMESS-US, GAMESS-UK, and NWChem) into a massively parallel compute scheme that takes advantage of hardware accelerators such as Graphical Processing Units (GPUs). The HF compute algorithm is core to a library of routines that we name the Quantum Supercharger Library (QSL). We briefly evaluate the QSL's performance and report that it accelerates an HF 6-31G Self-Consistent Field (SCF) computation by up to 20 times for medium-sized molecules (such as a buckyball) when compared with mature Central Processing Unit algorithms available in the legacy codes in regular use by researchers. It achieves this acceleration by massive parallelization of the one- and two-electron integrals and optimization of the SCF and Direct Inversion in the Iterative Subspace routines through the use of GPU linear algebra libraries. © 2015 Wiley Periodicals, Inc.
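
    For orientation, the SCF cycle that such GPU libraries accelerate has the shape below: a dense, toy NumPy/SciPy sketch of closed-shell Hartree-Fock, not QSL's implementation. `H`, `S` and the two-electron integrals `ERI` are assumed to come from any integrals package (chemists' notation, 8-fold symmetric).

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def scf(H, S, ERI, n_occ, max_iter=50, tol=1e-8):
        """Closed-shell Hartree-Fock SCF. ERI indexed as (pq|rs)."""
        n = H.shape[0]
        P = np.zeros((n, n))                         # density matrix
        E_old = 0.0
        for _ in range(max_iter):
            J = np.einsum("pqrs,rs->pq", ERI, P)     # Coulomb term
            K = np.einsum("prqs,rs->pq", ERI, P)     # exchange term
            F = H + 2.0 * J - K                      # Fock matrix
            eps, C = eigh(F, S)                      # solve F C = S C eps
            C_occ = C[:, :n_occ]
            P = C_occ @ C_occ.T                      # rebuild density
            E = np.sum(P * (H + F))                  # electronic energy
            if abs(E - E_old) < tol:                 # (DIIS omitted here)
                break
            E_old = E
        return E, eps, C
    ```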

  16. The Goddard Space Flight Center Program to develop parallel image processing systems

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1972-01-01

    Parallel image processing which is defined as image processing where all points of an image are operated upon simultaneously is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.

  17. Power of food moderates food craving, perceived control, and brain networks following a short-term post-absorptive state in older adults.

    PubMed

    Rejeski, W Jack; Burdette, Jonathan; Burns, Marley; Morgan, Ashley R; Hayasaka, Satoru; Norris, James; Williamson, Donald A; Laurienti, Paul J

    2012-06-01

    The Power of Food Scale (PFS) is a new measure that assesses the drive to consume highly palatable food in an obesogenic food environment. The data reported in this investigation evaluate whether the PFS moderates state cravings, control beliefs, and brain networks of older, obese adults following either a short-term post-absorptive state, in which participants were only allowed to consume water, or a short-term energy surfeit treatment condition, in which they consumed BOOST®. We found that the short-term post-absorptive condition, in which participants consumed water only, was associated with increases in state cravings for desired food, a reduction in participants' confidence related to the control of eating behavior, and shifts in brain networks that parallel what is observed with other addictive behaviors. Furthermore, individuals who scored high on the PFS were at an increased risk for experiencing these effects. Future research is needed to examine the eating behavior of persons who score high on the PFS and to develop interventions that directly target food cravings. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. A study of process-related electrical defects in SOI lateral bipolar transistors fabricated by ion implantation

    NASA Astrophysics Data System (ADS)

    Yau, J.-B.; Cai, J.; Hashemi, P.; Balakrishnan, K.; D'Emic, C.; Ning, T. H.

    2018-04-01

    We report a systematic study of process-related electrical defects in symmetric lateral NPN transistors on silicon-on-insulator (SOI) fabricated using ion implantation for all the doped regions. A primary objective of this study is to see if pipe defects (emitter-collector shorts caused by locally enhanced dopant diffusion) are a showstopper for such bipolar technology. Measurements of IC-VCE and Gummel currents in parallel-connected transistor chains as a function of post-fabrication rapid thermal anneal cycles allow several process-related electrical defects to be identified. They include defective emitter-base and collector-base diodes, pipe defects, and defects associated with a dopant-deficient region in an extrinsic base adjacent to its intrinsic base. There is no evidence of pipe defects being a major concern in SOI lateral bipolar transistors.

  19. KSC-08pd0423

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- On the Shuttle Landing Facility runway at NASA's Kennedy Space Center, a tractor tow vehicle is backed up to space shuttle Atlantis for towing to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  20. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    PubMed

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  1. Managing fear in public health campaigns: a theory-based formative evaluation process.

    PubMed

    Cho, Hyunyi; Witte, Kim

    2005-10-01

    The HIV/AIDS infection rate of Ethiopia is one of the world's highest. Prevention campaigns should systematically incorporate and respond to the at-risk population's existing beliefs, emotions, and perceived barriers in the message design process to effectively promote behavior change. However, guidelines for conducting formative evaluation that are grounded in proven risk communication theory and empirical data analysis techniques are hard to find. This article provides a five-step formative evaluation process that translates theory and research for developing effective messages for behavior change. Guided by the extended parallel process model, the five-step process helps message designers manage the public's fear surrounding issues such as HIV/AIDS. An entertainment education project that used the process to design HIV/AIDS prevention messages for Ethiopian urban youth is reported. Data were collected in five urban regions of Ethiopia and analyzed according to the process to develop key messages for a 26-week radio soap opera.

  2. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
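
    The flavour of the iterative reconstruction being parallelized can be shown with a dense toy SIRT update. This is a pedagogical sketch only: Trace uses ray-based projectors with MPI across nodes and threads within each node, and its replicated reconstruction objects are not modelled here.

    ```python
    import numpy as np

    def sirt(A, b, n_iter=100):
        """SIRT iteration: x <- x + C * A^T * R * (b - A x),
        with R and C the inverse row and column sums of A."""
        R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)     # per-ray normalization
        C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)     # per-voxel normalization
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += C * (A.T @ (R * (b - A @ x)))
        return x

    # Toy problem: 16x16 image flattened to 256 voxels, 600 random "rays".
    # In a distributed engine, blocks of rows (sinogram chunks) would be
    # assigned to different processes, with a reduction over partial updates.
    rng = np.random.default_rng(2)
    A = rng.random((600, 256))
    x_true = rng.random(256)
    x_rec = sirt(A, A @ x_true)
    ```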

  3. Experimental study on evaluation and optimization of tilt angle of parallel-plate electrodes using electrocoagulation device for oily water demulsification.

    PubMed

    Liu, Yang; Jiang, Wen-Ming; Yang, Jie; Li, Yu-Xing; Chen, Ming-Can; Li, Jian-Na

    2017-08-01

    Tilt angle of parallel-plate electrodes (APE) is very important as it improves the economy of diffusion controlled Electrocoagulation (EC) processes. This study aimed to evaluate and optimize APE of a self-made EC device including integrally rotary electrodes, at a fixed current density of 120 Am -2 . The APEs investigated in this study were selected at 0°, 30°, 45°, 60°, 90°, and a special value (α (d) ) which was defined as a special orientation of electrode when the upper end of anode and the lower end of cathode is in a line vertical to the bottom of reactor. Experiments were conducted to determine the optimum APE for demulsification process using four evaluation indexes, as: oil removal efficiency in the center between electrodes; energy consumption and Al consumption, and besides, a novel universal evaluation index named as evenness index of oil removal efficiency employed to fully reflect distribution characteristics of demulsification efficiency. At a given plate spacing of 4 cm, the optimal APE was found to be α (d) because of its potential of enhancing the mass transfer process within whole EC reactor without addition, external mechanical stirring energy, and finally the four evaluation indexed are 97.07%, 0.11 g Al g -1 oil, 2.99 kwhkg -1 oil, 99.97% and 99.97%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. An Experimental Test of the Roles of Audience Involvement and Message Frame in Shaping Public Reactions to Celebrity Illness Disclosures.

    PubMed

    Myrick, Jessica Gall

    2018-04-13

    Much research has investigated what happens when celebrities disclose an illness (via media) to the public. While audience involvement (i.e., identification and parasocial relationships) is often the proposed mechanism linking illness disclosures with audience behavior change, survey designs have prevented researchers from understanding if audience involvement prior to the illness disclosure actually predicts post-disclosure emotions, cognitions, and behaviors. Rooted in previous work on audience involvement as well as the Extended Parallel Process Model, the present study uses a national online experiment (N = 1,068) to test how pre-disclosure audience involvement may initiate post-disclosure effects for the message context of skin cancer. The data demonstrate that pre-disclosure audience involvement as well as the celebrity's framing of the disclosure can shape emotional responses (i.e., fear and hope), and that cognitive perceptions of the illness itself also influence behavioral intentions.

  5. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
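
    The "local frequency coding mechanisms" referred to above are conventionally idealized as Gabor filters. The sketch below (illustrative parameters, not the project's model) builds a Gabor patch and a phase-invariant local-energy response from a quadrature pair.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor(size=31, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
        """Gabor patch: cosine carrier at `wavelength` under a Gaussian window."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # axis along the carrier
        envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        return envelope * np.cos(2.0 * np.pi * xr / wavelength + phase)

    def local_energy(image, **kw):
        """Phase-invariant local-frequency response from a quadrature pair."""
        even = fftconvolve(image, gabor(phase=0.0, **kw), mode="same")
        odd = fftconvolve(image, gabor(phase=np.pi / 2.0, **kw), mode="same")
        return even ** 2 + odd ** 2

    response = local_energy(np.random.default_rng(0).random((128, 128)),
                            wavelength=8.0, theta=np.pi / 4.0)
    ```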

  6. Time-Resolved 3D Quantitative Flow MRI of the Major Intracranial Vessels: Initial Experience and Comparative Evaluation at 1.5T and 3.0T in Combination With Parallel Imaging

    PubMed Central

    Bammer, Roland; Hope, Thomas A.; Aksoy, Murat; Alley, Marcus T.

    2012-01-01

    Exact knowledge of blood flow characteristics in the major cerebral vessels is of great relevance for diagnosing cerebrovascular abnormalities. This involves the assessment of hemodynamically critical areas as well as the derivation of biomechanical parameters such as wall shear stress and pressure gradients. A time-resolved, 3D phase-contrast (PC) MRI method using parallel imaging was implemented to measure blood flow in three dimensions at multiple instances over the cardiac cycle. The 4D velocity data obtained from 14 healthy volunteers were used to investigate dynamic blood flow with the use of multiplanar reformatting, 3D streamlines, and 4D particle tracing. In addition, the effects of magnetic field strength, parallel imaging, and temporal resolution on the data were investigated in a comparative evaluation at 1.5T and 3T using three different parallel imaging reduction factors and three different temporal resolutions in eight of the 14 subjects. Studies were consistently performed faster at 3T than at 1.5T because of better parallel imaging performance. A high temporal resolution (65 ms) was required to follow dynamic processes in the intracranial vessels. The 4D flow measurements provided a high degree of vascular conspicuity. Time-resolved streamline analysis provided features that have not been reported previously for the intracranial vasculature. PMID:17195166

  7. The Principal's Role in the Post-Evaluation Process.--How Does the Principal Engage in the Work Carried out after the Schools Self-Evaluation?

    ERIC Educational Resources Information Center

    Emstad, Anne Berit

    2011-01-01

    This article refers to a study on how the school principal engaged in the process after a school self-evaluation. The study examined how two primary schools followed up the evaluation. Although they both used the same evaluation tool, the schools' understanding and application of results differed greatly. This paper describes and discusses the…

  8. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context while following different threads of execution and working on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
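
    The thread idea (shared objects, one context, different threads of execution over different data) can be illustrated with Python's standard library. This is a hedged sketch only; the paper's system generates and schedules such structure automatically via agents, which is not reproduced here.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # One image shared read-only by all threads (NumPy releases the GIL for
    # most array reductions, so tiles genuinely run concurrently).
    image = np.random.default_rng(3).random((2048, 2048))

    def analyze_tile(bounds):
        """Each thread works on a different tile of the shared image."""
        y0, y1, x0, x1 = bounds
        tile = image[y0:y1, x0:x1]
        return bounds, tile.mean(), tile.std()   # stand-ins for analysis ops

    tiles = [(y, y + 512, x, x + 512)
             for y in range(0, 2048, 512) for x in range(0, 2048, 512)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(analyze_tile, tiles))
    ```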

  9. Pilot study of the pharmacokinetics of betel nut and betel quid biomarkers in saliva, urine, and hair of betel consumers.

    PubMed

    Franke, Adrian A; Li, Xingnan; Lai, Jennifer F

    2016-10-01

    Approximately 600 million people worldwide practise the carcinogenic habit of betel nut/quid chewing. Carcinogenic N-nitroso compounds have been identified in saliva or urine of betel chewers and the betel alkaloid arecoline in hair from habitual betel quid chewers. However, the pharmacokinetic parameters of these compounds have been little explored. Assessment of betel use by biomarkers is urgently needed to evaluate the effectiveness of cessation programmes aimed at reducing betel consumption to decrease the burden of cancers in regions of high betel consumption. In the search for biomarkers of betel consumption, we measured by liquid chromatography-mass spectrometry (LC-MS) the appearance and disappearance of betel alkaloids (characteristic for betel nuts), N-nitroso compounds, and chavibetol (characteristic for Piper Betle leaves) in saliva (n=4), hair (n=2), and urine (n=1) of occasional betel nut/quid chewers. The betel alkaloids arecoline, guvacoline, guvacine, and arecaidine were detected in saliva of all four participants and peaked within the first 2 h post-chewing before returning to baseline levels after 8 h. Salivary chavibetol was detected in participants consuming Piper Betle leaves in their quid and peaked ~1 h post-chewing. Urinary arecoline, guvacoline, and arecaidine excretion paralleled saliva almost exactly while chavibetol glucuronide excretion paralleled salivary chavibetol. No betel nut related compounds were detected in the tested hair samples using various extraction methods. From these preliminary results, we conclude that betel exposure can only be followed on a short-term basis (≤8 h post-chewing) using the applied biomarkers from urine and saliva while the feasibility of using hair has yet to be validated. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Pilot study of the pharmacokinetics of betel nut and betel quid biomarkers in saliva, urine, and hair of betel consumers

    PubMed Central

    Franke, Adrian A.; Li, Xingnan; Lai, Jennifer F.

    2016-01-01

    Approximately 600 million people worldwide practise the carcinogenic habit of betel nut/quid chewing. Carcinogenic N-nitroso compounds have been identified in saliva or urine of betel chewers and the betel alkaloid arecoline in hair from habitual betel quid chewers. However, the pharmacokinetic parameters of these compounds have been little explored. Assessment of betel use by biomarkers is urgently needed to evaluate the effectiveness of cessation programmes aimed at reducing betel consumption to decrease the burden of cancers in regions of high betel consumption. In the search for biomarkers of betel consumption, we measured by liquid chromatography-mass spectrometry (LC-MS) the appearance and disappearance of betel alkaloids (characteristic for betel nuts), N-nitroso compounds, and chavibetol (characteristic for Piper Betle leaves) in saliva (n=4), hair (n=2), and urine (n=1) of occasional betel nut/quid chewers. The betel alkaloids arecoline, guvacoline, guvacine, and arecaidine were detected in saliva of all four participants and peaked within the first 2 h post-chewing before returning to baseline levels after 8 h. Salivary chavibetol was detected in participants consuming Piper Betle leaves in their quid and peaked ~1 h post-chewing. Urinary arecoline, guvacoline, and arecaidine excretion paralleled saliva almost exactly while chavibetol glucuronide excretion paralleled salivary chavibetol. No betel nut related compounds were detected in the tested hair samples using various extraction methods. From these preliminary results, we conclude that betel exposure can only be followed on a short-term basis (≤8 h post-chewing) using the applied biomarkers from urine and saliva while the feasibility of using hair has yet to be validated. PMID:26619803

  11. Revised Extended Grid Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, Roger L.

    The Revised Eolus Grid Library (REGL) is a mesh-tracking library that was developed for use with the MCNP6™ computer code so that (radiation) particles can track on an unstructured mesh. The unstructured mesh is a finite element representation of any geometric solid model created with a state-of-the-art CAE/CAD tool. The mesh-tracking library is written using modern Fortran and programming standards; the library is Fortran 2003 compliant. The library was created with a defined application programmer interface (API) so that it could easily integrate with other particle tracking/transport codes. The library does not itself handle parallel processing via the Message Passing Interface (MPI), but has been used successfully where the host code handles the MPI calls. The library is thread-safe and supports the OpenMP paradigm. As a library, all features are available through the API, and overall a tight coupling between it and the host code is required. Features of the library are summarized in the following list: accommodates first- and second-order 4-, 5-, and 6-sided polyhedra; any combination of element types may appear in a single geometry model; parts may not contain tetrahedra mixed with other element types; pentahedra and hexahedra can be together in the same part; robust handling of overlaps and gaps; tracks element-to-element to produce path-length results at the element level; finds element numbers for a given mesh location; finds intersection points on element faces for the particle tracks; produces a data file for post-processing results analysis; reads Abaqus .inp (ASCII) input files to obtain information for the global mesh model; supports parallel input processing via MPI; and supports parallel particle transport by both MPI and OpenMP.
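
    One listed feature, finding the element number for a given mesh location, can be illustrated independently of the library (REGL itself is Fortran and its API is not reproduced here). The Python sketch below locates the tetrahedron containing a point via barycentric coordinates, with a brute-force scan standing in for the library's optimized search structures.

    ```python
    import numpy as np

    def barycentric(p, tet):
        """Barycentric coordinates of point p in tetrahedron tet (4x3 array)."""
        T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
        l123 = np.linalg.solve(T, p - tet[0])
        return np.concatenate([[1.0 - l123.sum()], l123])

    def find_element(p, vertices, tets, tol=1e-12):
        """Index of the first tetrahedron containing p, or -1 if outside.
        A point is inside iff all four barycentric coordinates are >= 0."""
        for i, conn in enumerate(tets):
            lam = barycentric(p, vertices[conn])
            if np.all(lam >= -tol):
                return i
        return -1

    verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    tets = [[0, 1, 2, 3]]
    idx = find_element(np.array([0.25, 0.25, 0.25]), verts, tets)   # -> 0
    ```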

  12. Studies in optical parallel processing. [All optical and electro-optic approaches

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

    Threshold and A/D devices for converting a gray scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  13. Outcome of the acute glomerular injury in proliferative lupus nephritis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chagnac, A.; Kiberd, B.A.; Farinas, M.C.

    1989-09-01

    Treatment with total lymphoid irradiation (TLI) and corticosteroids markedly reduced activity of systemic lupus erythematosus in 10 patients with diffuse proliferative lupus nephritis (DPLN) complicated by a nephrotic syndrome. Physiologic and morphometric techniques were used serially before, and 12 and 36 mo post-TLI, to characterize the course of glomerular injury. Judged by a progressive reduction in the density of glomerular cells and immune deposits, glomerular inflammation subsided. A sustained reduction in the fractional clearance of albumin, IgG and uncharged dextrans of radius greater than 50 A pointed to a parallel improvement in glomerular barrier size-selectivity. Corresponding changes in GFR were modest, however. A trend towards higher GFR at 12 mo was associated with a marked increase in the fraction of glomerular tuft area occupied by patent capillary loops as inflammatory changes receded. A late trend toward declining GFR beyond 12 mo was associated with progressive glomerulosclerosis, which affected 57% of all glomeruli globally by 36 mo post-TLI. Judged by a parallel 59% increase in volume, the remaining patent glomeruli had undergone a process of adaptive enlargement. We propose that an increasing fraction of glomeruli continues to undergo progressive sclerosis after DPLN has become quiescent, and that the prevailing GFR depends on the extent to which hypertrophied remnant glomeruli can compensate for the ensuing loss of filtration surface area.

  14. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
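
    A minimal sketch of the claimed scheme follows (it assumes mpi4py and an MPI-IO capable file system, and is simplified so that the total size divides evenly and the exchange is a toy allgather rather than the minimal data movement a real implementation would use). The block size is derived dynamically as total/nprocs, data is exchanged so each rank holds one aligned block, and each rank then writes its block at its own offset; the file name is hypothetical.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()

    # This rank's raw output: 1 MiB of bytes (stand-in for application data).
    local = np.frombuffer(np.random.default_rng(rank).bytes(1 << 20),
                          dtype=np.uint8)
    total = comm.allreduce(local.size, op=MPI.SUM)
    block = total // nprocs                  # dynamically determined block size

    # Redistribute so rank r holds bytes [r*block, (r+1)*block). A real
    # implementation would shuffle only the data that must move.
    flat = np.concatenate(comm.allgather(local))
    mine = flat[rank * block:(rank + 1) * block]

    fh = MPI.File.Open(comm, "shared.out",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    fh.Write_at_all(rank * block, mine)      # collective write at own offset
    fh.Close()
    ```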

  15. Health trainer-led motivational intervention plus usual care for people under community supervision compared with usual care alone: a study protocol for a parallel-group pilot randomised controlled trial (STRENGTHEN)

    PubMed Central

    Thompson, Tom P; Callaghan, Lynne; Hazeldine, Emma; Quinn, Cath; Walker, Samantha; Byng, Richard; Wallace, Gary; Creanor, Siobhan; Green, Colin; Hawton, Annie; Annison, Jill; Sinclair, Julia; Senior, Jane; Taylor, Adrian H

    2018-01-01

    Introduction People with experience of the criminal justice system typically have worse physical and mental health, lower levels of mental well-being and have less healthy lifestyles than the general population. Health trainers have worked with offenders in the community to provide support for lifestyle change, enhance mental well-being and signpost to appropriate services. There has been no rigorous evaluation of the effectiveness and cost-effectiveness of providing such community support. This study aims to determine the feasibility and acceptability of conducting a randomised trial and delivering a health trainer intervention to people receiving community supervision in the UK. Methods and analysis A multicentre, parallel, two-group randomised controlled trial recruiting 120 participants with 1:1 individual allocation to receive support from a health trainer and usual care or usual care alone, with mixed methods process evaluation. Participants receive community supervision from an offender manager in either a Community Rehabilitation Company or the National Probation Service. If they have served a custodial sentence, then they have to have been released for at least 2 months. The supervision period must have at least 7 months left at recruitment. Participants are interested in receiving support to change diet, physical activity, alcohol use and smoking and/or improve mental well-being. The primary outcome is mental well-being with secondary outcomes related to smoking, physical activity, alcohol consumption and diet. The primary outcome will inform sample size calculations for a definitive trial. Ethics and dissemination The study has been approved by the Health and Care Research Wales Ethics Committee (REC reference 16/WA/0171). Dissemination will include publication of the intervention development process and findings for the stated outcomes, parallel process evaluation and economic evaluation in peer-reviewed journals. Results will also be disseminated to stakeholders and trial participants. Trial registration numbers ISRCTN80475744; Pre-results. PMID:29866736

  16. Fiber Bragg Grating Sensor System for Monitoring Smart Composite Aerospace Structures

    NASA Technical Reports Server (NTRS)

    Moslehi, Behzad; Black, Richard J.; Gowayed, Yasser

    2012-01-01

    Lightweight, electromagnetic interference (EMI) immune, fiber-optic, sensor-based structural health monitoring (SHM) will play an increasing role in aerospace structures ranging from aircraft wings to jet engine vanes. Fiber Bragg Grating (FBG) sensors for SHM include advanced signal processing, system and damage identification, and location and quantification algorithms. Potentially, the solution could be developed into an autonomous onboard system to inspect and perform non-destructive evaluation and SHM. A novel method has been developed to massively multiplex FBG sensors, supported by a parallel processing interrogator, which enables high sampling rates combined with highly distributed sensing (up to 96 sensors per system). The interrogation system comprises several subsystems. A broadband optical source subsystem (BOSS) and routing and interface module (RIM) send light from the interrogation system to a composite embedded FBG sensor matrix, which returns measurand-dependent wavelengths back to the interrogation system for measurement with subpicometer resolution. In particular, the returned wavelengths are channeled by the RIM to a photonic signal processing subsystem based on powerful optical chips, then passed through an optoelectronic interface to an analog post-detection electronics subsystem, digital post-detection electronics subsystem, and finally via a data interface to a computer. A range of composite structures has been fabricated with FBGs embedded. Tensile, bending, and dynamic strain tests were performed. The experimental work proved that the FBG sensors have a good level of accuracy in measuring the static response of the tested composite coupons (down to submicrostrain levels), the capability to detect and monitor dynamic loads, and the ability to detect defects in composites by a variety of methods including monitoring the decay time under different dynamic loading conditions. In addition to quasi-static and dynamic load monitoring, the system can capture acoustic emission events that can be a prelude to structural failure, and it supports piezoactuator-induced ultrasonic Lamb-wave techniques as a basis for damage detection.

  17. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  18. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  19. Microfluidic local perfusion chambers for the visualization and manipulation of synapses

    PubMed Central

    Taylor, Anne M.; Dieterich, Daniela C.; Ito, Hiroshi T.; Kim, Sally A.; Schuman, Erin M.

    2010-01-01

    The polarized nature of neurons as well as the size and density of synapses complicates the manipulation and visualization of cell biological processes that control synaptic function. Here we developed a microfluidic local perfusion (μLP) chamber to access and manipulate synaptic regions and pre- and post-synaptic compartments in vitro. This chamber directs the formation of synapses in >100 parallel rows connecting separate neuron populations. A perfusion channel transects the parallel rows allowing access to synaptic regions with high spatial and temporal resolution. We used this chamber to investigate synapse-to-nucleus signaling. Using the calcium indicator dye, Fluo-4, we measured changes in calcium at dendrites and somata, following local perfusion of glutamate. Exploiting the high temporal resolution of the chamber, we exposed synapses to “spaced” or “massed” application of glutamate and then examined levels of pCREB in somata. Lastly, we applied the metabotropic receptor agonist, DHPG, to dendrites and observed increases in Arc transcription and Arc transcript localization. PMID:20399729

  20. Status of parallel Python-based implementation of UEDGE

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Pankin, A. Y.; Rognlien, T. D.; Dimits, A. M.; Friedman, A.; Joseph, I.

    2017-10-01

    The tokamak edge transport code UEDGE has long used the code-development and run-time framework Basis. However, with the support for Basis expected to terminate in the coming years, and with the advent of the modern numerical language Python, it has become desirable to move UEDGE to Python, to ensure its long-term viability. Our new Python-based UEDGE implementation takes advantage of the portable build system developed for FACETS. The new implementation gives access to Python's graphical libraries and numerical packages for pre- and post-processing, and support for HDF5 simplifies data exchange. The older serial version of UEDGE used the Newton-Krylov solver NKSOL for time-stepping. The renovated implementation uses backward Euler discretization with nonlinear solvers from PETSc, which promises to significantly improve the UEDGE parallel performance. We will report on an assessment of some of the extended UEDGE capabilities emerging in the new implementation and discuss future directions. Work performed for U.S. DOE by LLNL under contract DE-AC52-07NA27344.
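
    As a toy illustration of the time-stepping named in the abstract, the sketch below takes backward Euler steps of a stiff model problem with SciPy's Newton-Krylov solver. The ODE and all names are ours; UEDGE's actual residuals, and the NKSOL and PETSc solvers it uses, are far more involved.

    ```python
    # Illustrative sketch: each backward Euler step solves the nonlinear
    # residual F(u) = u - u_n - dt*f(u) = 0 with a Newton-Krylov method.
    import numpy as np
    from scipy.optimize import newton_krylov

    GRID = np.linspace(0.0, 1.0, 64)

    def f(u):
        """Toy stiff right-hand side du/dt = f(u) (ours, not UEDGE's)."""
        return -50.0 * (u - np.cos(GRID))

    def backward_euler_step(u_n, dt):
        """Solve F(u) = u - u_n - dt*f(u) = 0 for u_{n+1}."""
        return newton_krylov(lambda u: u - u_n - dt * f(u), u_n)

    u = np.zeros_like(GRID)
    for _ in range(10):
        u = backward_euler_step(u, dt=0.05)
    print("distance from steady state:", np.abs(u - np.cos(GRID)).max())
    ```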

  1. Evaluation of a time efficient immunization strategy for anti-PAH antibody development

    PubMed Central

    Li, Xin; Kaattari, Stephen L.; Vogelbein, Mary Ann; Unger, Michael A.

    2016-01-01

    The development of monoclonal antibodies (mAb) with affinity to small molecules can be a time-consuming process. To evaluate whether the time for mAb production could be shortened, we examined mouse antisera at different time points post-immunization to measure titer and to evaluate the affinity to the immunogen PBA (pyrene butyric acid). Fusions were also conducted at different time points to evaluate antibody production success. We produced anti-PBA antibodies 7 weeks post-immunization and selected for anti-PAH reactivity during the hybridoma screening process. Moreover, there were no obvious sensitivity differences relative to antibodies screened from a more traditional 18-week schedule. Our results demonstrate a more time-efficient immunization strategy for anti-PAH antibody development that may be applied to other small molecules. PMID:27282486

  2. Design and process evaluation of an informative website tailored to breast cancer survivors’ and intimate partners’ post-treatment care needs

    PubMed Central

    2012-01-01

    Background On-line provision of information during the transition phase after treatment carries great promise in meeting shortcomings in post-treatment care for breast cancer survivors and their partners. The objectives of this study are to describe the development and process evaluation of a tailored informative website and to assess which characteristics of survivors and partners, participating in the feasibility study, are related to visiting the website. Methods The development process included quantitative and qualitative assessments of survivors’ and partners’ care needs and preferences. Participants’ use and evaluation of the website were explored by conducting baseline and post-measurements. During the intervening 10–12 weeks 57 survivors and 28 partners were granted access to the website. Results Fifty-seven percent (n=21) of survivors who took part in the post-measurement indicated that they had visited the website. Compared to non-visitors (n=16), they were more likely to have a partner and a higher income, reported higher levels of self-esteem and had completed treatment for a longer period of time. Partners who consulted the on-line information (42%, n=8) were younger and reported lower levels of social support compared to partners who did not visit the website (n=11). Visitors generally evaluated the content and lay-out positively, yet some believed the information was incomplete and impersonal. Conclusions The website reached only about half of survivors and partners, yet was mostly well-received. Besides other ways of providing information and support, a website containing clear-cut and tailored information could be a useful tool in post-treatment care provision. PMID:23034161

  3. Liquid rocket booster integration study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is the executive summary of the five-volume series.

  4. Liquid rocket booster integration study. Volume 5, part 1: Appendices

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume contains the appendices of the five-volume series.

  5. Liquid Rocket Booster Integration Study. Volume 2: Study synopsis

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is the study summary of the five-volume series.

  6. Evaluation of Vipassana Meditation Course Effects on Subjective Stress, Well-being, Self-kindness and Mindfulness in a Community Sample: Post-course and 6-month Outcomes.

    PubMed

    Szekeres, Roberta A; Wertheim, Eleanor H

    2015-12-01

    Residential Vipassana meditation courses, which teach mindfulness skills, are widely available globally but under-evaluated. This study examined effects of a standardized, community-based Vipassana course, on subjective stress, well-being, self-kindness and trait mindfulness in a community sample. Participants completed self-report measures of these variables at pre-course and post-course (n = 122), and outcomes were compared to a control group of early enrollers (EEs) (n = 50) who completed measures at parallel time points before course commencement. Six-month follow-up was undertaken in the intervention group (n = 90). Findings, including intention-to-complete analyses, suggested positive effects of the Vipassana course in reducing subjective stress and increasing well-being, self-kindness and overall mindfulness (present-moment awareness and non-reaction). Although some reductions in post-course gains were found at follow-up, particularly in stress, follow-up scores still showed improvements compared to pre-course scores. Mindfulness change scores between pre-course and 6-month follow-up were moderately to highly correlated with outcome variable change scores, consistent with the idea that effects of the Vipassana course on stress and well-being operate, at least partially, through increasing mindfulness. The present research underscores the importance of undertaking further investigations into Vipassana courses' effects and applications. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Visual grading analysis of digital neonatal chest phantom X-ray images: Impact of detector type, dose and image processing on image quality.

    PubMed

    Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L

    2018-07-01

    To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05) whereas the PIP detector had significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.

  8. Shortcomings of low-cost imaging systems for viewing computed radiographs.

    PubMed

    Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N

    2000-01-01

    To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (PC) with a common display card and color monitor, and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. The review at the radiological workstation was superior to the review done using the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain was observed for the high-end monochrome monitor compared to the color display. However, the color monitor was more strongly affected by high ambient illumination.

  9. [Process and key points of clinical literature evaluation of post-marketing traditional Chinese medicine].

    PubMed

    Liu, Huan; Xie, Yanming

    2011-10-01

    The clinical literature evaluation of post-marketing traditional Chinese medicine is a comprehensive assessment, based on literature evidence, of a drug's efficacy, safety and economy, and forms part of evidence-based medicine evaluation. Literature evaluation occupies a foundational and key position in the post-marketing clinical evaluation of Chinese medicine. Through literature evaluation, one can fully grasp the available information, orient the secondary development of marketed traditional Chinese medicine varieties, further clarify clinical indications and refine the medicines. This paper discusses the main steps of, and key points in, clinical literature evaluation. In evaluating the safety literature, importance should be attached to comprehensive collection of drug safety information. Evaluation of efficacy should note the particular advantages of traditional Chinese medicine in improving syndromes and the quality of life of patients. Evaluation of the economics literature should pay attention to the reliability, sensitivity and practicability of its conclusions.

  10. The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)

    1997-01-01

    Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore, exploit behavioral variations among/within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
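
    The instrument-then-monitor pattern the report describes can be suggested in a few lines. This sketch is ours, not AIMS code: a decorator stands in for the instrumentor's inserted event recorders, and a global trace buffer stands in for the run-time monitoring library.

    ```python
    # Toy illustration of automated instrumentation: wrap routines with
    # event recorders and accumulate a trace for later visualization.
    import functools
    import time

    TRACE = []  # stand-in for the run-time monitoring buffer

    def record_events(fn):
        """Stand-in for the recorders a source-code instrumentor inserts."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            TRACE.append(("enter", fn.__name__, time.perf_counter()))
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE.append(("exit", fn.__name__, time.perf_counter()))
        return wrapper

    @record_events
    def compute(n):
        return sum(i * i for i in range(n))

    compute(100_000)
    for event, name, t in TRACE:
        print(f"{event:5s} {name} @ {t:.6f}")
    ```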

  11. A concept of volume rendering guided search process to analyze medical data set.

    PubMed

    Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro

    2008-03-01

    This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel-coordinates-style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show dependencies between renditions and parameters. Based on the PCP, a concept of a volume-rendering-guided search process is proposed. The search pipeline is divided into four phases. Different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space, helping users to efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.

  12. Checkpoint-based forward recovery using lookahead execution and rollback validation in parallel and distributed systems. Ph.D. Thesis, 1992

    NASA Technical Reports Server (NTRS)

    Long, Junsheng

    1994-01-01

    This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs lower static redundancy in the common error-free situation to detect errors than the standard N-Modular Redundancy (NMR) scheme does to mask off errors. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively and a rollback process is used to produce a diagnosis for correct look-ahead processes without rollback of the whole system. Both analytical and experimental evaluations have shown that this strategy can provide a nearly error-free execution time even under faults with a lower average redundancy than NMR.
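
    A toy rendition of the core idea (ours, with an artificial fault injector) may help: two replicas run in the error-free case and their checkpoints are compared for detection; only on a mismatch is extra redundancy spent, with a re-execution standing in for the rollback validation that diagnoses the correct copy.

    ```python
    # Illustrative sketch of duplex execution with checkpoint comparison.
    import random

    def run_task(x, fault_rate=0.0):
        """The replicated computation; faults corrupt the checkpoint value."""
        result = x * x
        if random.random() < fault_rate:
            result += 1  # injected transient fault
        return result

    def duplex_with_rollback_validation(x):
        # Common error-free case: two replicas, checkpoints compared.
        a, b = run_task(x, fault_rate=0.05), run_task(x, fault_rate=0.05)
        if a == b:
            return a
        # Rare error case: spend extra redundancy; a rollback re-execution
        # (assumed fault-free in this toy) diagnoses the correct copy.
        c = run_task(x)
        return a if a == c else b

    random.seed(0)
    print([duplex_with_rollback_validation(i) for i in range(10)])
    ```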

  13. A Randomized, Rater-Blinded, Parallel Trial of Intensive Speech Therapy in Sub-Acute Post-Stroke Aphasia: The SP-I-R-IT Study

    ERIC Educational Resources Information Center

    Martins, Isabel Pavao; Leal, Gabriela; Fonseca, Isabel; Farrajota, Luisa; Aguiar, Marta; Fonseca, Jose; Lauterbach, Martin; Goncalves, Luis; Cary, M. Carmo; Ferreira, Joaquim J.; Ferro, Jose M.

    2013-01-01

    Background: There is conflicting evidence regarding the benefits of intensive speech and language therapy (SLT), particularly because intensity is often confounded with total SLT provided. Aims: A two-centre, randomized, rater-blinded, parallel study was conducted to compare the efficacy of 100 h of SLT in a regular (RT) versus intensive (IT)…

  14. Long-term persistence of quality improvements for an intensive care unit communication initiative using the VALUE strategy.

    PubMed

    Wysham, Nicholas G; Mularski, Richard A; Schmidt, David M; Nord, Shirley C; Louis, Deborah L; Shuster, Elizabeth; Curtis, J Randall; Mosen, David M

    2014-06-01

    Communication in the intensive care unit (ICU) is an important component of quality ICU care. In this report, we evaluate the long-term effects of a quality improvement (QI) initiative, based on the VALUE communication strategy, designed to improve communication with family members of critically ill patients. We implemented a multifaceted intervention to improve communication in the ICU and measured processes of care. Quality improvement components included posted VALUE placards, a templated progress note inclusive of communication documentation, and a daily rounding checklist prompt. We evaluated care for all patients cared for by the intensivists during three separate 3-week periods: pre-intervention, post-intervention, and 3 years following the initial intervention. Care delivery was assessed in 38 patients and their families in the pre-intervention sample, 27 in the post-intervention period, and 41 in follow-up. Process measures of communication showed improvement across the evaluation periods; for example, daily updates increased from 62% of opportunities pre-intervention to 76% post-intervention and 84% at follow-up. Our evaluation of this quality improvement project suggests persistence and continued improvements in the delivery of measured aspects of ICU family communication. Maintenance with point-of-care tools may account for some of the persistence and continued improvements. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Reduction of product-related species during the fermentation and purification of a recombinant IL-1 receptor antagonist at the laboratory and pilot scale.

    PubMed

    Schirmer, Emily B; Golden, Kathryn; Xu, Jin; Milling, Jesse; Murillo, Alec; Lowden, Patricia; Mulagapati, Srihariraju; Hou, Jinzhao; Kovalchin, Joseph T; Masci, Allyson; Collins, Kathryn; Zarbis-Papastoitsis, Gregory

    2013-08-01

    Through a parallel approach of tracking product quality through fermentation and purification development, a robust process was designed to reduce the levels of product-related species. Three biochemically similar product-related species were identified as byproducts of host-cell enzymatic activity. To modulate intracellular proteolytic activity, key fermentation parameters (temperature, pH, trace metals, EDTA levels, and carbon source) were evaluated through bioreactor optimization, while balancing negative effects on growth, productivity, and oxygen demand. The purification process was based on three non-affinity steps and resolved product-related species by exploiting small charge differences. Using statistical design of experiments for elution conditions, a high-resolution cation exchange capture column was optimized for resolution and recovery. Further reduction of product-related species was achieved by evaluating a matrix of conditions for a ceramic hydroxyapatite column. The optimized fermentation process was transferred from the 2-L laboratory scale to the 100-L pilot scale and the purification process was scaled accordingly to process the fermentation harvest. The laboratory- and pilot-scale processes resulted in similar process recoveries of 60 and 65%, respectively, and in a product that was of equal quality and purity to that of small-scale development preparations. The parallel approach for up- and downstream development was paramount in achieving a robust and scalable clinical process. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Effects of phenobarbital and levetiracetam on PR and QTc intervals in patients with post-stroke seizure.

    PubMed

    Siniscalchi, Antonio; Scaglione, Francesco; Sanzaro, Enzo; Iemolo, Francesco; Albertini, Giorgio; Quirino, Gianluca; Manes, Maria Teresa; Gratteri, Santo; Mercuri, Nicola Biagio; De Sarro, Giovambattista; Gallelli, Luca

    2014-12-01

    Sudden unexplained/unexpected death in epilepsy (SUDEP) is an important contributor to mortality in patients with epilepsy. The prolongation of the QT interval, involved in cardiac arrhythmia-related SUDEP, may be precipitated by antiepileptic drugs (AEDs). In this study, we evaluated the effects of phenobarbital and levetiracetam on PR and QTc intervals in patients with post-stroke seizures. We performed an open-label, parallel-group, prospective, multicenter study between June 2009 and December 2013 in patients older than 18 years of age with a clinical diagnosis of post-stroke seizure and treated with phenobarbital or levetiracetam. In order to exclude a role of cerebral post-stroke injury in modulation of PR and QTc intervals, patients with cerebral post-stroke injury and without seizures were also enrolled as controls. Interictal electrocardiography analysis revealed no significant difference in PR interval between patients treated with an AED (n = 49) and control patients (n = 50) (181.25 ± 12.05 vs. 182.4 ± 10.3 ms; p > 0.05). In contrast, a significantly longer QTc interval was recorded in patients treated with an AED compared with control patients (441.2 ± 56.6 vs. 396.8 ± 49.3 ms; p < 0.01). Patients treated with phenobarbital showed a significantly longer QTc interval than patients treated with levetiracetam (460.0 ± 57.2 vs. 421.5 ± 50.1 ms; p < 0.05). The study showed that in patients with late post-stroke seizures, phenobarbital prolonged the QTc interval more than levetiracetam did.

  17. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  18. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  19. A survey of packages for large linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving. Thus their user interface may change. In general, those packages written in Fortran 77 are more cumbersome to use because the user may need to directly deal with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms which make it easier to implement a clean and intuitive user interface. In addition to reviewing these portable parallel iterative solver packages, we also provide a more cursory assessment of a range of related packages, from specialized parallel preconditioners to direct methods for sparse linear systems.
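
    The review predates Python bindings, but for orientation, a minimal solve through petsc4py, the current Python interface to the recommended PETSc package, might look like the sketch below (ours, not from the paper). It shows the kind of compact user interface, with matrices, Krylov solvers and preconditioners as small objects, that the reviewers praise.

    ```python
    # Minimal petsc4py sketch: assemble a tridiagonal system, solve with
    # CG plus a Jacobi preconditioner. Requires petsc4py to be installed.
    from petsc4py import PETSc

    n = 100
    A = PETSc.Mat().createAIJ([n, n], nnz=3)  # preallocate 3 entries/row
    for i in range(n):
        A.setValue(i, i, 2.0)
        if i > 0:
            A.setValue(i, i - 1, -1.0)
        if i < n - 1:
            A.setValue(i, i + 1, -1.0)
    A.assemble()

    b = A.createVecLeft()
    b.set(1.0)
    x = A.createVecRight()

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setType(PETSc.KSP.Type.CG)
    ksp.getPC().setType(PETSc.PC.Type.JACOBI)
    ksp.solve(b, x)
    print("iterations:", ksp.getIterationNumber())
    ```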

  20. Use of general purpose graphics processing units with MODFLOW

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
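
    The solver ingredients named here, compressed sparse row storage, a Jacobi preconditioner and conjugate gradients, can be sketched on the CPU with SciPy. This is our illustration of the numerics only; the UPCG solver itself runs these kernels on the GPGPU, and the model matrix below is a stand-in, not a MODFLOW system.

    ```python
    # CSR matrix + Jacobi (diagonal) preconditioner + conjugate gradients.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg

    n = 100_000
    # Diagonally dominant SPD model problem stored in CSR format.
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Jacobi preconditioner: apply M^{-1} = diag(A)^{-1}.
    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

    x, info = cg(A, b, M=M)
    print("converged" if info == 0 else f"cg returned {info}",
          "| residual:", np.linalg.norm(b - A @ x))
    ```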

  1. Parallel efficient rate control methods for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently to optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them was designed with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out, and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
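
    A much-simplified sketch of PCRD-style truncation may clarify the idea. It is ours, not the paper's code: real PCRD-Opt first prunes each code block's truncation points to their convex hull, which we skip here. One truncation point is picked per block by bisecting on the Lagrange multiplier until the total rate meets the budget.

    ```python
    # Each block has candidate truncation points (cumulative bytes, distortion).
    # Pick one per block minimizing distortion subject to a total rate budget.

    def pick(points, lam):
        """Truncation point minimizing the Lagrangian distortion + lam*rate."""
        return min(points, key=lambda rd: rd[1] + lam * rd[0])

    def truncate(blocks, budget, iters=50):
        lo, hi = 0.0, 1e9
        for _ in range(iters):  # bisection on the multiplier
            lam = (lo + hi) / 2
            rate = sum(pick(pts, lam)[0] for pts in blocks)
            if rate > budget:
                lo = lam        # too many bytes: penalize rate more
            else:
                hi = lam
        return [pick(pts, hi) for pts in blocks]  # hi side is feasible

    blocks = [
        [(0, 100.0), (10, 40.0), (20, 15.0), (30, 5.0)],
        [(0, 80.0), (8, 30.0), (16, 12.0), (24, 6.0)],
    ]
    print(truncate(blocks, budget=30))
    ```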

  2. Light-weight Parallel Python Tools for Earth System Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.

    2015-12-01

    With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x to an expected 25 PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the required download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) perform the transformation of the data from time-slice to time-series format and to (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these two challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second of these challenges, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight, be easy to install, have very few dependencies, and can be easily inserted into the Earth system modeling workflow with negligible disruption.
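
    The second tool's job, a time average per variable with per-variable work done in parallel, can be suggested with a toy process-pool sketch. The variable names and in-memory arrays below are ours, standing in for the NetCDF time-series files the real tool reads.

    ```python
    # Parallel time-averaging sketch: one pool task per variable.
    import numpy as np
    from multiprocessing import Pool

    np.random.seed(0)  # deterministic fake data in every worker
    VARS = {name: np.random.rand(120, 64, 128)  # 120 months of 2-D fields
            for name in ("TS", "PRECT", "PSL")}

    def climatology(name):
        """Mean over the time axis for one variable: one worker's task."""
        return name, VARS[name].mean(axis=0)

    if __name__ == "__main__":
        with Pool(processes=3) as pool:
            means = dict(pool.map(climatology, VARS))
        print({name: field.shape for name, field in means.items()})
    ```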

  3. Improving Quantum Gate Simulation using a GPU

    NASA Astrophysics Data System (ADS)

    Gutierrez, Eladio; Romero, Sergio; Trenas, Maria A.; Zapata, Emilio L.

    2008-11-01

    Due to the increasing computing power of graphics processing units (GPUs), they are becoming more and more popular for general-purpose computation. As the simulation of quantum computers results in a problem of exponential complexity, it is advisable to perform a parallel computation, such as the one provided by the SIMD multiprocessors present in recent GPUs. In this paper, we focus on an important quantum algorithm, the quantum Fourier transform (QFT), in order to evaluate different parallelization strategies on a novel GPU architecture. Our implementation makes use of the new CUDA software/hardware architecture developed recently by NVIDIA.
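
    For concreteness, here is a small serial state-vector QFT in NumPy (our illustration, not the paper's CUDA code). The gate applications in the inner loops are the kind of data-parallel kernels one would map onto GPU thread blocks, and the FFT identity at the end checks the result.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def apply_1q(state, gate, q, n):
        """Apply a 2x2 gate to qubit q (qubit 0 = most significant bit)."""
        s = np.moveaxis(state.reshape([2] * n), q, 0)
        s = np.tensordot(gate, s, axes=(1, 0))
        return np.moveaxis(s, 0, q).reshape(-1)

    def apply_cphase(state, c, t, theta, n):
        """Multiply amplitudes with qubits c and t both 1 by exp(i*theta)."""
        s = state.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[c] = idx[t] = 1
        s[tuple(idx)] *= np.exp(1j * theta)
        return s.reshape(-1)

    def qft(state):
        """Textbook QFT: Hadamards, controlled phases, final bit reversal."""
        n = int(np.log2(state.size))
        s = state.astype(complex)
        for j in range(n):
            s = apply_1q(s, H, j, n)
            for k in range(j + 1, n):
                s = apply_cphase(s, k, j, np.pi / 2 ** (k - j), n)
        return s.reshape([2] * n).transpose(list(range(n - 1, -1, -1))).reshape(-1)

    rng = np.random.default_rng(1)
    a = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    # The QFT of a state vector is a scaled inverse DFT of its amplitudes.
    assert np.allclose(qft(a), np.fft.ifft(a) * np.sqrt(a.size))
    print("QFT verified against the FFT identity")
    ```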

  4. A new full-field digital mammography system with and without the use of an advanced post-processing algorithm: comparison of image quality and diagnostic performance.

    PubMed

    Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young

    2014-01-01

    To compare a new full-field digital mammography (FFDM) system with and without use of an advanced post-processing algorithm in terms of image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM (Brestige®), performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". The two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality in FFDM, with better image preference than without use of the software.

  5. Carpal tunnel and median nerve volume changes after tunnel release in patients with the carpal tunnel syndrome: a magnetic resonance imaging (MRI) study.

    PubMed

    Crnković, T; Trkulja, V; Bilić, R; Gašpar, D; Kolundžić, R

    2016-05-01

    Our aim was to study the dynamics of the post-surgical canal and nerve volumes and their relationships to objective [electromyoneurography (EMNG)] and subjective (pain) outcomes. Forty-seven patients with carpal tunnel syndrome (CTS) (median age 52, range 23-75 years) with a prominent narrowing of the median nerve within the canal (observed during carpal tunnel release) were evaluated clinically using EMNG and magnetic resonance imaging (MRI) before and at 90 and 180 days post-surgery. Canal and nerve volumes increased, EMNG findings improved and pain resolved during the follow-up. Increase in tunnel volume was independently associated with increased nerve volume. A greater post-surgical nerve volume was independently associated with a more prominent resolution of pain, but not with the extent of EMNG improvement, whereas EMNG improvement was not associated with pain resolution. Data confirm that MRI can detect even modest changes in the carpal tunnel and median nerve volume and that tunnel release results in tunnel and nerve-volume increases that are paralleled by EMNG and clinical improvements. Taken together, these observations suggest that MRI could be used to objectivise persistent post-surgical difficulties in CTS patients. Level of evidence 3 (follow-up study).

  6. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  7. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    PubMed

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., prepare, process, and evaluate samples in parallel), and miniaturization (i.e., reduce sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, surface defects, among others have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.

  8. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
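
    The toolbox's algorithms are beyond a short sketch, but the flavor of its data-parallel primitives can be shown with a vectorized multi-channel peak detector. This is our illustration, not the EC-PC detector; on a GPU the same element-wise comparisons and the compact/nonzero step would run across thousands of channels at once.

    ```python
    # Vectorized threshold peak detection over many channels at once.
    import numpy as np

    def detect_peaks(x, thresh):
        """Per-channel indices of local maxima above thresh.
        x has shape (channels, samples)."""
        core = x[:, 1:-1]
        mask = (core > thresh) & (core >= x[:, :-2]) & (core > x[:, 2:])
        chan, idx = np.nonzero(mask)      # the "compact" step
        return chan, idx + 1              # shift back to original positions

    rng = np.random.default_rng(0)
    data = rng.standard_normal((32, 10_000))   # 32 channels of noise
    chan, idx = detect_peaks(data, thresh=3.0)
    print(f"{idx.size} supra-threshold peaks on {np.unique(chan).size} channels")
    ```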

  9. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

    The ability to recognize facial expressions in humans relies on the amygdala, which uses parallel processing streams to identify the expressions quickly and accurately. A feedback mechanism may also play a role in this process. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  10. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2005-01-01

    The ability to recognize facial expressions in humans relies on the amygdala, which uses parallel processing streams to identify the expressions quickly and accurately. A feedback mechanism may also play a role in this process. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  11. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  12. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  13. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of the treatment. For a medical center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data and related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it focuses on image metadata content evaluation and metadata quality management.

  14. Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure

    NASA Astrophysics Data System (ADS)

    Liu, Chun; Li, Zhengning; Zhou, Yuan

    2016-06-01

    We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is nontrivial, since most underground infrastructures have poor lighting conditions and featureless structure. Overcoming these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since the parallel system divides motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem that is quite an issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm, which is highly functional under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used the vehicle to collect a dataset in an underground garage. The parallel system was evaluated on the actual dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm. Off-line processing reduced the position error to 2 cm. Performance evaluation on the actual dataset showed that our system is capable of robust motion estimation and accurate 3D mapping in poor-illumination and featureless underground environments.

  15. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.

  16. Providing Post-Compulsory Education Options through "Newlook" Rural Partnerships

    ERIC Educational Resources Information Center

    Mlcek, Susan

    2009-01-01

    The 'tree change' phenomenon, relocating from the city to rural areas for a lifestyle change, has spread across Australia since about 2003 and continues to this day, even into places like the Western Plains area of New South Wales. An ever-growing interest in post-compulsory education solutions that run parallel to this…

  17. Processing mode during repetitive thinking in socially anxious individuals: evidence for a maladaptive experiential mode.

    PubMed

    Wong, Quincy J J; Moulds, Michelle L

    2012-12-01

    Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated in socially anxious individuals. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Measures of mood, cognition, and behaviour were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high-standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination; the results are therefore novel and will need to be replicated. The findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during the repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  19. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units, significantly reducing computation time.
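    The computation at the heart of this architecture is the inverse differential kinematic relation q_dot = J(q)^-1 x_dot. The systolic arrays are hardware, but the arithmetic they pipeline can be sketched in a few lines of NumPy (the Jacobian below is illustrative, not a real PUMA Jacobian):

      import numpy as np

      def joint_rates(jacobian, cartesian_velocity):
          # Solve J(q) q_dot = x_dot for the joint rates q_dot. Solving the
          # linear system is preferred over forming J**-1 explicitly; for a
          # 6-DOF arm such as the PUMA, J is 6x6 away from singularities.
          return np.linalg.solve(jacobian, cartesian_velocity)

      # Illustrative values only.
      J = np.eye(6) + 0.1 * np.random.default_rng(0).standard_normal((6, 6))
      x_dot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])  # end-effector twist
      print(joint_rates(J, x_dot))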

  20. A Novel Approach to Enhance the Mechanical Strength and Electrical and Thermal Conductivity of Cu-GNP Nanocomposites

    NASA Astrophysics Data System (ADS)

    Saboori, Abdollah; Pavese, Matteo; Badini, Claudio; Fino, Paolo

    2018-01-01

    Copper/graphene nanoplatelet (GNP) nanocomposites were produced by a wet mixing method followed by a classical powder metallurgy technique. A qualitative evaluation of the structure of graphene after mixing indicated that wet mixing is an appropriate dispersion method. Thereafter, the effects of two post-processing techniques, repressing-annealing and hot isostatic pressing (HIP), on the density, interfacial bonding, hardness, and thermal and electrical conductivity of the nanocomposites were analyzed. Density evaluations showed that the relative density of the specimens increased after the post-processing steps, such that almost full densification was achieved after HIPing. The Vickers hardness of the specimens increased considerably after post-processing. The thermal conductivity of pure copper was very low for the as-sintered samples, which contained 2 to 3 pct porosity, and increased considerably to a maximum for the HIPed samples, which contained only 0.1 to 0.2 pct porosity. Electrical conductivity measurements showed that electrical conductivity decreased with increasing graphene content.

  1. Spectroscopic optical coherence tomography based on wavelength de-multiplexing and smart pixel array detection

    NASA Astrophysics Data System (ADS)

    Laubscher, Markus; Bourquin, Stéphane; Froehly, Luc; Karamata, Boris; Lasser, Theo

    2004-07-01

    Current spectroscopic optical coherence tomography (OCT) methods rely on a posteriori numerical calculation. We present an experimental alternative for accessing spectroscopic information in OCT without post-processing, based on wavelength de-multiplexing and parallel detection using a diffraction grating and a smart pixel detector array. Both a conventional A-scan with high axial resolution and the spectrally resolved measurement are acquired simultaneously. A proof-of-principle demonstration is given on a dynamically changing absorbing sample. The method's potential for fast spectroscopic OCT imaging is discussed. The spectral measurements obtained with this approach are insensitive to scan non-linearities or sample movements.

  2. Work stressors, depressive symptoms and sleep quality among US Navy members: a parallel process latent growth modelling approach across deployment.

    PubMed

    Bravo, Adrian J; Kelley, Michelle L; Swinkels, Cindy M; Ulmer, Christi S

    2017-11-03

    The present study examined whether work stressors contribute to sleep problems and depressive symptoms over the course of deployment (i.e. pre-deployment, post-deployment and 6-month reintegration) among US Navy members. Specifically, we examined whether depressive symptoms or sleep quality mediate the relationships between work stressors and these outcomes. Participants were 101 US Navy members who experienced an 8-month deployment after Operation Enduring Freedom/Operation Iraqi Freedom. Using piecewise latent growth models, we found that increased work stressors were linked to increased depressive symptoms and decreased sleep quality across all three deployment stages. Further, increases in work stressors from pre- to post-deployment contributed to poorer sleep quality post-deployment via increasing depressive symptoms. Moreover, sleep quality mediated the association between increases in work stressors and increases in depressive symptoms from pre- to post-deployment. These effects were maintained from post-deployment through the 6-month reintegration. Although preliminary, our results suggest that changes in work stressors may have small but significant implications for both depressive symptoms and quality of sleep over time, and that a bi-directional relationship persists between sleep quality and depression across deployment. Strategies that target both stress and sleep could address both precipitating and perpetuating factors that affect sleep and depressive symptoms. © 2017 European Sleep Research Society.

  3. Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xuanhua; Luo, Xuan; Liang, Junling

    GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because a large number of colors is generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule), which we observed across masses of real-world graph coloring cases: a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated with a very high degree of parallelism without violating sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (CuSha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and datasets, Frog significantly outperforms existing GPU-based graph processing systems except Gunrock and MapGraph. MapGraph achieves better performance than Frog when running BFS on RoadNet-CA. The comparison between Gunrock and Frog is inconclusive: Frog outperforms Gunrock by more than 1.04X when running PageRank and SSSP, while its advantage is not obvious when running BFS and CC on some datasets, especially RoadNet-CA.
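    Frog's GPU kernels are not reproduced in this record; as a rough CPU-side sketch of the color-batched idea (a toy adjacency-list graph and a caller-supplied update function are assumed), one might write:

      from concurrent.futures import ThreadPoolExecutor

      def greedy_color(adj):
          # Assign each vertex the smallest color unused by its neighbors.
          colors = {}
          for v in adj:
              taken = {colors[u] for u in adj[v] if u in colors}
              colors[v] = next(c for c in range(len(adj)) if c not in taken)
          return colors

      def process_by_color(adj, update):
          colors = greedy_color(adj)
          by_color = {}
          for v, c in colors.items():
              by_color.setdefault(c, []).append(v)
          # Vertices sharing a color are never adjacent, so each batch can be
          # updated in parallel without locks -- the property Frog exploits.
          with ThreadPoolExecutor() as pool:
              for c in sorted(by_color):
                  list(pool.map(update, by_color[c]))

      adj = {0: [1], 1: [0, 2], 2: [1]}
      process_by_color(adj, lambda v: print("update vertex", v))

    Frog's hybrid model builds on the observation above: most vertices fall into a few dominant colors, and the processing schedule is organized around that distribution.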

  4. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper describes the rules used to formulate algorithms for parallel computing and discusses the origins of this research on the use of graphics processors in large-scale processing of laser scanning data. The next part presents the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options comprised the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch to meet the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process for a typical quantity of data, which confirmed the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  5. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
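    The RPP-S implementation is not shown in this record, but the Schaake Shuffle component is standard enough to sketch: at each lead time, ensemble members are reordered so that their ranks match those of a set of historical trajectories, which restores realistic day-to-day structure. A minimal NumPy sketch, with toy gamma-distributed rainfall in place of real forecasts:

      import numpy as np

      def schaake_shuffle(forecast, historical):
          # forecast, historical: (n_members, n_lead_times). Member i receives,
          # at each lead time, the sorted forecast value whose position matches
          # the rank of historical trajectory i at that lead time.
          shuffled = np.empty_like(forecast)
          for t in range(forecast.shape[1]):
              ranks = np.argsort(np.argsort(historical[:, t]))
              shuffled[:, t] = np.sort(forecast[:, t])[ranks]
          return shuffled

      rng = np.random.default_rng(1)
      fc = rng.gamma(2.0, 3.0, size=(5, 4))    # toy daily rainfall ensemble
      hist = rng.gamma(2.0, 3.0, size=(5, 4))  # 5 historical trajectories
      print(schaake_shuffle(fc, hist).round(2))

    The shuffle changes which member carries which value, never the values themselves, so the calibrated marginal distributions from the Bayesian step are preserved.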

  6. Parallel Geospatial Data Management for Multi-Scale Environmental Data Analysis on GPUs

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, J.; Wei, Y.

    2013-12-01

    As the spatial and temporal resolutions of Earth observatory data and Earth system simulation outputs get higher, in-situ and/or post-processing of such large amounts of geospatial data increasingly becomes a bottleneck in scientific inquiries into Earth systems and their human impacts. Existing geospatial techniques based on outdated computing models (e.g., serial algorithms and disk-resident systems), as implemented in many commercial and open-source packages, are incapable of processing large-scale geospatial data at the desired level of performance. In this study, we have developed a set of parallel data structures and algorithms capable of utilizing the massively data-parallel computing power available on commodity Graphics Processing Units (GPUs) for a popular geospatial technique called Zonal Statistics. Given two input datasets, one representing measurements (e.g., temperature or precipitation) and the other representing polygonal zones (e.g., ecological or administrative zones), Zonal Statistics computes major statistics (or complete distribution histograms) of the measurements in all zones. Our technique has four steps, and each step can be mapped to GPU hardware by identifying its inherent data parallelism. First, the raster is divided into blocks and per-block histograms are derived. Second, the Minimum Bounding Rectangles (MBRs) of the polygons are computed and spatially matched with raster blocks; matched polygon-block pairs are tested, and blocks that are either inside or intersect with polygons are identified. Third, per-block histograms are aggregated to polygons for blocks that are completely within polygons. Finally, for blocks that intersect with polygon boundaries, all the raster cells within those blocks are examined using point-in-polygon tests, and cells that fall within polygons are used to update the corresponding histograms. As the task becomes I/O bound after applying spatial indexing and GPU hardware acceleration, we have developed a GPU-based data compression technique by reusing our previous work on Bitplane Quadtree (BPQ-Tree) based indexing of binary bitmaps. Our GPU-based parallel Zonal Statistics technique, applied to 3000+ US counties over 20+ billion NASA SRTM 30-meter-resolution Digital Elevation Model (DEM) raster cells, has achieved impressive end-to-end runtimes: 101 seconds and 46 seconds on a low-end workstation equipped with an Nvidia GTX Titan GPU using cold and hot cache, respectively; and 60-70 seconds using a single OLCF TITAN computing node and 10-15 seconds using 8 nodes. These experimental results clearly show the potential of using high-end computing facilities for large-scale geospatial processing.
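    Stripped of the GPU blocking, indexing, and compression machinery, the end product of Zonal Statistics is simple to state. A serial NumPy sketch for zone means, assuming the polygons have already been rasterised to a zone-id layer (the paper instead matches polygons to raster blocks on the fly):

      import numpy as np

      def zonal_mean(values, zones):
          # values: 2-D measurement raster (e.g., elevation).
          # zones:  2-D raster of the same shape holding each cell's zone id.
          flat = zones.ravel()
          sums = np.bincount(flat, weights=values.ravel())
          counts = np.bincount(flat)
          return sums / np.maximum(counts, 1)  # mean per zone id

      values = np.array([[10., 20.], [30., 40.]])
      zones = np.array([[0, 0], [1, 1]])
      print(zonal_mean(values, zones))  # [15. 35.]

    The per-block histograms in the paper play the role of the bincount here, computed independently per block and therefore in parallel.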

  7. Monte Carlo MP2 on Many Graphical Processing Units.

    PubMed

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  8. Modeling post-wildfire hydrological processes with ParFlow

    NASA Astrophysics Data System (ADS)

    Escobar, I. S.; Lopez, S. R.; Kinoshita, A. M.

    2017-12-01

    Wildfires alter the natural processes within a watershed, such as surface runoff, evapotranspiration rates, and subsurface water storage. Post-fire hydrologic models are typically one-dimensional, empirically-based models or two-dimensional, conceptually-based models with lumped parameter distributions. These models are useful for modeling and prediction at the watershed outlet; however, they do not provide detailed, distributed hydrologic processes at the point scale within the watershed. This research uses ParFlow, a three-dimensional, distributed hydrologic model, to simulate post-fire hydrologic processes by representing the spatial and temporal variability of soil burn severity (via hydrophobicity) and vegetation recovery. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This work builds upon previous field and remote sensing analysis conducted for the 2003 Old Fire Burn in Devil Canyon, located in southern California (USA). The model is initially developed for a hillslope defined by a 500 m by 1000 m lateral extent. The subsurface reaches 12.4 m and is assigned a variable cell thickness to explicitly consider soil burn severity throughout the stages of recovery and vegetation regrowth. We consider four slope and eight hydrophobic layer configurations. Evapotranspiration is used as a proxy for vegetation regrowth and is represented by the satellite-based Simplified Surface Energy Balance (SSEBOP) product. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated at the point scale. Results will be used as a basis for developing and fine-tuning a watershed-scale model. Long-term simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management. Research is funded by the NASA-DIRECT STEM Program; travel expenses for this presentation are funded by CSU-LSAMP, which is supported by the National Science Foundation under Grant # HRD-1302873 and the CSU Office of the Chancellor.

  9. Magma transport in sheet intrusions of the Alnö carbonatite complex, central Sweden.

    PubMed

    Andersson, Magnus; Almqvist, Bjarne S G; Burchardt, Steffi; Troll, Valentin R; Malehmir, Alireza; Snowball, Ian; Kübler, Lutz

    2016-06-10

    Magma transport through the Earth's crust occurs dominantly via sheet intrusions, such as dykes and cone-sheets, and is fundamental to crustal evolution, volcanic eruptions and geochemical element cycling. However, reliable methods to reconstruct flow direction in solidified sheet intrusions have proved elusive. Anisotropy of magnetic susceptibility (AMS) in magmatic sheets is often interpreted as primary magma flow, but magnetic fabrics can be modified by post-emplacement processes, making interpretation of AMS data ambiguous. Here we present AMS data from cone-sheets in the Alnö carbonatite complex, central Sweden. We discuss six scenarios of syn- and post-emplacement processes that can modify AMS fabrics and offer a conceptual framework for systematic interpretation of magma movements in sheet intrusions. The AMS fabrics in the Alnö cone-sheets are dominantly oblate with magnetic foliations parallel to sheet orientations. These fabrics may result from primary lateral flow or from sheet closure at the terminal stage of magma transport. As the cone-sheets are discontinuous along their strike direction, sheet closure is the most probable process to explain the observed AMS fabrics. We argue that these fabrics may be common to cone-sheets and an integrated geology, petrology and AMS approach can be used to distinguish them from primary flow fabrics.

  10. Magma transport in sheet intrusions of the Alnö carbonatite complex, central Sweden

    PubMed Central

    Andersson, Magnus; Almqvist, Bjarne S. G.; Burchardt, Steffi; Troll, Valentin R.; Malehmir, Alireza; Snowball, Ian; Kübler, Lutz

    2016-01-01

    Magma transport through the Earth’s crust occurs dominantly via sheet intrusions, such as dykes and cone-sheets, and is fundamental to crustal evolution, volcanic eruptions and geochemical element cycling. However, reliable methods to reconstruct flow direction in solidified sheet intrusions have proved elusive. Anisotropy of magnetic susceptibility (AMS) in magmatic sheets is often interpreted as primary magma flow, but magnetic fabrics can be modified by post-emplacement processes, making interpretation of AMS data ambiguous. Here we present AMS data from cone-sheets in the Alnö carbonatite complex, central Sweden. We discuss six scenarios of syn- and post-emplacement processes that can modify AMS fabrics and offer a conceptual framework for systematic interpretation of magma movements in sheet intrusions. The AMS fabrics in the Alnö cone-sheets are dominantly oblate with magnetic foliations parallel to sheet orientations. These fabrics may result from primary lateral flow or from sheet closure at the terminal stage of magma transport. As the cone-sheets are discontinuous along their strike direction, sheet closure is the most probable process to explain the observed AMS fabrics. We argue that these fabrics may be common to cone-sheets and an integrated geology, petrology and AMS approach can be used to distinguish them from primary flow fabrics. PMID:27282420

  11. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output of unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest, building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase-space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated, and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results for the interaction of a Mach 4 shock with a square cavity and of a Mach 12 shock with a wedge in a channel.
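    DREAM's particle regeneration is specific to DSMC, but the averaging it builds toward is plain ensemble statistics. A toy NumPy illustration (synthetic data, not DSMC output) of how the scatter in a sampled profile shrinks with the number of short re-runs:

      import numpy as np

      rng = np.random.default_rng(2)
      true_profile = np.linspace(1.0, 5.0, 8)  # idealised macroscopic profile

      def max_error(n_reruns):
          # Each re-run samples the same output time with independent noise;
          # averaging across re-runs is the ensemble part of DREAM.
          reruns = true_profile + rng.normal(0.0, 0.5, (n_reruns, 8))
          return np.abs(reruns.mean(axis=0) - true_profile).max()

      for n in (1, 16, 64, 256):
          print(n, round(max_error(n), 3))  # error falls roughly as 1/sqrt(n)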

  12. Preoperative and post-operative sleep quality evaluation in rotator cuff tear patients.

    PubMed

    Serbest, Sancar; Tiftikçi, Uğur; Askın, Aydogan; Yaman, Ferda; Alpua, Murat

    2017-07-01

    The aim of this study was to examine the potential relationship between subjective sleep quality and degree of pain in patients with rotator cuff repair. Thirty-one patients who underwent rotator cuff repair prospectively completed the Pittsburgh Sleep Quality Index (PSQI), the Western Ontario Rotator Cuff Index, and the Constant and Murley shoulder scores before surgery and at 6 months after surgery. Preoperative demographic, clinical, and radiologic parameters were also evaluated. The study analysed 31 patients with a median age of 61 years. There was a significant preoperative versus post-operative difference in all PSQI global scores and subdivisions (p < 0.001). A statistically significant improvement was also determined by the Western Ontario Rotator Cuff Scale and the Constant and Murley shoulder scores (p < 0.001). Sleep disorders are commonly seen in patients with rotator cuff tear, and after repair there is an increase in the quality of sleep with a parallel improvement in shoulder function. However, no statistically significant correlation was found between sleep quality and either the arthroscopic procedure or the size of the tear. It is suggested that rotator cuff tear repair improves the quality of sleep and the quality of life. Level of evidence: IV.

  13. Immediate effects of acupuncture on biceps brachii muscle function in healthy and post-stroke subjects

    PubMed Central

    2012-01-01

    Background The effects of acupuncture on muscle function in healthy subjects are contradictory and cannot be extrapolated to post-stroke patients. This study evaluated the immediate effects of manual acupuncture on myoelectric activity and isometric force in healthy subjects and post-stroke patients. Methods A randomized clinical trial with parallel groups and a single-blinded study design was conducted with 32 healthy subjects and 15 post-stroke patients with chronic hemiparesis. Surface electromyography from the biceps brachii during maximal isometric voluntary tests was performed before and after 20 min of intermittent manual stimulation of the acupoints Quchi (LI11) or Tianquan (PC2). Pattern differentiation was performed by an automated method based on logistic regression equations. Results Healthy subjects showed a decrease in the root mean square (RMS) values after the stimulation of LI11 (pre: 1.392 ± 0.826 V; post: 0.612 ± 0.320 V; P = 0.002) and PC2 (pre: 1.494 ± 0.826 V; post: 0.623 ± 0.320 V; P = 0.001). Elbow flexion maximal isometric voluntary contraction (MIVC) was not significantly different after acupuncture stimulation of LI11 (pre: 22.2 ± 10.7 kg; post: 21.7 ± 9.5 kg; P = 0.288) or PC2 (pre: 18.8 ± 4.6 kg; post: 18.7 ± 6.0 kg; P = 0.468). Post-stroke patients did not exhibit any significant decrease in the RMS values after the stimulation of LI11 (pre: 0.627 ± 0.335 V; post: 0.530 ± 0.272 V; P = 0.187) or PC2 (pre: 0.601 ± 0.258 V; post: 0.591 ± 0.326 V; P = 0.398). Also, no significant decrease in the MIVC value was observed after the stimulation of LI11 (pre: 9.6 ± 3.9 kg; post: 9.6 ± 4.7 kg; P = 0.499) or PC2 (pre: 10.7 ± 5.6 kg; post: 10.2 ± 5.3 kg; P = 0.251). Different frequencies of patterns were observed between the healthy subject and post-stroke patient groups (χ2 = 9.759; P = 0.021). Conclusion Manual acupuncture provides sufficient neuromuscular stimuli to promote immediate changes in gross motor unit recruitment without repercussion on maximal force output in healthy subjects. Post-stroke patients did not exhibit a significant reduction in myoelectric activity or maximal force output after manual acupuncture; this needs further evaluation with a larger sample. Trial registration Brazilian Clinical Trials Registry RBR-5g7xqh. PMID:22417176

  14. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...
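    The CCHE2D solver itself is CUDA Fortran and is not reproduced here; as a language-neutral sketch of the Parallel Cyclic Reduction idea, here is a NumPy version for a tridiagonal system whose size is a power of two. Every reduction step is a purely elementwise update over all rows at once, which is what makes PCR map well onto GPU threads:

      import numpy as np

      def _shift(v, k, fill=0.0):
          # _shift(v, k)[i] == v[i - k], with `fill` where i - k is out of range.
          out = np.full(v.shape, fill)
          if k > 0:
              out[k:] = v[:-k]
          elif k < 0:
              out[:k] = v[-k:]
          else:
              out[:] = v
          return out

      def pcr_solve(a, b, c, d):
          # a: sub-diagonal (a[0] == 0), b: diagonal, c: super-diagonal
          # (c[-1] == 0), d: right-hand side; len(b) must be a power of two.
          a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
          n, s = len(b), 1
          while s < n:
              alpha = -a / _shift(b, s, fill=1.0)
              gamma = -c / _shift(b, -s, fill=1.0)
              a, b, c, d = (alpha * _shift(a, s),
                            b + alpha * _shift(c, s) + gamma * _shift(a, -s),
                            gamma * _shift(c, -s),
                            d + alpha * _shift(d, s) + gamma * _shift(d, -s))
              s *= 2
          return d / b  # all couplings eliminated: each equation is now 1x1

      n = 8
      a = np.r_[0.0, np.full(n - 1, -1.0)]
      b = np.full(n, 2.0)
      c = np.r_[np.full(n - 1, -1.0), 0.0]
      d = np.ones(n)
      x = pcr_solve(a, b, c, d)
      A = np.diag(b) + np.diag(c[:-1], 1) + np.diag(a[1:], -1)
      print(np.allclose(A @ x, d))  # True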

  15. Descriptive analysis of a 1:1 physiotherapy outpatient intervention post primary lumbar discectomy: one arm of a small-scale parallel randomised controlled trial across two UK sites.

    PubMed

    Rushton, A; Calcutt, A; Heneghan, N; Heap, A; White, L; Calvert, M; Goodwin, P

    2016-11-09

    There is a lack of high-quality evidence for physiotherapy post lumbar discectomy. Substantial heterogeneity in treatment effects may be explained by variation in the quality, administration and components of interventions. An optimised physiotherapy intervention may reduce heterogeneity and improve patient benefit. The objective was to describe, analyse and evaluate an optimised 1:1 physiotherapy outpatient intervention for patients following primary lumbar discectomy, to provide preliminary insights. A descriptive analysis of the intervention was embedded within an external pilot and feasibility trial at two UK spinal centres. Participants aged ≥18, post primary single-level lumbar discectomy, were recruited. The intervention encompassed education, advice, mobility and core stability exercises, progressive exercise, and encouragement of early return to work/activity. Patients received ≤8 sessions over ≤8 weeks, starting 4 weeks post surgery (baseline). Blinded outcome assessment at baseline and 12 weeks (post intervention) included the Roland Morris Disability Questionnaire; STarT Back data were collected at baseline. Statistical analyses summarised participant characteristics alongside preplanned descriptive analyses, and thematic analysis grouped related data. Twenty-two of 29 allocated participants received the intervention. STarT Back categorised n=16 (55%) participants as 'not at low risk'. Physiotherapists identified reasons for caution for 8 (36%) participants, most commonly risk of overdoing activity (n=4, 18%). There was no relationship between STarT Back and the physiotherapists' evaluation of caution. Physiotherapists identified 154 problems (mean (SD) 5.36 (2.63)). Those 'not at low risk' and/or requiring caution presented with more problems and required more sessions (mean (SD) 3.14 (1.16)). Patients present differently and therefore require tailored interventions; these differences may be identified using clinical reasoning and outcome data. ISRCTN33808269; post results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  16. Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aktulga, Hasan Metin; Coffman, Paul; Shan, Tzu-Ray

    2015-12-01

    Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on Mira, a state-of-the-art multi-core IBM Blue Gene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira, with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.

  17. Pre-liver transplant psychosocial evaluation predicts post-transplantation outcomes.

    PubMed

    Benson, Ariel A; Rowe, Mina; Eid, Ahmad; Bluth, Keren; Merhav, Hadar; Khalaileh, Abed; Safadi, Rifaat

    2018-08-01

    Psychosocial factors greatly impact the course of patients throughout the liver transplantation process. A retrospective chart review was performed of patients who underwent liver transplantation at Hadassah-Hebrew University Medical Center between 2002 and 2012. A composite psychosocial score was computed based on the patient's pre-transplant evaluation. Patients were divided into two groups based on compliance, support and insight: Optimal psychosocial score and Non-optimal psychosocial score. Post-liver transplantation survival and complication rates were evaluated. Out of 100 patients who underwent liver transplantation at the Hadassah-Hebrew University Medical Center between 2002 and 2012, 93% had a complete pre-liver transplant psychosocial evaluation in the medical record performed by professional psychologists and social workers. Post-liver transplantation survival was significantly higher in the Optimal group (85%) as compared to the Non-optimal group (56%, p = .002). Post-liver transplantation rate of renal failure was significantly lower in the Optimal group. No significant differences were observed between the groups in other post-transplant complications. A patient's psychosocial status may impact outcomes following transplantation as inferior psychosocial grades were associated with lower overall survival and increased rates of complications. Pre-liver transplant psychosocial evaluations are an important tool to help predict survival following transplantation.

  18. Massively parallel information processing systems for space applications

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1979-01-01

    NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.

  19. Parallel log structured file system collective buffering to achieve a compact representation of scientific and/or dimensional data

    DOEpatents

    Grider, Gary A.; Poole, Stephen W.

    2015-09-01

    Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.
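    The patent stays abstract about what a "representation describing the data pattern" looks like. As a toy reading only (hypothetical, not the patented method), one could detect a regular stride in per-process write offsets and store it compactly:

      def extract_pattern(offsets):
          # Collective writes of equal-sized records usually land at a regular
          # stride; in that case (start, stride, count) replaces the full list.
          if len(offsets) < 2:
              return {"kind": "list", "offsets": list(offsets)}
          stride = offsets[1] - offsets[0]
          if all(b - a == stride for a, b in zip(offsets, offsets[1:])):
              return {"kind": "strided", "start": offsets[0],
                      "stride": stride, "count": len(offsets)}
          return {"kind": "list", "offsets": list(offsets)}

      print(extract_pattern([0, 4096, 8192, 12288]))
      # {'kind': 'strided', 'start': 0, 'stride': 4096, 'count': 4}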

  20. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphics Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay datasets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in the training and prediction phases.
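    GPURFSCREEN's GPU code is not included in this record; a CPU analogue of the same workflow with scikit-learn, using synthetic descriptors and labels in place of a real bioassay set, might look like:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # Toy stand-in for a bioassay set: rows are molecules, columns are
      # descriptors (real screens use fingerprints); labels mark actives.
      rng = np.random.default_rng(0)
      X = rng.random((1000, 64))
      y = (X[:, :8].sum(axis=1) > 4.0).astype(int)  # synthetic activity rule

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # n_jobs=-1 parallelises tree building across CPU cores; GPURFSCREEN
      # moves the same embarrassingly parallel work onto GPU threads.
      clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))

    Random forests parallelise naturally because each tree is trained and evaluated independently, which is what makes the GPU port worthwhile.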

  1. schwimmbad: A uniform interface to parallel processing pools in Python

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Foreman-Mackey, Daniel

    2017-09-01

    Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
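    A short usage sketch of the interface the abstract describes, based on schwimmbad's documented choose_pool helper (treat the details as indicative rather than authoritative):

      from schwimmbad import choose_pool

      def worker(x):
          # Perfectly parallel: each task is independent, no communication.
          return x ** 2

      if __name__ == "__main__":
          # processes=4 selects a local multiprocessing pool; passing mpi=True
          # instead (and launching under mpiexec) runs the same script on a
          # cluster without touching worker().
          pool = choose_pool(mpi=False, processes=4)
          results = list(pool.map(worker, range(1000)))
          pool.close()
          print(sum(results))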

  2. Qualification of serological infectious disease assays for the screening of samples from deceased tissue donors.

    PubMed

    Kitchen, A D; Newham, J A

    2011-05-01

    Whilst some of the assays used for serological screening of post-mortem blood samples from deceased tissue donors in some countries have been specifically validated by the manufacturer for this purpose, a significant number of those currently in use globally have not. Although specificity has previously been considered a problem in the screening of such samples, we believe that ensuring sensitivity is more important. The aim of this study was to validate a broader range of assays for the screening of post-mortem blood samples from deceased tissue donors. Six microplate immunoassays currently in use within National Health Service Blood and Transplant (NHSBT) for the screening of blood, tissue and stem cell donations were included. Representative samples from confirmed positive donors were titrated in screen-negative post-mortem samples in parallel with normal pooled negative serum to determine whether there was any inhibition with the post-mortem samples. No significant differences (P < 0.005) were seen between the dilution curves obtained for the positive samples diluted in post-mortem samples and in normal pooled sera. Although small numbers of samples were studied, it can be surmised that post-mortem blood samples from deceased tissue donors, collected according to United Kingdom guidelines, are a suitable substrate for the assays evaluated. No diminution of reactivity was seen when dilution with sera from deceased donors was compared to dilution using pooled serum from live donors. In the absence of genuine low-titre positive post-mortem samples, the use of samples spiked with various levels of target material provides a means of qualifying serological screening assays used by NHSBT for the screening of post-mortem blood samples from deceased tissue donors.

  3. Comparison of Sprotte and Quincke needles with respect to post dural puncture headache and backache.

    PubMed

    Tarkkila, P J; Heine, H; Tervo, R R

    1992-01-01

    The objective of this study was to compare 24-gauge Sprotte and 25-gauge Quincke needles with respect to post dural puncture headache and backache. Three hundred ASA Physical Status I or II patients scheduled for minor orthopedic or urologic operations under spinal anesthesia were chosen for this randomized, prospective study at a university hospital and a city hospital. Anesthetic technique, intravenous fluids, and postoperative pain therapy were standardized. Patients were randomly divided into three equal groups. Spinal anesthesia was performed with either a 24-gauge Sprotte needle or a 25-gauge Quincke needle with the cutting bevel parallel or perpendicular to the dural fibers. Anesthesia could not be performed in three cases with the Sprotte needle and in one case with the Quincke needle. The most common complications were post dural puncture backache (18.0%), post dural puncture headache (8.2%), and non-postural headache (6.7%). No major complications occurred. The Quincke needle with bevel perpendicular to the dural fibers caused a 17.9% incidence of post dural puncture headache. The Quincke with bevel parallel to the dural fibers and the Sprotte needles caused similar post dural puncture headache rates (4.5% and 2.4%, respectively). Other factors associated with post dural puncture headache were young age, early ambulation, and sedation during spinal anesthesia. There were no significant differences between needles in the incidence of post dural puncture backache. Our data indicate that Quincke needles should not be used with the needle bevel inserted perpendicular to the dural fibers. The Sprotte needle does not solve the problem of post dural puncture headache and backache.

  4. The Use of AMET & Automated Scripts for Model Evaluation

    EPA Science Inventory

    Brief overview of EPA's new CMAQ website, to be launched publicly in June 2017, with details on the upcoming release of the Atmospheric Model Evaluation Tool (AMET) and the creation of automated scripts for post-processing and evaluating air quality model data.

  5. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. A new language, aCe C, is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe C and present a signal processing application (FFT).

  6. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward the ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  7. Connectionism, parallel constraint satisfaction processes, and gestalt principles: (re) introducing cognitive dynamics to social psychology.

    PubMed

    Read, S J; Vanman, E J; Miller, L C

    1997-01-01

    We argue that recent work in connectionist modeling, in particular the parallel constraint satisfaction processes that are central to many of these models, has great importance for understanding issues of both historical and current concern for social psychologists. We first provide a brief description of connectionist modeling, with particular emphasis on parallel constraint satisfaction processes. Second, we examine the tremendous similarities between parallel constraint satisfaction processes and the Gestalt principles that were the foundation for much of modern social psychology. We propose that parallel constraint satisfaction processes provide a computational implementation of the principles of Gestalt psychology that were central to the work of such seminal social psychologists as Asch, Festinger, Heider, and Lewin. Third, we describe how parallel constraint satisfaction processes have been applied to three areas that were key to the beginnings of modern social psychology and remain central today: impression formation and causal reasoning, cognitive consistency (balance and cognitive dissonance), and goal-directed behavior. We conclude by discussing implications of parallel constraint satisfaction principles for a number of broader issues in social psychology, such as the dynamics of social thought and the integration of social information within the narrow time frame of social interaction.
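    As a toy illustration only (not any of the cited models), a parallel constraint satisfaction network can be sketched as units that repeatedly pass activation over weighted links until the state settles:

      import numpy as np

      def settle(weights, clamp, steps=50):
          # weights[i, j] > 0: units i and j support each other; < 0: compete.
          # clamp holds fixed evidence values; NaN marks free units.
          act = np.where(np.isnan(clamp), 0.0, clamp)
          for _ in range(steps):
              act = np.where(np.isnan(clamp), np.tanh(weights @ act), clamp)
          return act

      # Units: 0 = "friendly", 1 = "hostile", 2 = "smiles" (observed evidence).
      W = np.array([[0.0, -1.0, 1.0],
                    [-1.0, 0.0, -0.5],
                    [1.0, -0.5, 0.0]])
      clamp = np.array([np.nan, np.nan, 1.0])
      print(settle(W, clamp).round(2))  # "friendly" settles high, "hostile" low

    The settled state is the network's best simultaneous satisfaction of the constraints, which is the sense in which such models implement Gestalt-style "good form".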

  8. Using Parallel Processing for Problem Solving.

    DTIC Science & Technology

    1979-12-01

    Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of contexts…

  9. Liquid rocket booster integration study. Volume 3: Study products. Part 2: Sections 8-19

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is part two of the study products section of the five volume series.

  10. Liquid rocket booster integration study. Volume 3, part 1: Study products

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is part one of the study products section of the five volume series.

  11. A Quality Improvement Activity to Promote Interprofessional Collaboration Among Health Professions Students

    PubMed Central

    Stevenson, Katherine; Busch, Angela; Scott, Darlene J.; Henry, Carol; Wall, Patricia A.

    2009-01-01

    Objectives To develop and evaluate a classroom-based curriculum designed to promote interprofessional competencies by having undergraduate students from various health professions work together on system-based problems using quality improvement (QI) methods and tools to improve patient-centered care. Design Students from 4 health care programs (nursing, nutrition, pharmacy, and physical therapy) participated in an interprofessional QI activity. In groups of 6 or 7, students completed pre-intervention and post-intervention reflection tools on attitudes relating to interprofessional teams, and a tool designed to evaluate group process. Assessment One hundred thirty-four students (76.6%) completed both self-reflection instruments, and 132 (74.2%) completed the post-course group evaluation instrument. Although already high prior to the activity, students' mean post-intervention reflection scores increased for 12 of 16 items. Post-intervention group evaluation scores reflected a high level of satisfaction with the experience. Conclusion Use of a quality-based case study and QI methodology was an effective approach to enhancing interprofessional experiences among students. PMID:19657497

  12. Digital Divide in Post-Primary Schools

    ERIC Educational Resources Information Center

    Marcus-Quinn, Ann; McGarr, Oliver

    2013-01-01

    This research study developed curricular specific open educational resources (OERs) for the teaching of poetry at Junior Certificate level in Irish post-primary schools. It aimed to capture the collaborative design and development process used in the development of the digital resources and describe and evaluate the implementation of the resources…

  13. Nonequilibrium thermodynamics and the transport phenomena in magnetically confined plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.

    1987-09-01

    The neoclassical theory of transport in magnetically confined plasmas is reviewed. The emphasis is placed on a set of relationships existing among the banana transport coefficients. The surface-averaged entropy production in such plasmas is evaluated. It is shown that neoclassical effects emerge from the entropy production due to parallel transport processes. The Pfirsch-Schlueter effect can be clearly interpreted as due to spatial fluctuations of parallel fluxes on a magnetic surface: the corresponding entropy production is the measure of these fluctuations. The banana fluxes can be formulated in a quasithermodynamic form in which the average entropy production is a bilinear form in the parallel fluxes and the conjugate generalized stresses. A formulation as a quadratic form in the thermodynamic forces is also possible, but leads to anomalies, which are discussed in some detail.

  14. Dopamine Inactivation Efficacy Related to Functional DAT1 and COMT Variants Influences Motor Response Evaluation

    PubMed Central

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Background Dopamine plays an important role in orienting, response anticipation and movement evaluation. Thus, we examined the influence of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of motor processing in a contingent negative variation (CNV) task. Methods 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as motor postimperative negative variation were assessed. Adolescents were genotyped for the COMT Val158Met and two DAT1 polymorphisms (variable number tandem repeats in the 3′-untranslated region and in intron 8). Results The results revealed a significant interaction between COMT and DAT1, indicating that COMT exerted stronger effects on lateralized motor post-processing (centro-parietal motor postimperative negative variation) in homozygous carriers of a DAT1 haplotype increasing DAT1 expression. Source analysis showed that the time interval 500–1000 ms after the motor response was specifically affected in contrast to preceding movement anticipation and programming stages, which were not altered. Conclusions Motor slow negative waves allow the genomic imaging of dopamine inactivation effects on cortical motor post-processing during response evaluation. This is the first report to point towards epistatic effects in the motor system during response evaluation, i.e. during the post-processing of an already executed movement rather than during movement programming. PMID:22649558

  15. Use of a parallel path nebulizer for capillary-based microseparation techniques coupled with an inductively coupled plasma mass spectrometer for speciation measurements

    NASA Astrophysics Data System (ADS)

    Yanes, Enrique G.; Miller-Ihli, Nancy J.

    2004-06-01

    A low flow, parallel path Mira Mist CE nebulizer designed for capillary electrophoresis (CE) was evaluated as a function of make-up solution flow rate, composition, and concentration, as well as the nebulizer gas flow rate. This research was conducted in support of a project related to the separation and quantification of cobalamin (vitamin B-12) species using microseparation techniques combined with inductively coupled plasma mass spectrometry (ICP-MS) detection. As such, Co signals were monitored during the nebulizer characterization process. Transient effects in the ICP were studied to evaluate the suitability of using gradients for microseparations, and the benefit of using methanol for the make-up solution was demonstrated. Co signal response changed significantly as a function of the methanol concentration of the make-up solution, and maximum signal enhancement was seen at 20% methanol with a 15 μl/min flow rate. Evaluation of the effect of changing the nebulizer gas flow rates showed that argon flows from 0.8 to 1.2 l/min were equally effective. The Mira Mist CE parallel path nebulizer was then evaluated for interfacing capillary microseparation techniques, including CE and micro high-performance liquid chromatography (μHPLC), to ICP-MS. A mixture of four cobalamin species standards (cyanocobalamin, hydroxocobalamin, methylcobalamin, and 5'-deoxyadenosylcobalamin) and the corrinoid analogue cobinamide dicyanide was successfully separated by both CE-ICP-MS and μHPLC-ICP-MS using the parallel path nebulizer with a make-up solution containing 20% methanol at a flow rate of 15 μl/min.

  16. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches to it have been proposed. The watershed transform is a well-known image segmentation tool, but it is also very data intensive. To accelerate watershed algorithms toward real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. An efficient parallel implementation requires exploring different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. We compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures, analyze the performance measurements of each parallel implementation, and assess the impact of the different sources of overhead on performance. We also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each programming model on the performance of the parallel implementations.
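
    As a minimal illustration of the shared-memory, data-parallel strategy surveyed here (not any of the paper's implementations), the Python sketch below splits an image into strips and processes them in a process pool; the per-tile work is a placeholder gradient computation, since a genuine parallel watershed also needs a merge step for labels that cross tile boundaries.

        import numpy as np
        from multiprocessing import Pool

        def process_tile(args):
            """Placeholder per-tile work (gradient magnitude); a real parallel
            watershed would flood each tile and later merge boundary labels."""
            tile, index = args
            gy, gx = np.gradient(tile.astype(float))
            return index, np.hypot(gy, gx)

        def parallel_tiles(image, n_strips=4, workers=4):
            # Split the image into horizontal strips, one task per strip.
            strips = np.array_split(image, n_strips, axis=0)
            tasks = [(s, i) for i, s in enumerate(strips)]
            with Pool(processes=workers) as pool:
                results = dict(pool.map(process_tile, tasks))
            # Reassemble the strips in their original order.
            return np.vstack([results[i] for i in range(n_strips)])

        if __name__ == "__main__":
            img = np.random.rand(512, 512)
            print(parallel_tiles(img).shape)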

  17. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    NASA Astrophysics Data System (ADS)

    khajehei, sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from uncertainty in the hydrologic model as well as from the epistemic or aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multivariate post-processing method for historical simulations and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is performed for the historical period 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods: Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16 degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observation and simulation in the historical period. Results show that application of the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.
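
    The copula-based Bayesian transfer function itself is not reproduced here; as a deliberately simpler stand-in that conveys the same idea (learning a mapping from simulated to observed flows from their paired historical distributions), the Python sketch below applies empirical quantile mapping. All names and the synthetic data are hypothetical.

        import numpy as np

        def quantile_map(sim_hist, obs_hist, sim_future):
            """Map simulated values onto the observed climatology: each value
            is replaced by the observed quantile at the same empirical
            non-exceedance probability."""
            sim_sorted = np.sort(sim_hist)
            obs_sorted = np.sort(obs_hist)
            # Empirical CDF position of each future simulated value
            probs = np.searchsorted(sim_sorted, sim_future) / len(sim_sorted)
            probs = np.clip(probs, 0.0, 1.0)
            # Inverse empirical CDF of the observations
            return np.quantile(obs_sorted, probs)

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 50.0, 3000)          # synthetic "observed" flows
        sim = 0.7 * rng.gamma(2.0, 50.0, 3000)    # biased "simulated" flows
        print(quantile_map(sim, obs, sim[:5]))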

  18. Evaluating the effectiveness of a practical inquiry-based learning bioinformatics module on undergraduate student engagement and applied skills.

    PubMed

    Brown, James A L

    2016-05-06

    A pedagogic intervention, in the form of an inquiry-based peer-assisted learning project (as a practical student-led bioinformatics module), was assessed for its ability to increase students' engagement, practical bioinformatic skills and process-specific knowledge. Elements assessed were process-specific knowledge following module completion, qualitative student-based module evaluation and the novelty, scientific validity and quality of written student reports. Bioinformatics is often the starting point for laboratory-based research projects; therefore, high importance was placed on allowing students to individually develop and apply processes and methods of scientific research. Students led a bioinformatic inquiry-based project (within a framework of inquiry), discovering, justifying and exploring individually discovered research targets. Detailed assessable reports were produced, displaying the data generated and the resources used. Mimicking research settings, undergraduates were divided into small collaborative groups, with distinctive central themes. The module was evaluated by assessing the quality and originality of the students' targets through reports, reflecting students' use and understanding of the concepts and tools required to generate their data. Furthermore, the bioinformatic module was assessed semi-quantitatively using pre- and post-module quizzes (a non-assessable activity, not contributing to their grade), which incorporated process- and content-specific questions (indicative of their use of the online tools). Qualitative assessment of the teaching intervention was performed using post-module surveys, exploring student satisfaction and other module-specific elements. Overall, a positive experience was found, as was a post-module increase in correct process-specific answers. In conclusion, an inquiry-based peer-assisted learning module increased students' engagement, practical bioinformatic skills and process-specific knowledge. © 2016 The International Union of Biochemistry and Molecular Biology, 44:304-313.

  19. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms and applications for visualization of molecular dynamics simulation outputs is discussed. Visual analysis of the results of such calculations is a complex and pressing problem, especially for large-scale simulations. To solve this challenging task it is necessary to decide: 1) which data parameters to render, 2) which type of visualization to choose, and 3) which development tools to use. In the present work an attempt to answer these questions was made. For visualization we propose drawing the particles at their 3D coordinates together with their velocity vectors, trajectories, and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. Parallel software was also developed that allows processing of large volumes of data in the 3D regions of the examined system. This software produces results in parallel with the calculations and finally assembles the rendered frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. This application gave us the opportunity to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
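
    A minimal sketch of the kind of Python/Mayavi rendering described, with synthetic placeholder arrays standing in for the MD output; only standard mayavi.mlab calls (points3d, quiver3d, contour3d) are used, and saved frames could afterwards be assembled into a video file as the abstract describes.

        import numpy as np
        from mayavi import mlab  # requires the Enthought Mayavi2 package

        rng = np.random.default_rng(1)
        x, y, z = rng.uniform(0, 10, (3, 500))   # particle positions
        u, v, w = rng.normal(0, 1, (3, 500))     # particle velocities

        mlab.figure(bgcolor=(0, 0, 0))
        mlab.points3d(x, y, z, scale_factor=0.15)          # draw particles
        mlab.quiver3d(x, y, z, u, v, w, scale_factor=0.3)  # velocity vectors

        # Volume density on a regular grid, shown as an isosurface (a fog-like
        # rendering would use mlab.pipeline.volume instead).
        gx, gy, gz = np.mgrid[0:10:30j, 0:10:30j, 0:10:30j]
        density = np.exp(-((gx - 5)**2 + (gy - 5)**2 + (gz - 5)**2) / 8.0)
        mlab.contour3d(gx, gy, gz, density, contours=4, opacity=0.3)
        mlab.savefig("frame_0000.png")  # frames can later be collected into a video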

  20. Exploring the Energy Landscapes of Protein Folding Simulations with Bayesian Computation

    PubMed Central

    Burkoff, Nikolas S.; Várnai, Csilla; Wells, Stephen A.; Wild, David L.

    2012-01-01

    Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. PMID:22385859
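
    The core loop of nested sampling is compact enough to sketch. The toy Python below is a generic illustration, not the paper's code: the Gaussian likelihood, uniform prior, and rejection sampler are all stand-ins. It accumulates the evidence Z from the geometrically shrinking prior volume X_i ≈ exp(-i/N); the likelihood-constrained replacement draw is the expensive step that parallel implementations farm out.

        import numpy as np

        rng = np.random.default_rng(42)

        def log_like(theta):
            # Toy likelihood: isotropic Gaussian centered at the origin.
            return -0.5 * np.sum(theta**2) / 0.1**2

        def nested_sampling(n_live=100, n_iter=1000, dim=2):
            live = rng.uniform(-1, 1, (n_live, dim))     # uniform prior on [-1,1]^dim
            live_logl = np.array([log_like(t) for t in live])
            log_z, log_x_prev = -np.inf, 0.0
            for i in range(1, n_iter + 1):
                worst = np.argmin(live_logl)             # lowest-likelihood live point
                log_x = -i / n_live                      # prior volume shrinks geometrically
                # Weight w_i = X_{i-1} - X_i, computed in log space.
                log_w = log_x_prev + np.log1p(-np.exp(log_x - log_x_prev))
                log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
                log_x_prev = log_x
                # Replace the worst point by a prior draw with higher likelihood
                # (plain rejection sampling; real codes use constrained samplers).
                while True:
                    cand = rng.uniform(-1, 1, dim)
                    if log_like(cand) > live_logl[worst]:
                        live[worst], live_logl[worst] = cand, log_like(cand)
                        break
            return log_z  # final live-point contribution omitted for brevity

        print("log-evidence estimate:", nested_sampling())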

  1. Exploring the energy landscapes of protein folding simulations with Bayesian computation.

    PubMed

    Burkoff, Nikolas S; Várnai, Csilla; Wells, Stephen A; Wild, David L

    2012-02-22

    Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Numerical propulsion system simulation: An interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Nichols, Lester D.; Chamis, Christos C.

    1991-01-01

    The tremendous progress being made in computational engineering and the rapid growth in computing power that is resulting from parallel processing now make it feasible to consider the use of computer simulations to gain insights into the complex interactions in aerospace propulsion systems and to evaluate new concepts early in the design process before a commitment to hardware is made. Described here is a NASA initiative to develop a Numerical Propulsion System Simulation (NPSS) capability.

  3. Numerical propulsion system simulation - An interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Nichols, Lester D.; Chamis, Christos C.

    1991-01-01

    The tremendous progress being made in computational engineering and the rapid growth in computing power that is resulting from parallel processing now make it feasible to consider the use of computer simulations to gain insights into the complex interactions in aerospace propulsion systems and to evaluate new concepts early in the design process before a commitment to hardware is made. Described here is a NASA initiative to develop a Numerical Propulsion System Simulation (NPSS) capability.

  4. An Annotated Bibliography on Tactical Map Display Symbology

    DTIC Science & Technology

    1989-08-01

    failure of attention to be focused on one element selectively in filtering tasks where only that one element was relevant to the discrimination. Failure of... The present study evaluates a class of models of human information processing made popular by Broadbent. A brief tachistoscopic display of one or two... 213-219. Two experiments were performed to test Neisser's two-stage model of recognition as applied to matching. Evidence of parallel processing was

  5. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25 Abstract This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images...environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than

  6. Crashworthiness simulations with DYNA3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schauer, D.A.; Hoover, C.G.; Kay, G.J.

    1996-04-01

    Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.

  7. Emissions Prediction and Measurement for Liquid-Fueled TVC Combustor with and without Water Injection

    NASA Technical Reports Server (NTRS)

    Brankovic, A.; Ryder, R. C., Jr.; Hendricks, R. C.; Liu, N.-S.; Shouse, D. T.; Roquemore, W. M.

    2005-01-01

    An investigation is performed to evaluate the performance of a computational fluid dynamics (CFD) tool for the prediction of the reacting flow in a liquid-fueled combustor that uses water injection for control of pollutant emissions. The experiment consists of a multisector, liquid-fueled combustor rig operated at different inlet pressures and temperatures, and over a range of fuel/air and water/fuel ratios. Fuel can be injected directly into the main combustion airstream and into the cavities. Test rig performance is characterized by combustor exit quantities such as temperature and emissions measurements using rakes, and by the overall pressure drop from upstream plenum to combustor exit. Visualization of the flame is performed using gray-scale and color still photographs and high-frame-rate videos. CFD simulations are performed utilizing a methodology that includes computer-aided design (CAD) solid modeling of the geometry, parallel processing over networked computers, and graphical and quantitative post-processing. Physical models include liquid fuel droplet dynamics and evaporation, with combustion modeled using a hybrid finite-rate chemistry model developed for Jet-A fuel. CFD and experimental results are compared for cases with cavity-only fueling, while numerical studies of combined cavity and main fueling were also performed. Predicted and measured trends in combustor exit temperature, CO and NOx are in general agreement at the different water/fuel loading rates, although quantitative differences exist between the predictions and measurements.

  8. The individual therapy process questionnaire: development and validation of a revised measure to evaluate general change mechanisms in psychotherapy.

    PubMed

    Mander, Johannes

    2015-01-01

    There is a dearth of measures specifically designed to assess empirically validated mechanisms of therapeutic change. To fill this research gap, the aim of the current study was to develop a measure that covers a large variety of empirically validated mechanisms of change, with corresponding versions for the patient and therapist. To develop an instrument that is based on several important change process frameworks, we combined two established change mechanisms instruments: the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP) and the Scale of the Therapeutic Alliance-Revised (STA-R). In our study, 457 psychosomatic inpatients completed the SACiP and the STA-R and diverse outcome measures in early, middle and late stages of psychotherapy. Data analyses were conducted using factor analyses and multilevel modelling. The psychometric properties of the resulting Individual Therapy Process Questionnaire (ITPQ) were generally good to excellent, as demonstrated by (a) exploratory factor analyses on both patient and therapist ratings, (b) confirmatory factor analyses (CFA) at later measurement times, (c) high internal consistencies and (d) significant outcome predictive effects. The parallel forms of the ITPQ provide opportunities to compare the patient and therapist perspectives for a broader range of facets of change mechanisms than was hitherto possible. Consequently, the measure can be applied in future research to more specifically analyse different change mechanism profiles in session-to-session development and outcome prediction. Key Practitioner Message: This article describes the development of an instrument that measures general mechanisms of change in psychotherapy from both the patient and therapist perspectives. Post-session item ratings from both the patient and therapist can be used as feedback to optimize therapeutic processes. We provide a detailed discussion of measures developed to evaluate therapeutic change mechanisms. Copyright © 2014 John Wiley & Sons, Ltd.

  9. An efficient finite element method for simulation of droplet spreading on a topologically rough surface

    NASA Astrophysics Data System (ADS)

    Luo, Li; Wang, Xiao-Ping; Cai, Xiao-Chuan

    2017-11-01

    We study numerically the dynamics of a three-dimensional droplet spreading on a rough solid surface using a phase-field model consisting of the coupled Cahn-Hilliard and Navier-Stokes equations with a generalized Navier boundary condition (GNBC). An efficient finite element method on unstructured meshes is introduced to cope with the complex geometry of the solid surfaces. We extend the GNBC to surfaces with complex geometry by including its weak form along different normal and tangential directions in the finite element formulation. The semi-implicit time discretization scheme results in a decoupled system for the phase function, the velocity, and the pressure. In addition, a mass compensation algorithm is introduced to preserve the mass of the droplet. To efficiently solve the decoupled systems, we present a highly parallel solution strategy based on domain decomposition techniques. We validate the newly developed solution method through extensive numerical experiments, particularly for those phenomena that cannot be captured by two-dimensional simulations. On a surface with circular posts, we study how wettability of the rough surface depends on the geometry of the posts. The contact line motion for a droplet spreading over periodic rough surfaces is also efficiently computed. Moreover, we study the spreading process of an impacting droplet on a microstructured surface; qualitative agreement is achieved between the numerical and experimental results. The parallel performance suggests that the proposed solution algorithm is scalable to over 4,000 processor cores with tens of millions of unknowns.
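
    For orientation, one common form of the coupled system behind such phase-field models (the notation is generic and may differ in detail from the paper's) is

        \partial_t \phi + \mathbf{u}\cdot\nabla\phi = M\,\nabla^2 \mu, \qquad
        \mu = -\epsilon\,\nabla^2\phi + \epsilon^{-1} f'(\phi), \qquad
        f(\phi) = \tfrac{1}{4}\,(1-\phi^2)^2,

        \rho\,(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}) = -\nabla p + \eta\,\nabla^2\mathbf{u} + \mu\,\nabla\phi, \qquad \nabla\cdot\mathbf{u} = 0,

    where \phi is the phase function, \mu the chemical potential, and M the mobility; the GNBC then supplies the wall condition relating the slip velocity to the tangential viscous stress plus the uncompensated Young stress at the moving contact line.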

  10. Re-Evaluating the Time Course of Gender and Phonological Encoding during Silent Monitoring Tasks Estimated by ERP: Serial or Parallel Processing?

    ERIC Educational Resources Information Center

    Camen, Christian; Morand, Stephanie; Laganaro, Marina

    2010-01-01

    Neurolinguistic and psycholinguistic studies suggest that grammatical (gender) and phonological information are retrieved independently and that gender can be accessed before phonological information. This study investigated the relative time courses of gender and phonological encoding using topographic evoked potentials mapping methods.…

  11. Prolonged recovery of sea otters from the Exxon Valdez oil spill? A re-examination of the evidence.

    PubMed

    Garshelis, David L; Johnson, Charles B

    2013-06-15

    Sea otters (Enhydra lutris) suffered major mortality after the Exxon Valdez oil spill in Prince William Sound, Alaska, 1989. We evaluate the contention that their recovery spanned over two decades. A model based on the otter age-at-death distribution suggested a large, spill-related population sink, but this has never been found, and other model predictions failed to match empirical data. Studies focused on a previously-oiled area where otter numbers (~80) stagnated post-spill; nevertheless, post-spill abundance exceeded the most recent pre-spill count, and population trends paralleled an adjacent, unoiled-lightly-oiled area. Some investigators posited that otters suffered chronic effects by digging up buried oil residues while foraging, but an ecological risk assessment indicated that exposure levels via this pathway were well below thresholds for toxicological effects. Significant confounding factors, including killer whale predation, subsistence harvests, human disturbances, and environmental regime shifts made it impossible to judge recovery at such a small scale. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  13. A Comparative Evaluation of Effect of Different Chemical Solvents on the Shear Bond Strength of Glass Fiber reinforced Post to Core Material

    PubMed Central

    Samadi, Firoza; Jaiswal, JN; Saha, Sonali

    2014-01-01

    Aim: To compare the effect of different chemical solvents on glass fiber reinforced posts and to study the effect of these solvents on the shear bond strength of glass fiber reinforced post to core material. Materials and methods: This study was conducted to evaluate the effect of three chemical solvents, i.e. silane coupling agent, 6% H2O2 and 37% phosphoric acid, on the shear bond strength of glass fiber post to a composite resin restorative material. The changes in post surface characteristics after the different treatments were also observed using scanning electron microscopy (SEM), and shear bond strength was analyzed using a universal testing machine (UTM). Results: Surface treatment with hydrogen peroxide had the greatest impact on the post surface, followed by 37% phosphoric acid and silane. On evaluation of the shear bond strength, 6% H2O2 exhibited the maximum shear bond strength, followed in descending order by 37% phosphoric acid and silane. Conclusion: The surface treatment of glass fiber post enhances the adhesion between the post and the composite resin used as core material. Failure of a fiber post and composite resin core often occurs at the junction between the two materials. This failure process requires better characterization. How to cite this article: Sharma A, Samadi F, Jaiswal JN, Saha S. A Comparative Evaluation of Effect of Different Chemical Solvents on the Shear Bond Strength of Glass Fiber Reinforced Post to Core Material. Int J Clin Pediatr Dent 2014;7(3):192-196. PMID:25709300

  14. 77 FR 47573 - Approval and Promulgation of Implementation Plans; Mississippi; 110(a)(2)(E)(ii) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... Mississippi Department of Environmental Quality (MDEQ), on July 13, 2012, for parallel processing. This... of Contents I. What is parallel processing? II. Background III. What elements are required under... Executive Order Reviews I. What is parallel processing? Consistent with EPA regulations found at 40 CFR Part...

  15. Double Take: Parallel Processing by the Cerebral Hemispheres Reduces Attentional Blink

    ERIC Educational Resources Information Center

    Scalf, Paige E.; Banich, Marie T.; Kramer, Arthur F.; Narechania, Kunjan; Simon, Clarissa D.

    2007-01-01

    Recent data have shown that parallel processing by the cerebral hemispheres can expand the capacity of visual working memory for spatial locations (J. F. Delvenne, 2005) and attentional tracking (G. A. Alvarez & P. Cavanagh, 2005). Evidence that parallel processing by the cerebral hemispheres can improve item identification has remained elusive.…

  16. Parallel artificial liquid membrane extraction as an efficient tool for removal of phospholipids from human plasma.

    PubMed

    Ask, Kristine Skoglund; Bardakci, Turgay; Parmer, Marthe Petrine; Halvorsen, Trine Grønhaug; Øiestad, Elisabeth Leere; Pedersen-Bjergaard, Stig; Gjelstad, Astrid

    2016-09-10

    Generic Parallel Artificial Liquid Membrane Extraction (PALME) methods for non-polar basic and non-polar acidic drugs from human plasma were investigated with respect to phospholipid removal. In both cases, extractions in 96-well format were performed from plasma (125μL), through 4μL organic solvent used as supported liquid membranes (SLMs), and into 50μL aqueous acceptor solutions. The acceptor solutions were subsequently analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using in-source fragmentation and monitoring the m/z 184→184 transition for investigation of phosphatidylcholines (PC), sphingomyelins (SM), and lysophosphatidylcholines (Lyso-PC). In both generic methods, no phospholipids were detected in the acceptor solutions. Thus, PALME appeared to be highly efficient for phospholipid removal. To further support this, qualitative (post-column infusion) and quantitative matrix effects were investigated with fluoxetine, fluvoxamine, and quetiapine as model analytes. No signs of matrix effects were observed. Finally, PALME was evaluated for the aforementioned drug substances, and data were in accordance with European Medicines Agency (EMA) guidelines. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited resources and prevent other tasks from being carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture-naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (the lexical frequency of the T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks, but only at the expense of extraordinarily high costs. (c) 2015 APA, all rights reserved.

  18. Graphical Representation of Parallel Algorithmic Processes

    DTIC Science & Technology

    1990-12-01

    interface with the AAARF main process. The source code for the AAARF class-common library is in the common subdirectory and consists of the following files... for public release; distribution unlimited AFIT/GCE/ENG/90D-07 Graphical Representation of Parallel Algorithmic Processes THESIS Presented to the...goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor

  19. Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran

    NASA Astrophysics Data System (ADS)

    Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid

    2018-04-01

    The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than other products. However, products of all centers underestimated precipitation in high precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in forecasts and demands application of bias correction techniques. Based on dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared with ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at the other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of the post-processed predictions was better than that of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers could be classified as medium over Iran, and post-processing of predictions is recommended to improve their quality.
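
    Of the two interpolation schemes mentioned, inverse distance weighting is simple enough to sketch. The Python below is a generic illustration (not the study's code); the power parameter p and the sample coordinates are arbitrary choices.

        import numpy as np

        def idw(xy_known, values, xy_query, p=2.0):
            """Inverse Distance Weighting: each query point gets a weighted
            average of the known values, with weights 1/d**p."""
            d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
            d = np.maximum(d, 1e-12)          # avoid division by zero at known points
            w = 1.0 / d**p
            return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

        grid_points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        forecast = np.array([12.0, 8.0, 5.0])            # values at grid points
        print(idw(grid_points, forecast, np.array([[0.5, 0.5]])))  # value at a station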

  20. Evaluation protocol for amusia: Portuguese sample.

    PubMed

    Peixoto, Maria Conceição; Martins, Jorge; Teixeira, Pedro; Alves, Marisa; Bastos, José; Ribeiro, Carlos

    2012-12-01

    Amusia is a disorder that affects the processing of music. Part of this processing happens in the primary auditory cortex. The study of this condition allows us to evaluate the central auditory pathways. To explore the diagnostic evaluation tests of amusia, the authors propose an evaluation protocol for patients with suspected amusia (after brain injury or complaints of poor musical perception), in parallel with the assessment of central auditory processing already implemented in the department. The Montreal Battery of Evaluation of Amusia was the basis for the selection of the tests. From this comprehensive battery of tests we selected some of the musical examples to evaluate different musical aspects, including memory and perception of music and the ability to recognize and discriminate music. In terms of memory, there is a test assessing delayed memory, adapted to Portuguese culture. Prospective study. Although still experimental, with the possibility of adjustments in the assessment, we believe that this assessment, combined with the study of central auditory processing, will allow us to understand some central lesions and congenital or acquired limitations of auditory perception.

  1. Efficacy and Safety of Roflumilast in Korean Patients with COPD

    PubMed Central

    Lee, Jae Seung; Hong, Yoon Ki; Park, Tae Sun; Lee, Sei Won; Oh, Yeon-Mok

    2016-01-01

    Purpose: Roflumilast is the only oral phosphodiesterase 4 inhibitor approved to treat chronic obstructive pulmonary disease (COPD) patients [post-bronchodilator forced expiratory volume in 1 second (FEV1) <50% predicted] with chronic bronchitis and a history of frequent exacerbations. This study evaluated the efficacy and safety of roflumilast in Korean patients with COPD and compared the efficacy based on the severity of airflow limitation. Materials and Methods: A post-hoc subgroup analysis was performed in Korean COPD patients participating in JADE, a 12-week, double-blinded, placebo-controlled, parallel-group, phase III trial in Asia. The primary efficacy endpoint was the mean [least-squares mean adjusted for covariates (LSMean)] change in post-bronchodilator FEV1 from baseline to each post-randomization visit. Safety endpoints included adverse events (AEs) and changes in laboratory values, vital signs, and electrocardiograms. Results: A total of 260 Korean COPD patients were recruited, of whom 207 were randomized to roflumilast (n=102) or placebo (n=105) treatment. After 12 weeks, LSMean post-bronchodilator FEV1 increased by 43 mL for patients receiving roflumilast and decreased by 60 mL for those taking placebo. Adverse events were more common in the roflumilast group than in the placebo group; however, the types and frequency of AEs were comparable to those reported in previous studies. Conclusion: Roflumilast significantly improved lung function with a tolerable safety profile in Korean COPD patients irrespective of the severity of airflow limitation. PMID:27189287

  2. Cobboldia elephantis (Cobbold, 1866) larval infestation in an Indian elephant (Elephas maximus).

    PubMed

    Javare Gowda, Ananda K; Dharanesha, N K; Giridhar, P; Byre Gowda, S M

    2017-06-01

    In the present study, a post-mortem examination was conducted on a female elephant, aged about 37 years, that died at Rajeev Gandhi National Park, Hunsur, Mathigoodu Elephant Camp, Karnataka state. The animal had suffered from diarrhoea, anorexia and dehydration, and had been unable to walk for about one week before death; it was treated with antibiotics and fluid therapy for three days. The post-mortem examination revealed that the gastric mucosa was severely congested and hyperaemic, with numerous stomach bots attached to the mucosa. The bots were recovered from the gastric mucosa and processed for species identification. The posterior spiracles of the bots showed three longitudinal parallel slits in each spiracle, the abdominal segments had a row of belt-like triangular spines, and the anterior end had two powerful oral hooks with a cephalo-pharyngeal skeleton. Based on these morphological characters, the bots were identified as Cobboldia elephantis. This appears to be the first report of C. elephantis in a free-ranging wild elephant from Karnataka state.

  3. PSPs and ERPs: applying the dynamics of post-synaptic potentials to individual units in simulation of temporally extended Event-Related Potential reading data.

    PubMed

    Laszlo, Sarah; Armstrong, Blair C

    2014-05-01

    The Parallel Distributed Processing (PDP) framework is built on neural-style computation, and is thus well-suited for simulating the neural implementation of cognition. However, relatively little cognitive modeling work has concerned neural measures, instead focusing on behavior. Here, we extend a PDP model of reading-related components in the Event-Related Potential (ERP) to simulation of the N400 repetition effect. We accomplish this by incorporating the dynamics of cortical post-synaptic potentials--the source of the ERP signal--into the model. Simulations demonstrate that application of these dynamics is critical for model elicitation of repetition effects in the time and frequency domains. We conclude that by advancing a neurocomputational understanding of repetition effects, we are able to posit an interpretation of their source that is both explicitly specified and mechanistically different from the well-accepted cognitive one. Copyright © 2014 Elsevier Inc. All rights reserved.
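
    The paper's exact unit dynamics are not reproduced here, but the general idea, i.e. letting each unit's net input rise and decay over time like a post-synaptic potential instead of being recomputed instantaneously, can be sketched as a leaky integrator wrapped around standard sigmoid units (all constants and names are hypothetical).

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def run_units(W, external, n_steps=50, tau=5.0):
            """Leaky integration of net input: du/dt = (net - u) / tau.
            The decaying u plays the role of a post-synaptic potential, so
            unit states evolve over time instead of switching instantaneously."""
            n = W.shape[0]
            u = np.zeros(n)                  # integrated net input per unit
            a = sigmoid(u)                   # unit activations
            trace = []
            for _ in range(n_steps):
                net = W @ a + external
                u += (net - u) / tau         # exponential (PSP-like) approach to net
                a = sigmoid(u)
                trace.append(a.copy())
            return np.array(trace)           # activation time course, (n_steps, n_units)

        rng = np.random.default_rng(0)
        W = rng.normal(0, 0.5, (10, 10))     # random recurrent weights
        ext = rng.normal(0, 1.0, 10)         # constant external input
        print(run_units(W, ext).shape)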

  4. New bone post-processing tools in forensic imaging: a multi-reader feasibility study to evaluate detection time and diagnostic accuracy in rib fracture assessment.

    PubMed

    Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David

    2017-03-01

    The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools in rib fracture assessment of forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different clinical and forensic experience, who were blinded to the autopsy results, evaluated all cases with modality and case order randomized. Each reader's rib fracture assessment was evaluated against autopsy and against a CT consensus read as the radiologic reference. A detailed evaluation of the relevant test parameters revealed better agreement with the CT consensus read than with the autopsy. Modality C was significantly the quickest rib fracture detection modality, despite slightly reduced statistical test parameters compared with modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared to standard autopsy alone. The easy-to-use eagle tool is suited for an initial rib fracture screening prior to autopsy and can therefore be beneficial for forensic pathologists.

  5. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, the design of a low-power heterogeneous wearable multi-sensor system, built with a Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM-based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle.

  6. Rigidity and retention of root canal posts.

    PubMed

    Purton, D G; Chandler, N P; Love, R M

    1998-03-28

    To test the rigidity and the retention in roots of two parallel root canal posts: a spiral-vented titanium post and a spirally serrated, hollow, stainless steel post. A serrated, stainless steel post was used as the control. A three-point bending test was used to assess rigidity. To test retention, ten posts of each type were cemented into the roots of extracted teeth with a resin cement, and the tensile loads required to remove them were compared using Student's t and Mann-Whitney U tests. The serrated stainless steel posts were significantly more rigid than either of the other types; the titanium posts and the hollow stainless steel posts did not differ significantly in rigidity. The serrated stainless steel posts were also significantly better retained than either of the other types, and the titanium posts showed greater retention than the hollow posts. Within the limits of the study, the serrated stainless steel posts were superior to the two newer types in terms of rigidity and retention in roots.

  7. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much lower, further limiting instances where it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of the number of iterations with an increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times.
Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)
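
    As a generic illustration of the parallel block Jacobi (PBJ) idea referenced above, in which each sub-domain solves its own block exactly while coupling terms are lagged from the previous iterate, the Python sketch below applies the scheme to a small dense, diagonally dominant system; the actual ITMM operators are of course far more structured, and each block solve would run on its own process.

        import numpy as np

        def block_jacobi(A, b, n_blocks=4, tol=1e-10, max_iter=500):
            """Each block of unknowns is solved with its diagonal block of A,
            using off-block (interface) contributions from the previous iterate."""
            n = len(b)
            idx = np.array_split(np.arange(n), n_blocks)   # sub-domain index sets
            x = np.zeros(n)
            for _ in range(max_iter):
                x_new = np.empty_like(x)
                for I in idx:
                    comp = np.setdiff1d(np.arange(n), I)   # everything outside block I
                    # Lagged coupling: off-block terms use the old iterate.
                    r = b[I] - A[np.ix_(I, comp)] @ x[comp]
                    x_new[I] = np.linalg.solve(A[np.ix_(I, I)], r)
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        rng = np.random.default_rng(3)
        A = rng.random((20, 20)) + 20 * np.eye(20)   # diagonal dominance => convergence
        b = rng.random(20)
        print(np.allclose(block_jacobi(A, b), np.linalg.solve(A, b)))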

  8. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  9. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
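
    Reading the definition off the sentence above (the notation is ours, not the paper's), the task ratio can be written as

        r = \frac{D_{\text{task}}}{\mathbb{E}[S_{\text{local}}]},

    i.e., the service demand of a single parallel task divided by the mean service demand of the non-parallel workstation processes that preempt it; the larger r is, the more owner-process interruptions each task amortizes, so large r favors running parallel jobs on the non-dedicated cluster.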

  10. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  11. Compensatory Effort Parallels Midbrain Deactivation during Mental Fatigue: An fMRI Study

    PubMed Central

    Nakagawa, Seishu; Sugiura, Motoaki; Akitsuki, Yuko; Hosseini, S. M. Hadi; Kotozaki, Yuka; Miyauchi, Carlos Makoto; Yomogida, Yukihito; Yokoyama, Ryoichi; Takeuchi, Hikaru; Kawashima, Ryuta

    2013-01-01

    Fatigue reflects the functioning of our physiological negative feedback system, which prevents us from overworking. When fatigued, however, we often try to suppress this system in an effort to compensate for the resulting deterioration in performance. Previous studies have suggested that the effect of fatigue on neurovascular demand may be influenced by this compensatory effort. The primary goal of the present study was to isolate the effect of compensatory effort on neurovascular demand. Healthy male volunteers participated in a series of visual and auditory divided attention tasks that steadily increased fatigue levels for 2 hours. Functional magnetic resonance imaging scans were performed during the first and last quarter of the study (Pre and Post sessions, respectively). Tasks with low and high attentional load (Low and High conditions, respectively) were administered in alternating blocks. We assumed that compensatory effort would be greater under the High-attentional-load condition compared with the Low-load condition. The difference was assessed during the two sessions. The effect of compensatory effort on neurovascular demand was evaluated by examining the interaction between load (High vs. Low) and time (Pre vs. Post). Significant fatigue-induced deactivation (i.e., Pre>Post) was observed in the frontal, temporal, occipital, and parietal cortices, in the cerebellum, and in the midbrain in both the High and Low conditions. The interaction was significantly greater in the High than in the Low condition in the midbrain. No significant fatigue-induced activation (i.e., Pre<Post) was observed. The greater midbrain deactivation under the high-load condition may reflect suppression of the negative feedback system that normally triggers recuperative rest to maintain homeostasis. PMID:23457592

  12. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs and 3D simulations on a ventricular mouse mesh, i.e., complex geometry, under sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by a factor of approximately 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
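
    The paper's space-time adaptive algorithm is not reproduced here; as a generic illustration of time adaptivity in an excitable-media setting, the Python sketch below integrates the FitzHugh-Nagumo cell model with step-doubling error control (one full Euler step checked against two half steps), so the step size shrinks during the action-potential upstroke and grows in quiescent phases. All tolerances and parameters are illustrative.

        import numpy as np

        def fhn_rhs(y, I_ext=0.5, a=0.7, b=0.8, tau=12.5):
            """FitzHugh-Nagumo right-hand side (v: potential, w: recovery)."""
            v, w = y
            return np.array([v - v**3 / 3 - w + I_ext, (v + a - b * w) / tau])

        def euler(y, dt):
            return y + dt * fhn_rhs(y)

        def adaptive_integrate(t_end=100.0, dt=0.1, tol=1e-4):
            t, y, steps = 0.0, np.array([-1.0, 1.0]), 0
            while t < t_end:
                dt = min(dt, t_end - t)        # do not step past the end time
                full = euler(y, dt)
                half = euler(euler(y, dt / 2), dt / 2)
                err = np.max(np.abs(full - half))   # step-doubling error estimate
                if err < tol or dt < 1e-6:     # accept the step
                    t, y, steps = t + dt, half, steps + 1
                    if err < tol / 4:          # solution smooth: grow the step
                        dt *= 2.0
                else:                          # too much local error: shrink
                    dt /= 2.0
            return y, steps

        y_final, n_steps = adaptive_integrate()
        print(y_final, n_steps)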

  13. Detection of fetal sex chromosome aneuploidy by massively parallel sequencing of maternal plasma DNA: initial experience in a Chinese hospital.

    PubMed

    Yao, H; Jiang, F; Hu, H; Gao, Y; Zhu, Z; Zhang, H; Wang, Y; Guo, Y; Liu, L; Yuan, Y; Zhou, L; Wang, J; Du, B; Qu, N; Zhang, R; Dong, Y; Xu, H; Chen, F; Jiang, H; Liu, Y; Zhang, L; Tian, Z; Liu, Q; Zhang, C; Pan, X; Yang, S; Zhao, L; Wang, W; Liang, Z

    2014-07-01

    To evaluate the performance of a massively parallel sequencing (MPS)-based test in detecting fetal sex chromosome aneuploidy (SCA) and to present a comprehensive clinical counseling protocol for SCA-positive patients. This was a retrospective study of a large cohort of 5950 singleton pregnancies which underwent MPS-based testing as a prenatal screening test for trisomies 21, 18 and 13, with X and Y chromosomes as secondary findings, in Southwest Hospital in China. Women with MPS-based SCA-positive results were offered the choice of knowing those results, and those who chose to know commenced a two-stage post-test clinical counseling protocol. In Stage 1, general information about SCA was given, and women were given the option of invasive testing for confirmation of findings; in Stage 2, those who had chosen to undergo invasive testing were informed about the specific SCA affecting their fetus and their management options. Thirty-three cases were classified as SCA-positive by MPS-based testing. After Stage 1 of the two-stage post-test clinical counseling session, 33 (100%) of these pregnant women chose to know the screening test results, and 25 (75.76%) underwent an invasive diagnostic procedure and karyotype analysis, in one of whom karyotyping failed. In 13 cases, karyotyping confirmed the MPS-based test results (two X0 cases, seven XXX cases, three XXY cases and one XYY case), giving a positive predictive value of 54.17% (13/24 cases confirmed by karyotyping). After post-test clinical counseling session Stage 2, seven women chose to terminate the pregnancy: one X0 case, two XXX cases, the three XXY cases and the single XYY case. Six women decided to continue with pregnancy: one X0 case and five XXX cases. Our study showed the feasibility of clinical application of the MPS-based test in the non-invasive detection of fetal SCA. Together with a two-stage post-test clinical counseling protocol, it leads to a well-informed decision-making procedure. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
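
    The positive predictive value quoted above follows directly from the counts in the abstract; a one-line check using the reported numbers:

        # PPV = confirmed true positives / all karyotype-confirmed positives.
        confirmed_true = 13   # 2 X0 + 7 XXX + 3 XXY + 1 XYY
        karyotyped = 24       # 25 invasive procedures, 1 karyotyping failure
        print(f"PPV = {confirmed_true / karyotyped:.2%}")  # 54.17%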

  14. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
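
    The core idea, updating and resubmitting each particle as soon as its evaluation returns instead of synchronizing the whole swarm, can be sketched with a standard process pool. This is a hedged illustration under assumed coefficients (inertia 0.7, cognitive and social weights 1.5), not the implementation benchmarked above.

        # Asynchronous parallel PSO sketch: each finished evaluation immediately
        # triggers that particle's velocity/position update and resubmission.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED

        def objective(x):                        # stand-in for an expensive analysis
            return float(np.sum(x ** 2))

        def async_pso(dim=4, n_particles=8, n_evals=200, workers=4, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
            v = np.zeros_like(x)                             # velocities
            pbest_x, pbest_f = x.copy(), np.full(n_particles, np.inf)
            gbest_x, gbest_f = x[0].copy(), np.inf
            done = 0
            with ProcessPoolExecutor(max_workers=workers) as pool:
                futures = {pool.submit(objective, x[i]): i for i in range(n_particles)}
                while futures:
                    finished, _ = wait(futures, return_when=FIRST_COMPLETED)
                    for fut in finished:
                        i = futures.pop(fut)
                        f, done = fut.result(), done + 1
                        if f < pbest_f[i]:
                            pbest_f[i], pbest_x[i] = f, x[i].copy()
                        if f < gbest_f:
                            gbest_f, gbest_x = f, x[i].copy()
                        # update this particle now, using the current global best
                        r1, r2 = rng.random(dim), rng.random(dim)
                        v[i] = (0.7 * v[i] + 1.5 * r1 * (pbest_x[i] - x[i])
                                + 1.5 * r2 * (gbest_x - x[i]))
                        x[i] = x[i] + v[i]
                        if done + len(futures) < n_evals:    # stay within budget
                            futures[pool.submit(objective, x[i])] = i
            return gbest_x, gbest_f

        if __name__ == "__main__":
            print(async_pso())

    Because no iteration barrier exists, slow evaluations no longer stall the rest of the swarm, which is precisely the heterogeneous-cluster scenario the paper targets.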

  15. Health trainer-led motivational intervention plus usual care for people under community supervision compared with usual care alone: a study protocol for a parallel-group pilot randomised controlled trial (STRENGTHEN).

    PubMed

    Thompson, Tom P; Callaghan, Lynne; Hazeldine, Emma; Quinn, Cath; Walker, Samantha; Byng, Richard; Wallace, Gary; Creanor, Siobhan; Green, Colin; Hawton, Annie; Annison, Jill; Sinclair, Julia; Senior, Jane; Taylor, Adrian H

    2018-06-04

    People with experience of the criminal justice system typically have worse physical and mental health, lower levels of mental well-being and have less healthy lifestyles than the general population. Health trainers have worked with offenders in the community to provide support for lifestyle change, enhance mental well-being and signpost to appropriate services. There has been no rigorous evaluation of the effectiveness and cost-effectiveness of providing such community support. This study aims to determine the feasibility and acceptability of conducting a randomised trial and delivering a health trainer intervention to people receiving community supervision in the UK. A multicentre, parallel, two-group randomised controlled trial recruiting 120 participants with 1:1 individual allocation to receive support from a health trainer and usual care or usual care alone, with mixed methods process evaluation. Participants receive community supervision from an offender manager in either a Community Rehabilitation Company or the National Probation Service. If they have served a custodial sentence, then they have to have been released for at least 2 months. The supervision period must have at least 7 months left at recruitment. Participants are interested in receiving support to change diet, physical activity, alcohol use and smoking and/or improve mental well-being. The primary outcome is mental well-being with secondary outcomes related to smoking, physical activity, alcohol consumption and diet. The primary outcome will inform sample size calculations for a definitive trial. The study has been approved by the Health and Care Research Wales Ethics Committee (REC reference 16/WA/0171). Dissemination will include publication of the intervention development process and findings for the stated outcomes, parallel process evaluation and economic evaluation in peer-reviewed journals. Results will also be disseminated to stakeholders and trial participants. ISRCTN80475744; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Restructuring research objectives from a technical readiness demonstration program to an investigation of high risk, high payoff activities associated with producing photovoltaic modules using non-CZ sheet material is reported. Deletion of the module frame in favor of a frameless design, and modification in cell series parallel electrical interconnect configuration are reviewed. A baseline process sequence was identified for the fabrication of modules using the selected dendritic web sheet material, and economic evaluations of the sequence were completed.

  17. Proceedings on Expert Systems Workshop Held in Pacific Grove, California on 16-18 April 1986

    DTIC Science & Technology

    1986-04-01

    [Scanned-document record; only fragments of the text are recoverable: when the list is empty, the scheduler process is guaranteed to be waiting, so fewer evaluator cycles are wasted waiting for the scheduler process; features implemented in parallel make them easy to port to alternative new machines; features unimplemented at present are scheduled for phase 2. Report form fragments mark the document UNCLASSIFIED and approved for public release.]

  18. Performance perceptions and self-focused attention predict post-event processing after a real-life social performance situation.

    PubMed

    Helbig-Lang, Sylvia; Poels, Vanja; Lincoln, Tania M

    2016-11-01

    Cognitive approaches to social anxiety suggest that an excessive brooding about one's performance in a social situation (post-event processing; PEP) is involved in the maintenance of anxiety. To date, most studies investigating PEP were conducted in laboratory settings. The present study sought to replicate previous findings on predictors of PEP after a naturalistic social performance situation. Sixty-five students, who had to give an evaluated presentation for credits, completed measures of trait social anxiety. Immediately after their presentation, participants rated state anxiety and attentional focus during the presentation, and provided an overall evaluation of their performance. One week after the presentation, they rated PEP during the preceding week, and reappraised their performance. Regression analyses demonstrated that the performance ratings after and self-focused attention during the presentation were unique predictors of PEP over and above the effects of trait and state anxiety. There was no evidence that PEP was associated with a biased recall of individual performance evaluations. The results support cognitive theories that emphasize the importance of negative self-perceptions in the development of social anxiety and related processes, and underline self-focused attention and self-evaluative processes as important targets during treatment.

  19. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
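
    The gradual charge injection described above can be mimicked in a few lines: input spikes increment a synaptic conductance that decays with its time constant, and the membrane integrates it. Below is a minimal time-driven sketch with illustrative parameters (not the SRM hardware model), vectorized over neurons in the spirit of the parallel processing units.

        # Time-driven sketch: exponentially decaying synaptic conductances drive
        # leaky membranes; all neurons are updated in one vectorized step.
        import numpy as np

        def simulate(spikes_in, n=4, dt=1e-4, t_end=0.1,
                     tau_syn=5e-3, tau_m=20e-3, g_jump=50.0, v_thresh=1.0):
            g = np.zeros(n)                      # synaptic conductances
            v = np.zeros(n)                      # membrane potentials
            out = []
            for step in range(int(t_end / dt)):
                t = step * dt
                g += g_jump * spikes_in(t, n)    # input spikes open conductances
                g -= dt * g / tau_syn            # exponential conductance decay
                v += dt * (g - v / tau_m)        # gradual charge injection
                fired = v >= v_thresh
                v[fired] = 0.0                   # reset after an output spike
                out.extend((t, i) for i in np.where(fired)[0])
            return out

        rng = np.random.default_rng(1)
        poisson = lambda t, n: (rng.random(n) < 200 * 1e-4).astype(float)  # ~200 Hz at dt = 1e-4
        print(simulate(poisson)[:5])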

  20. Detection system of capillary array electrophoresis microchip based on optical fiber

    NASA Astrophysics Data System (ADS)

    Yang, Xiaobo; Bai, Haiming; Yan, Weiping

    2009-11-01

    To meet the demands of post-genomic-era studies and of large parallel detection for epidemic diseases and drug screening, high-throughput microfluidic detection systems are urgently needed. A scanning laser-induced fluorescence detection system based on optical fiber has been established, using a green laser-diode double-pumped solid-state laser as the excitation source. It includes a laser-induced fluorescence detection subsystem, a capillary array electrophoresis microchip, a channel identification unit, and a fluorescent signal processing subsystem. A V-shaped detection probe composed of two optical fibers, one transmitting the excitation light and one collecting the induced fluorescence, was constructed. Parallel four-channel signal analysis of capillary electrophoresis was performed on this system using Rhodamine B as the sample. The distinction of different samples and the separation of samples were achieved with the constructed detection system. The lowest detected concentration was 1×10⁻⁵ mol/L for Rhodamine B. The results show that the detection system possesses advantages such as compact structure, good stability, and high sensitivity, which are beneficial to the development of miniaturized, integrated capillary array electrophoresis chips.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Song

    CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a given physical space. Because raw numerical CFD results are hard to interpret, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multiple gigabytes), and ever more interaction between the user and the datasets is required. For traditional VR applications, limited computing power is a major factor preventing effective visualization of large datasets. This thesis presents a new system design that speeds up the traditional VR application by using parallel computing and distributed computing, together with the idea of using a handheld device to enhance the interaction between a user and the VR CFD application. Techniques from several research areas, including scientific visualization, parallel computing, distributed computing, and graphical user interface design, are used in the development of the final system. As a result, the new system can be built flexibly on a heterogeneous computing environment and dramatically shortens the computation time.

  2. Oral sumatriptan for migraine in children and adolescents: a randomized, multicenter, placebo-controlled, parallel group study.

    PubMed

    Fujita, Mitsue; Sato, Katsuaki; Nishioka, Hiroshi; Sakai, Fumihiko

    2014-04-01

    The objective of this article is to evaluate the efficacy and tolerability of two doses of oral sumatriptan vs placebo in the acute treatment of migraine in children and adolescents. Currently, there is no approved prescription medication in Japan for the treatment of migraine in children and adolescents. This was a multicenter, outpatient, single-attack, double-blind, randomized, placebo-controlled, parallel-group study. Eligible patients were children and adolescents aged 10 to 17 years diagnosed with migraine with or without aura (ICHD-II criteria 1.1 or 1.2) from 17 centers. They were randomized to receive sumatriptan 25 mg, 50 mg or placebo (1:1:2). The primary efficacy endpoint was headache relief by two grades on a five-grade scale at two hours post-dose. A total of 178 patients from 17 centers in Japan were enrolled and randomized to an investigational product in double-blind fashion. Of these, 144 patients self-treated a single migraine attack, and all provided a post-dose efficacy assessment and completed the study. The percentage of patients in the full analysis set (FAS) population who reported pain relief at two hours post-treatment for the primary endpoint was higher in the placebo group than in the pooled sumatriptan group (38.6% vs 31.1%, 95% CI: -23.02 to 8.04, P = 0.345). The percentage of patients in the FAS population who reported pain relief at four hours post-dose was higher in the pooled sumatriptan group (63.5%) than in the placebo group (51.4%) but failed to achieve statistical significance (P = 0.142). At four hours post-dose, the percentages of patients who were pain free or had complete relief of photophobia or phonophobia were numerically higher in the pooled sumatriptan group compared with placebo. Both doses of oral sumatriptan were well tolerated. No adverse events (AEs) were serious or led to study withdrawal. The most common AEs were somnolence in 6% (two patients) in the sumatriptan 25 mg treatment group and chest discomfort in 7% (three patients) in the sumatriptan 50 mg treatment group. There was no statistically significant difference between the pooled sumatriptan group and the placebo group for pain relief at two hours. Oral sumatriptan was well tolerated.

  3. Sequence Segmentation with changeptGUI.

    PubMed

    Tasker, Edward; Keith, Jonathan M

    2017-01-01

    Many biological sequences have a segmental structure that can provide valuable clues to their content, structure, and function. The program changept is a tool for investigating the segmental structure of a sequence, and can also be applied to multiple sequences in parallel to identify a common segmental structure, thus providing a method for integrating multiple data types to identify functional elements in genomes. In the previous edition of this book, a command line interface for changept is described. Here we present a graphical user interface for this package, called changeptGUI. This interface also includes tools for pre- and post-processing of data and results to facilitate investigation of the number and characteristics of segment classes.

  4. "A Desire for Growth": Online Full-Time Faculty's Perceptions of Evaluation Processes

    ERIC Educational Resources Information Center

    DeCosta, Meredith; Bergquist, Emily; Holbeck, Rick

    2015-01-01

    Post-secondary educational institutions use various means to evaluate the teaching performance of faculty members. There are benefits to effective faculty evaluation, including advancing the scholarship of teaching and learning, as well as improving the functionality and innovation of courses, curriculum, departments, and ultimately the broader…

  5. A Parallel Neuromorphic Text Recognition System and Its Implementation on a Heterogeneous High-Performance Computing Cluster

    DTIC Science & Technology

    2013-01-01

    [Scanned-document record; only fragments of the text are recoverable: a citation of M. Ahmadi and M. Shridhar, "Handwritten Numeral Recognition with Multiple Features and Multistage Classifiers," Proc. IEEE Int'l Symp. Circuits…; report dates covered SEP 2011 - SEP 2013; the report title, as above; and an abstract fragment noting that research in computational intelligence has entered a new era and that the paper presents an HPC-based context-aware intelligent text recognition system.]

  6. Storing files in a parallel computing system using list-based index to identify replica files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
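
    The list-based structure lends itself to a very small sketch: each indexed file keeps a list of (location, checksum) pairs that can be revalidated on read. The names below are illustrative, not the system's actual API.

        # Sketch of a list-based replica index with checksum validation.
        import hashlib
        from dataclasses import dataclass, field

        def checksum(path):
            # SHA-256 over the file contents, read in 1 MiB chunks.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        @dataclass
        class FileIndex:
            locations: list = field(default_factory=list)  # [(path, checksum), ...]

            def add(self, path):
                self.locations.append((path, checksum(path)))

            def first_valid_copy(self):
                # Return the first location whose contents still match its checksum.
                return next((p for p, c in self.locations if checksum(p) == c), None)

    On a real parallel file system the list entries would point at storage nodes and sub-files rather than local paths; the flat list here only stands in for that mapping.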

  7. Configuration affects parallel stent grafting results.

    PubMed

    Tanious, Adam; Wooster, Mathew; Armstrong, Paul A; Zwiebel, Bruce; Grundy, Shane; Back, Martin R; Shames, Murray L

    2018-05-01

    A number of adjunctive "off-the-shelf" procedures have been described to treat complex aortic diseases. Our goal was to evaluate parallel stent graft configurations and to determine an optimal formula for these procedures. This is a retrospective review of all patients at a single medical center treated with parallel stent grafts from January 2010 to September 2015. Outcomes were evaluated on the basis of parallel graft orientation, type, and main body device. Primary end points included parallel stent graft compromise and overall endovascular aneurysm repair (EVAR) compromise. There were 78 patients treated with a total of 144 parallel stents for a variety of pathologic processes. There was a significant correlation between main body oversizing and snorkel compromise (P = .0195) and overall procedural complication (P = .0019) but not with endoleak rates. Patients were organized into the following oversizing groups for further analysis: 0% to 10%, 10% to 20%, and >20%. Those oversized into the 0% to 10% group had the highest rate of overall EVAR complication (73%; P = .0003). There were no significant correlations between any one particular configuration and overall procedural complication. There was also no significant correlation between total number of parallel stents employed and overall complication. Composite EVAR configuration had no significant correlation with individual snorkel compromise, endoleak, or overall EVAR or procedural complication. The configuration most prone to individual snorkel compromise and overall EVAR complication was a four-stent configuration with two stents in an antegrade position and two stents in a retrograde position (60% complication rate). The configuration most prone to endoleak was one or two stents in retrograde position (33% endoleak rate), followed by three stents in an all-antegrade position (25%). There was a significant correlation between individual stent configuration and stent compromise (P = .0385), with 31.25% of retrograde stents having any complication. Parallel stent grafting offers an off-the-shelf option to treat a variety of aortic diseases. There is an increased risk of parallel stent and overall EVAR compromise with <10% main body oversizing. Thirty-day mortality is increased when more than one parallel stent is placed. Antegrade configurations are preferred to any retrograde configuration, with optimal oversizing >20%. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  8. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-06

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect.

  9. Argonne simulation framework for intelligent transportation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, T.; Doss, E.; Hanebutte, U.

    1996-04-01

    A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distributed (networked) computer systems; however, a version for a stand-alone workstation is also available. The ITS simulator includes an Expert Driver Model (EDM) of instrumented "smart" vehicles with in-vehicle navigation units. The EDM is capable of performing optimal route planning and communicating with Traffic Management Centers (TMC). A dynamic road-map database is used for optimal route planning, where the data are updated periodically to reflect any changes in road or weather conditions. The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces that incorporate human-factors studies to support safety and operational research. Realistic modeling of variations in the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, the driver's personality and behavior, and vehicle type. The simulator has been developed on a distributed system of networked UNIX computers, but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of the developed simulator is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. Vehicle processes interact with each other and with ITS components by exchanging messages. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  10. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
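
    The motivation (that query cost depends on how data is laid out relative to the access pattern) is easy to reproduce in a toy setting. The sketch below is illustrative only and shows none of PARLO's machinery; it simply times the same logical slab query against two physical layouts.

        # Toy illustration: a z-slab query is cheap only when its elements are
        # contiguous in memory, which is what layout optimization arranges.
        import numpy as np, time

        field = np.random.rand(192, 192, 192)    # stand-in for simulation output

        def time_query(get_slab, label, reps=100):
            t0 = time.perf_counter()
            for _ in range(reps):
                get_slab().sum()
            print(f"{label}: {time.perf_counter() - t0:.4f} s")

        # Default C-order layout: a slab over the last axis is badly strided.
        time_query(lambda: field[:, :, 0], "z-slab, default layout")

        # Layout reordered for a z-slab-dominated workload: the same logical
        # query becomes one contiguous read.
        optimized = np.ascontiguousarray(field.transpose(2, 0, 1))
        time_query(lambda: optimized[0, :, :], "z-slab, reordered layout")

    A run-time optimizer generalizes this choice across several competing access patterns, weighting them by the user-specified priorities mentioned above.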

  11. Accelerating research into bio-based FDCA-polyesters by using small scale parallel film reactors.

    PubMed

    Gruter, Gert-Jan M; Sipos, Laszlo; Adrianus Dam, Matheus

    2012-02-01

    High Throughput experimentation has been well established as a tool in early stage catalyst development and catalyst and process scale-up today. One of the more challenging areas of catalytic research is polymer catalysis. The main difference with most non-polymer catalytic conversions is the fact that the product is not a well defined molecule and the catalytic performance cannot be easily expressed only in terms of catalyst activity and selectivity. In polymerization reactions, polymer chains are formed that can have various lengths (resulting in a molecular weight distribution rather than a defined molecular weight), that can have different compositions (when random or block co-polymers are produced), that can have cross-linking (often significantly affecting physical properties), that can have different endgroups (often affecting subsequent processing steps) and several other variations. In addition, for polyolefins, mass and heat transfer, oxygen and moisture sensitivity, stereoregularity and many other intrinsic features make relevant high throughput screening in this field an incredible challenge. For polycondensation reactions performed in the melt often the viscosity becomes already high at modest molecular weights, which greatly influences mass transfer of the condensation product (often water or methanol). When reactions become mass transfer limited, catalyst performance comparison is often no longer relevant. This however does not mean that relevant experiments for these application areas cannot be performed on small scale. Relevant catalyst screening experiments for polycondensation reactions can be performed in very efficient small scale parallel equipment. Both transesterification and polycondensation as well as post condensation through solid-stating in parallel equipment have been developed. Next to polymer synthesis, polymer characterization also needs to be accelerated without making concessions to quality in order to draw relevant conclusions.

  12. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
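
    One common scalable alternative to per-process Gantt rows is to aggregate the trace into a process-to-process communication matrix and coarsen it into a bounded number of bins. The sketch below illustrates that general idea under assumed trace fields; it is not the visualization proposed in the paper.

        # Aggregate point-to-point messages into a communication matrix, then
        # coarsen it so the view stays bounded as the process count grows.
        import numpy as np

        def comm_matrix(messages, n_procs):
            # messages: iterable of (src, dst, n_bytes) records from a trace.
            m = np.zeros((n_procs, n_procs))
            for src, dst, size in messages:
                m[src, dst] += size
            return m

        def coarsen(m, bins):
            # Block-sum over contiguous process groups (requires n % bins == 0).
            k = m.shape[0] // bins
            return m.reshape(bins, k, bins, k).sum(axis=(1, 3))

        rng = np.random.default_rng(0)
        trace = [(rng.integers(1024), rng.integers(1024), 4096) for _ in range(10_000)]
        full = comm_matrix(trace, 1024)      # 1024 x 1024: too fine to inspect
        print(coarsen(full, 32).shape)       # 32 x 32: fits on screen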

  13. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  14. Parallel Numerical Simulations of Water Reservoirs

    NASA Astrophysics Data System (ADS)

    Torres, Pedro; Mangiavacchi, Norberto

    2010-11-01

    The study of water flow and scalar transport in water reservoirs is important for the determination of water quality during the initial stages of reservoir filling and during the life of the reservoir. For this purpose, a parallel 2D finite element code for solving the incompressible Navier-Stokes equations coupled with scalar transport was implemented using the message-passing programming model, in order to perform simulations of hydropower water reservoirs in a computer cluster environment. The spatial discretization is based on the MINI element, which satisfies the Babuska-Brezzi (BB) condition and thus provides sufficient conditions for a stable mixed formulation. All the distributed data structures needed in the different stages of the code, such as preprocessing, solving and post-processing, were implemented using the PETSc library. The resulting linear systems for the velocity and the pressure fields were solved using the projection method, implemented by an approximate block LU factorization. In order to increase the parallel performance in the solution of the linear systems, we employ the static condensation method, solving for the intermediate velocity at vertex and centroid nodes separately. We compare performance results of the static condensation method with the approach of solving the complete system. In our tests the static condensation method shows better performance for large problems, at the cost of increased memory usage. Performance results for other intensive parts of the code in a computer cluster are also presented.
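
    The static condensation step mentioned above amounts to eliminating the centroid (bubble) unknowns of the MINI element through a Schur complement, so the global solve involves vertex unknowns only. A generic linear-algebra sketch of that elimination (not the authors' PETSc code):

        # Static condensation: partition K into vertex (v) and centroid (c)
        # blocks, eliminate c, solve the smaller Schur system, then recover c.
        import numpy as np

        def condense(K, f, nv):
            Kvv, Kvc = K[:nv, :nv], K[:nv, nv:]
            Kcv, Kcc = K[nv:, :nv], K[nv:, nv:]
            Kcc_inv = np.linalg.inv(Kcc)              # centroid block is small
            S = Kvv - Kvc @ Kcc_inv @ Kcv             # Schur complement
            g = f[:nv] - Kvc @ Kcc_inv @ f[nv:]
            return S, g, Kcc_inv, Kcv

        rng = np.random.default_rng(0)
        A = rng.random((6, 6))
        K = A @ A.T + 6.0 * np.eye(6)                 # SPD test matrix
        f = rng.random(6)
        S, g, Kcc_inv, Kcv = condense(K, f, nv=4)
        u_v = np.linalg.solve(S, g)                   # reduced vertex solve
        u_c = Kcc_inv @ (f[4:] - Kcv @ u_v)           # recover centroid values
        assert np.allclose(K @ np.concatenate([u_v, u_c]), f)

    The reduced system is smaller than the original one, which is consistent with the performance gain on large problems reported above.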

  15. The effect of cosmic-ray acceleration on supernova blast wave dynamics

    NASA Astrophysics Data System (ADS)

    Pais, M.; Pfrommer, C.; Ehlert, K.; Pakmor, R.

    2018-05-01

    Non-relativistic shocks accelerate ions to highly relativistic energies provided that the orientation of the magnetic field is closely aligned with the shock normal (quasi-parallel shock configuration). In contrast, quasi-perpendicular shocks do not efficiently accelerate ions. We model this obliquity-dependent acceleration process in a spherically expanding blast wave setup with the moving-mesh code AREPO for different magnetic field morphologies, ranging from homogeneous to turbulent configurations. A Sedov-Taylor explosion in a homogeneous magnetic field generates an oblate ellipsoidal shock surface due to the slower propagating blast wave in the direction of the magnetic field. This is because of the efficient cosmic ray (CR) production in the quasi-parallel polar cap regions, which softens the equation of state and increases the compressibility of the post-shock gas. We find that the solution remains self-similar because the ellipticity of the propagating blast wave stays constant in time. This enables us to derive an effective ratio of specific heats for a composite of thermal gas and CRs as a function of the maximum acceleration efficiency. We finally discuss the behavior of supernova remnants expanding into a turbulent magnetic field with varying coherence lengths. For a maximum CR acceleration efficiency of about 15 per cent at quasi-parallel shocks (as suggested by kinetic plasma simulations), we find an average efficiency of about 5 per cent, independent of the assumed magnetic coherence length.
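
    For orientation, a common pressure-weighted approximation for the effective adiabatic index of a thermal-gas/CR composite (the paper's own derivation may differ in detail) is

        \gamma_{\mathrm{eff}} \;=\; \frac{\gamma_{\mathrm{th}} P_{\mathrm{th}} + \gamma_{\mathrm{cr}} P_{\mathrm{cr}}}{P_{\mathrm{th}} + P_{\mathrm{cr}}},
        \qquad \gamma_{\mathrm{th}} = \tfrac{5}{3}, \quad \gamma_{\mathrm{cr}} = \tfrac{4}{3},

    so as efficient quasi-parallel acceleration raises the CR pressure fraction, \gamma_{\mathrm{eff}} falls from 5/3 toward 4/3, softening the equation of state and increasing the post-shock compressibility, as described above.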

  16. Problem solving strategies integrated into nursing process to promote clinical problem solving abilities of RN-BSN students.

    PubMed

    Wang, Jing-Jy; Lo, Chi-Hui Kao; Ku, Ya-Lie

    2004-11-01

    A set of problem-solving strategies integrated into the nursing process in nursing core courses (PSNP) was developed for students enrolled in a post-RN baccalaureate nursing program (RN-BSN) at a university in Taiwan. The purpose of this study, therefore, was to evaluate the effectiveness of PSNP on students' clinical problem-solving abilities. A one-group post-test design with repeated measures was used. In total, 114 nursing students (47 full-time and 67 part-time) participated in this study. The nursing core courses were undertaken separately over three semesters. After each semester's learning, students started their clinical practice and were asked to submit three written nursing process recordings during each clinical practice period. Assignments from the three practices were named post-tests I, II, and III sequentially, and provided the data for this study. The overall problem-solving score on post-test III was significantly better than those on post-tests I and II, meaning that both full-time and part-time students' clinical problem-solving abilities improved by the last semester. In conclusion, problem-solving strategies integrated into the nursing process and designed for future RN-BSN students can be recommended.

  17. Perception Of "Features" And "Objects": Applications To The Design Of Instrument Panel Displays

    NASA Astrophysics Data System (ADS)

    Poynter, Douglas; Czarnomski, Alan J.

    1988-10-01

    An experiment was conducted to determine whether so-called feature displays allow faster and more accurate processing than object displays. Previous psychological studies indicate that features can be processed in parallel across the visual field, whereas objects must be processed one at a time with the aid of attentional focus. Numbers and letters are examples of objects; line orientation and color are examples of features. In this experiment, subjects were asked to search displays composed of up to 16 elements for the presence of specific elements. The ability to detect, localize, and identify targets was influenced by display format. Digital errors increased with the number of elements, the number of targets, and the distance of the target from the fixation point. Line-orientation errors increased only with the number of targets. Several other display types were evaluated, and each produced a pattern of errors similar to either the digital or the line-orientation format. Results of the study were discussed in terms of Feature Integration Theory, which distinguishes between elements that are processed with parallel versus serial mechanisms.

  18. In-Bore Prostate Transperineal Interventions with an MRI-guided Parallel Manipulator: System Development and Preliminary Evaluation

    PubMed Central

    Eslami, Sohrab; Shang, Weijian; Li, Gang; Patel, Nirav; Fischer, Gregory S.; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Iordachita, Iulian

    2015-01-01

    Background: Robot-assisted minimally invasive surgery is well recognized as a feasible solution for the diagnosis and treatment of prostate cancer in humans. Methods: This paper presents the kinematics of a parallel 4 degrees-of-freedom (DOF) surgical manipulator designed for minimally invasive in-bore prostate percutaneous interventions through the patient's perineum. The proposed manipulator takes advantage of 4 sliders actuated by MRI-compatible piezoelectric motors and incremental rotary encoders. Errors, mostly originating from the design and manufacturing process, need to be identified and reduced before the robot is deployed in clinical trials. Results: The manipulator has undergone several experiments to evaluate the repeatability and accuracy of needle placement, which is an essential concern in percutaneous prostate interventions. Conclusion: The acquired results endorse the sustainability, precision (about 1 mm in air, in the x or y direction, at the needle's reference point) and reliability of the manipulator. PMID:26111458

  19. Evaluation of peristaltic micromixers for highly integrated microfluidic systems

    PubMed Central

    Kim, Duckjong; Rho, Hoon Suk; Jambovane, Sachin; Shin, Soojeong; Hong, Jong Wook

    2016-01-01

    Microfluidic devices based on the multilayer soft lithography allow accurate manipulation of liquids, handling reagents at the sub-nanoliter level, and performing multiple reactions in parallel processors by adapting micromixers. Here, we have experimentally evaluated and compared several designs of micromixers and operating conditions to find design guidelines for the micromixers. We tested circular, triangular, and rectangular mixing loops and measured mixing performance according to the position and the width of the valves that drive nanoliters of fluids in the micrometer scale mixing loop. We found that the rectangular mixer is best for the applications of highly integrated microfluidic platforms in terms of the mixing performance and the space utilization. This study provides an improved understanding of the flow behaviors inside micromixers and design guidelines for micromixers that are critical to build higher order fluidic systems for the complicated parallel bio/chemical processes on a chip. PMID:27036809

  1. Mediators of the relationship between social anxiety and post-event rumination.

    PubMed

    Chen, Junwen; Rapee, Ronald M; Abbott, Maree J

    2013-01-01

    A variety of cognitive and attentional factors are hypothesised to be associated with post-event rumination, a key construct that has been proposed to contribute to the maintenance of social anxiety disorder (SAD). The present study aimed to explore factors contributing to post-event rumination following delivery of a speech in a clinical population. A total of 121 participants with SAD completed measures of trait social anxiety a week before they undertook a speech task. After the speech, participants answered several questionnaires assessing their state anxiety, self-evaluation of performance, perceived focus of attention, and the probability and cost of expected negative evaluation. One week later, participants completed measures of negative rumination experienced over the week. Results showed two pathways leading to post-event rumination: (1) a direct path from trait social anxiety to post-event rumination and (2) indirect paths from trait social anxiety to post-event rumination via its relationships with inappropriate attentional focus and self-evaluation of performance. The results suggest that post-event rumination is at least partly predicted by the extent to which socially anxious individuals negatively perceive their own performance and their allocation of attentional resources to this negative self-image. Current findings support the key relationships among cognitive processes proposed by cognitive models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Human Neural Stem Cell Extracellular Vesicles Improve Recovery in a Porcine Model of Ischemic Stroke

    PubMed Central

    Webb, Robin L.; Kaiser, Erin E.; Jurgielewicz, Brian J.; Spellicy, Samantha; Scoville, Shelley L.; Thompson, Tyler A.; Swetenburg, Raymond L.; Hess, David C.; West, Franklin D.

    2018-01-01

    Background and Purpose— Recent work from our group suggests that human neural stem cell–derived extracellular vesicle (NSC EV) treatment improves both tissue and sensorimotor function in a preclinical thromboembolic mouse model of stroke. In this study, NSC EVs were evaluated in a pig ischemic stroke model, where clinically relevant end points were used to assess recovery in a more translational large animal model. Methods— Ischemic stroke was induced by permanent middle cerebral artery occlusion (MCAO), and either NSC EV or PBS treatment was administered intravenously at 2, 14, and 24 hours post-MCAO. NSC EV effects on tissue level recovery were evaluated via magnetic resonance imaging at 1 and 84 days post-MCAO. Effects on functional recovery were also assessed through longitudinal behavior and gait analysis testing. Results— NSC EV treatment was neuroprotective and led to significant improvements at the tissue and functional levels in stroked pigs. NSC EV treatment eliminated intracranial hemorrhage in ischemic lesions in NSC EV pigs (0 of 7) versus control pigs (7 of 8). NSC EV–treated pigs exhibited a significant decrease in cerebral lesion volume and decreased brain swelling relative to control pigs 1-day post-MCAO. NSC EVs significantly reduced edema in treated pigs relative to control pigs, as assessed by improved diffusivity through apparent diffusion coefficient maps. NSC EVs preserved white matter integrity with increased corpus callosum fractional anisotropy values 84 days post-MCAO. Behavior and mobility improvements paralleled structural changes as NSC EV–treated pigs exhibited improved outcomes, including increased exploratory behavior and faster restoration of spatiotemporal gait parameters. Conclusions— This study demonstrated for the first time that in a large animal model novel NSC EVs significantly improved neural tissue preservation and functional levels post-MCAO, suggesting NSC EVs may be a paradigm changing stroke therapeutic. PMID:29650593

  3. Decentralized diagnostics based on a distributed micro-genetic algorithm for transducer networks monitoring large experimental systems.

    PubMed

    Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M

    2014-09-01

    An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one-by-one on spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small population distributed on parallel, synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness advantages of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated both (i) by simulation at CERN, on a case study of a cold box for enhancing the cryogenics diagnostics of the Large Hadron Collider, and (ii) by experiments, under the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
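
    The micro-genetic component is the distinctive ingredient: a population of only a handful of individuals is evolved until it converges, then restarted around the elite. A minimal serial sketch of that restart logic (illustrative, not the distributed MONDIEVOB code):

        # Micro-GA sketch: tiny population, elitism, restart on convergence.
        import numpy as np

        def micro_ga(fitness, dim, pop_size=5, restarts=20, gens=30, seed=0):
            rng = np.random.default_rng(seed)
            best_x, best_f = None, np.inf
            for _ in range(restarts):
                pop = rng.uniform(-1.0, 1.0, (pop_size, dim))  # fresh diversity
                if best_x is not None:
                    pop[0] = best_x                  # keep the elite across restarts
                for _ in range(gens):
                    f = np.array([fitness(x) for x in pop])
                    elite = pop[np.argmin(f)].copy()
                    if f.min() < best_f:
                        best_f, best_x = float(f.min()), elite.copy()
                    children = []
                    for _ in range(pop_size - 1):
                        a, b = pop[rng.choice(pop_size, 2, replace=False)]
                        parent = a if fitness(a) < fitness(b) else b  # tournament
                        mask = rng.random(dim) < 0.5                  # uniform crossover
                        children.append(np.where(mask, elite, parent))
                    pop = np.vstack([elite] + children)
                    if np.ptp(pop, axis=0).max() < 1e-3:  # converged: restart
                        break
            return best_x, best_f

        print(micro_ga(lambda x: float(np.sum(x ** 2)), dim=3))

    In the distributed setting described above, the small population is what keeps per-transducer evaluation cheap, while the restarts preserve exploration.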

  4. Deliberation before determination: the definition and evaluation of good decision making.

    PubMed

    Elwyn, Glyn; Miron-Shatz, Talya

    2010-06-01

    In this article, we examine proposed definitions and approaches to measuring the concept of a good decision, highlight the ways in which they converge, and explain why we have concerns about their emphasis on post-hoc estimations and post-decisional outcomes, their prescriptive concept of knowledge, and their lack of distinction between the process of deliberation and the act of decision determination. There has been a steady trend to involve patients in decision-making tasks in clinical practice, part of a shift away from paternalism towards the concept of informed choice. An increased understanding of the uncertainties that exist in medicine, arising from a weak evidence base and, in addition, the stochastic nature of outcomes at the individual level, has contributed to shifting the responsibility for decision making from physicians to patients. This has led to the increasing use of decision support and communication methods, with the ultimate aim of improving decision making by patients. Interest has therefore developed in attempting to define good decision making and in the development of measurement approaches. We ask whether decisions can be judged good or not and, if so, how this goodness might be evaluated. We hypothesize that decisions cannot be measured by reference to their outcomes and offer an alternative means of assessment, which emphasizes the deliberation process rather than the decision's end results. We propose that decision making comprises a pre-decisional process and an act of decision determination, and we consider how this model of decision making serves to develop a new approach to evaluating what constitutes a good decision-making process. We proceed to offer an alternative, which parses decisions into the pre-decisional deliberation process, the act of determination, and post-decisional outcomes. Evaluating the deliberation process, we propose, should comprise a subjective sufficiency of knowledge, as well as emotional processing and affective forecasting of the alternatives. This should form the basis for a good act of determination.

  5. Automatic small target detection in synthetic infrared images

    NASA Astrophysics Data System (ADS)

    Yardımcı, Ozan; Ulusoy, İlkay

    2017-05-01

    Automatic detection of targets at long range is a very challenging problem. Background clutter and small target size are the main difficulties that must be overcome to reach high detection performance at a low computational load. The choice of pre-processing, detection, and post-processing approaches strongly affects the final results. In this study, various methods from the literature were first evaluated separately for each of these stages using simulated test scenarios. Then, a full detection system was constructed from the available solutions that yielded the best detection performance. However, although a precision of 100% was reached, recall stayed low, around 25-45%. Finally, a post-processing method was proposed that increased recall while keeping precision at 100%. The proposed post-processing method, which is based on local operations, increased recall to 65-95% in all test scenarios.

  6. 7 CFR 4280.102 - General.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Improvements Program § 4280.102 General. (a) Sections 4280.103 through 4280.106 discuss definitions, exception... evaluation process, and post-grant Federal requirements for both the simplified and full application processes. Sections 4280.115 through 4280.117 address project planning, development, and completion as...

  7. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of different studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how an image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of the hand in posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a uniform (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results showed that the modifications considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
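
    For reference, the frequency-dependent DQE is typically estimated from the measured MTF and the normalized NPS as DQE(u) = MTF(u)² / (q · NNPS(u)), with q the incident photon fluence. Below is a hedged numerical sketch with synthetic curves; the study's actual RQA5 values are not reproduced here.

        # DQE from MTF and normalized NPS; all arrays below are illustrative.
        import numpy as np

        def dqe(mtf, nnps, q):
            # mtf: unitless; nnps: NPS / (mean signal)^2, in mm^2; q: photons/mm^2.
            return mtf ** 2 / (q * nnps)

        u = np.linspace(0.05, 3.0, 60)       # spatial frequency (cycles/mm)
        mtf = np.exp(-u / 2.0)               # illustrative MTF curve
        nnps = np.full_like(u, 6.7e-5)       # illustrative flat NNPS (mm^2)
        q = 3.0e4                            # illustrative fluence (photons/mm^2)
        print(dqe(mtf, nnps, q)[:3])         # DQE near 0.47 at low frequency

    Because post-processing such as MUSICA rescales both signal and noise, its effect shows up in the measured MTF and NPS and hence in the DQE, which is why the processing parameters must be reported alongside the results.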

  8. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  10. No appetite efficacy of a commercial structured lipid emulsion in minimally processed drinks.

    PubMed

    Smit, H J; Keenan, E; Kovacs, E M R; Wiseman, S A; Mela, D J; Rogers, P J

    2012-09-01

    Fabuless (Olibra) is a commercial structured lipid emulsion, claimed to be a food ingredient effective for reducing food intake and appetite. The present study assessed its efficacy in a yoghurt-based mini-drink subjected to low or minimal food-manufacturing (thermal and shear) processes. Study 1: Twenty-four healthy volunteers (16 female, 8 male; age: 18-47 years; body mass index (BMI): 17-28 kg m(-2)) took part in a randomised, placebo-controlled, double-blind crossover trial. Consumption of a minimally processed 'preload' mini-drink (containing one of two doses of Fabuless or a control fat) 2 h after breakfast was followed by appetite and mood ratings, with food intake measured in ad libitum meals at 3 and 7 h post consumption of the preload. Study 2: As Study 1 (16 female, 8 male; age: 20-54 years; BMI: 21-30 kg m(-2)), except that a chilled, virtually unprocessed, preload breakfast mini-drink (containing minimally processed Fabuless or a control fat) was provided 5 min after a standardised breakfast, followed by appetite and mood ratings, with food intake measured in ad libitum meals at 4 and 8 h post consumption of the preload. The structured lipid emulsion tested had no significant effect on the primary measures of food intake or appetite. Even when exposed to minimal food-manufacturing conditions, Fabuless showed no efficacy on measures of appetite and food intake.

  11. Mechanical properties and superficial characterization of a milled CAD-CAM glass fiber post.

    PubMed

    Ruschel, George Hebert; Gomes, Érica Alves; Silva-Sousa, Yara Terezinha; Pinelli, Rafaela Giedra Pirondi; Sousa-Neto, Manoel Damião; Pereira, Gabriel Kalil Rocha; Spazzin, Aloísio Oro

    2018-06-01

    Computer-aided design and computer-aided manufacturing (CAD-CAM) technology may be used to produce custom intraradicular posts, but studies are lacking. The purpose of this in vitro study was to evaluate the flexural properties (strength and modulus), failure mode, superficial morphology, and roughness of two CAD-CAM glass fiber posts (milled at different angulations) compared with a commercially available prefabricated glass fiber post. Three groups were tested (n = 10): PF (control), a prefabricated glass fiber post; C-Cd, a diagonally milled post; and C-Cv, a vertically milled post. A 3-dimensional virtual image was obtained from a prefabricated post, which guided the subsequent milling of posts from a glass fiber disk (Trilor Blanks; Bioloren). Surface roughness and morphology were evaluated using confocal laser microscopy. Flexural strength and modulus were evaluated with the 3-point bend test. Data were submitted to one-way analysis of variance followed by the Student-Newman-Keuls post hoc test (α = 0.05). The fractured surfaces were evaluated with scanning electron microscopy. The superficial roughness was highest for PF and similar between the experimental groups. Morphological analysis showed different sizes and directions of the glass fibers along the post. The flexural strength was highest for PF (900.1 ± 30.4 MPa > C-Cd 357.2 ± 30.7 MPa > C-Cv 101.8 ± 4.3 MPa), as was the flexural modulus (PF 19.3 ± 2.0 GPa > C-Cv 10.1 ± 1.9 GPa > C-Cd 7.8 ± 1.3 GPa). A CAD-CAM milled post seems a promising development, but the processing requires optimization, as the prefabricated post still shows better mechanical properties and superficial characteristics. Copyright © 2018 Elsevier Ltd. All rights reserved.
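
    For reference, the flexural values reported above follow from standard beam theory for a cylindrical specimen of diameter d tested over a support span L (textbook formulas, not equations quoted from the paper):

        \sigma_f = \frac{8 F_{\max} L}{\pi d^{3}}, \qquad
        E_f = \frac{4 m L^{3}}{3 \pi d^{4}}

    where F_max is the load at fracture and m is the slope of the initial linear portion of the load-deflection curve.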

  12. Interseismic Coupling, Co- and Post-seismic Slip: a Stochastic View on the Northern Chilean Subduction Zone

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Duputel, Z.; Simons, M.; Jiang, J.; Riel, B. V.; Moore, A. W.; Owen, S. E.

    2017-12-01

    Mapping subsurface fault slip during the different phases of the seismic cycle provides a probe of the mechanical properties and the state of stress along these faults. We focus on the northern Chile megathrust, where first-order estimates of interseismic fault locking suggest little to no overlap between regions slipping seismically and those that are dominantly aseismic. However, published distributions of slip, be they during seismic or aseismic phases, rely on unphysical regularization of the inverse problem, thereby complicating attempts to quantify the degree of overlap between seismic and aseismic slip. Considering all the implications of aseismic slip for our understanding of the nucleation, propagation and arrest of seismic ruptures, it is of utmost importance to quantify our confidence in the current description of fault coupling. Here, we take advantage of 20 years of InSAR observations and more than a decade of GPS measurements to derive probabilistic maps of interseismic coupling, as well as co-seismic and post-seismic slip, along the northern Chile subduction megathrust. A wide InSAR velocity map is derived using a novel multi-pixel time series analysis method accounting for orbital errors, atmospheric noise and ground deformation. We use AlTar, a massively parallel Markov chain Monte Carlo algorithm exploiting the acceleration capabilities of graphics processing units, to derive the probability density functions (PDFs) of slip. In northern Chile, we find high probabilities for a complete release, by the 2014 Iquique earthquake, of the elastic strain accumulated since the 1877 earthquake, and for the presence of a large, independent, locked asperity left untapped by recent events north of the Mejillones peninsula. We evaluate the probability of overlap between the co-, inter- and post-seismic slip and consider the potential occurrence of slow, aseismic slip events along this portion of the subduction zone.

  13. An adaptive optics imaging system designed for clinical use

    PubMed Central

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R.; Rossi, Ethan A.

    2015-01-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting the imaging location, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously, optically and electronically integrating a wide-FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2–3 arc minutes (arcmin), 2) ~0.5–0.8 arcmin, and 3) ~0.05–0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3–5 arcmin, 2) ~0.7–1.1 arcmin, and 3) ~0.07–0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing. PMID:26114033
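
    The third, digital-registration stage is conceptually an image cross-correlation problem. Below is a minimal sketch of FFT-based phase correlation for estimating the translation between a reference frame and an incoming frame, at integer-pixel precision only; the sub-pixel, strip-wise registration of the actual system is more involved.

        import numpy as np

        def estimate_shift(reference, frame):
            """Estimate the (dy, dx) translation of `frame` relative to
            `reference` via phase correlation."""
            cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
            cross /= np.abs(cross) + 1e-12          # keep phase information only
            corr = np.fft.ifft2(cross).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > reference.shape[0] // 2:        # unwrap to signed shifts
                dy -= reference.shape[0]
            if dx > reference.shape[1] // 2:
                dx -= reference.shape[1]
            return dy, dx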

  14. Repetitive TMS to augment cognitive processing therapy in combat veterans of recent conflicts with PTSD: A randomized clinical trial.

    PubMed

    Kozel, F Andrew; Motes, Michael A; Didehbani, Nyaz; DeLaRosa, Bambi; Bass, Christina; Schraufnagel, Caitlin D; Jones, Penelope; Morgan, Cassie Rae; Spence, Jeffrey S; Kraut, Michael A; Hart, John

    2018-03-15

    The objective was to test whether repetitive Transcranial Magnetic Stimulation (rTMS) just prior to Cognitive Processing Therapy (CPT) would significantly improve the clinical outcome compared to sham rTMS prior to CPT in veterans with PTSD. Veterans 18-60 years of age with current combat-related PTSD symptoms were randomized, using a 1:1 ratio in a parallel design, to active (rTMS+CPT) versus sham (sham+CPT) rTMS just prior to weekly CPT for 12-15 sessions. Blinded raters evaluated veterans at baseline, after the 5th and 9th treatments, and at 1, 3, and 6 months post-treatment. The Clinician Administered PTSD Scale (CAPS) was the primary outcome measure, with the PTSD Checklist (PCL) as a secondary outcome measure. The TMS coil (active or sham) was positioned over the right dorsolateral prefrontal cortex (110% MT, 1 Hz continuously for 30 min, 1800 pulses/treatment). Of the 515 individuals screened for the study, 103 participants were randomized to either active (n = 54) or sham rTMS (n = 49). Sixty-two participants (60%) completed treatment and 59 (57%) completed the 6-month assessment. The rTMS+CPT group showed greater symptom reductions from baseline on both CAPS and PCL across CPT sessions and follow-up assessments, t(df ≥ 325) ≤ -2.01, p ≤ 0.023, one-tailed and t(df ≥ 303) ≤ -2.14, p ≤ 0.017, one-tailed, respectively. Participants were predominantly male, limited to one era of conflicts, and restricted to those who could safely undergo rTMS. The addition of rTMS to CPT compared to sham with CPT produced significantly greater PTSD symptom reduction early in treatment, sustained up to six months post-treatment. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. The approach preserves segmentation quality while improving segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm is therefore of practical significance and value.
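
    The parallel structure can be made concrete: one mean-shift pass is a map step (each chunk of pixels computes its shifted feature vectors against the shared data set) followed by a reduce step (collecting the converged modes). Below is a minimal single-machine sketch using Python multiprocessing as a stand-in for a Hadoop cluster, with a flat kernel and illustrative sizes; it is not the paper's implementation.

        import numpy as np
        from multiprocessing import Pool

        BANDWIDTH = 0.5

        def shift_chunk(args):
            """Map step: run mean-shift iterations for one chunk of points
            against the full data set, returning the converged modes."""
            chunk, data = args
            modes = chunk.copy()
            for _ in range(20):                       # fixed iteration budget
                for i, p in enumerate(modes):
                    dist = np.linalg.norm(data - p, axis=1)
                    within = data[dist < BANDWIDTH]   # flat (uniform) kernel
                    if len(within):
                        modes[i] = within.mean(axis=0)
            return modes

        if __name__ == "__main__":
            data = np.random.rand(2000, 3)            # e.g. per-pixel color features
            chunks = np.array_split(data, 8)
            with Pool(8) as pool:                     # map: one task per chunk
                parts = pool.map(shift_chunk, [(c, data) for c in chunks])
            modes = np.vstack(parts)                  # reduce: gather converged modes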

  16. Evaluation of location and number of aid post for sustainable humanitarian relief using agent based modeling (ABM) and geographic information system (GIS)

    NASA Astrophysics Data System (ADS)

    Khair, Fauzi; Sopha, Bertha Maya

    2017-12-01

    One of the crucial phases in disaster management is the response phase, or emergency response phase. It requires a sustainable and well-integrated management system: any system errors in this phase translate into a significant increase in the number of victims as well as in the material damage caused. Policies related to the location of aid posts are therefore important decisions. Experience shows many failures in the process of providing assistance to refugees due to a lack of preparation in determining facilities and aid post locations. This study therefore aims to evaluate the number and location of aid posts for the Merapi eruption of 2010. It integrates agent-based modeling (ABM) and a geographic information system (GIS) to evaluate the number and location of aid posts under several scenarios. The ABM approach describes the behaviour of agents (refugees and volunteers) during a disaster, each with their respective characteristics, while the GIS spatial data describe the real condition of the Sleman regency road network. The simulation results show that alternative scenarios combining the DERU UGM post, Maguwoharjo Stadium, the Tagana post and the Pakem main post handle and distribute aid to the evacuation barracks better than the initial scenario, and leave less unmet demand.

  17. Mechanisms of Melatonin in Alleviating Alzheimer’s Disease

    PubMed Central

    Shukla, Mayuri; Govitrapong, Piyarat; Boontem, Parichart; Reiter, Russel J.; Satayavivad, Jutamaad

    2017-01-01

    Alzheimer’s disease (AD) is a chronic, progressive and prevalent neurodegenerative disease characterized by the loss of higher cognitive functions and an associated loss of memory. The stigma of AD as thus far “incurable” prevails because of variations in the success rates of different treatment protocols in animal and human studies. Among the classical hypotheses explaining AD pathogenesis, the amyloid hypothesis is currently being targeted for drug development. The underlying concept is to prevent the formation of the neurotoxic peptides which play a central role in AD pathology and trigger a multispectral cascade of neurodegenerative processes post-aggregation. This could possibly be achieved by pharmacological inhibition of β- or γ-secretase or by stimulating the non-amyloidogenic α-secretase. Melatonin, the pineal hormone, is a multifunctional indoleamine. Production of this amphiphilic molecule diminishes with advancing age, and this decrease runs parallel with the progression of AD, which itself suggests a potential benefit of melatonin against the development and devastating consequences of the disease. Our recent studies have revealed a novel mechanism by which melatonin stimulates the non-amyloidogenic processing and inhibits the amyloidogenic processing of β-amyloid precursor protein (βAPP) by stimulating α-secretases and consequently down-regulating both β- and γ-secretases at the transcriptional level. In this review, we discuss and evaluate the neuroprotective functions of melatonin in AD pathogenesis, including its role in the classical hypotheses in cellular and animal models and in clinical interventions in AD patients, and suggest that, with early detection, melatonin treatment is qualified to be an anti-AD therapy. PMID:28294066

  18. Type synthesis for 4-DOF parallel press mechanism using GF set theory

    NASA Astrophysics Data System (ADS)

    He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong

    2015-07-01

    Parallel mechanisms are used in large-capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis for specific parallel press mechanisms; the type synthesis and evaluation of parallel press mechanisms is seldom studied, especially for four-degree-of-freedom (DOF) press mechanisms. Here, the type synthesis of 4-DOF parallel press mechanisms is carried out based on generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. A general procedure for the type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets, and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented, and different structures of kinematic limbs are then designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is obtained. General methodologies of type synthesis and evaluation for parallel press mechanisms are suggested.

  19. The efficiency evaluation of support vibration isolation with mechanic inertial motion converter for vibroactive process equipment

    NASA Astrophysics Data System (ADS)

    Buryan, Yu. A.; Babichev, D. O.; Silkov, M. V.; Shtripling, L. O.; Kalashnikov, B. A.

    2017-08-01

    This research addresses the protection of process equipment from vibration. Theoretical issues of vibration isolation for vibroactive objects such as engines, pumps, compressors, fans and piping are considered. A design for a promising air spring with a parallel-mounted mechanical inertial motion converter is proposed. A mathematical model of the suspension is derived, allowing design options to be selected that reduce the force transmissibility to the base over a given frequency range.
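
    For a single-degree-of-freedom suspension in which an inerter-like motion converter of inertance b acts in parallel with the spring k and damper c, a generic textbook model (not necessarily the authors' equations) gives the force transmissibility

        T(\omega) = \left| \frac{k - b\,\omega^{2} + j c\,\omega}{k - (m + b)\,\omega^{2} + j c\,\omega} \right|

    so the inertance introduces a transmissibility notch near \omega = \sqrt{k/b}, which can be placed in the frequency range where the vibroactive equipment must be isolated.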

  20. Performance Evaluation of Parallel Branch and Bound Search with the Intel iPSC (Intel Personal SuperComputer) Hypercube Computer.

    DTIC Science & Technology

    1986-12-01

    [Abstract not available; the record contains only a table-of-contents excerpt: III. Analysis of Parallel Design: Parallel Abstract Data Types; Abstract Data Type; Parallel ADT; Data-Structure Design; Object-Oriented Design.]

  1. Detection and Evaluation of Spatio-Temporal Spike Patterns in Massively Parallel Spike Train Data with SPADE.

    PubMed

    Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M; Grün, Sonja

    2017-01-01

    Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
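
    The significance-evaluation idea can be illustrated with a toy surrogate test: count the occurrences of a candidate spatio-temporal pattern in binned spike trains, then compare against counts obtained after dithering the spikes. The sketch below is deliberately simplified (exact-match counting of a single fixed pattern) and does not reproduce SPADE's mining or statistics; all names and sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def count_pattern(trains, pattern):
            """Count time bins at which every (neuron, lag) pair in `pattern`
            fires; `trains` is a (neurons x bins) binary array."""
            max_lag = max(lag for _, lag in pattern)
            hits = 0
            for t in range(trains.shape[1] - max_lag):
                if all(trains[n, t + lag] for n, lag in pattern):
                    hits += 1
            return hits

        def dither(trains, max_jitter=5):
            """Surrogate: shift each spike by a few bins independently,
            destroying fine temporal structure while keeping firing rates."""
            out = np.zeros_like(trains)
            for n, t in zip(*np.nonzero(trains)):
                t2 = np.clip(t + rng.integers(-max_jitter, max_jitter + 1),
                             0, trains.shape[1] - 1)
                out[n, t2] = 1
            return out

        trains = (rng.random((100, 5000)) < 0.02).astype(int)  # 100 neurons
        pattern = [(3, 0), (17, 2), (42, 5)]                   # (neuron, lag in bins)
        observed = count_pattern(trains, pattern)
        null = [count_pattern(dither(trains), pattern) for _ in range(100)]
        p_value = (1 + sum(c >= observed for c in null)) / (1 + len(null))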

  3. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  4. Evaluation of ensemble precipitation forecasts generated through post-processing in a Canadian catchment

    NASA Astrophysics Data System (ADS)

    Jha, Sanjeev K.; Shrestha, Durga L.; Stadnyk, Tricia A.; Coulibaly, Paulin

    2018-03-01

    Flooding in Canada is often caused by heavy rainfall during the snowmelt period. Hydrologic forecast centers rely on precipitation forecasts obtained from numerical weather prediction (NWP) models to force hydrological models for streamflow forecasting. The uncertainties in raw quantitative precipitation forecasts (QPFs) are enhanced by physiography and orography effects over a diverse landscape, particularly in the western catchments of Canada. A Bayesian post-processing approach called rainfall post-processing (RPP), developed in Australia (Robertson et al., 2013; Shrestha et al., 2015), has been applied to assess its forecast performance in a Canadian catchment. Raw QPFs obtained from two sources, the Global Ensemble Forecasting System (GEFS) Reforecast 2 project from the National Centers for Environmental Prediction and the Global Deterministic Forecast System (GDPS) from Environment and Climate Change Canada, are used in this study. The study period from January 2013 to December 2015 covers a major flood event in Calgary, Alberta, Canada. Post-processed results show that the RPP is able to remove the bias and reduce the errors of both GEFS and GDPS forecasts. Ensembles generated from the RPP reliably quantify the forecast uncertainty.
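
    The general shape of such a post-processor can be sketched independently of the RPP specifics: regress observations on the raw QPF in a transformed space, then sample the residual distribution to produce a calibrated ensemble. The sketch below uses a log-space linear bias correction with Gaussian residuals and synthetic data; the RPP itself is a more sophisticated Bayesian joint-probability model.

        import numpy as np

        def fit_postprocessor(qpf, obs, eps=0.1):
            """Fit y = a + b*x in log space, with x = log(QPF + eps)."""
            x, y = np.log(qpf + eps), np.log(obs + eps)
            b, a = np.polyfit(x, y, 1)
            resid = y - (a + b * x)
            return a, b, resid.std()

        def ensemble(qpf_new, a, b, sigma, n=50, eps=0.1, rng=None):
            """Generate an n-member calibrated ensemble for one raw forecast."""
            rng = rng or np.random.default_rng()
            mu = a + b * np.log(qpf_new + eps)
            draws = np.exp(mu + sigma * rng.standard_normal(n)) - eps
            return np.clip(draws, 0.0, None)   # precipitation cannot be negative

        # toy usage with synthetic forecast/observation pairs (mm)
        rng = np.random.default_rng(1)
        qpf = rng.gamma(2.0, 3.0, 500)
        obs = np.clip(0.8 * qpf + rng.normal(0, 2, 500), 0, None)
        a, b, sigma = fit_postprocessor(qpf, obs)
        members = ensemble(12.0, a, b, sigma, rng=rng)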

  5. From "At Risk" to "At Promise": An Evaluation of an Early Reading First Project

    ERIC Educational Resources Information Center

    Zoll, Susan Marie

    2012-01-01

    This study demonstrates the impact of an Early Reading First intervention on preschool children's language and literacy development using an ex post facto, causal-comparative research design. The project's professional development model was evaluated to produce a process and outcome evaluation to answer two overarching research questions: (1) What…

  6. Strain rates, stress markers and earthquake clustering (Invited)

    NASA Astrophysics Data System (ADS)

    Fry, B.; Gerstenberger, M.; Abercrombie, R. E.; Reyners, M.; Eberhart-Phillips, D. M.

    2013-12-01

    The 2010-present Canterbury earthquakes comprise a well-recorded sequence in a relatively low strain-rate shallow crustal region. We present new scientific results to test the hypothesis that earthquake sequences in low strain-rate areas experience high stress-drop events, low post-seismic relaxation, and accentuated seismic clustering. This hypothesis is based on a physical description of the aftershock process in which the spatial distribution of stress accumulation and stress transfer are controlled by fault strength and orientation. Following large crustal earthquakes, time-dependent forecasts are often developed by fitting parameters defined by Omori's aftershock decay law. In high strain-rate areas, simple forecast models utilizing a single p-value fit observed aftershock sequences well. In low strain-rate areas such as Canterbury, assumptions of simple Omori decay may not be sufficient to capture the clustering (sub-sequence) nature exhibited by the punctuated rise in activity following significant child events. In Canterbury, the moment release is more clustered than in more typical Omori sequences. The individual earthquakes in these clusters also exhibit somewhat higher stress drops than in the average crustal sequence in high strain-rate regions, suggesting the earthquakes occur on strong Andersonian-oriented faults, possibly juvenile or well-healed. We use the spectral ratio procedure outlined in Viegas et al. (2010) to determine corner frequencies and Madariaga stress-drop values for over 800 events in the sequence. Furthermore, we discuss the relevance of the tomographic results of Reyners and Eberhart-Phillips (2013) documenting post-seismic stress-driven fluid processes following the three largest events in the sequence, as well as anisotropic patterns in surface wave tomography (Fry et al., 2013). These tomographic studies are both compatible with the hypothesis, providing strong evidence for the presence of widespread and hydrated regional upper-crustal cracking parallel to sub-parallel to the dominant transverse failure plane in the sequence. Joint interpretation of the three separate datasets provides a positive first test of our fundamental hypothesis.
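
    Two quantities central to this hypothesis have compact standard forms (textbook definitions, not expressions taken from the abstract): the modified Omori law for the aftershock rate, and the stress drop estimated from a corner frequency under the Madariaga source model,

        n(t) = \frac{K}{(c + t)^{p}}, \qquad
        \Delta\sigma = \frac{7}{16}\,\frac{M_{0}}{r^{3}}, \quad r = \frac{k\,\beta}{f_{c}}

    where K, c and p are the Omori parameters, M_0 is the seismic moment, \beta the shear-wave speed, f_c the corner frequency, and k a model constant (approximately 0.21 for S waves in the Madariaga model).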

  7. Carbon nanotube-based three-dimensional monolithic optoelectronic integrated system

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Wang, Sheng; Liu, Huaping; Peng, Lian-Mao

    2017-06-01

    Single material-based monolithic optoelectronic integration with complementary metal oxide semiconductor-compatible signal processing circuits is one of the most pursued approaches in the post-Moore era to realize rapid data communication and functional diversification in a limited three-dimensional space. Here, we report an electrically driven carbon nanotube-based on-chip three-dimensional optoelectronic integrated circuit. We demonstrate that photovoltaic receivers, electrically driven transmitters and on-chip electronic circuits can all be fabricated using carbon nanotubes via a complementary metal oxide semiconductor-compatible low-temperature process, providing a seamless integration platform for realizing monolithic three-dimensional optoelectronic integrated circuits with diversified functionality such as the heterogeneous AND gates. These circuits can be vertically scaled down to sub-30 nm and operates in photovoltaic mode at room temperature. Parallel optical communication between functional layers, for example, bottom-layer digital circuits and top-layer memory, has been demonstrated by mapping data using a 2 × 2 transmitter/receiver array, which could be extended as the next generation energy-efficient signal processing paradigm.

  8. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
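
    A standard CDF-based diagnostic in this literature is Miller's (1982) race-model inequality: if two targets are processed in parallel as an independent race, the redundant-target response-time distribution must satisfy

        F_{AB}(t) \le F_{A}(t) + F_{B}(t) \quad \text{for all } t

    where F_AB is the RT distribution with both targets present and F_A, F_B are the single-target distributions; violations of this bound indicate coactivation rather than a race. Whether this exact bound or a related ordering test was applied here is a detail of the original paper.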

  9. unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance

    USGS Publications Warehouse

    Fiske, Ian J.; Chandler, Richard B.

    2011-01-01

    Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially-referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
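
    The shared hierarchical structure is easiest to see in the simplest member of this family, the single-season occupancy model: occupancy ψ is a latent state and detection p is conditional on it. Below is a minimal sketch of its likelihood written in Python rather than unmarked's R interface (a generic textbook formulation; names and data are illustrative).

        import numpy as np
        from scipy.optimize import minimize

        def nll(params, y):
            """Negative log-likelihood of the single-season occupancy model.
            y: (sites x visits) binary detection histories."""
            psi = 1 / (1 + np.exp(-params[0]))     # occupancy probability
            p = 1 / (1 + np.exp(-params[1]))       # detection probability
            det = y.sum(axis=1)                    # detections per site
            J = y.shape[1]                         # visits per site
            # occupied sites with the observed history, plus the
            # unoccupied explanation for all-zero histories
            lik = psi * p ** det * (1 - p) ** (J - det)
            lik = lik + (det == 0) * (1 - psi)
            return -np.log(lik).sum()

        y = np.array([[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0]])  # toy data
        fit = minimize(nll, x0=[0.0, 0.0], args=(y,))
        psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))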

  10. CHIP as a membrane-shuttling proteostasis sensor

    PubMed Central

    Kopp, Yannick; Martínez-Limón, Adrián; Hofbauer, Harald F; Ernst, Robert; Calloni, Giulia

    2017-01-01

    Cells respond to protein misfolding and aggregation in the cytosol by adjusting gene transcription and a number of post-transcriptional processes. In parallel to functional reactions, cellular structure changes as well; however, the mechanisms underlying the early adaptation of cellular compartments to cytosolic protein misfolding are less clear. Here we show that the mammalian ubiquitin ligase C-terminal Hsp70-interacting protein (CHIP), if freed from chaperones during acute stress, can dock on cellular membranes thus performing a proteostasis sensor function. We reconstituted this process in vitro and found that mainly phosphatidic acid and phosphatidylinositol-4-phosphate enhance association of chaperone-free CHIP with liposomes. HSP70 and membranes compete for mutually exclusive binding to the tetratricopeptide repeat domain of CHIP. At new cellular locations, access to compartment-specific substrates would enable CHIP to participate in the reorganization of the respective organelles, as exemplified by the fragmentation of the Golgi apparatus (effector function). PMID:29091030

  11. The impact of cognitive behavioral therapy on post event processing among those with social anxiety disorder.

    PubMed

    Price, Matthew; Anderson, Page L

    2011-02-01

    Individuals with social anxiety are prone to engage in post event processing (PEP), a post mortem review of a social interaction that focuses on negative elements. The extent that PEP is impacted by cognitive behavioral therapy (CBT) and the relation between PEP and change during treatment has yet to be evaluated in a controlled study. The current study used multilevel modeling to determine if PEP decreased as a result of treatment and if PEP limits treatment response for two types of cognitive behavioral treatments, a group-based cognitive behavioral intervention and individually based virtual reality exposure. These hypotheses were evaluated using 91 participants diagnosed with social anxiety disorder. The findings suggested that PEP decreased as a result of treatment, and that social anxiety symptoms for individuals reporting greater levels of PEP improved at a slower rate than those with lower levels of PEP. Further research is needed to understand why PEP attenuates response to treatment. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists in integrating computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  14. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  16. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  17. Evaluation of the effectiveness and cost-effectiveness of Families for Health V2 for the treatment of childhood obesity: study protocol for a randomized controlled trial.

    PubMed

    Robertson, Wendy; Stewart-Brown, Sarah; Stallard, Nigel; Petrou, Stavros; Griffiths, Frances; Thorogood, Margaret; Simkiss, Douglas; Lang, Rebecca; Reddington, Kate; Poole, Fran; Rye, Gloria; Khan, Kamran A; Hamborg, Thomas; Kirby, Joanna

    2013-03-20

    Effective programs to help children manage their weight are required. Families for Health focuses on a parenting approach, designed to help parents develop their parenting skills to support lifestyle change within the family. Families for Health V1 showed sustained reductions in overweight after 2 years in a pilot evaluation, but lacks a randomized controlled trial (RCT) evidence base. This is a multi-center, investigator-blind RCT with a parallel economic evaluation and a 12-month follow-up. The trial will recruit 120 families with at least one child aged 6 to 11 years who is overweight (≥91st centile BMI) or obese (≥98th centile BMI) from three localities, randomly assigned to the Families for Health V2 (60 families) or usual-care control (60 families) group. Randomization will be stratified by locality (Coventry, Warwickshire, Wolverhampton). Families for Health V2 is a family-based intervention run in a community venue. Parents/carers and children attend parallel groups for 2.5 hours weekly for 10 weeks. The usual-care arm will be the usual support provided within each NHS locality. A mixed-methods evaluation will be carried out. Child and parent participants will be assessed at home visits at baseline and at 3-month (post-treatment) and 12-month follow-up. The primary outcome measure is the change in the children's BMI z-scores at 12 months from baseline. Secondary outcome measures include changes in the children's waist circumference, percentage body fat, physical activity, fruit/vegetable consumption and quality of life. The parents' BMI and mental well-being, family eating/activity, parent-child relationships and parenting style will also be assessed. Economic components will encompass the measurement and valuation of service utilization, including the costs of running Families for Health and usual care, and the EuroQol EQ-5D health outcomes. Cost-effectiveness will be expressed in terms of incremental cost per quality-adjusted life year gained. A de novo decision-analytic model will estimate the lifetime cost-effectiveness of the Families for Health program. Process evaluation will document recruitment, attendance and drop-out rates, and the fidelity of Families for Health delivery. Interviews with up to 24 parents and children from each arm will investigate perceptions and changes made. This paper describes our protocol to assess the effectiveness and cost-effectiveness of a parenting approach for managing childhood obesity and presents challenges to implementation. Current Controlled Trials http://ISRCTN45032201.

  18. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  19. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  20. ‘If you are good, I get better’: the role of social hierarchy in perceptual decision-making

    PubMed Central

    Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria

    2014-01-01

    Until now, it has been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course over which social hierarchy is processed using event-related potentials and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision-making process in this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to non-decision time (probably perceptual encoding) as the principal period influenced by social hierarchy. PMID:23946003
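
    The decomposition used here separates non-decision time (encoding plus motor execution) from the evidence-accumulation stage. A minimal simulation sketch of the symmetric two-boundary DDM follows (Euler discretization; all parameter values are illustrative): shortening the non-decision time shifts the whole RT distribution earlier without changing accuracy, which mirrors the reported hierarchy effect.

        import numpy as np

        def simulate_ddm(drift=0.3, boundary=1.0, t_nondecision=0.35,
                         dt=0.001, noise=1.0, rng=None):
            """Return (response time, choice) for one trial of a symmetric
            two-boundary drift-diffusion model starting at 0."""
            rng = rng or np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < boundary:                   # accumulate evidence
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t + t_nondecision, int(x > 0)       # choice 1 = upper boundary

        rng = np.random.default_rng(0)
        trials = [simulate_ddm(rng=rng) for _ in range(1000)]
        rts = np.array([t for t, _ in trials])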

  1. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

    Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware; to cope with this limit, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of the receiver, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to perform synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds of MHz. © 2011 Optical Society of America
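
    Although the PDTRL design itself is not reproduced here, the constraint it addresses can be illustrated: a feedback timing-error detector such as Gardner's nominally updates once per symbol, so hardware clocked far below the symbol rate must compute errors block-wise and apply one averaged loop update per parallel block. A simplified sketch follows (Gardner detector at 2 samples/symbol; an assumed stand-in, not the paper's algorithm).

        import numpy as np

        def gardner_block_error(block):
            """Average Gardner timing error over one parallel block of complex
            samples at 2 samples/symbol:
            e[k] = Re{(y[2k+2] - y[2k]) * conj(y[2k+1])}."""
            on_time = block[0::2]                 # symbol-rate samples
            mid = block[1::2]                     # half-symbol-offset samples
            e = np.real((on_time[1:] - on_time[:-1]) * np.conj(mid[:-1]))
            return e.mean()

        # one loop update per block instead of one per symbol
        mu, phase = 0.01, 0.0
        blocks = np.random.randn(100, 256) + 1j * np.random.randn(100, 256)
        for blk in blocks:
            phase -= mu * gardner_block_error(blk)   # drives an interpolator (not shown)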

  2. Social Drinking on Social Media: Content Analysis of the Social Aspects of Alcohol-Related Posts on Facebook and Instagram.

    PubMed

    Hendriks, Hanneke; Van den Putte, Bas; Gebhardt, Winifred A; Moreno, Megan A

    2018-06-22

    Alcohol is often consumed in social contexts. An emerging social context in which alcohol is becoming increasingly apparent is social media. More and more young people display alcohol-related posts on social networking sites such as Facebook and Instagram. Considering the importance of the social aspects of alcohol consumption and social media use, this study investigated the social content of alcohol posts (ie, the evaluative social context and presence of people) and social processes (ie, the posting of and reactions to posts) involved with alcohol posts on social networking sites. Participants (N=192; mean age 20.64, SD 4.68 years, 132 women and 54 men) gave researchers access to their Facebook and/or Instagram profiles, and an extensive content analysis of these profiles was conducted. Coders were trained and then coded all screenshotted timelines in terms of evaluative social context, presence of people, and reactions to post. Alcohol posts of youth frequently depict alcohol in a positive social context (425/438, 97.0%) and display people holding drinks (277/412, 67.2%). In addition, alcohol posts were more often placed on participants' timelines by others (tagging; 238/439, 54.2%) than posted by participants themselves (201/439, 45.8%). Furthermore, it was revealed that such social posts received more likes (mean 35.50, SD 26.39) and comments than nonsocial posts (no people visible; mean 10.34, SD 13.19, P<.001). In terms of content and processes, alcohol posts on social media are social in nature and a part of young people's everyday social lives. Interventions aiming to decrease alcohol posts should therefore focus on the broad social context of individuals in which posting about alcohol takes place. Potential intervention strategies could involve making young people aware that when they post about social gatherings in which alcohol is visible and tag others, it may have unintended negative consequences and should be avoided. ©Hanneke Hendriks, Bas Van den Putte, Winifred A Gebhardt, Megan A Moreno. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 22.06.2018.

  3. Using the Extended Parallel Process Model to Prevent Noise-Induced Hearing Loss among Coal Miners in Appalachia

    ERIC Educational Resources Information Center

    Murray-Johnson, Lisa; Witte, Kim; Patel, Dhaval; Orrego, Victoria; Zuckerman, Cynthia; Maxfield, Andrew M.; Thimons, Edward D.

    2004-01-01

    Occupational noise-induced hearing loss is the second most self-reported occupational illness or injury in the United States. Among coal miners, more than 90% of the population reports a hearing deficit by age 55. In this formative evaluation, focus groups were conducted with coal miners in Appalachia to ascertain whether miners perceive hearing…

  4. Non-parametric analysis of LANDSAT maps using neural nets and parallel computers

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda; Tilton, James

    1991-01-01

    Nearest-neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of ground-pixel images obtained by the LANDSAT satellite. Performance is evaluated by comparing classifications of a scene in the vicinity of Washington, DC. The problem of optimal selection of categories is addressed as a step in the classification process.

  5. An evaluation of GTAW-P versus GTA welding of alloy 718

    NASA Technical Reports Server (NTRS)

    Gamwell, W. R.; Kurgan, C.; Malone, T. W.

    1991-01-01

    Mechanical properties were evaluated to determine statistically whether the pulsed current gas tungsten arc welding (GTAW-P) process produces welds in alloy 718 with room temperature structural performance equivalent to current Space Shuttle Main Engine (SSME) welds manufactured by the constant current GTAW process. Evaluations were conducted on two base metal lots, two filler metal lots, two heat input levels, and two welding processes. The material form was 0.125-inch (3.175-mm) alloy 718 sheet. Prior to welding, sheets were treated to either the ST or STA-1 condition. After welding, panels were left as welded or heat treated to the STA-1 condition, and weld beads were left intact or machined flush. Statistical analyses were performed on yield strength, ultimate tensile strength (UTS), and high cycle fatigue (HCF) properties for all the post-weld material conditions. Analyses of variance were performed on the data to determine if there were any significant effects on UTS or HCF life due to variations in base metal, filler metal, heat input level, or welding process. Statistical analyses showed that the GTAW-P process does produce welds with room temperature structural performance equivalent to current SSME welds manufactured by the GTAW process, regardless of prior material condition or post-welding condition.

  6. Spatially parallel processing of within-dimension conjunctions.

    PubMed

    Linnell, K J; Humphreys, G W

    2001-01-01

    Within-dimension conjunction search for red-green targets amongst red-blue, and blue-green, nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response-competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.

  7. Kubo-Greenwood electrical conductivity formulation and implementation for projector augmented wave datasets

    NASA Astrophysics Data System (ADS)

    Calderín, L.; Karasiev, V. V.; Trickey, S. B.

    2017-12-01

    As the foundation for a new computational implementation, we survey the calculation of the complex electrical conductivity tensor based on the Kubo-Greenwood (KG) formalism (Kubo, 1957; Greenwood, 1958), with emphasis on derivations and technical aspects pertinent to use of projector augmented wave datasets with plane wave basis sets (Blöchl, 1994). New analytical results and a full implementation of the KG approach in an open-source Fortran 90 post-processing code for use with Quantum Espresso (Giannozzi et al., 2009) are presented. Named KGEC ([K]ubo [G]reenwood [E]lectronic [C]onductivity), the code calculates the full complex conductivity tensor (not just the average trace). It supports use of either the original KG formula or the popular one approximated in terms of a Dirac delta function. It provides both Gaussian and Lorentzian representations of the Dirac delta function (though the Lorentzian is preferable on basic grounds). KGEC provides decomposition of the conductivity into intra- and inter-band contributions as well as degenerate state contributions. It calculates the dc conductivity tensor directly. It is MPI parallelized over k-points, bands, and plane waves, with an option to recover the plane wave processes for their use in band parallelization as well. It is designed to provide rapid convergence with respect to k-point density. Examples of its use are given.
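
    For reference, the real part of the KG conductivity that such codes evaluate has the standard form (written here in one common convention; KGEC's exact normalization should be taken from its documentation)

        \sigma_{1}(\omega) = \frac{2\pi e^{2}}{3\, m^{2} \omega\, \Omega}
        \sum_{\mathbf{k}} w_{\mathbf{k}} \sum_{i,j} (f_{i} - f_{j})\,
        \left| \langle \psi_{j} | \hat{\mathbf{p}} | \psi_{i} \rangle \right|^{2}
        \delta(E_{j} - E_{i} - \hbar\omega)

    where \Omega is the cell volume, w_k the k-point weights, and f_i the occupations; the Dirac delta is broadened with the Gaussian or Lorentzian representations mentioned above.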

  8. Can supine recovery mitigate the exercise intensity dependent attenuation of post-exercise heat loss responses?

    PubMed

    Kenny, Glen P; Gagnon, Daniel; Jay, Ollie; McInnis, Natalie H; Journeay, W Shane; Reardon, Francis D

    2008-08-01

    Cutaneous vascular conductance (CVC) and sweat rate are subject to non-thermal baroreflex-mediated attenuation post-exercise. Various recovery modalities have been effective in attenuating these decreases in CVC and sweat rate post-exercise. However, the interaction of recovery posture and preceding exercise intensity on post-exercise thermoregulation remains unresolved. We evaluated the combined effect of supine recovery and exercise intensity on post-exercise cardiovascular and thermal responses relative to an upright seated posture. Seven females performed 15 min of cycling ergometry at low- (LIE, 55% maximal oxygen consumption) or high- (HIE, 85% maximal oxygen consumption) intensity followed by 60 min of recovery in either an upright seated or supine posture. Esophageal temperature, CVC, sweat rate, cardiac output, stroke volume, heart rate, total peripheral resistance, and mean arterial pressure (MAP) were measured at baseline, at end-exercise, and at 2, 5, 12, 20, and every 10 min thereafter until the end of recovery. MAP and stroke volume were maintained during supine recovery to a greater extent relative to an upright seated recovery following HIE (p

  9. Development and evaluation of web-based animated pedagogical agents for facilitating critical thinking in nursing.

    PubMed

    Morey, Diane J

    2012-01-01

    The purpose of this study was to evaluate the effectiveness of Web-based animated pedagogical agents on critical thinking among nursing students. A pedagogical agent, or virtual character, provides a possible innovative tool for critical thinking through active engagement of students by asking questions and providing feedback about a series of nursing case studies. This mixed-methods experimental study used a pretest-posttest design with a control group. ANCOVA demonstrated no significant difference between the groups on the Critical Thinking Process Test. Pre- and post-think-alouds were analyzed using a rating tool and rubric for the presence of eight cognitive processes, level of critical thinking, and accuracy of nursing diagnosis, conclusions, and evaluation. Chi-square analyses for each group revealed a significant difference for improvement of the critical thinking level and correct conclusions from pre-think-aloud to post-think-aloud, but only the pedagogical agent group had a significant result for appropriate evaluations.

  10. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector and the actual features to select to be identified for large and high dimensional data sets through exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
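
    The paper's selectors run on a binary associative-memory neural network inside Hadoop YARN, which is not reproduced here. As a much smaller illustration of the pattern the abstract describes (each feature scored as an independent subtask, subtasks processed in parallel), here is a generic sketch; the mutual-information scorer and all names are stand-ins of mine, not the paper's framework:

```python
from multiprocessing import Pool

import numpy as np

def mi_score(args):
    """Score one feature column against the labels: one independent subtask.

    A generic mutual-information filter; the paper scores features with a
    binary associative-memory neural network instead.
    """
    column, labels = args
    score = 0.0
    for x in np.unique(column):
        for y in np.unique(labels):
            pxy = np.mean((column == x) & (labels == y))
            px, py = np.mean(column == x), np.mean(labels == y)
            if pxy > 0:
                score += pxy * np.log(pxy / (px * py))
    return score

def select_top_k(X, y, k=10, workers=4):
    """Fan the per-feature subtasks out to a worker pool; keep the best k."""
    with Pool(workers) as pool:
        scores = pool.map(mi_score, [(X[:, j], y) for j in range(X.shape[1])])
    return list(np.argsort(scores)[::-1][:k])
```

    Several such selectors could be run side by side, which mirrors the selector-level parallelism the abstract also mentions.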

  11. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    NASA Technical Reports Server (NTRS)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  12. Source, impact and removal of malodour from soiled clothing.

    PubMed

    Denawaka, Chamila J; Fowlis, Ian A; Dean, John R

    2016-03-18

    Static headspace-multi-capillary column-gas chromatography-ion mobility spectrometry (SHS-MCC-GC-IMS) has been applied to the analysis of malodour compounds from soiled clothing (socks and T-shirts), pre- and post-washing, at low temperature (20°C). Six volatile compounds (VCs) (i.e. butyric acid, dimethyl disulfide, dimethyl trisulfide, 2-heptanone, 2-nonanone and 2-octanone) were identified. After sensory evaluation of the soiled garments, they were subjected to laundering with non-perfumed washing powder. The efficiency of the laundering process was evaluated by determining the reduction of each detected volatile compound (VC) post-wash (damp) for socks and T-shirts; VC concentration reductions of between 16 and 100% were noted, irrespective of sample type. Additionally, the T-shirt study considered the change in VC concentration post-wash (dry), i.e. after the drying process at ambient temperature. Overall, VC concentration reductions of between 25 and 98% were noted for T-shirt samples pre-wash to post-wash (dry). Finally, a potential biochemical metabolic pathway for the formation of malodour compounds associated with bacteria in axillary sweat is proposed. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Post-Secondary Attendance by Parental Income in the U.S. and Canada: What Role for Financial Aid Policy? NBER Working Paper No. 17218

    ERIC Educational Resources Information Center

    Belley, Philippe; Frenette, Marc; Lochner, Lance

    2011-01-01

    This paper examines the implications of tuition and need-based financial aid policies for family income--post-secondary (PS) attendance relationships. We first conduct a parallel empirical analysis of the effects of parental income on PS attendance for recent high school cohorts in both the U.S. and Canada using data from the 1997 Cohort of the…

  14. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
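
    PAMI is an IBM-specific messaging layer whose API is not shown in this record. As a rough functional analogue of issuing a collective operation without blocking, here is a minimal sketch in standard MPI via mpi4py; this is a deliberate stand-in, not the PAMI interface:

```python
# Non-blocking broadcast as a standard-MPI analogue of a PAMI
# non-blocking collective (run under mpiexec with several ranks).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD            # loosely analogous to a "geometry" of endpoints
buf = np.zeros(4, dtype='i')
if comm.Get_rank() == 0:
    buf[:] = [1, 2, 3, 4]

req = comm.Ibcast(buf, root=0)   # returns immediately instead of blocking

# ...useful computation can overlap the collective here...

req.Wait()                       # completion point; PAMI would instead fire
                                 # the dispatch callback registered per endpoint
```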

  15. Second Evaluation of Job Queuing/Scheduling Software. Phase 1

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)

    1997-01-01

    The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes this second evaluation and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking; however, definite progress has been made by the vendors to correct the deficiencies. This report is supplemented by a WWW interface to the data collected, to aid other sites in extracting the evaluation information on specific requirements of interest.

  16. Mechanical Behavior of Additively Manufactured Uranium-6 wt. pct. Niobium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, A. S.; Wraith, M. W.; Burke, S. C.

    This report describes an effort to process uranium-6 weight% niobium using laser powder bed fusion. The chemistry, crystallography, microstructure and mechanical response resulting from this process are discussed with particular emphasis on the effect of the laser powder bed fusion process on impurities. In an effort to achieve homogenization and uniform mechanical behavior from different builds, as well as to induce a more conventional loading response, we explore post-processing heat treatments on this complex alloy. Elevated temperature heat treatment for recrystallization is evaluated and the effect of recrystallization on mechanical behavior in laser powder bed fusion processed U-6Nb is discussed. Wrought-like mechanical behavior and grain sizes are achieved through post-processing and are reported herein.

  17. Comparative study of resist stabilization techniques for metal etch processing

    NASA Astrophysics Data System (ADS)

    Becker, Gerry; Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Livesay, William R.

    1999-06-01

    This study investigates resist stabilization techniques as they are applied to a metal etch application. The techniques compared are conventional deep-UV/thermal stabilization, or UV bake, and electron beam stabilization. The electron beam tool used in this study, an ElectronCure system from AlliedSignal Inc., Electron Vision Group, utilizes a flood electron source and a non-thermal process. These stabilization techniques are compared with respect to a metal etch process. In this study, two types of resist are considered for stabilization and etch: a g/i-line resist, Shipley SPR-3012, and an advanced i-line resist, Shipley SPR 955-Cm. For each of these resists, the effects of stabilization on resist features are evaluated by post-stabilization SEM analysis. Etch selectivity is evaluated in all cases by using a timed metal etch and measuring the resist remaining relative to the total metal thickness etched. Etch selectivity is presented as a function of stabilization condition. Analyses of the effects of the type of stabilization on this method of selectivity measurement are also presented. SEM analysis was also performed on the features after a complete etch process, and is detailed as a function of stabilization condition. Post-etch cleaning is also an important factor impacted by pre-etch resist stabilization. Results of post-etch cleaning are presented for both stabilization methods. SEM inspection is also detailed for the metal features after resist removal processing.
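
    One plausible way to formalize the selectivity figure described above (notation assumed for illustration; the abstract gives no explicit formula):

```latex
% Etch selectivity from a timed etch: metal removed versus resist
% consumed over the same etch time (assumed notation).
S \;=\; \frac{d_{\mathrm{metal,\,etched}}}
             {d_{\mathrm{resist,\,initial}} - d_{\mathrm{resist,\,remaining}}}
```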

  18. Reliability of computer designed surgical guides in six implant rehabilitations with two years follow-up.

    PubMed

    Giordano, Mauro; Ausiello, Pietro; Martorelli, Massimo; Sorrentino, Roberto

    2012-09-01

    To evaluate the reliability and accuracy of computer-designed surgical guides in osseointegrated oral implant rehabilitation. Six implant rehabilitations, with a total of 17 implants, were completed with computer-designed surgical guides, performed with the master model developed by muco-compressive and muco-static impressions. In the first case, the surgical guide had exclusively mucosal support; in the second case, exclusively dental support. For all six cases, computer-aided surgical planning was performed by virtual analyses with 3D models obtained from dental-scan DICOM data. The accuracy and stability of implant osseointegration over two years post-surgery were then evaluated with clinical and radiographic examinations. Radiographic examination, performed with digital acquisitions (RVG - Radio Video Graph) and parallel techniques, allowed two-dimensional feedback with a margin of linear error of 10%. Implant osseointegration was recorded for all the examined rehabilitations. During the clinical and radiographic post-surgical assessments over the following two years, the peri-implant bone level was found to be stable and without the appearance of any complications. The margin of error recorded between pre-operative positions assigned by virtual analysis and the post-surgical digital radiographic observations was as low as 0.2 mm. Computer-guided implant surgery can be very effective in oral rehabilitations, providing an opportunity for the surgeon (a) to avoid the necessity of muco-periosteal detachments and (b) to perform minimally invasive interventions, whenever appropriate, with a flapless approach. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  19. SaFaRI: sacral nerve stimulation versus the FENIX magnetic sphincter augmentation for adult faecal incontinence: a randomised investigation.

    PubMed

    Williams, Annabelle E; Croft, Julie; Napp, Vicky; Corrigan, Neil; Brown, Julia M; Hulme, Claire; Brown, Steven R; Lodge, Jen; Protheroe, David; Jayne, David G

    2016-02-01

    Faecal incontinence is a physically, psychologically and socially disabling condition. NICE guidance (2007) recommends surgical intervention, including sacral nerve stimulation (SNS), after failed conservative therapies. The FENIX magnetic sphincter augmentation (MSA) device is a novel continence device consisting of a flexible band of interlinked titanium beads with magnetic cores that is placed around the anal canal to augment anal sphincter tone through passive attraction of the beads. Preliminary studies suggest the FENIX MSA is safe, but efficacy data are limited. Rigorous evaluation is required prior to widespread adoption. The SaFaRI trial is a National Institute of Health Research (NIHR) Health Technology Assessment (HTA)-funded UK multi-site, parallel-group, randomised controlled, unblinded trial that will investigate the use of the FENIX MSA, as compared to SNS, for adult faecal incontinence resistant to conservative management. Twenty sites across the UK, experienced in the treatment of faecal incontinence, will recruit 350 patients randomised equally to receive either SNS or FENIX MSA. Participants will be followed up at 2 weeks post-surgery and at 6, 12 and 18 months post-randomisation. The primary endpoint is success, as defined by device in use and ≥50% improvement in the Cleveland Clinic Incontinence Score (CCIS) at 18 months post-randomisation. Secondary endpoints include complications, quality of life and cost-effectiveness. SaFaRI will rigorously evaluate a new technology for faecal incontinence, the FENIX™ MSA, allowing its safe and controlled introduction into current clinical practice. These results will inform the future surgical management of adult faecal incontinence.

  20. Evaluation of new data processing algorithms for planar gated ventriculography (MUGA)

    PubMed Central

    Fair, Joanna R.; Telepak, Robert J.

    2009-01-01

    Before implementing one of two new LVEF radionuclide gated ventriculogram (MUGA) systems, the results from 312 consecutive parallel patient studies were evaluated. Each gamma-camera acquisition was simultaneously processed by a semi-automatic Medasys Pinnacle system and by fully automatic and semi-automatic Philips nuclear medicine computer systems. The Philips systems yielded LVEF results within ±5 LVEF percentage points of the Medasys system in fewer than half of the studies. The remaining values were higher or lower than those from the long-used Medasys system. These differences might have changed chemotherapy clinical decisions for cancer patients. As a result, our institution elected not to implement either new system.

  1. [CMACPAR: a modified parallel neurocontroller for process control].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems, for example control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, strong generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for projection onto the n-dimensional space, using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
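
    For context, below is a minimal serial CMAC sketch: a tile-coded associative memory whose delta-rule update touches only the active cells, which is the local learning the abstract mentions. It illustrates plain CMAC only; CMACPAR's parallel scheme and memory management are not reproduced:

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: several offset tilings map an input to a few
    active cells; prediction sums their weights, learning is local."""

    def __init__(self, n_tilings=8, n_cells=64, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.lo, self.width = lo, hi - lo
        self.w = np.zeros((n_tilings, n_cells))

    def _active(self, x):
        # Scale the input to cell units; each tiling is offset by a
        # fraction of one cell, giving overlapping receptive fields.
        s = (x - self.lo) / self.width * (self.n_cells - 1)
        return [(t, int(s + t / self.n_tilings) % self.n_cells)
                for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in self._active(x):      # local update: active cells only
            self.w[t, c] += self.lr * err / self.n_tilings

# Usage: approximate sin(2*pi*x) on [0, 1].
net, rng = CMAC(), np.random.default_rng(0)
for _ in range(5000):
    x = rng.random()
    net.train(x, np.sin(2 * np.pi * x))
print(net.predict(0.25))   # approaches 1.0 as training proceeds
```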

  2. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is a multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  3. Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu

    2012-10-01

    We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
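
    The fractional-step construction rests on the Lie-Trotter product formula applied to the decomposed generator; its standard generic form is (notation mine, not quoted from the paper):

```latex
% Lie-Trotter splitting of the Markov generator L = L_1 + L_2,
% e.g. one sub-generator per spatial sub-domain/processor:
e^{t(L_1 + L_2)} \;=\; \lim_{n \to \infty}
  \Bigl( e^{\frac{t}{n} L_1}\, e^{\frac{t}{n} L_2} \Bigr)^{n}
% For finite n this gives a fractional-step scheme in which the
% sub-domains evolve independently within each time window, with a
% splitting error of O(t^2/n) that vanishes as the windows shrink.
```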

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model) model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.

  5. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  6. Generator stator core vent duct spacer posts

    DOEpatents

    Griffith, John Wesley; Tong, Wei

    2003-06-24

    Generator stator cores are constructed by stacking many layers of magnetic laminations. Ventilation ducts may be inserted between these layers by inserting spacers into the core stack. The ventilation ducts allow for the passage of cooling gas through the core during operation. The spacers or spacer posts are positioned between groups of the magnetic laminations to define the ventilation ducts. The spacer posts are secured with longitudinal axes thereof substantially parallel to the core axis. With this structure, core tightness can be assured while maximizing ventilation duct cross section for gas flow and minimizing magnetic loss in the spacers.

  7. Ultrasonic and radiographic evaluation of advanced aerospace materials: Ceramic composites

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    1990-01-01

    Two conventional nondestructive evaluation techniques were used to evaluate advanced ceramic composite materials. It was shown that neither ultrasonic C-scan nor radiographic imaging can individually provide sufficient data for an accurate nondestructive evaluation. Both ultrasonic C-scan and conventional radiographic imaging are required for preliminary evaluation of these complex systems. The material variations that were identified by these two techniques are porosity, delaminations, bond quality between laminae, fiber alignment, fiber registration, fiber parallelism, and processing density flaws. The degree of bonding between fiber and matrix cannot be determined by either of these methods. An alternative ultrasonic technique, angular power spectrum scanning (APSS) is recommended for quantification of this interfacial bond.

  8. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). The analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.

  9. Are we puppets on a string? Comparing the impact of contingency and validity on implicit and explicit evaluations.

    PubMed

    Peters, Kurt R; Gawronski, Bertram

    2011-04-01

    Research has demonstrated that implicit and explicit evaluations of the same object can diverge. Explanations of such dissociations frequently appeal to dual-process theories, such that implicit evaluations are assumed to reflect object-valence contingencies independent of their perceived validity, whereas explicit evaluations reflect the perceived validity of object-valence contingencies. Although there is evidence supporting these assumptions, it remains unclear if dissociations can arise in situations in which object-valence contingencies are judged to be true or false during the learning of these contingencies. Challenging dual-process accounts that propose a simultaneous operation of two parallel learning mechanisms, results from three experiments showed that the perceived validity of evaluative information about social targets qualified both explicit and implicit evaluations when validity information was available immediately after the encoding of the valence information; however, delaying the presentation of validity information reduced its qualifying impact for implicit, but not explicit, evaluations.

  10. Automatic Management of Parallel and Distributed System Resources

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  11. On the Optimality of Serial and Parallel Processing in the Psychological Refractory Period Paradigm: Effects of the Distribution of Stimulus Onset Asynchronies

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf; Rolke, Bettina

    2009-01-01

    Within the context of the psychological refractory period (PRP) paradigm, we developed a general theoretical framework for deciding when it is more efficient to process two tasks in serial and when it is more efficient to process them in parallel. This analysis suggests that a serial mode is more efficient than a parallel mode under a wide variety…

  12. Parallel Monte Carlo Search for Hough Transform

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing, in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of other shapes, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization: finding the peak in a vote-counting process over cells which represent the possible parameters of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a form of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
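
    As a concrete picture of the vote-counting idea, here is a minimal serial sketch of the classic HT for lines; it is an illustration only, not the paper's parallel or Monte Carlo variants:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Classic Hough voting for lines in normal form
    rho = x*cos(theta) + y*sin(theta): every point votes for all
    (theta, rho) cells it could lie on; peaks are candidate lines."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = max(np.hypot(x, y) for x, y in points)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = ((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        for t in range(n_theta):
            acc[t, bins[t]] += 1           # one vote per (theta, rho) cell
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r / (n_rho - 1) * 2 * rho_max - rho_max

# Usage: points near the line y = x; expect theta ~ 3*pi/4, rho ~ 0.
pts = [(i, i + 0.01 * (-1) ** i) for i in range(1, 50)]
theta, rho = hough_lines(pts)
```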

  13. The role of parallelism in the real-time processing of anaphora.

    PubMed

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P

    2012-06-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.

  15. Managing Cognitive Dissonance: Experience from an Environmental Education Teachers' Training Course in the Czech Republic

    ERIC Educational Resources Information Center

    Cincera, Jan

    2013-01-01

    This paper presents a qualitative evaluation of seven in-service environmental education teacher training courses conducted in the Czech Republic in 2009-2011. The evaluation applied a grounded theory approach. 14 focus groups, 13 interviews and two post-programme questionnaires were used. The evaluation describes a process of managing cognitive…

  16. COMPUTERIZED TRAINING OF CRYOSURGERY – A SYSTEM APPROACH

    PubMed Central

    Keelan, Robert; Yamakawa, Soji; Shimada, Kenji; Rabin, Yoed

    2014-01-01

    The objective of the current study is to provide the foundation for a computerized training platform for cryosurgery. Consistent with clinical practice, the training process targets the correlation of the frozen region contour with the target region shape, using medical imaging and accepted criteria for clinical success. The current study focuses on system design considerations, including a bioheat transfer model, simulation techniques, optimal cryoprobe layout strategy, and a simulation core framework. Two fundamentally different approaches were considered for the development of a cryosurgery simulator, based on a finite-elements (FE) commercial code (ANSYS) and a proprietary finite-difference (FD) code. Results of this study demonstrate that the FE simulator is superior in terms of geometric modeling, while the FD simulator is superior in terms of runtime. Benchmarking results further indicate that the FD simulator is superior in terms of usage of memory resources, pre-processing, parallel processing, and post-processing. It is envisioned that future integration of a human-interface module and clinical data into the proposed computer framework will make computerized training of cryosurgery a practical reality. PMID:23995400
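
    The abstract does not name the bioheat transfer model; the standard choice in cryosurgery simulation is the Pennes bioheat equation, shown below in its usual form as background (an assumption on my part, not a claim about this particular system):

```latex
% Pennes bioheat equation: conduction + blood perfusion + metabolic heat.
% T: tissue temperature, T_a: arterial temperature, w_b: perfusion rate,
% c_b: blood specific heat, q_met: metabolic heat generation.
\rho c \,\frac{\partial T}{\partial t}
  \;=\; \nabla \cdot (k \nabla T)
  \;+\; w_b c_b \,(T_a - T)
  \;+\; q_{\mathrm{met}}
```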

  17. Phase retrieval algorithm for JWST Flight and Testbed Telescope

    NASA Astrophysics Data System (ADS)

    Dean, Bruce H.; Aronstein, David L.; Smith, J. Scott; Shiri, Ron; Acton, D. Scott

    2006-06-01

    An image-based wavefront sensing and control algorithm for the James Webb Space Telescope (JWST) is presented. The algorithm heritage is discussed in addition to implications for algorithm performance dictated by NASA's Technology Readiness Level (TRL) 6. The algorithm uses feedback through an adaptive diversity function to avoid the need for phase-unwrapping post-processing steps. Algorithm results are demonstrated using JWST Testbed Telescope (TBT) commissioning data and the accuracy is assessed by comparison with interferometer results on a multi-wave phase aberration. Strategies for minimizing aliasing artifacts in the recovered phase are presented and orthogonal basis functions are implemented for representing wavefronts in irregular hexagonal apertures. Algorithm implementation on a parallel cluster of high-speed digital signal processors (DSPs) is also discussed.

  18. Optical vector network analyzer based on double-sideband modulation.

    PubMed

    Jun, Wen; Wang, Ling; Yang, Chengwu; Li, Ming; Zhu, Ning Hua; Guo, Jinjin; Xiong, Liangming; Li, Wei

    2017-11-01

    We report an optical vector network analyzer (OVNA) based on double-sideband (DSB) modulation using a dual-parallel Mach-Zehnder modulator. The device under test (DUT) is measured twice with different modulation schemes. By post-processing the measurement results, the response of the DUT can be obtained accurately. Since DSB modulation is used in our approach, the measurement range is doubled compared with conventional single-sideband (SSB) modulation-based OVNA. Moreover, the measurement accuracy is improved by eliminating the even-order sidebands. The key advantage of the proposed scheme is that the measurement of a DUT with bandpass response can also be simply realized, which is a big challenge for the SSB-based OVNA. The proposed method is theoretically and experimentally demonstrated.

  19. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed, and their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. Three classes of algorithms, viz. O(n), O(n^2), and O(n^3), for the solution of the dynamics problem are investigated. The derivation begins with the O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC, with time and processor bounds of O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. The fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.

  20. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
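
    PARAMO itself runs on Map-Reduce in a cluster; as a toy single-machine analogue of its three steps (dependency graph, topological ordering, parallel execution of ready tasks), here is a sketch using only the Python standard library, with task names invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# 1) Dependency graph of pipeline tasks: node -> set of prerequisites
#    (illustrative names, not PARAMO's actual task vocabulary).
graph = {
    "cohort": set(),
    "features": {"cohort"},
    "fold_1": {"features"}, "fold_2": {"features"}, "fold_3": {"features"},
    "aggregate": {"fold_1", "fold_2", "fold_3"},
}

def run(task):
    print("running", task)           # a real system would launch the modeling job

# 2) Topological ordering; 3) execute all currently-ready tasks in parallel.
ts = TopologicalSorter(graph)
ts.prepare()
with ThreadPoolExecutor(max_workers=3) as pool:
    while ts.is_active():
        ready = ts.get_ready()       # e.g. the three CV folds become ready together
        list(pool.map(run, ready))   # run them concurrently
        ts.done(*ready)
```

    Replacing the thread pool with cluster jobs gives the same shape at Map-Reduce scale.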
