NASA Technical Reports Server (NTRS)
Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele
2004-01-01
In this paper we describe the parallelization of the multi-zone versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.
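The two-level structure exploited here can be sketched outside OpenMP as well. The following Python sketch (zone sizes, work function, and worker counts are all invented for illustration) nests a fine-grain thread pool inside a coarse-grain one, analogous to nested OpenMP parallel regions with per-zone thread groups:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical zone sizes for a multi-zone grid (not taken from the benchmarks).
zones = [list(range(n)) for n in (4, 8, 2)]

def solve_cell(c):
    return c * c  # stand-in for the per-cell work inside a zone

def solve_zone(zone):
    # Inner (fine-grain) level: parallelism across the cells of one zone.
    with ThreadPoolExecutor(max_workers=2) as inner:
        return sum(inner.map(solve_cell, zone))

# Outer (coarse-grain) level: parallelism across zones, mirroring the
# grouping-of-threads idea where each zone gets its own team of workers.
with ThreadPoolExecutor(max_workers=len(zones)) as outer:
    totals = list(outer.map(solve_zone, zones))

print(totals)
```

The outer pool plays the role of the coarse OpenMP team and each inner pool a nested team; load balancing across zones of unequal size is exactly the problem the grouping clauses address.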
NASA Astrophysics Data System (ADS)
Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel
2018-01-01
This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms, and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework, which provides a portable, vendor-agnostic and high-throughput solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios up to 75x when compared to multi-thread and multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.
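The time-stepping skeleton of such a finite-difference scheme can be illustrated in one dimension. In the sketch below the grid size, CFL number, and initial pulse are invented, and a scalar wave equation stands in for the Dirac spinor; the point is the stencil update over independent grid points, which is the axis the GPU kernels parallelize:

```python
# 1D scalar wave equation u_tt = c^2 u_xx on a leapfrog FDTD stencil.
nx, nt = 50, 100
c, dx, dt = 1.0, 1.0, 0.5          # CFL number c*dt/dx = 0.5 < 1 for stability

u_prev = [0.0] * nx                # field at time step n-1
u = [0.0] * nx                     # field at time step n
u[nx // 2] = 1.0                   # localized initial pulse

for _ in range(nt):
    u_next = [0.0] * nx            # fixed (zero) boundaries at both ends
    for i in range(1, nx - 1):     # each i is independent: the parallel axis
        lap = u[i - 1] - 2.0 * u[i] + u[i + 1]
        u_next[i] = 2.0 * u[i] - u_prev[i] + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next

print(max(abs(v) for v in u))
```

In an OpenCL port, the inner loop over `i` becomes the work-item index and the two time levels become ping-pong buffers in device memory.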
The development of a revised version of multi-center molecular Ornstein-Zernike equation
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi
2012-04-01
Ornstein-Zernike (OZ)-type theory is a powerful tool to obtain the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computing of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of the function were examined as the reference part to investigate the numerical robustness of the method. As benchmarks, the method is applied to water, benzene in aqueous solution, and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.
MLP: A Parallel Programming Alternative to MPI for New Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Taft, James R.
1999-01-01
Recent developments at the NASA Ames Research Center's NAS Division have demonstrated that the new generation of NUMA-based Symmetric Multi-Processing systems (SMPs), such as the Silicon Graphics Origin 2000, can successfully execute legacy vector-oriented CFD production codes at sustained rates far exceeding the processing rates possible on dedicated 16-CPU Cray C90 systems. This high level of performance is achieved via shared-memory-based Multi-Level Parallelism (MLP). This programming approach, developed at NAS and outlined below, is distinct from the message-passing paradigm of MPI. It offers parallelism at both the fine- and coarse-grained level, with communication latencies that are approximately 50-100 times lower than typical MPI implementations on the same platform. Such latency reductions offer the promise of performance scaling to very large CPU counts. The method draws on, but is also distinct from, the newly defined OpenMP specification, which uses compiler directives to support a limited subset of multi-level parallel operations. The NAS MLP method is general, and applicable to a large class of NASA CFD codes.
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new approach to processing technology that is based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls on many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images through an actual Hadoop service system.
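The MapReduce pattern behind the tile and pyramid processing reduces to a stateless map step over independent records followed by a keyed reduce. This minimal Python analogue (the image IDs and tile sizes are invented stand-ins, not real imagery) totals storage per image the way Hadoop would aggregate across nodes:

```python
from collections import defaultdict
from functools import reduce

# Hypothetical tile records: (image_id, tile_size_in_MB).
tiles = [("img1", 120), ("img2", 300), ("img1", 80), ("img2", 100), ("img1", 50)]

def mapper(record):
    # In Hadoop this would run on whichever node holds the block.
    image_id, size = record
    return (image_id, size)

def reducer(acc, pair):
    # Shuffle+reduce: accumulate values by key.
    key, value = pair
    acc[key] += value
    return acc

mapped = map(mapper, tiles)
totals = reduce(reducer, mapped, defaultdict(int))
print(dict(totals))
```

The independence of the map step is what lets the framework fan tile work out to many computing nodes; only the keyed reduction requires data movement.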
A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers
NASA Technical Reports Server (NTRS)
Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)
1997-01-01
The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely-used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. The multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only at the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation in the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent perfectly load-balanced executions using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation has been tested.
The performance behavior on the other computer platforms with a variety of realistic problems will be included as this on-going study progresses.
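One ingredient of the load balancing described above, assigning grid blocks to processors so their cell counts even out, can be sketched with a greedy heuristic. The block sizes and processor count below are invented, and PENS's actual partitioning/coalescing logic is certainly more involved; this only illustrates the idea:

```python
import heapq

# Hypothetical per-block cell counts from a multi-block CFD grid.
block_cells = [90, 70, 50, 40, 30, 20]
n_procs = 3

# Greedy rule: hand the largest remaining block to the least-loaded processor.
loads = [(0, p) for p in range(n_procs)]
heapq.heapify(loads)
assignment = {p: [] for p in range(n_procs)}
for cells in sorted(block_cells, reverse=True):
    load, p = heapq.heappop(loads)
    assignment[p].append(cells)
    heapq.heappush(loads, (load + cells, p))

proc_totals = sorted(sum(v) for v in assignment.values())
print(proc_totals)   # ideally all near 300 / 3 = 100 cells
```

Here the worst processor carries 110 of an ideal 100 cells, a 10% imbalance; coalescing small blocks or splitting large ones (as PENS does) tightens this further.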
NASA Astrophysics Data System (ADS)
Imazato, Harunobu; Mizoguchi, Shun'ya; Yata, Masaya
We consider the Gibbons-Hawking metric for a three-dimensional periodic array of multi-Taub-NUT centers, containing not only centers with a positive NUT charge but also ones with a negative NUT charge. The latter are regarded as representing the asymptotic form of the Atiyah-Hitchin metric. The periodic arrays of Taub-NUT centers have close parallels with ionic crystals, where the Gibbons-Hawking potential plays the role of the Coulomb static potential of the ions, and are similarly classified according to their space groups. After a periodic identification and a Z2 projection, the array is transformed by T-duality to a system of NS5-branes with the SU(2) structure, and a further standard embedding yields, though singular, a half-BPS heterotic 5-brane background with warped compact transverse dimensions. A discussion is given on the possibility of probing the singular geometry by two-dimensional gauge theories.
NASA Technical Reports Server (NTRS)
Tavana, Madjid
2005-01-01
"To understand and protect our home planet, to explore the universe and search for life, and to inspire the next generation of explorers" is NASA's mission. The Systems Management Office at Johnson Space Center (JSC) is searching for methods to effectively manage the Center's resources to meet NASA's mission. D-Side is a group multi-criteria decision support system (GMDSS) developed to support facility decisions at JSC. D-Side uses a series of sequential and structured processes to plot facilities in a three-dimensional (3-D) graph on the basis of each facility's alignment with NASA's mission and goals, the extent to which other facilities are dependent on it, and the dollar value of capital investments that have been postponed at the facility relative to the facility's replacement value. A similarity factor rank orders facilities based on their Euclidean distance from Ideal and Nadir points. These similarity factors are then used to allocate capital improvement resources across facilities. We also present a parallel model that can be used to support decisions concerning allocation of human resources investments across workforce units. Finally, we present results from a pilot study where 12 experienced facility managers from NASA used D-Side and the organization's current approach to rank order and allocate funds for capital improvement across 20 facilities. Users evaluated D-Side favorably in terms of ease of use, the quality of the decision-making process, decision quality, and overall value-added. Their evaluations of D-Side were significantly more favorable than their evaluations of the current approach. Keywords: NASA, Multi-Criteria Decision Making, Decision Support System, AHP, Euclidean Distance, 3-D Modeling, Facility Planning, Workforce Planning.
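A distance-based similarity factor of this kind can be sketched as follows. The facility coordinates are invented, and combining the Ideal and Nadir distances as a TOPSIS-style closeness ratio is an assumption about the exact formula, which the abstract does not give:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical facility coordinates in the 3-D space described above:
# (mission alignment, dependency, deferred-investment ratio), scaled to [0, 1].
facilities = {"A": (0.9, 0.8, 0.2), "B": (0.4, 0.5, 0.7), "C": (0.7, 0.3, 0.4)}
ideal, nadir = (1.0, 1.0, 0.0), (0.0, 0.0, 1.0)

# TOPSIS-style closeness: near the Ideal and far from the Nadir scores high.
similarity = {
    name: dist(p, nadir) / (dist(p, ideal) + dist(p, nadir))
    for name, p in facilities.items()
}
ranking = sorted(similarity, key=similarity.get, reverse=True)
print(ranking)
```

Facilities are then funded in ranking order, which is the allocation step the abstract describes.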
Vectorized and multitasked solution of the few-group neutron diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zee, S.K.; Turinsky, P.J.; Shayer, Z.
1989-03-01
A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X-MP/48, the IBM 3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method, which allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise-detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of approximately 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.
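The successive overrelaxation (SOR) step at the heart of the inner iterations can be shown on a toy problem. The sketch below relaxes a 1D Poisson equation point by point (the production code relaxes along lines of a multidimensional mesh, and the mesh size, relaxation factor, and iteration count here are invented):

```python
# SOR for -u'' = f on (0, 1) with zero Dirichlet boundaries.
n = 20
f = [1.0] * n
h = 1.0 / (n + 1)
omega = 1.5                        # over-relaxation factor, 1 < omega < 2
u = [0.0] * n

for _ in range(500):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        gs = 0.5 * (left + right + h * h * f[i])       # Gauss-Seidel value
        u[i] = (1.0 - omega) * u[i] + omega * gs       # over-relaxed update

# Exact solution is u(x) = x(1 - x)/2, with maximum 0.125 at x = 0.5.
print(max(u))
```

In the cyclic line variant, each "point" above becomes a whole mesh line solved by a tridiagonal solve, and lines of the same color are independent, which is where the parallelism across lines comes from.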
Chatterjee, Siddhartha [Yorktown Heights, NY; Gunnels, John A [Brewster, NY
2011-11-08
A method and structure for distributing elements of an array of data in a computer memory to specific processors of a multi-dimensional mesh of parallel processors. The method includes designating a distribution of elements of at least a portion of the array to be executed by specific processors in the multi-dimensional mesh. The designated pattern is a cyclical repetitive pattern over the processor mesh, modified to have a skew in at least one dimension, so that both a row of data in the array and a column of data in the array map to respective contiguous groupings of processors whose dimension is greater than one.
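A minimal sketch of a skewed cyclic mapping follows. The mesh shape and the specific skew `(j + i) mod Q` are illustrative choices, not the patented formula; the sketch only shows the qualitative effect the abstract describes, that the skew spreads an array column across both dimensions of the processor mesh:

```python
P, Q = 4, 4  # hypothetical 4x4 mesh of parallel processors

def owner_cyclic(i, j):
    return (i % P, j % Q)          # plain cyclic mapping

def owner_skewed(i, j):
    return (i % P, (j + i) % Q)    # same mapping with a skew in one dimension

# Under the plain mapping, array column j = 3 lands in a single mesh column...
plain_cols = {owner_cyclic(i, 3)[1] for i in range(P)}
# ...while the skew spreads that same array column over all Q mesh columns.
skew_cols = {owner_skewed(i, 3)[1] for i in range(P)}
print(len(plain_cols), len(skew_cols))
```

Spreading both rows and columns over two-dimensional processor groups is what balances row-wise and column-wise operations on the same distributed array.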
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD auto-vectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
Nakazato, Ryo; Slomka, Piotr J; Fish, Mathews; Schwartz, Ronald G; Hayes, Sean W; Thomson, Louise E J; Friedman, John D; Lemley, Mark; Mackin, Maria L; Peterson, Benjamin; Schwartz, Arielle M; Doran, Jesse A; Germano, Guido; Berman, Daniel S
2015-04-01
Obesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride parallel-hole SPECT MPI for coronary artery disease (CAD) in obese patients. 118 consecutive obese patients at three centers (BMI 43.6 ± 8.9 kg·m(-2), range 35-79.7 kg·m(-2)) had upright/supine HE-SPECT and invasive coronary angiography > 6 months (n = 67) or low likelihood of CAD (n = 51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD), and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5 = excellent; < 3 nondiagnostic) was compared among BMI 35-39.9 (n = 58), 40-44.9 (n = 24) and ≥45 (n = 36) groups. ROC curve areas for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had the highest specificity (P = .02). C-TPD normalcy rate was higher than U-TPD (88% vs 75%, P = .02). Mean IQ was similar among BMI 35-39.9, 40-44.9 and ≥45 groups [4.6 vs 4.4 vs 4.5, respectively (P = .6)]. No patient had a nondiagnostic stress scan. In obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions.
Nakazato, Ryo; Slomka, Piotr J.; Fish, Mathews; Schwartz, Ronald G.; Hayes, Sean W.; Thomson, Louise E.J.; Friedman, John D.; Lemley, Mark; Mackin, Maria L.; Peterson, Benjamin; Schwartz, Arielle M.; Doran, Jesse A.; Germano, Guido; Berman, Daniel S.
2014-01-01
Background Obesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride (CZT) parallel-hole SPECT-MPI for coronary artery disease (CAD) in obese patients. Methods and Results 118 consecutive obese patients at 3 centers (BMI 43.6±8.9 kg/m2, range 35–79.7 kg/m2) had upright/supine HE-SPECT and ICA >6 months (n=67) or low-likelihood of CAD (n=51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD) and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5=excellent; <3 nondiagnostic) was compared among BMI 35–39.9 (n=58), 40–44.9 (n=24) and ≥45 (n=36) groups. ROC-curve areas for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had the highest specificity (P=.02). C-TPD normalcy rate was higher than U-TPD (88% vs. 75%, P=.02). Mean IQ was similar among BMI 35–39.9, 40–44.9 and ≥45 groups [4.6 vs. 4.4 vs. 4.5, respectively (P=.6)]. No patient had a non-diagnostic stress scan. Conclusions In obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions. PMID:25388380
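The per-acquisition metrics above (sensitivity/specificity at a TPD cutoff, ROC curve area) can be reproduced generically. The scores and labels below are invented toy data, not trial results, and the cutoff is arbitrary:

```python
# Toy diagnostic-performance calculation at a fixed score cutoff.
scores = [1.2, 3.4, 0.5, 4.1, 2.2, 0.9, 3.9, 1.8]   # e.g. stress TPD values
labels = [0, 1, 0, 1, 0, 0, 1, 1]                    # 1 = obstructive CAD

cutoff = 2.0
tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Rank-based AUC: fraction of (diseased, normal) pairs ordered correctly.
pos = [s for s, y in zip(scores, labels) if y]
neg = [s for s, y in zip(scores, labels) if not y]
auc = sum(p > n for p in pos for n in neg) / (len(pos) * len(neg))
print(sensitivity, specificity, auc)
```

Sweeping the cutoff traces out the full ROC curve whose area the rank statistic computes directly.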
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
O'Dywer, Lian; Littlewood, Simon J; Rahman, Shahla; Spencer, R James; Barber, Sophy K; Russell, Joanne S
2016-01-01
To use a two-arm parallel trial to compare treatment efficiency between a self-ligating and a conventional preadjusted edgewise appliance system. A prospective multi-center randomized controlled clinical trial was conducted in three hospital orthodontic departments. Subjects were randomly allocated to receive treatment with either a self-ligating (3M SmartClip) or conventional (3M Victory) preadjusted edgewise appliance bracket system using a computer-generated random sequence concealed in opaque envelopes, with stratification for operator and center. Two operators followed a standardized protocol regarding bracket bonding procedure and archwire sequence. Efficiency of each ligation system was assessed by comparing the duration of treatment (months), total number of appointments (scheduled and emergency visits), and number of bracket bond failures. One hundred thirty-eight subjects (mean age 14 years 11 months) were enrolled in the study, of which 135 subjects (97.8%) completed treatment. The mean treatment time and number of visits were 25.12 months and 19.97 visits in the SmartClip group and 25.80 months and 20.37 visits in the Victory group. The overall bond failure rate was 6.6% for the SmartClip and 7.2% for Victory, with a similar debond distribution between the two appliances. No significant differences were found between the bracket systems in any of the outcome measures. No serious harm was observed from either bracket system. There was no clinically significant difference in treatment efficiency between treatment with a self-ligating bracket system and a conventional ligation system.
Optoelectronic Materials Center
1991-06-11
surface-emitting GaAs/AlGaAs vertical-cavity laser (TJ-VCSEL) incorporating wavelength-resonant...multi-quantum well, vertical-cavity surface-emitting laser. This structure consists entirely of undoped epilayers, thus simplifying the problems of...cavity surface-emitting lasers (VCSELs) for doubling and for parallel optical data processing. Progress - GaAlAs/GaAs and InGaAs/GaAs RPG-VCSEL
Nipanikar, Sanjay U; Gajare, Kamalakar V; Vaidya, Vidyadhar G; Kamthe, Amol B; Upasani, Sachin A; Kumbhar, Vidyadhar S
2017-01-01
The main objective of the present study was to assess efficacy and safety of AHPL/AYTOP/0113 cream, a polyherbal formulation, in comparison with Framycetin sulphate cream in acute wounds. It was an open label, randomized, comparative, parallel group and multi-center clinical study. A total of 47 subjects were randomly assigned to Group-A (AHPL/AYTOP/0113 cream) and 42 subjects were randomly assigned to Group-B (Framycetin sulphate cream). All the subjects were advised to apply the study drug thrice daily for 21 days or up to complete wound healing (whichever was earlier). All the subjects were called for follow up on days 2, 4, 7, 10, 14, 17 and 21 or up to the day of complete wound healing. Data describing quantitative measures are expressed as mean ± SD. Comparison of variables representing categorical data was performed using the Chi-square test. Group-A subjects took significantly less time (P < 0.05), a mean of 7.77 days versus 9.87 days for Group-B subjects, for wound healing. At the end of the study, statistically significantly better (P < 0.05) results were observed in Group-A than Group-B in mean wound surface area, wound healing parameters and pain associated with the wound. Excellent overall efficacy and tolerability were observed in subjects of both groups. No adverse event or adverse drug reaction was noted in any subject in either group. AHPL/AYTOP/0113 cream proved to be superior to Framycetin sulphate cream in healing of acute wounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and an AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
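A simplified serial triad census looks like the sketch below. It buckets each three-node subgraph by its directed-edge count rather than by the full 16 edge configurations of the standard census, and the toy graph is invented; the full algorithm avoids the cubic enumeration, which is exactly what makes parallelization worthwhile at scale:

```python
from itertools import combinations

# Tiny directed graph as an edge set; a stand-in for the large graphs above.
edges = {(0, 1), (1, 2), (2, 0), (0, 3), (3, 1)}
nodes = {0, 1, 2, 3}

# Simplified census: key = number of directed edges inside the triad.
census = {}
for a, b, c in combinations(sorted(nodes), 3):
    k = sum((u, v) in edges for u in (a, b, c) for v in (a, b, c) if u != v)
    census[k] = census.get(k, 0) + 1

print(census)
```

Each triad can be classified independently, so on shared memory the enumeration splits naturally across threads with a per-thread census merged at the end.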
Apparatus for injecting high power laser light into a fiber optic cable
Sweatt, William C.
1997-01-01
High intensity laser light is evenly injected into an optical fiber by the combination of a converging lens and a multisegment kinoform (binary optical element). Each segment preferably has a multi-order grating aligned parallel to a radial line emanating from the center of the kinoform and passing through the center of the element. The grating in each segment causes circumferential (lateral) dispersion of the light, thereby avoiding detrimental concentration of light energy within the optical fiber.
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey
2001-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designers' requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and the chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized.
However the scalar PDF module and the Spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.
Real-time SHVC software decoding with multi-threaded parallel processing
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu
2014-09-01
This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.
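The CTU-group pipeline can be mimicked with three threads connected by FIFO queues. The stage bodies below are trivial stand-ins for entropy decoding, reconstruction, and in-loop filtering, and the group count is invented; the point is the hand-off pattern, where each stage works on one CTU group while its neighbors work on others:

```python
import queue
import threading

ctu_groups = list(range(8))            # hypothetical CTU-group indices
q1, q2 = queue.Queue(), queue.Queue()  # inter-stage FIFO buffers
results = []

def entropy_stage():
    for g in ctu_groups:
        q1.put(g * 10)                 # stand-in for parsed symbols
    q1.put(None)                       # end-of-stream sentinel

def reconstruct_stage():
    while (item := q1.get()) is not None:
        q2.put(item + 1)               # stand-in for pixel reconstruction
    q2.put(None)

def filter_stage():
    while (item := q2.get()) is not None:
        results.append(item)           # stand-in for in-loop filtering

threads = [threading.Thread(target=f)
           for f in (entropy_stage, reconstruct_stage, filter_stage)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Larger CTU groups mean fewer queue hand-offs (less synchronization) but coarser parallelism, which is the trade-off the abstract describes.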
He, Zhong; Chen, Rong; Zhou, Yingfang; Geng, Li; Zhang, Zhenyu; Chen, Shuling; Yao, Yanjun; Lu, Junli; Lin, Shouqing
2009-05-20
To investigate the efficacy and safety of VAC BNO 1095 extract in Chinese women suffering from moderate to severe premenstrual syndrome (PMS). A prospective, double-blind, placebo-controlled, parallel-group, multi-center clinical trial design was employed. After a screening and preparation phase lasting three cycles, eligible patients were randomly assigned to treatment or placebo groups and were treated with VAC extract or placebo for up to three cycles. Efficacy was assessed using the Chinese version of the PMS diary (PMSD) and PMTS. Two hundred and seventeen women were eligible to enter the treatment phase (TP) and were randomly assigned to the treatment group (108) or the placebo group (109); 208 provided the efficacy data (treatment 104, placebo 104), and 202 completed the treatment phase (treatment 101, placebo 101). The mean total PMSD score decreased from 29.23 at baseline (0 cycle) to 6.41 at termination (3rd cycle) for the treatment group and from 28.14 at baseline (0 cycle) to 12.64 at termination (3rd cycle) for the placebo group. The total PMSD score at the 3rd cycle was significantly lower than at baseline in both groups (p<0.0001). The mean change from baseline to the 3rd cycle in the treatment group (22.71+/-10.33) was significantly greater than the change in the placebo group (15.50+/-12.94, p<0.0001). Results of PMTS were similar: the total PMTS scores differed significantly between the two groups (p<0.01) and within each group (p<0.01). The score decreased from 26.17+/-4.79 to 9.92+/-9.01 for the treatment group, and from 27.10+/-4.76 to 14.59+/-10.69 for the placebo group. A placebo effect of 50% was found in the present study. No serious adverse event (SAE) occurred in either group. Vitex agnus castus (VAC BNO 1095, corresponding to 40mg herbal drug) is a safe, well tolerated and effective drug for the treatment of Chinese women with moderate to severe PMS.
Xue, Yan; Qin, Xianghong; Zhou, Liya; Lin, Sanren; Wang, Ling; Hu, Haitang; Xia, Jielai
2018-05-01
Proton pump inhibitors (PPIs) are the main drugs for the treatment of reflux esophagitis. Phase II clinical trials showed that, compared with Esomeprazole, the new PPI Ilaparazole performs well in relieving reflux symptoms and healing esophagitis. The aim of this study was to confirm a suitable dose of Ilaparazole in the treatment of reflux esophagitis. This study used a randomized, double-blind, parallel positive-drug-control, multi-center design. A total of 537 patients diagnosed with reflux esophagitis by gastroscopy were randomly divided into an Ilaparazole group (n = 322, Ilaparazole 10 mg QD) and an Esomeprazole group (n = 215, Esomeprazole 40 mg QD). The patients in the two groups were treated for 8 weeks. Heartburn and reflux symptoms were assessed prior to treatment and at 2, 4 and 8 weeks after the start of treatment. Gastroscopy was performed after 4 weeks of treatment. Patients unhealed within 4 weeks underwent gastroscopy again at the end of 8 weeks. A total of 471 cases completed the treatment. After 8 weeks of treatment, the healing rates in the Esomeprazole and Ilaparazole groups were 82.79% (94.94%) and 83.54% (92.50%), respectively. The corresponding rate difference [Ilaparazole - Esomeprazole] was 0.75% (-2.44%) and the two-sided 95% CI was -5.72 to 7.22 (-6.90 to 2.01). The symptom disappearance rates for FAS (PPS) were 75.81% (82.02%) and 76.71% (80.36%), P = 0.8223 (0.7742). Adverse reactions related to the drugs occurred in 10.70% and 11.80% of patients, respectively (P = 0.7817). The efficacy and safety of Ilaparazole (10 mg/day) in treating reflux esophagitis were similar to Esomeprazole (40 mg/day). Ilaparazole (10 mg/day) can be used in the treatment of esophagitis. The clinical trial registration number of the study is NCT02860624. Copyright © 2018. Published by Elsevier Inc.
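Rate differences with two-sided 95% confidence intervals like those reported above follow from the standard normal approximation for two independent proportions. The counts below are illustrative only; the trial's exact FAS/PPS denominators are not given in this abstract:

```python
from math import sqrt

def rate_diff_ci(x1, n1, x2, n2, z=1.96):
    """Normal-approximation 95% CI for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Hypothetical healed/total counts for two treatment arms.
d, lo, hi = rate_diff_ci(264, 316, 174, 210)
print(round(d, 4), round(lo, 4), round(hi, 4))
```

In a non-inferiority reading, the new drug is acceptable when the lower CI bound stays above the pre-specified margin.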
Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre
2014-06-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared-memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full-core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10^6 spatial cells and 1 × 10^12 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-node nuclear simulation tool.
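As a back-of-the-envelope check on the figures above (a sketch of our own: the 235 GFlops and 40.74% values come from the abstract, the derived peak numbers do not appear in it):

```python
# Sustained performance and fraction-of-peak reported for the DOMINO sweep.
sustained_gflops = 235.0
fraction_of_peak = 0.4074  # 40.74%

# Implied peak performance of the 32-core SMP node, and per core.
peak_gflops = sustained_gflops / fraction_of_peak
per_core_peak = peak_gflops / 32

print(round(peak_gflops, 1))    # node peak, roughly 577 GFlops
print(round(per_core_peak, 1))  # roughly 18 GFlops per core
```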
A queueing network model to analyze the impact of parallelization of care on patient cycle time.
Jiang, Lixiang; Giachetti, Ronald E
2008-09-01
The total time a patient spends in an outpatient facility, called the patient cycle time, is a major contributor to overall patient satisfaction. A frequently recommended strategy to reduce the total time is to perform some activities in parallel, thereby shortening patient cycle time. To analyze patient cycle time this paper extends and improves upon an existing multi-class open queueing network model (MOQN) so that the patient flow in an urgent care center can be modeled. Results of the model are analyzed using data from an urgent care center contemplating greater parallelization of patient care activities. The results indicate that parallelization can reduce the cycle time for those patient classes which require more than one diagnostic and/or treatment intervention. However, for many patient classes there would be little if any improvement, indicating the importance of tools to analyze business process reengineering rules. The paper makes contributions by implementing an approximation for fork/join queues in the network and by improving the approximation for multiple server queues in both low traffic and high traffic conditions. We demonstrate the accuracy of the MOQN results through comparisons to simulation results.
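The paper's improved approximations are not given in the abstract; as a minimal illustration of the kind of station-level calculation such a queueing network composes, here is the textbook Erlang-C mean waiting time for a single M/M/c station (our own sketch, not the paper's method, and the arrival/service numbers are hypothetical):

```python
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean waiting time in queue (Wq) for an M/M/c service station."""
    a = arrival_rate / service_rate          # offered load
    rho = a / servers                        # utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable station: utilization >= 1")
    # Erlang-C probability that an arriving patient must wait.
    num = a**servers / math.factorial(servers)
    den = (1 - rho) * sum(a**k / math.factorial(k) for k in range(servers)) + num
    p_wait = num / den
    return p_wait / (servers * service_rate - arrival_rate)

# Example: one registration desk, 0.5 patients/min arriving, 1 patient/min served.
print(erlang_c_wait(0.5, 1.0, 1))  # -> 1.0 (minutes of expected queueing)
```

For c = 1 this reduces to the familiar M/M/1 result Wq = ρ/(μ − λ); a full MOQN chains many such stations and adds fork/join corrections.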
Multi-threading: A new dimension to massively parallel scientific computation
NASA Astrophysics Data System (ADS)
Nielsen, Ida M. B.; Janssen, Curtis L.
2000-06-01
Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.
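A minimal sketch of the thread-level task decomposition described above, in Python's standard library (the block-transformation function is a hypothetical stand-in for an integral-transformation kernel, not the authors' code):

```python
from concurrent.futures import ThreadPoolExecutor

def transform_block(block):
    # Hypothetical stand-in for transforming one block of
    # electron-repulsion integrals (here: just scale and sum).
    return sum(2.0 * x for x in block)

blocks = [[float(i + j) for j in range(4)] for i in range(8)]

# Serial reference result.
serial = [transform_block(b) for b in blocks]

# Multi-threaded version: each block is an independent task.
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(transform_block, blocks))

assert threaded == serial  # same results, order preserved
```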
Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm
NASA Astrophysics Data System (ADS)
Backes, Werner; Wetzel, Susanne
In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today's multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread version and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.
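The reported speed-ups translate into parallel efficiencies as follows (a simple derived calculation, not stated in the paper):

```python
def parallel_efficiency(speedup, threads):
    # Efficiency = speedup / thread count; 1.0 would be ideal scaling.
    return speedup / threads

print(parallel_efficiency(1.8, 2))  # -> 0.9 (90% efficiency on 2 threads)
print(parallel_efficiency(3.2, 4))  # -> 0.8 (80% efficiency on 4 threads)
```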
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui; Shi, Yanjun
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multi-level inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multi-level inverter and a finite state machine (FSM) module coupled to the parallel multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multi-level inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.
Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng
2013-10-24
Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.
Cloudbursting - Solving the 3-body problem
NASA Astrophysics Data System (ADS)
Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.
2014-12-01
Many science projects in the future will be accomplished through collaboration among two or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be applied more efficiently and technical work could be conducted more effectively, with less time spent moving data or waiting for computing resources to free up. Based on the work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in assessing the feasibility and identifying the obstacles, both technical and managerial, to performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center, and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.
Blanken, Peter; Hendriks, Vincent M; Huijsman, Ineke A; van Ree, Jan M; van den Brink, Wim
2016-07-01
To determine the efficacy of contingency management (CM), targeting cocaine use, as an add-on intervention for heroin dependent patients in supervised heroin-assisted treatment (HAT) with frequent cocaine use. Multi-center, open-label, parallel-group, randomized controlled trial. Twelve specialized addiction treatment centers for HAT in The Netherlands; April 2006-January 2011. 214 chronic, treatment-refractory heroin dependent patients in HAT, with frequent cocaine use. Routine, daily supervised diacetylmorphine treatment, co-prescribed with oral methadone (HAT), with and without 6 months of contingency management for cocaine use as an add-on intervention; HAT+CM and HAT-only, respectively. The primary outcome was the longest uninterrupted duration of cocaine abstinence, based upon laboratory urinalysis. Secondary outcome measures included other cocaine-related measures, treatment retention in HAT, and multi-domain health-related treatment response. In an intention-to-treat analysis, HAT+CM was more effective than HAT-only in promoting a longer uninterrupted duration of cocaine abstinence (3.7 weeks versus 1.6 weeks; negative binomial regression: Exp(B)=2.34, 95%-CI: 1.70-3.23; p<0.001). This result remained significant in sensitivity analyses and was supported by all secondary, cocaine-related outcome measures. Treatment retention in HAT was high (91.6%) with no difference between the groups. The improvement in multi-domain health-related treatment response during the trial was numerically higher in HAT+CM (from 37.4% to 53.1%; +15.7%) than in HAT-only (from 44.5% to 46.5%; +2.0%), but this difference was not statistically significant. Contingency management is an effective add-on intervention to promote longer, uninterrupted periods of cocaine abstinence in chronic, treatment-refractory heroin dependent patients in heroin-assisted treatment with frequent cocaine use.
The trial has been registered in The Netherlands National Trial Register under clinical trial registration number NTR4728. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
The capability was developed of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP) by employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Outcome Evaluation of a Community Center-Based Program for Mothers at High Psychosocial Risk
ERIC Educational Resources Information Center
Rodrigo, Maria Jose; Maiquez, Maria Luisa; Correa, Ana Delia; Martin, Juan Carlos; Rodriguez, Guacimara
2006-01-01
Objective: This study reported the outcome evaluation of the "Apoyo Personal y Familiar" (APF) program for poorly-educated mothers from multi-problem families, showing inadequate behavior with their children. APF is a community-based multi-site program delivered through weekly group meetings in municipal resource centers. Method: A total…
Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot
2015-04-01
A novel, multi-segmented … high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the … to be demonstrated. The Unmanned Systems Group at SPAWAR Systems Center Pacific has developed a multi-segment magnetic crawler robot (MSMR) …
Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers
ERIC Educational Resources Information Center
Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph
2015-01-01
In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Sripathi, Vamsi; Mills, Richard T
2013-01-01
Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with a single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level where a set of designated processes participate in the I/O process by splitting the I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to the rest of the group. This approach resulted in over 25X speedup in HDF I/O read performance and 3X speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O), that incorporates our optimized two-phase I/O approach. The library provides a simplified higher-level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5), and implements optimized I/O access patterns that can scale to larger numbers of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before with the added flexibility of being applicable to a wider range of I/O intensive applications.
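The communicator-splitting pattern described above can be sketched without MPI as follows (a plain-Python schematic; in the real library the groups are MPI sub-communicators created with a communicator split, and the group roots call HDF5 — both details are elided here):

```python
def two_phase_write(rank_data, io_group_size):
    """Simulate two-phase I/O: ranks are partitioned into groups, and each
    group's root aggregates its members' data before touching the disk.
    rank_data: list indexed by rank; returns {root_rank: aggregated_data}."""
    writes = {}
    for rank, data in enumerate(rank_data):
        root = (rank // io_group_size) * io_group_size  # this rank's group root
        # Communication phase: member sends its data to the group root.
        writes.setdefault(root, []).extend(data)
    # Disk I/O phase: only the roots (the dict keys) perform file I/O.
    return writes

# 8 ranks, I/O groups of 4 -> only ranks 0 and 4 perform disk I/O.
out = two_phase_write([[r] for r in range(8)], io_group_size=4)
print(sorted(out))  # -> [0, 4]
print(out[4])       # -> [4, 5, 6, 7]
```

The point of the pattern is that the number of writers is decoupled from the number of compute ranks, which is what relieves the file-system contention described above.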
Petrak, Frank; Herpertz, Stephan; Albus, Christian; Hermanns, Norbert; Hiemke, Christoph; Hiller, Wolfgang; Kronfeld, Kai; Kruse, Johannes; Kulzer, Bernd; Ruckes, Christian; Müller, Matthias J
2013-08-06
Depression is common in diabetes and associated with hyperglycemia, diabetes related complications and mortality. No single intervention has been identified that consistently leads to simultaneous improvement of depression and glycemic control. Our aim is to analyze the efficacy of a diabetes-specific cognitive behavioral group therapy (CBT) compared to sertraline (SER) in adults with depression and poorly controlled diabetes. This study is a multi-center parallel arm randomized controlled trial currently in its data analysis phase. We included 251 patients in 70 secondary care centers across Germany. Key inclusion criteria were: type 1 or 2 diabetes, major depression (diagnosed with the Structured Clinical Interview for DSM-IV, SCID) and hemoglobin A1C >7.5% despite current insulin therapy. During the initial phase, patients received either 50-200 mg/d sertraline or 10 CBT sessions aiming at the remission of depression and enhanced adherence to diabetes treatment and coping with diabetes. Both groups received diabetes treatment as usual. After 12 weeks of this initial open-label therapy, only the treatment-responders (50% depression symptoms reduction, Hamilton Depression Rating Scale, 17-item version [HAMD]) were included in the subsequent one year study phase and represented the primary analysis population. CBT-responders received no further treatment, while SER-responders obtained a continuous, flexible-dose SER regimen as relapse prevention. Adherence to treatment was analyzed using therapeutic drug monitoring (measurement of sertraline and N-desmethylsertraline concentrations in blood serum) and by counting the numbers of CBT sessions received. Outcome assessments were conducted by trained psychologists blinded to group assignment. Group differences in HbA1c (primary outcome) and depression (HAMD, secondary outcome) between 1-year follow-up and baseline will be analyzed by ANCOVA controlling for baseline values. 
As primary hypothesis we expect that CBT leads to significantly greater improvement of glycemic control at the one-year follow-up in treatment responders of the short-term phase. The DAD study is the first randomized controlled trial comparing antidepressants to a psychological treatment in diabetes patients with depression. Current Controlled Trials ISRCTN89333241.
Parallel Execution of Functional Mock-up Units in Buildings Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; New, Joshua Ryan
2016-06-30
A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when a large number of states and state events are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.
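The parallel Euler loop described above can be sketched in Python (a schematic of our own: each "FMU" is reduced to a derivative function, and a thread pool stands in for the OpenMP parallel-for over components):

```python
from concurrent.futures import ThreadPoolExecutor

def euler_step(states, derivatives, dt, pool):
    # Advance every component one explicit-Euler step. Within a step the
    # components are independent, so the map is the parallelizable axis.
    return list(pool.map(lambda sf: sf[0] + dt * sf[1](sf[0]),
                         zip(states, derivatives)))

# Two toy components: exponential decay and a constant inflow.
derivs = [lambda x: -x, lambda x: 2.0]
states = [1.0, 0.0]
with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(10):
        states = euler_step(states, derivs, 0.1, pool)

print(states)  # decay: 0.9**10 ~ 0.349; inflow: ~2.0 after 10 steps of 0.1*2
```

The load-balancing observation in the report maps directly onto this sketch: if one component's derivative evaluation dominates, the per-step map stalls on it.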
NASA Astrophysics Data System (ADS)
Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui
2017-07-01
Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
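The paper applies spectral clustering to the intercrossing-flux similarity matrix; as a toy stand-in we substitute a simpler connected-components grouping over the same kind of matrix (named plainly as our own simplification, with a hypothetical flux matrix):

```python
def lump_pathways(flux, threshold):
    """Group pathways whose pairwise intercrossing flux exceeds `threshold`
    into the same channel (union-find over a symmetric similarity matrix)."""
    n = len(flux)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if flux[i][j] > threshold:
                parent[find(i)] = find(j)   # merge the two channels

    channels = {}
    for i in range(n):
        channels.setdefault(find(i), []).append(i)
    return sorted(channels.values())

# Four pathways: 0-1 and 2-3 exchange much flux; across the pairs, little.
F = [[0, .9, .1, 0], [.9, 0, 0, .1], [.1, 0, 0, .8], [0, .1, .8, 0]]
print(lump_pathways(F, threshold=0.5))  # -> [[0, 1], [2, 3]]
```

Spectral clustering replaces the hard threshold with eigenvectors of the similarity matrix, which is what lets the real method separate channels of comparable flux.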
Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper mainly studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics system that can describe aerospace general embedded software from multiple angles, at multiple levels and in multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; VanderWijngaart, Rob F.
2003-01-01
We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
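The coarse-grain strategy described above, independent zone updates followed by a boundary exchange, can be sketched as follows (schematic only; real NPB-MZ zones are 3D and the per-zone update is an LU/BT/SP solve, not a smoothing stencil):

```python
def update_zone(zone):
    # Stand-in for one time step of the per-zone solver:
    # nearest-neighbor smoothing of the interior points only.
    return [zone[0]] + [(zone[i - 1] + zone[i + 1]) / 2
                        for i in range(1, len(zone) - 1)] + [zone[-1]]

def exchange_boundaries(zones):
    # After the independent updates, neighboring zones swap edge values.
    for left, right in zip(zones, zones[1:]):
        left[-1], right[0] = right[0], left[-1]
    return zones

zones = [[0.0, 4.0, 2.0], [6.0, 0.0, 8.0]]
# One coarse-grain step: the list comprehension is the parallelizable part.
zones = exchange_boundaries([update_zone(z) for z in zones])
print(zones)  # -> [[0.0, 1.0, 6.0], [2.0, 7.0, 8.0]]
```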
Wang, Bo; Liu, Xiru; Hu, Zhihai; Sun, Aijun; Ma, Yanwen; Chen, Yingying; Zhang, Xuzhi; Liu, Meiling; Wang, Yi; Wang, Shuoshuo; Zhang, Yunjia; Li, Yijing; Shen, Weidong
2016-02-01
To evaluate the clinical efficacy of YANG's pricking-cupping therapy for knee osteoarthritis (KOA), a multi-center randomized parallel controlled trial was conducted. One hundred and seventy-one patients with KOA were randomly allocated to a pricking-cupping group (89 cases) and a conventional acupuncture group (82 cases). Neixiyan (EX-LE 4), Dubi (ST 35) and ashi points were selected in the two groups. Patients in the pricking-cupping group were treated with YANG's pricking-cupping therapy: seven-star needles were used to prick the acupoints, then cupping was applied until slight bleeding was observed. Patients in the conventional acupuncture group were treated with semi-standardized filiform needle therapy. The treatment was given for 4 weeks (from a minimum of 5 sessions to a maximum of 10 sessions), with a follow-up visit at 4 weeks. The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and the visual analogue scale (VAS) were adopted for the efficacy assessments. The pain score, stiffness score, physical function score and total score of WOMAC were all reduced after the 4-week treatment and at the follow-up visit in the two groups (all P<0.0001). Except that the difference in stiffness score between the two groups was not significant after the 4-week treatment (P>0.05), each score and the total score of WOMAC in the pricking-cupping group were lower than those in the conventional acupuncture group after the 4-week treatment and at the follow-up visit (P<0.0001, P<0.01). After 2 weeks of treatment, after 4 weeks of treatment and at the follow-up visit, the VAS was reduced compared with that before treatment (all P<0.0001); with continued treatment, the reducing trend of VAS was more significant (P<0.0001). The VAS scores in the pricking-cupping group were lower than those in the conventional acupuncture group after the 4-week treatment and at the follow-up visit (P<0.01, P<0.0001).
CONCLUSION: YANG's pricking-cupping therapy and conventional acupuncture can both significantly improve knee joint pain and function in patients with KOA, and both are relatively safe. With an identical selection of acupoints, pricking-cupping therapy is superior to conventional acupuncture.
A Comparison of Multi-Age and Homogeneous Age Grouping in Early Childhood Centers.
ERIC Educational Resources Information Center
Freedman, Paula
Studies from several countries are described in this review of literature pertinent to assigning day care children to multi-age or homogeneous age groups. Three issues are discussed in this regard: (1) What difference does it make how one groups children? The answer is that a profound difference to children, staff, and parents may occur in terms…
Adaptively loaded SP-offset-QAM OFDM for IM/DD communication systems.
Zhao, Jian; Chan, Chun-Kit
2017-09-04
In this paper, we propose adaptively loaded set-partitioned offset quadrature amplitude modulation (SP-offset-QAM) orthogonal frequency division multiplexing (OFDM) for low-cost intensity-modulation direct-detection (IM/DD) communication systems. We compare this scheme with multi-band carrier-less amplitude phase modulation (CAP) and conventional OFDM, and demonstrate >40 Gbit/s transmission over 50-km single-mode fiber. It is shown that the use of SP-QAM formats, together with an adaptive loading algorithm specifically designed for this group of formats, results in significant performance improvement for all three schemes. SP-offset-QAM OFDM exhibits greatly reduced complexity compared to SP-QAM based multi-band CAP, via parallelized implementation and minimized memory length for spectral shaping. On the other hand, this scheme shows better performance than SP-QAM based conventional OFDM both back-to-back and after transmission. We also characterize the proposed scheme in terms of enhanced tolerance to fiber intra-channel nonlinearity and the potential to increase communication security. The studies show that adaptive SP-offset-QAM OFDM is a promising IM/DD solution for medium- and long-reach optical access networks and data center connections.
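A minimal sketch of SNR-based adaptive bit loading (a generic gap-approximation rule, not the paper's SP-offset-QAM-specific algorithm; the per-subcarrier SNR values and the 9.8 dB gap are hypothetical):

```python
import math

def bits_per_subcarrier(snr_db, gap_db=9.8, max_bits=6):
    """Generic adaptive loading: bits = floor(log2(1 + SNR/gap)),
    capped at the largest supported constellation size."""
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (gap_db / 10)
    return min(max_bits, int(math.log2(1 + snr / gap)))

# Hypothetical per-subcarrier SNRs across an IM/DD channel (dB),
# rolling off toward the band edge.
snrs = [28, 22, 16, 10, 4]
print([bits_per_subcarrier(s) for s in snrs])  # -> [6, 4, 2, 1, 0]
```

In the paper's scheme the candidate constellations would be the SP-offset-QAM family rather than square QAM, but the per-subcarrier allocation idea is the same.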
Expressing Parallelism with ROOT
NASA Astrophysics Data System (ADS)
Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.
2017-10-01
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
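The multi-process map pattern that MultiProc is compared against can be sketched with Python's standard executors (a thread pool is used here so the snippet is self-contained; `ProcessPoolExecutor` and `multiprocessing.Pool` expose the same map-style interface, and the chunked analysis task is our own stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for an analysis task over one chunk of events.
    return sum(x * x for x in chunk)

chunks = [range(0, 100), range(100, 200), range(200, 300)]

with ThreadPoolExecutor(max_workers=3) as pool:
    partial_sums = list(pool.map(process_chunk, chunks))

# Merge step, analogous to combining per-worker histograms.
print(sum(partial_sums))  # -> 8955050, the sum of squares of 0..299
```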
Jack, Megan C; Kenkare, Sonya B; Saville, Benjamin R; Beidler, Stephanie K; Saba, Sam C; West, Alisha N; Hanemann, Michael S; van Aalst, John A
2010-01-01
Faced with work-hour restrictions, educators are mandated to improve the efficiency of resident and medical student education. Few studies have assessed learning styles in medicine; none have compared teaching and learning preferences. Validated tools exist to study these deficiencies. Kolb describes 4 learning styles: converging (practical), diverging (imaginative), assimilating (inductive), and accommodating (active). Grasha Teaching Styles are categorized into "clusters": 1 (teacher-centered, knowledge acquisition), 2 (teacher-centered, role modeling), 3 (student-centered, problem-solving), and 4 (student-centered, facilitative). Kolb's Learning Style Inventory (HayGroup, Philadelphia, Pennsylvania) and Grasha-Riechmann's TSS were administered to surgical faculty (n = 61), residents (n = 96), and medical students (n = 183) at a tertiary academic medical center, after informed consent was obtained (IRB # 06-0612). Statistical analysis was performed using χ² and Fisher exact tests. Surgical residents preferred active learning (p = 0.053), whereas faculty preferred reflective learning (p < 0.01). In a comparison of teaching preferences, both groups preferred student-centered, facilitative teaching, but faculty more often preferred teacher-centered, role-modeling instruction (p = 0.02). Residents had no dominant teaching style more often than surgical faculty (p = 0.01). Medical students preferred converging learning (42%) and cluster 4 teaching (35%). Statistical significance was unchanged when corrected for gender, resident training level, and subspecialization. Significant differences exist between faculty and residents in both learning and teaching preferences; this finding suggests inefficiency in resident education, as previous research suggests that learning styles parallel teaching styles. Absence of a predominant teaching style in residents suggests these individuals are learning to be teachers.
The adaptation of faculty teaching methods to account for variations in resident learning styles may promote a better learning environment and more efficient faculty-resident interaction. Additional, multi-institutional studies using these tools are needed to elucidate these findings fully. Copyright © 2010 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A multicenter, randomized, controlled trial of osteopathic manipulative treatment on preterms.
Cerritelli, Francesco; Pizzolorusso, Gianfranco; Renzetti, Cinzia; Cozzolino, Vincenzo; D'Orazio, Marianna; Lupacchini, Mariacristina; Marinelli, Benedetta; Accorsi, Alessandro; Lucci, Chiara; Lancellotti, Jenny; Ballabio, Silvia; Castelli, Carola; Molteni, Daniela; Besana, Roberto; Tubaldi, Lucia; Perri, Francesco Paolo; Fusilli, Paola; D'Incecco, Carmine; Barlafante, Gina
2015-01-01
Despite some preliminary evidence, it is still largely unknown whether osteopathic manipulative treatment improves preterm clinical outcomes. The present multi-center, randomized, single-blind, parallel-group clinical trial enrolled newborns who met the criteria of a gestational age between 29 and 37 weeks and no congenital complications, from 3 different public neonatal intensive care units. Preterm infants were randomly assigned to usual prenatal care (control group) or osteopathic manipulative treatment (study group). The primary outcome was the mean difference in length of hospital stay between groups. A total of 695 newborns were randomly assigned to either the study group (n=352) or the control group (n=343). A statistically significant difference was observed between the two groups for the primary outcome (13.8 and 17.5 days for the study and control groups, respectively; p<0.001, effect size: 0.31). Multivariate analysis showed a reduction in length of stay of 3.9 days (95% CI -5.5 to -2.3, p<0.001). Furthermore, there were significant reductions with treatment as compared to usual care in cost (difference between study and control groups: 1,586.01€; 95% CI 1,087.18 to 6,277.28; p<0.001) but not in daily weight gain. There were no complications associated with the intervention. Osteopathic treatment significantly reduced the number of days of hospitalization and was cost-effective in a large cohort of preterm infants.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that the spin-level implementation is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales favorably in a weak-scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the range of accessible simulations to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
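The mid-point insertion strategy for repairing exchange bottlenecks can be illustrated with a minimal sketch (our illustration, not the authors' CUDA code; the threshold value and data layout are assumptions):

```python
def insert_midpoints(temps, exchange_rates, threshold=0.2):
    """Insert a mid-point temperature into every gap whose measured
    replica-exchange acceptance rate falls below `threshold`.

    temps          : sorted list of replica temperatures
    exchange_rates : exchange_rates[i] is the measured swap rate
                     between temps[i] and temps[i+1]
    Returns the refined, still-sorted temperature set.
    """
    refined = [temps[0]]
    for i, rate in enumerate(exchange_rates):
        if rate < threshold:
            # exchange bottleneck: bisect the gap with a mid-point insertion
            refined.append(0.5 * (temps[i] + temps[i + 1]))
        refined.append(temps[i + 1])
    return refined
```

For example, a healthy exchange at the first gap and a bottleneck at the second would refine `[1.0, 2.0, 4.0]` into `[1.0, 2.0, 3.0, 4.0]`; the newly created replicas are then what gets rebalanced across the GPUs.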
[Compliancy of pre-exposure prophylaxis for HIV infection in men who have sex with men in Chengdu].
Xu, J Y; Mou, Y C; Ma, Y L; Zhang, J Y
2017-05-10
Objective: To evaluate compliance with HIV pre-exposure prophylaxis (PrEP) in men who have sex with men (MSM) in Chengdu, Sichuan province, and explore the influencing factors. Methods: From 1 July 2013 to 30 September 2015, a randomized, open, multi-center, parallel-control intervention study was conducted in 328 MSM enrolled by non-probability sampling in Chengdu. The MSM were divided randomly into 3 groups: a daily group, an intermittent group (dosing before and after exposure), and a control group. Clinical follow-up and a questionnaire survey were carried out every 3 months. PrEP compliance was evaluated for each group, and multivariate logistic regression analysis was conducted to identify related factors. Results: A total of 141 MSM were surveyed, of whom 59 (41.8%) had good PrEP compliance. The compliance rate was 69.0% in the daily group, higher than that in the intermittent group (14.3%); the difference was significant (χ²=45.29, P<0.001). Multivariate logistic analysis indicated that the type of PrEP regimen was an influencing factor for compliance. Compared with the daily group, the intermittent group had worse compliance (OR=0.07, 95% CI: 0.03-0.16). Conclusion: PrEP compliance among the MSM in this study was poor and was influenced by the type of PrEP regimen.
The Handbook of Research Impact Assessment. Edition 7. Summer 1997.
1997-01-01
NASA Astrophysics Data System (ADS)
Baregheh, Mandana; Mezentsev, Vladimir; Schmitz, Holger
2011-06-01
We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. The implementation exploits the highly parallel capabilities of a programmable graphics processor.
A fast parallel clustering algorithm for molecular simulation trajectories.
Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui
2013-01-15
We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4,000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilized the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. Copyright © 2012 Wiley Periodicals, Inc.
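The triangle-inequality pruning at the heart of such an algorithm can be sketched serially in NumPy (a textbook sketch of greedy k-centers, not the authors' GPU code; starting the search from point 0 is an arbitrary choice):

```python
import numpy as np

def k_centers(X, k):
    """Greedy k-centers: repeatedly promote the point farthest from its
    nearest center.  dist[i] holds the distance from point i to its
    nearest center.  The triangle inequality d(i,c) >= d(a,c) - d(i,a)
    lets us skip any point i whose assigned center a satisfies
    d(a,c) >= 2*dist[i]: the new center c cannot be closer."""
    n = X.shape[0]
    centers = [0]                            # arbitrary first center
    dist = np.linalg.norm(X - X[0], axis=1)  # distance to nearest center
    assign = np.zeros(n, dtype=int)          # index into `centers`
    while len(centers) < k:
        c = int(np.argmax(dist))             # farthest point -> new center
        # distance from the new center to every existing center
        d_cc = np.linalg.norm(X[centers] - X[c], axis=1)
        # triangle-inequality pruning: only these points might switch
        cand = np.flatnonzero(d_cc[assign] < 2.0 * dist)
        d_new = np.linalg.norm(X[cand] - X[c], axis=1)
        closer = d_new < dist[cand]
        dist[cand[closer]] = d_new[closer]
        assign[cand[closer]] = len(centers)
        centers.append(c)
    return centers, assign, dist
```

On the GPU the candidate test and distance updates are evaluated for all points in parallel; the pruning simply reduces how many distance evaluations each thread performs.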
2015-09-30
… in situ processing algorithms for sound and motion data. In a parallel project, Dr. Andrews at the Alaska SeaLife Center teamed with Wildlife … from Wildlife Computers to produce a highly integrated Sound and Motion Recording and Telemetry (SMRT) tag. The complete tag development is expected …
Development of Parallel Code for the Alaska Tsunami Forecast Model
NASA Astrophysics Data System (ADS)
Bahng, B.; Knight, W. R.; Whitmore, P.
2014-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids, with two-way communication between the domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest-resolution Digital Elevation Models (DEMs) used by ATFM are 1/3 arc-second. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results, with the long-term aim of producing tsunami forecasts from source to high-resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs and will make possible the future inclusion of new physics, such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
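Building the pre-computed database is embarrassingly parallel across the hypothetical source events. A minimal sketch of that outer loop, with a hypothetical `simulate_event` standing in for one nested-grid model run and threads standing in for the multi-processor environment (real code would distribute events across processes or MPI ranks):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_event(event):
    """Hypothetical stand-in for one tsunami source simulation.
    (The real model integrates shallow-water equations on nested grids.)"""
    magnitude, region = event
    return {"region": region, "peak_amp": 0.1 * magnitude}  # toy result

def build_database(events, workers=4):
    """Pre-compute results for many hypothetical events in parallel.
    Results come back in input order, ready to be stored and later
    calibrated against observations during an actual tsunami."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_event, events))
```

Because each event is independent, this level of parallelism scales almost perfectly; the harder part, which the abstract alludes to, is parallelizing the nested-grid solver inside each event.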
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-11-01
The finite element method has proven to be an invaluable tool for the analysis and design of complex, high-performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special-purpose code, named with the acronym SAPNEW, has been developed to perform static and eigenvalue analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigenvalue analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format so that SAPNEW can be more readily used by researchers at NASA Lewis Research Center.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology, driven largely by the emergence of multi-core high-performance computers. Parallel computing is therefore key to achieving good software performance. The Monte Carlo simulation code PHITS provides two parallel computing functions: distributed-memory parallelization using the Message Passing Interface (MPI) protocol and shared-memory parallelization using Open Multi-Processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Test applications are also provided to show their performance on a typical multi-core high-performance workstation.
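The distinction between the two modes can be illustrated with a Python analogy (not PHITS code): threads stand in for OpenMP's shared-memory model, where all workers update one tally under a lock, while in the MPI style each rank would keep a private tally and the partial results would be reduced at the end.

```python
import random
import threading

def run_histories(n, seed, results, lock):
    """One worker: tally n toy Monte Carlo histories locally, then merge
    into the shared tally under a lock (shared-memory / OpenMP style)."""
    rng = random.Random(seed)
    absorbed = sum(1 for _ in range(n) if rng.random() < 0.5)  # toy "absorption"
    with lock:
        results["absorbed"] += absorbed
        results["total"] += n

def parallel_tally(n_total, workers=4):
    results = {"absorbed": 0, "total": 0}   # shared between all threads
    lock = threading.Lock()
    per = n_total // workers
    threads = [threading.Thread(target=run_histories,
                                args=(per, seed, results, lock))
               for seed in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # In the distributed-memory (MPI) style there would be no shared
    # `results`: each rank owns a private tally, and a final reduction
    # (e.g. MPI_Reduce) sums the partial tallies across ranks.
    return results
```

Monte Carlo histories are independent, which is why both parallelization styles apply cleanly; the trade-off is memory duplication (MPI) versus contention on shared state (OpenMP).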
Energy reduction using multi-channels optical wireless communication based OFDM
NASA Astrophysics Data System (ADS)
Darwesh, Laialy; Arnon, Shlomi
2017-10-01
In recent years, an increasing number of data center networks (DCNs) have been built to provide various cloud applications. Major challenges in the design of next-generation DC networks include reduction of energy consumption, high flexibility and scalability, high data rates, minimum latency, and strong cyber security. Use of optical wireless communication (OWC) to augment the DC network could help to confront some of these challenges. In this paper we present an OWC multi-channel communication method that could lead to a significant energy reduction in the communication equipment. The method converts a high-speed serial data stream into many slower parallel streams at the transmitter and performs the reverse conversion at the receiver. We implement this multi-channel concept using the optical orthogonal frequency division multiplexing (O-OFDM) method; in our scheme, we use asymmetrically clipped optical OFDM (ACO-OFDM). Our results show that the multi-channel OFDM (ACO-OFDM) approach reduces the total energy consumption exponentially as the number of parallel channels rises.
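The textbook ACO-OFDM signal chain, serial-to-parallel mapping onto the odd subcarriers with Hermitian symmetry (so the IFFT output is real) followed by zero-clipping (so the optical signal is unipolar), can be sketched in NumPy. This is a generic sketch, not the authors' implementation; the FFT size is an arbitrary choice. Clipping noise falls only on the even subcarriers, so the odd-carrier data survives with its amplitude exactly halved:

```python
import numpy as np

def aco_ofdm_modulate(symbols, n=16):
    """Map complex symbols onto the odd subcarriers of an n-point IFFT
    with Hermitian symmetry (real output), then clip negatives at zero
    to obtain a unipolar waveform for the optical link."""
    X = np.zeros(n, dtype=complex)
    odd = np.arange(1, n // 2, 2)        # odd subcarrier indices
    X[odd] = symbols
    X[n - odd] = np.conj(symbols)        # Hermitian symmetry -> real signal
    x = np.fft.ifft(X).real
    return np.maximum(x, 0.0)            # zero-clipping

def aco_ofdm_demodulate(x, n=16):
    """Recover the odd-subcarrier symbols; the factor 2 undoes the exact
    amplitude halving caused by clipping."""
    odd = np.arange(1, n // 2, 2)
    return 2.0 * np.fft.fft(x)[odd]
```

With n = 16 the scheme carries 4 parallel symbols per frame; the paper's energy argument comes from driving many such slow parallel channels instead of one fast serial link.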
Wallerstein, Avi; Jackson, W Bruce; Chambers, Jeffrey; Moezzi, Amir M; Lin, Hugh; Simmons, Peter A
2018-01-01
Purpose To compare the efficacy and safety of a preservative-free, multi-ingredient formulation of carboxymethylcellulose 0.5%, hyaluronic acid 0.1%, and organic osmolytes (CMC-HA), to preservative-free carboxymethylcellulose 0.5% (CMC) in the management of postoperative signs and symptoms of dry eye following laser-assisted in situ keratomileusis (LASIK). Methods This was a double-masked, randomized, parallel-group study conducted in 14 clinical centers in Canada and Australia. Subjects with no more than mild dry eye instilled CMC-HA or CMC for 90 days post-LASIK. Ocular Surface Disease Index© (OSDI; primary efficacy measure), corneal staining, tear break-up time (TBUT), Schirmer’s test, acceptability/tolerability surveys, and visual acuity were assessed at screening and days 2, 10, 30, 60, and 90 post-surgery. Safety analyses included all enrolled. Results A total of 148 subjects (CMC-HA, n=75; CMC, n=73) were enrolled and assigned to receive treatment, and 126 subjects completed the study without any protocol violations. Post-LASIK, dry eye signs/symptoms peaked at 10 days. OSDI scores for both groups returned to normal with no differences between treatment groups at day 90 (P=0.775). Corneal staining, Schirmer’s test, TBUT, and survey results were comparable. Higher mean improvements in uncorrected visual acuity were observed in the CMC-HA group at all study visits, reaching statistical significance at day 30 (P=0.013). Both treatments were well tolerated. Conclusion CMC-HA-containing artificial tears relieved post-LASIK ocular dryness as well as CMC alone, and demonstrated incremental benefit in uncorrected vision, with a favorable safety profile. Results support use of CMC-HA eye drops to reduce signs and symptoms of ocular dryness post-LASIK. PMID:29765198
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges informs an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends work presented at an earlier workshop with some preliminary results.
Orientation of ripples induced by ultrafast laser pulses on copper in different liquids
NASA Astrophysics Data System (ADS)
Maragkaki, Stella; Elkalash, Abdallah; Gurevich, Evgeny L.
2017-12-01
The formation of laser-induced periodic surface structures (LIPSS, or ripples) was studied on a polished copper surface irradiated with multiple femtosecond laser pulses in different environments (air, water, ethanol, and methanol). Uniform LIPSS were achieved by controlling the peak fluence and the overlap rate. Ripples in both orientations, perpendicular and parallel to the laser polarization, were observed simultaneously in all liquids. The orientation of the ripples in the center of the ablated line changed with the incident light intensity: at low intensities the ripples are oriented perpendicular to the laser polarization, whereas at high intensities the orientation turns parallel to it without considerable change in the period. Multi-directional LIPSS formation was also observed at moderate peak fluence in liquid environments.
The nuclear question: rethinking species importance in multi-species animal groups.
Srinivasan, Umesh; Raza, Rashid Hasnain; Quader, Suhel
2010-09-01
1. Animals group for various benefits, and may form either simple single-species groups, or more complex multi-species associations. Multi-species groups are thought to provide anti-predator and foraging benefits to participant individuals. 2. Despite detailed studies on multi-species animal groups, the importance of species in group initiation and maintenance is still rated qualitatively as 'nuclear' (maintaining groups) or 'attendant' (species following nuclear species) based on species-specific traits. This overly simplifies and limits understanding of inherently complex associations, and is biologically unrealistic, because species roles in multi-species groups are: (i) likely to be context-specific and not simply a fixed species property, and (ii) much more variable than this dichotomy indicates. 3. We propose a new view of species importance (measured as number of inter-species associations), along a continuum from 'most nuclear' to 'least nuclear'. Using mixed-species bird flocks from a tropical rainforest in India as an example, we derive inter-species association measures from randomizations on bird species abundance data (which takes into account species 'availability') and data on 86 mixed-species flocks from two different flock types. Our results show that the number and average strength of inter-species associations covary positively, and we argue that species with many, strong associations are the most nuclear. 4. From our data, group size and foraging method are ecological and behavioural traits of species that best explain nuclearity in mixed-species bird flocks. Parallels have been observed in multi-species fish shoals, in which group size and foraging method, as well as diet, have been shown to correlate with nuclearity. 
Further, the context in which multi-species groups occur, in conjunction with species-specific traits, influences the role played by a species in a multi-species group, and this highlights the importance of extrinsic factors in shaping species importance. 5. Our view of nuclearity provides predictive power in examining species roles in a variety of situations (e.g. predicting leadership in differently composed communities), and can be applied to examine a broad range of ecological and evolutionary questions pertinent to multi-species groups in general.
Collaborative Benchmarking: Discovering and Implementing Best Practices to Strengthen SEAs
ERIC Educational Resources Information Center
Building State Capacity and Productivity Center, 2013
2013-01-01
To help state educational agencies (SEAs) learn about and adapt best practices that exist in other SEAs and other organizations, the Building State Capacity and Productivity Center (BSCP Center) working closely with the Regional Comprehensive Centers will create multi-state groups, through a "Collaborative Benchmarking Best Practices Process" that…
Midander, Klara; Elihn, Karine; Wallén, Anna; Belova, Lyuba; Karlsson, Anna-Karin Borg; Wallinder, Inger Odnevall
2012-06-15
Continuous daily measurements of airborne particles were conducted during specific periods at an underground platform within the subway system of the city center of Stockholm, Sweden. Main emphasis was placed on number concentration, particle size distribution, soot content (analyzed as elemental and black carbon), and surface area concentration. Conventional measurements of mass concentrations were conducted in parallel, as well as analysis of particle morphology and bulk and surface composition. In addition, the presence of volatile and semi-volatile organic compounds within freshly collected particle fractions of PM10 and PM2.5 was investigated and grouped according to functional groups. Similar periodic measurements were conducted at street level for comparison. The investigation clearly demonstrates a large dominance in number concentration of airborne nano-sized particles compared to coarse particles in the subway. Out of a mean particle number concentration of 12,000 particles/cm³ (7,500 to 20,000 particles/cm³), only 190 particles/cm³ were larger than 250 nm. Soot particles from diesel exhaust, and metal-containing particles, primarily iron, were observed in the subway aerosol. Unique measurements on freshly collected subway particle size fractions of PM10 and PM2.5 identified several volatile and semi-volatile organic compounds, the presence of carcinogenic aromatic compounds, and traces of flame retardants. This interdisciplinary, multi-analytical investigation aims to provide an improved understanding of reported adverse health effects induced by subway aerosols. Copyright © 2012 Elsevier B.V. All rights reserved.
2013-01-01
Background Depression is common in diabetes and associated with hyperglycemia, diabetes related complications and mortality. No single intervention has been identified that consistently leads to simultaneous improvement of depression and glycemic control. Our aim is to analyze the efficacy of a diabetes-specific cognitive behavioral group therapy (CBT) compared to sertraline (SER) in adults with depression and poorly controlled diabetes. Methods/Design This study is a multi-center parallel arm randomized controlled trial currently in its data analysis phase. We included 251 patients in 70 secondary care centers across Germany. Key inclusion criteria were: type 1 or 2 diabetes, major depression (diagnosed with the Structured Clinical Interview for DSM-IV, SCID) and hemoglobin A1C >7.5% despite current insulin therapy. During the initial phase, patients received either 50–200 mg/d sertraline or 10 CBT sessions aiming at the remission of depression and enhanced adherence to diabetes treatment and coping with diabetes. Both groups received diabetes treatment as usual. After 12 weeks of this initial open-label therapy, only the treatment-responders (50% depression symptoms reduction, Hamilton Depression Rating Scale, 17-item version [HAMD]) were included in the subsequent one year study phase and represented the primary analysis population. CBT-responders received no further treatment, while SER-responders obtained a continuous, flexible-dose SER regimen as relapse prevention. Adherence to treatment was analyzed using therapeutic drug monitoring (measurement of sertraline and N-desmethylsertraline concentrations in blood serum) and by counting the numbers of CBT sessions received. Outcome assessments were conducted by trained psychologists blinded to group assignment. Group differences in HbA1c (primary outcome) and depression (HAMD, secondary outcome) between 1-year follow-up and baseline will be analyzed by ANCOVA controlling for baseline values. 
As primary hypothesis we expect that CBT leads to significantly greater improvement of glycemic control in the one year follow-up in treatment responders of the short term phase. Discussion The DAD study is the first randomized controlled trial comparing antidepressants to a psychological treatment in diabetes patients with depression. The study is investigator initiated and was supported by the ‘Förderprogramm Klinische Studien (Clinical Trials)’ and the ‘Competence Network for Diabetes mellitus’ funded by the Federal Ministry of Education and Research (FKZ 01KG0505). Trial registration Current controlled trials ISRCTN89333241. PMID:23915015
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm and a performance analysis combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP), based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and on real data from Vinton Dome, we obtain improved results, showing that the improved inversion algorithm is effective and feasible. The parallel algorithm we designed outperforms other CUDA-based implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are used to assess the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to handle larger data volumes, and the new analysis method is practical.
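The scalability metrics named in the performance analysis reduce to simple ratios. A minimal sketch using the standard definitions (speedup S = T₁/Tₙ, efficiency E = S/n, with E = 1 ideal); this is generic, not code from the paper:

```python
def multi_gpu_metrics(t_one_gpu, t_n_gpus, n):
    """Standard scalability metrics for an n-GPU run.

    t_one_gpu : wall time of the single-GPU run (T_1)
    t_n_gpus  : wall time of the same workload on n GPUs (T_n)
    Returns (speedup, efficiency); efficiency 1.0 means perfect scaling.
    """
    speedup = t_one_gpu / t_n_gpus
    efficiency = speedup / n
    return speedup, efficiency
```

For example, a run that drops from 100 s on one GPU to 30 s on four GPUs has speedup 10/3 and efficiency about 0.83; efficiency falling with n is the usual signature of communication overhead.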
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
A Multicenter, Randomized, Controlled Trial of Osteopathic Manipulative Treatment on Preterms
Cerritelli, Francesco; Pizzolorusso, Gianfranco; Renzetti, Cinzia; Cozzolino, Vincenzo; D’Orazio, Marianna; Lupacchini, Mariacristina; Marinelli, Benedetta; Accorsi, Alessandro; Lucci, Chiara; Lancellotti, Jenny; Ballabio, Silvia; Castelli, Carola; Molteni, Daniela; Besana, Roberto; Tubaldi, Lucia; Perri, Francesco Paolo; Fusilli, Paola; D’Incecco, Carmine; Barlafante, Gina
2015-01-01
Background Despite some preliminary evidence, it is still largely unknown whether osteopathic manipulative treatment improves preterm clinical outcomes. Materials and Methods The present multi-center, randomized, single-blind, parallel-group clinical trial enrolled newborns who met the criteria of a gestational age between 29 and 37 weeks and no congenital complications, from 3 different public neonatal intensive care units. Preterm infants were randomly assigned to usual prenatal care (control group) or osteopathic manipulative treatment (study group). The primary outcome was the mean difference in length of hospital stay between groups. Results A total of 695 newborns were randomly assigned to either the study group (n=352) or the control group (n=343). A statistically significant difference was observed between the two groups for the primary outcome (13.8 and 17.5 days for the study and control groups, respectively; p<0.001, effect size: 0.31). Multivariate analysis showed a reduction in length of stay of 3.9 days (95% CI -5.5 to -2.3, p<0.001). Furthermore, there were significant reductions with treatment as compared to usual care in cost (difference between study and control groups: 1,586.01€; 95% CI 1,087.18 to 6,277.28; p<0.001) but not in daily weight gain. There were no complications associated with the intervention. Conclusions Osteopathic treatment significantly reduced the number of days of hospitalization and was cost-effective in a large cohort of preterm infants. PMID:25974071
A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.
Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua
2015-12-01
In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
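The two-stage structure described for PGICA, independent per-subject reduction (and hence parallel execution) followed by a single group-level decomposition, can be sketched as follows. This is a structural sketch only: plain SVD stands in for the likelihood-based estimation, the function names are our own, and threads stand in for the parallel back-end:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reduce_subject(data, n_comp):
    """Stage 1: per-subject dimension reduction.  Plain SVD is used here
    as a stand-in for the likelihood-based per-subject step in PGICA."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return vt[:n_comp]                       # top spatial components

def group_decomposition_sketch(subjects, n_comp=3, workers=4):
    """Two-stage group decomposition: subjects are reduced independently
    (so stage 1 parallelizes trivially across subjects), then the reduced
    data are stacked and decomposed once more at the group level."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        reduced = list(pool.map(lambda d: reduce_subject(d, n_comp), subjects))
    stacked = np.vstack(reduced)             # stage 2 input
    u, s, vt = np.linalg.svd(stacked, full_matrices=False)
    return vt[:n_comp]                       # group-level spatial maps
```

The key scaling property this illustrates is that stage 1 touches each subject's (large) data exactly once and in parallel, so memory and runtime grow gracefully with the number of subjects; only the much smaller reduced matrices meet at the group level.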
Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal.
Turajlic, Samra; Xu, Hang; Litchfield, Kevin; Rowan, Andrew; Horswell, Stuart; Chambers, Tim; O'Brien, Tim; Lopez, Jose I; Watkins, Thomas B K; Nicol, David; Stares, Mark; Challacombe, Ben; Hazell, Steve; Chandra, Ashish; Mitchell, Thomas J; Au, Lewis; Eichler-Jonsson, Claudia; Jabbar, Faiz; Soultati, Aspasia; Chowdhury, Simon; Rudman, Sarah; Lynch, Joanna; Fernando, Archana; Stamp, Gordon; Nye, Emma; Stewart, Aengus; Xing, Wei; Smith, Jonathan C; Escudero, Mickael; Huffman, Adam; Matthews, Nik; Elgar, Greg; Phillimore, Ben; Costa, Marta; Begum, Sharmin; Ward, Sophia; Salm, Max; Boeing, Stefan; Fisher, Rosalie; Spain, Lavinia; Navas, Carolina; Grönroos, Eva; Hobor, Sebastijan; Sharma, Sarkhara; Aurangzeb, Ismaeel; Lall, Sharanpreet; Polson, Alexander; Varia, Mary; Horsfield, Catherine; Fotiadis, Nicos; Pickering, Lisa; Schwarz, Roland F; Silva, Bruno; Herrero, Javier; Luscombe, Nick M; Jamal-Hanjani, Mariam; Rosenthal, Rachel; Birkbak, Nicolai J; Wilson, Gareth A; Pipek, Orsolya; Ribli, Dezso; Krzystanek, Marcin; Csabai, Istvan; Szallasi, Zoltan; Gore, Martin; McGranahan, Nicholas; Van Loo, Peter; Campbell, Peter; Larkin, James; Swanton, Charles
2018-04-19
The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors characterized by early fixation of multiple mutational and copy number drivers and rapid metastases to highly branched tumors with >10 subclonal drivers and extensive parallel evolution associated with attenuated progression. We identify genetic diversity and chromosomal complexity as determinants of patient outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance. Copyright © 2018 Francis Crick Institute. Published by Elsevier Inc. All rights reserved.
Patterns of Risk Using an Integrated Spatial Multi-Hazard Model (PRISM Model)
Multi-hazard risk assessment has long centered on small scale needs, whereby a single community or group of communities’ exposures are assessed to determine potential mitigation strategies. While this approach has advanced the understanding of hazard interactions, it is li...
Using Configural Frequency Analysis as a Person-Centered Analytic Approach with Categorical Data
ERIC Educational Resources Information Center
Stemmler, Mark; Heine, Jörg-Henrik
2017-01-01
Configural frequency analysis and log-linear modeling are presented as person-centered analytic approaches for the analysis of categorical or categorized data in multi-way contingency tables. Person-centered developmental psychology, based on the holistic interactionistic perspective of the Stockholm working group around David Magnusson and Lars…
Arnfred, Sidse M; Aharoni, Ruth; Hvenegaard, Morten; Poulsen, Stig; Bach, Bo; Arendt, Mikkel; Rosenberg, Nicole K; Reinholt, Nina
2017-01-23
Transdiagnostic Cognitive Behavior Therapy (TCBT) manuals delivered in individual format have been reported to be just as effective as traditional diagnosis specific CBT manuals. We have translated and modified the "The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders" (UP-CBT) for group delivery in Mental Health Service (MHS), and shown effects comparable to traditional CBT in a naturalistic study. As the use of one manual instead of several diagnosis-specific manuals could simplify logistics, reduce waiting time, and increase therapist expertise compared to diagnosis specific CBT, we aim to test the relative efficacy of group UP-CBT and diagnosis specific group CBT. The study is a partially blinded, pragmatic, non-inferiority, parallel, multi-center randomized controlled trial (RCT) of UP-CBT vs diagnosis specific CBT for Unipolar Depression, Social Anxiety Disorder and Agoraphobia/Panic Disorder. In total, 248 patients are recruited from three regional MHS centers across Denmark and included in two intervention arms. The primary outcome is patient-ratings of well-being (WHO Well-being Index, WHO-5), secondary outcomes include level of depressive and anxious symptoms, personality variables, emotion regulation, reflective functioning, and social adjustment. Assessments are conducted before and after therapy and at 6 months follow-up. Weekly patient-rated outcomes and group evaluations are collected for every session. Outcome assessors, blind to treatment allocation, will perform the observer-based symptom ratings, and fidelity assessors will monitor manual adherence. The current study will be the first RCT investigating the dissemination of the UP in a MHS setting, the UP delivered in groups, and with depressive patients included. Hence the results are expected to add substantially to the evidence base for rational group psychotherapy in MHS. 
The planned moderator and mediator analyses could spur new hypotheses about mechanisms of change in psychotherapy and the association between patient characteristics and treatment effect. Clinicaltrials.gov NCT02954731 . Registered 25 October 2016.
Parallel processing in the honeybee olfactory pathway: structure, function, and evolution.
Rössler, Wolfgang; Brill, Martin F
2013-11-01
Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction has remained unclear. Studies in the honeybee focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts in opposite sequence with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles, strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond more slowly. In analogy to the "what" and "where" subsystems in visual pathways, this suggests two parallel olfactory subsystems providing "what" (quality) and "when" (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.
NASA Astrophysics Data System (ADS)
Quan, Zhe; Wu, Lei
2017-09-01
This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
NASA Astrophysics Data System (ADS)
Sorokin, V. A.; Volkov, Yu V.; Sherstneva, A. I.; Botygin, I. A.
2016-11-01
This paper overviews a method of generating climate regions based on analytic signal theory. When applied to atmospheric surface-layer temperature data sets, the method forms climatic structures from the corresponding temperature changes, supporting conclusions about the uniformity of climate in an area and tracing climate change over time through shifts between type groups. The algorithm relies on the fact that the frequency spectrum of the thermal oscillation process is narrow-band and has only one mode for most weather stations. This permits the use of analytic signal theory and causality conditions, and the introduction of an oscillation phase. The annual component of the phase, being a linear function, was removed by the least-squares method. The remaining phase fluctuations allow a consistent study of their coordinated behavior and timing, using the Pearson correlation coefficient to evaluate dependence. This study includes program experiments to evaluate the calculation efficiency of the phase grouping task. The paper also overviews some single-threaded and multi-threaded computing models. It is shown that the phase grouping algorithm for meteorological data can be parallelized and that a multi-threaded implementation leads to a 25-30% increase in performance.
Diegoli, Toni Marie; Rohde, Heinrich; Borowski, Stefan; Krawczak, Michael; Coble, Michael D; Nothnagel, Michael
2016-11-01
Typing of X chromosomal short tandem repeat (X STR) markers has become a standard element of human forensic genetic analysis. Joint consideration of many X STR markers at a time increases their discriminatory power but, owing to physical linkage, requires inter-marker recombination rates to be accurately known. We estimated the recombination rates between 15 well established X STR markers using genotype data from 158 families (1041 individuals) and following a previously proposed likelihood-based approach that allows for single-step mutations. To meet the computational requirements of this family-based type of analysis, we modified a previous implementation so as to allow multi-core parallelization on a high-performance computing system. While we obtained recombination rate estimates larger than zero for all but one pair of adjacent markers within the four previously proposed linkage groups, none of the three X STR pairs defining the junctions of these groups yielded a recombination rate estimate of 0.50. Corroborating previous studies, our results therefore argue against a simple model of independent X chromosomal linkage groups. Moreover, the refined recombination fraction estimates obtained in our study will facilitate the appropriate joint consideration of all 15 investigated markers in forensic analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems
NASA Technical Reports Server (NTRS)
Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett A.; Martin, Bryan J.
2004-01-01
Increasing demands on the fidelity of simulations for real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous state systems is presented which applies to these systems, and is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1999-01-01
The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.
2017-07-01
Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. These calculations are usually processed sequentially, even though all modern computers now feature hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. Two parallel regions contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian elimination on parallel computers; (3) three-dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing systems is often based on a specific application, but this approach is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed, mainly suited to complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and incorporates the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), giving it better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing it with the Master-Slave and Data Flow models.
The AAS Working Group on Accessibility and Disability (WGAD) Year 1 Highlights and Database Access
NASA Astrophysics Data System (ADS)
Knierman, Karen A.; Diaz Merced, Wanda; Aarnio, Alicia; Garcia, Beatriz; Monkiewicz, Jacqueline A.; Murphy, Nicholas Arnold
2017-06-01
The AAS Working Group on Accessibility and Disability (WGAD) was formed in January of 2016 with the express purpose of seeking equity of opportunity and building inclusive practices for disabled astronomers at all educational and career stages. In this presentation, we will provide a summary of current activities, focusing on developing best practices for accessibility with respect to astronomical databases, publications, and meetings. Due to the reliance of the space sciences on databases, it is important to have user-centered design systems for data retrieval. The cognitive overload that may be experienced by users of current databases may be mitigated by use of multi-modal interfaces such as xSonify. Such interfaces would operate in parallel with or outside the original database and would not require additional software efforts from the original database. WGAD is partnering with the IAU Commission C1 WG Astronomy for Equity and Inclusion to develop such accessibility tools for databases and methods for user testing. To collect data on astronomical conference and meeting accessibility considerations, WGAD solicited feedback from January AAS attendees via a web form. These data, together with upcoming input from the community and analysis of accessibility documents of similar conferences, will be used to create a meeting accessibility document. Additionally, we will update the progress of journal access guidelines and our social media presence via Twitter. We recommend that astronomical journals form committees to evaluate the accessibility of their publications by performing user-centered usability studies.
Scholey, Andrew; Savage, Karen; O'Neill, Barry V; Owen, Lauren; Stough, Con; Priestley, Caroline; Wetherell, Mark
2014-09-01
This study assessed the effects of two doses of glucose and a caffeine-glucose combination on mood and performance of an ecologically valid, computerised multi-tasking platform. Following a double-blind, placebo-controlled, randomised, parallel-groups design, 150 healthy adults (mean age 34.78 years) consumed drinks containing placebo, 25 g glucose, 60 g glucose or 60 g glucose with 40 mg caffeine. They completed a multi-tasking framework at baseline and then 30 min following drink consumption with mood assessments immediately before and after the multi-tasking framework. Blood glucose and salivary caffeine were co-monitored. The caffeine-glucose group had significantly better total multi-tasking scores than the placebo or 60 g glucose groups and were significantly faster at mental arithmetic tasks than either glucose drink group. There were no significant treatment effects on mood. Caffeine and glucose levels confirmed compliance with overnight abstinence/fasting, respectively, and followed the predicted post-drink patterns. These data suggest that co-administration of glucose and caffeine allows greater allocation of attentional resources than placebo or glucose alone. At present, we cannot rule out the possibility that the effects are due to caffeine alone. Future studies should aim at disentangling caffeine and glucose effects. © 2014 The Authors. Human Psychopharmacology: Clinical and Experimental published by John Wiley & Sons, Ltd.
Butel, Jean; Braun, Kathryn L; Novotny, Rachel; Acosta, Mark; Castro, Rose; Fleming, Travis; Powers, Julianne; Nigg, Claudio R
2015-12-01
Addressing complex chronic disease prevention, like childhood obesity, requires a multi-level, multi-component culturally relevant approach with broad reach. Models are lacking to guide fidelity monitoring across multiple levels, components, and sites engaged in such interventions. The aim of this study is to describe the fidelity-monitoring approach of The Children's Healthy Living (CHL) Program, a multi-level multi-component intervention in five Pacific jurisdictions. A fidelity-monitoring rubric was developed. About halfway during the intervention, community partners were randomly selected and interviewed independently by local CHL staff and by Coordinating Center representatives to assess treatment fidelity. Ratings were compared and discussed by local and Coordinating Center staff. There was good agreement between the teams (Kappa = 0.50, p < 0.001), and intervention improvement opportunities were identified through data review and group discussion. Fidelity for the multi-level, multi-component, multi-site CHL intervention was successfully assessed, identifying adaptations as well as ways to improve intervention delivery prior to the end of the intervention.
Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks
Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...
2017-08-29
Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement of orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
Armour, Brianna L; Barnes, Steve R; Moen, Spencer O; Smith, Eric; Raymond, Amy C; Fairman, James W; Stewart, Lance J; Staker, Bart L; Begley, Darren W; Edwards, Thomas E; Lorimer, Donald D
2013-06-28
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year (1). Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans (2). Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains.
Multi-LED parallel transmission for long distance underwater VLC system with one SPAD receiver
NASA Astrophysics Data System (ADS)
Wang, Chao; Yu, Hong-Yi; Zhu, Yi-Jun; Wang, Tao; Ji, Ya-Wei
2018-03-01
In this paper, a multiple light emitting diode (LED) chips parallel transmission (Multi-LED-PT) scheme for an underwater visible light communication system with one photon-counting single photon avalanche diode (SPAD) receiver is proposed. As a lamp typically consists of multiple LED chips, the data rate can be improved by driving these chips in parallel using the interleaver-division-multiplexing technique. For each chip, on-off-keying modulation is used to reduce the influence of clipping. A serial successive interference cancellation detection algorithm, based on an ideal Poisson photon-counting channel at the SPAD, is then proposed. Finally, compared to a SPAD-based direct current-biased optical orthogonal frequency division multiplexing system, the proposed Multi-LED-PT system significantly improves error-rate and anti-nonlinearity performance under the combined effects of absorption, scattering and weak turbulence-induced channel fading.
Pilot Jerrie Cobb Trains in the Multi-Axis Space Test Inertia Facility
1960-04-21
Jerrie Cobb prepares to operate the Multi-Axis Space Test Inertia Facility (MASTIF) inside the Altitude Wind Tunnel at the National Aeronautics and Space Administration (NASA) Lewis Research Center. The MASTIF was a three-axis rig with a pilot's chair mounted in the center to train Project Mercury pilots to bring a spinning spacecraft under control. An astronaut was secured in a foam couch in the center of the rig. The rig was then spun on three axes from 2 to 50 rotations per minute. The pilots were tested on each of the three axes individually, then all three simultaneously. The two controllers in Cobb's hands activated the small nitrogen gas thrusters that were used to bring the MASTIF under control. A makeshift spacecraft control panel was set up in front of the trainee's face. Cobb was one of several female pilots who underwent the skill and endurance testing that paralleled that of the Project Mercury astronauts. In 1961 Jerrie Cobb was the first female to pass all three phases of the Mercury Astronaut Program. NASA rules, however, stipulated that only military test pilots could become astronauts, and there were no female military test pilots. The seven Mercury astronauts had taken their turns on the MASTIF in February and March 1960.
Multi-directional fault detection system
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-11-23
An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
Multi-directional fault detection system
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2009-03-17
An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
Multi-directional fault detection system
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-06-29
An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
Wang, F; Fan, Q X; Wang, H H; Han, D M; Song, N S; Lu, H
2017-06-23
Objective: To evaluate the efficacy and safety of Xiaoaiping combined with chemotherapy in the treatment of advanced esophageal cancer. Methods: This was a multi-center, randomized, open-label, parallel controlled study. A total of 124 advanced esophageal cancer patients with a Karnofsky Performance Status (KPS) score ≥60 and an expected survival time ≥3 months were enrolled. The patients were randomized into a study group and a control group of 62 patients each. The study group received Xiaoaiping combined with S-1 and cisplatin; the control group received S-1 and cisplatin alone. Each treatment cycle lasted 21 days. The efficacy and adverse events in the two groups were observed and compared. Results: 57 patients in the study group and 55 in the control group were included in the efficacy assessment. The response rates were 54.4% and 34.5% in the study group and control group, respectively (P<0.05). Disease control rates were 86.0% and 69.1%, respectively (P<0.05). The median progression-free survival (PFS) was 7.97 months in the study group and 6.43 months in the control group (P<0.05). The median overall survival (OS) was 12.93 months in the study group and 10.93 months in the control group (P<0.05). The most common adverse events in both groups were nausea and vomiting, thrombocytopenia, anemia, neutropenia, liver damage, pigmentation, oral mucositis, renal impairment and diarrhea. The incidences of nausea, vomiting, thrombocytopenia, leukopenia, neutropenia and diarrhea in the study group were significantly higher than those in the control group (P<0.05). Conclusion: Xiaoaiping combined with S-1 and cisplatin significantly increased the response rate and prolonged survival in patients with advanced esophageal cancer.
Promoting Teen Health and Reducing Risks: A Look at Adolescent Health Services in New York City.
ERIC Educational Resources Information Center
Citizens' Committee for Children of New York, NY.
This study examined data from focus groups with New York City adolescents and interviews with health care providers serving New York City adolescents (hospital based clinics, school based health centers, child health clinics, community health centers, and a multi-service adolescent center) in order to determine how to promote health and reduce…
Garty, Guy; Chen, Youhua; Turner, Helen C; Zhang, Jian; Lyulko, Oleksandra V; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Lawrence Yao, Y; Brenner, David J
2011-08-01
Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. The RABiT analyses fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cut-off dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day.
Garty, Guy; Chen, Youhua; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Brenner, David J.
2011-01-01
Purpose Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. Materials and methods The RABiT analyzes fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cutoff dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. Results We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Conclusions Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day. PMID:21557703
21 CFR 343.80 - Professional labeling.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., randomized, multi-center, placebo-controlled trials of predominantly male post-MI subjects and one randomized... group on the aspirin molecule. This acetyl group is responsible for the inactivation of cyclo-oxygenase... event rate was reduced to 5 percent from the 10 percent rate in the placebo group. Chronic Stable Angina...
21 CFR 343.80 - Professional labeling.
Code of Federal Regulations, 2012 CFR
2012-04-01
21 CFR 343.80 - Professional labeling.
Code of Federal Regulations, 2014 CFR
2014-04-01
21 CFR 343.80 - Professional labeling.
Code of Federal Regulations, 2010 CFR
2010-04-01
21 CFR 343.80 - Professional labeling.
Code of Federal Regulations, 2011 CFR
2011-04-01
Neural decoding of collective wisdom with multi-brain computing.
Eckstein, Miguel P; Das, Koel; Pham, Binh T; Peterson, Matthew F; Abbey, Craig K; Sy, Jocelyn L; Giesbrecht, Barry
2012-01-02
Group decisions and even aggregation of multiple opinions lead to greater decision accuracy, a phenomenon known as collective wisdom. Little is known about the neural basis of collective wisdom and whether its benefits arise in late decision stages or in early sensory coding. Here, we use electroencephalography and multi-brain computing with twenty humans making perceptual decisions to show that combining neural activity across brains increases decision accuracy paralleling the improvements shown by aggregating the observers' opinions. Although the largest gains result from an optimal linear combination of neural decision variables across brains, a simpler neural majority decision rule, ubiquitous in human behavior, results in substantial benefits. In contrast, an extreme neural response rule, akin to a group following the most extreme opinion, results in the least improvement with group size. Analyses controlling for number of electrodes and time-points while increasing number of brains demonstrate unique benefits arising from integrating neural activity across different brains. The benefits of multi-brain integration are present in neural activity as early as 200 ms after stimulus presentation in lateral occipital sites and no additional benefits arise in decision related neural activity. Sensory-related neural activity can predict collective choices reached by aggregating individual opinions, voting results, and decision confidence as accurately as neural activity related to decision components. Estimation of the potential for the collective to execute fast decisions by combining information across numerous brains, a strategy prevalent in many animals, shows large time-savings. Together, the findings suggest that for perceptual decisions the neural activity supporting collective wisdom and decisions arises in early sensory stages and that many properties of collective cognition are explainable by the neural coding of information across multiple brains. 
Finally, our methods highlight the potential of multi-brain computing as a technique to rapidly and in parallel gather increased information about the environment as well as to access collective perceptual/cognitive choices and mental states.
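The three aggregation rules compared in the abstract (an optimal linear combination of neural decision variables, a neural majority vote, and an extreme-response rule) can be illustrated on synthetic decision variables. This Python sketch is a toy Gaussian model, not the EEG analysis itself; equal weights stand in for the fitted optimal weights, and the signal and noise levels are assumptions.

```python
import random

random.seed(0)  # deterministic toy simulation

def simulate_brains(n_brains, signal, noise=1.0):
    """Each 'brain' contributes one noisy decision variable per trial.
    A positive signal means the correct choice is positive."""
    return [signal + random.gauss(0.0, noise) for _ in range(n_brains)]

def linear_combination(dvs):
    # Equal weights stand in for the optimal weights fitted in the paper.
    return sum(dvs) / len(dvs)

def majority_rule(dvs):
    votes = sum(1 if d > 0 else -1 for d in dvs)
    return 1.0 if votes > 0 else -1.0

def extreme_rule(dvs):
    return max(dvs, key=abs)  # follow the single most extreme response

def accuracy(rule, n_brains, trials=2000, signal=0.5):
    correct = 0
    for _ in range(trials):
        if rule(simulate_brains(n_brains, signal)) > 0:
            correct += 1
    return correct / trials

acc_pool = accuracy(linear_combination, 10)   # combine across 10 brains
acc_major = accuracy(majority_rule, 10)       # neural majority vote
acc_single = accuracy(linear_combination, 1)  # one brain, for reference
```

In this toy model, as in the abstract, pooling decision variables across brains beats a single brain, and the linear combination beats the majority vote.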
Preliminary Work in Atmospheric Turbulence Profiles with the Differential Multi-image Motion Monitor
2016-09-01
Center Pacific’s (SSC Pacific) Optical Channel Characterization in Maritime Atmospheres (OCCIMA) Python code is demonstrated with examples that match...OCCIMA) Python code, show how to model the DM3 and anisoplanatic jitter measurements, and finally demonstrate how the turbulence strength profile... python modules. [Figure residue: plot of anisoplanatic jitter (λ/D) vs. separation at target plane (m), 0.0–2.0 m, "Parallel" panel]
Alignment of x-ray tube focal spots for spectral measurement.
Nishizawa, K; Maekoshi, H; Kamiya, Y; Kobayashi, Y; Ohara, K; Sakuma, S
1982-01-01
A general method to align a diagnostic x-ray machine for the purpose of x-ray spectrum measurement was theoretically and experimentally investigated by means of the optical alignment of focal pinhole images. Focal pinhole images were obtained by using a multi-pinholed lead plate. The vertical plane, including the central axis and tube axis, was determined by observing the symmetry of the focal images. The central axis was designated as the line through the center of the focus, parallel to the target surface, lying in the vertical plane. A method to determine the manipulation of the central axis in any direction is presented.
Gauthier, Lynne V; Kane, Chelsea; Borstad, Alexandra; Strahl, Nancy; Uswatte, Gitendra; Taub, Edward; Morris, David; Hall, Alli; Arakelian, Melissa; Mark, Victor
2017-06-08
Constraint-Induced Movement therapy (CI therapy) is shown to reduce disability, increase use of the more affected arm/hand, and promote brain plasticity for individuals with upper extremity hemiparesis post-stroke. Randomized controlled trials consistently demonstrate that CI therapy is superior to other rehabilitation paradigms, yet it is available to only a small minority of the estimated 1.2 million chronic stroke survivors with upper extremity disability. The current study aims to establish the comparative effectiveness of a novel, patient-centered approach to rehabilitation utilizing newly developed, inexpensive, and commercially available gaming technology to disseminate CI therapy to underserved individuals. Video game delivery of CI therapy will be compared against traditional clinic-based CI therapy and standard upper extremity rehabilitation. Additionally, individual factors that differentially influence response to one treatment versus another will be examined. This protocol outlines a multi-site, randomized controlled trial with parallel group design. Two hundred twenty four adults with chronic hemiparesis post-stroke will be recruited at four sites. Participants are randomized to one of four study groups: (1) traditional clinic-based CI therapy, (2) therapist-as-consultant video game CI therapy, (3) therapist-as-consultant video game CI therapy with additional therapist contact via telerehabilitation/video consultation, and (4) standard upper extremity rehabilitation. After 6-month follow-up, individuals assigned to the standard upper extremity rehabilitation condition crossover to stand-alone video game CI therapy preceded by a therapist consultation. All interventions are delivered over a period of three weeks. 
Primary outcome measures include motor improvement as measured by the Wolf Motor Function Test (WMFT), quality of arm use for daily activities as measured by Motor Activity Log (MAL), and quality of life as measured by the Quality of Life in Neurological Disorders (NeuroQOL). This multi-site RCT is designed to determine comparative effectiveness of in-home technology-based delivery of CI therapy versus standard upper extremity rehabilitation and in-clinic CI therapy. The study design also enables evaluation of the effect of therapist contact time on treatment outcomes within a therapist-as-consultant model of gaming and technology-based rehabilitation. Clinicaltrials.gov, NCT02631850.
Wang, Yan-Xia; Xiang, Cheng; Liu, Bo; Zhu, Yong; Luan, Yong; Liu, Shu-Tian; Qin, Kai-Rong
2016-12-28
In vivo studies have demonstrated that reasonable exercise training can improve endothelial function. To confirm the key role of wall shear stress induced by exercise on endothelial cells, and to understand how wall shear stress affects the structure and the function of endothelial cells, it is crucial to design and fabricate an in vitro multi-component parallel-plate flow chamber system which can closely replicate exercise-induced wall shear stress waveforms in artery. The in vivo wall shear stress waveforms from the common carotid artery of a healthy volunteer in resting and immediately after 30 min acute aerobic cycling exercise were first calculated by measuring the inner diameter and the center-line blood flow velocity with a color Doppler ultrasound. According to the above in vivo wall shear stress waveforms, we designed and fabricated a parallel-plate flow chamber system with appropriate components based on a lumped parameter hemodynamics model. To validate the feasibility of this system, human umbilical vein endothelial cells (HUVECs) line were cultured within the parallel-plate flow chamber under abovementioned two types of wall shear stress waveforms and the intracellular actin microfilaments and nitric oxide (NO) production level were evaluated using fluorescence microscope. Our results show that the trends of resting and exercise-induced wall shear stress waveforms, especially the maximal, minimal and mean wall shear stress as well as oscillatory shear index, generated by the parallel-plate flow chamber system are similar to those acquired from the common carotid artery. In addition, the cellular experiments demonstrate that the actin microfilaments and the production of NO within cells exposed to the two different wall shear stress waveforms exhibit different dynamic behaviors; there are larger numbers of actin microfilaments and higher level NO in cells exposed in exercise-induced wall shear stress condition than resting wall shear stress condition. 
The parallel-plate flow chamber system can well reproduce wall shear stress waveforms acquired from the common carotid artery in resting and immediately after exercise states. Furthermore, it can be used for studying the endothelial cells responses under resting and exercise-induced wall shear stress environments in vitro.
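The shear quantities named above can be made concrete. This Python sketch assumes the textbook parallel-plate relation τ = 6μQ/(wh²), a Poiseuille estimate for carotid wall shear stress from the center-line velocity and inner diameter, and the standard definition of the oscillatory shear index; the authors' lumped-parameter model and exact formulas may differ.

```python
def wall_shear_stress(mu, centerline_velocity, diameter):
    """Poiseuille estimate of arterial wall shear stress (Pa):
    tau = 8 * mu * V_mean / D, with V_mean ~ V_centerline / 2 for
    parabolic flow.  An approximation, not the authors' exact formula."""
    v_mean = centerline_velocity / 2.0
    return 8.0 * mu * v_mean / diameter

def oscillatory_shear_index(tau_samples):
    """OSI = 0.5 * (1 - |mean(tau)| / mean(|tau|)); 0 for unidirectional
    flow, approaching 0.5 for purely oscillatory flow."""
    mean_tau = sum(tau_samples) / len(tau_samples)
    mean_abs = sum(abs(t) for t in tau_samples) / len(tau_samples)
    return 0.5 * (1.0 - abs(mean_tau) / mean_abs)

def chamber_flow_rate(tau, mu, width, height):
    """Flow rate Q needed to impose a target shear stress tau in a
    parallel-plate chamber of width w and gap h: tau = 6*mu*Q/(w*h^2)."""
    return tau * width * height ** 2 / (6.0 * mu)
```

Given a measured in vivo waveform, `chamber_flow_rate` maps each shear-stress sample to the chamber flow rate that reproduces it, which is the core of replicating resting and exercise-induced waveforms in vitro.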
Focusing on Concepts by Covering Them Simultaneously
ERIC Educational Resources Information Center
Shwartz, Pete
2017-01-01
"Parallel" pedagogy covers the four mechanics concepts of momentum, energy, forces, and kinematics simultaneously instead of building each concept on an understanding of the previous one. Course content is delivered through interactive videos, allowing class time for group work and student-centered activities. We start with simple…
Platt, Jennica; Baxter, Nancy; Jones, Jennifer; Metcalfe, Kelly; Causarano, Natalie; Hofer, Stefan O P; O'Neill, Anne; Cheng, Terry; Starenkyj, Elizabeth; Zhong, Toni
2013-07-06
The Pre-Consultation Educational Group INTERVENTION pilot study seeks to assess the feasibility and inform the optimal design for a definitive randomized controlled trial that aims to improve the quality of decision-making in postmastectomy breast reconstruction patients. This is a mixed-methods pilot feasibility randomized controlled trial that will follow a single-center, 1:1 allocation, two-arm parallel group superiority design. The University Health Network, a tertiary care cancer center in Toronto, Canada. Adult women referred to one of three plastic and reconstructive surgeons for delayed breast reconstruction or prophylactic mastectomy with immediate breast reconstruction. We designed a multi-disciplinary educational group workshop that incorporates the key components of shared decision-making, decision-support, and psychosocial support for cancer survivors prior to the initial surgical consult. The intervention consists of didactic lectures by a plastic surgeon and nurse specialist on breast reconstruction choices, pre- and postoperative care; a value-clarification exercise led by a social worker; and discussions with a breast reconstruction patient. Usual care includes access to an informational booklet, website, and patient volunteer if desired. Expected pilot outcomes include feasibility, recruitment, and retention targets. Acceptability of intervention and full trial outcomes will be established through qualitative interviews. Trial outcomes will include decision-quality measures, patient-reported outcomes, and service outcomes, and the treatment effect estimate and variability will be used to inform the sample size calculation for a full trial. Our pilot study seeks to identify the (1) feasibility, acceptability, and design of a definitive RCT and (2) the optimal content and delivery of our proposed educational group intervention. 
Thirty patients have been recruited to date (8 April 2013), of whom 15 have been randomized to one of three decision support workshops. The trial will close as planned in May 2013. NCT01857882.
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor
Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng
2016-01-01
In the development of the multi-dimensional force sensor, dimension coupling is the ubiquitous factor restricting the improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of the friction on dimension coupling is effectively reduced by making the friction rolling instead of sliding friction. In this paper, the mathematical model is established by combining with the structure model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanical decoupling parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. The load calibration and data acquisition experiment system are built, and then calibration experiments were done. According to the calibration experiments, the measurement accuracy is less than 2.86% and the coupling accuracy is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for the applied research of the parallel multi-dimensional force sensor. PMID:27649194
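A common software counterpart to the mechanical decoupling described above is calibration-matrix decoupling: raw channel outputs s = C·F are inverted to recover the applied forces. The matrix below is hypothetical, not the paper's measured calibration; the sketch only illustrates how off-diagonal coupling terms are removed numerically.

```python
# Hypothetical 3x3 calibration matrix C mapping applied forces F to raw
# channel outputs s = C @ F.  Off-diagonal terms model residual dimension
# coupling; solving C x = s decouples the channels.
C = [
    [1.00, 0.03, 0.02],
    [0.02, 0.98, 0.03],
    [0.01, 0.02, 1.01],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def solve3(a, b):
    """Gaussian elimination with partial pivoting for the 3x3 system a x = b."""
    a = [row[:] for row in a]
    b = b[:]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[p] = a[p], a[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in range(2, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x

applied = [10.0, -5.0, 2.0]   # true forces Fx, Fy, Fz (N), illustrative
raw = mat_vec(C, applied)     # coupled sensor readings
recovered = solve3(C, raw)    # decoupled force estimate
```

The residual coupling error of a real sensor comes from how well C is identified in calibration experiments like those reported in the abstract.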
NCC: A Multidisciplinary Design/Analysis Tool for Combustion Systems
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Quealy, Angela
1999-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Lewis Research Center (LeRC), and Pratt & Whitney (P&W). This development team operates under the guidance of the NCC steering committee. The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration.
Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.
Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scale to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g. Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique to be integrated into future compilers or optimization frameworks for autotuning.
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Colorado Outdoor Education Center Teacher's Field Guide.
ERIC Educational Resources Information Center
Colorado Outdoor Education Center, Inc., Florissant.
The Colorado Outdoor Education Center aims to educate the total person by offering programs which help each individual to gain a sense of the earth, of community, and of self. At High Trails the students are brought into direct contact with nature, utilizing small groups and emphasizing direct experiences. The integrated, multi-disciplinary…
An Intentional Laboratory: The San Carlos Charter Learning Center.
ERIC Educational Resources Information Center
Darwish, Elise
2000-01-01
Describes the San Carlos Charter Learning Center, a K-8 school chartered by the San Carlos, California, school district to be a research and development site. It has successfully shared practices in multi-age groupings, interdisciplinary instruction, parents as teachers, and staff evaluation. The article expands on the school's challenges and…
Findling, Robert L; Quinn, Declan; Hatch, Simon J; Cameron, Sara J; DeCory, Heleen H; McDowell, Michael
2006-12-01
To compare the efficacy and safety of two methylphenidate (MPH) formulations--once-daily modified-release MPH (EqXL, Equasym XL) and twice-daily immediate-release methylphenidate (MPH-IR, Ritalin)--and placebo in children with Attention Deficit/Hyperactivity Disorder (ADHD). Children aged 6-12 years on a stable dose of MPH were randomized into a double-blind, three-arm, parallel-group, multi-center study and received 3 weeks of EqXL (20, 40, or 60 mg qd), MPH-IR (10, 20, or 30 mg bid) or placebo. Non-inferiority of EqXL to MPH-IR was assessed by the difference in the inattention/overactivity component of the overall teacher's IOWA Conners' Rating Scale on the last week of treatment (per protocol population). Safety was monitored by adverse events, laboratory parameters, vital signs, physical exam, and a Side Effect Rating Scale. The lower 97.5% confidence interval bound of the difference between MPH groups fell above the non-inferiority margin (-1.5 points) not only during the last week of treatment but during all three treatment weeks. Both MPH-treatment groups experienced superior benefit when compared to placebo during all treatment weeks (P < 0.001). All treatments were well tolerated. EqXL given once-daily was non-inferior to MPH-IR given twice-daily. Both treatments were superior to placebo in reducing ADHD symptoms.
Moen, Spencer O.; Smith, Eric; Raymond, Amy C.; Fairman, James W.; Stewart, Lance J.; Staker, Bart L.; Begley, Darren W.; Edwards, Thomas E.; Lorimer, Donald D.
2013-01-01
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year [1]. Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans [2]. Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357
Giguère, Chantal M; Bauman, Nancy M; Sato, Yutaka; Burke, Diane K; Greinwald, John H; Pransky, Seth; Kelley, Peggy; Georgeson, Keith; Smith, Richard J H
2002-10-01
To describe and to determine the robustness of our study evaluating the efficacy of OK-432 (Picibanil) as a therapeutic modality for lymphangiomas. Prospective, randomized trial and parallel-case series at 13 US tertiary care referral centers. Thirty patients diagnosed as having lymphangioma. Ages in 25 ranged from 6 months to 18 years. Twenty-nine had lesions located in the head-and-neck area. Every patient received a 4-dose injection series of OK-432 scheduled 6 to 8 weeks apart unless a contraindication existed or a complete response was observed before completion of all injections. A control group was observed for 6 months. Successful outcome of therapy was defined as a complete or a substantial (>60%) reduction in lymphangioma size as determined by calculated lesion volumes on computed tomographic or magnetic resonance imaging scans. Overall, 19 (86%) of the 22 patients with predominantly macrocystic lymphangiomas had a successful outcome. OK-432 should be efficacious in the treatment of lymphangiomas. Our study design is well structured to clearly define the role of this treatment agent.
Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosink, Luke; Wu, Kesheng; Bethel, E. Wes
2009-06-02
The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
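The bin-then-evaluate scheme that DP-BIS describes can be sketched in a few lines. The sketch below is a serial, pure-Python illustration under our own assumptions (the function names and toy bin edges are not from the paper); in DP-BIS proper, each record in a boundary bin would be evaluated by its own thread on the CPU or GPU.

```python
from bisect import bisect_right

def build_bin_index(values, bin_edges):
    """Assign each record to a bin and store per-bin clusters of (id, value)."""
    clusters = [[] for _ in range(len(bin_edges) - 1)]
    for rid, v in enumerate(values):
        b = min(bisect_right(bin_edges, v) - 1, len(clusters) - 1)
        clusters[b].append((rid, v))
    return clusters

def range_query(clusters, bin_edges, lo, hi):
    """Return record ids with lo <= value < hi.

    Bins fully inside [lo, hi) match wholesale; only boundary ("candidate")
    bins need per-record evaluation -- the step DP-BIS parallelizes with
    one thread per record.
    """
    hits = []
    for b, cluster in enumerate(clusters):
        b_lo, b_hi = bin_edges[b], bin_edges[b + 1]
        if lo <= b_lo and b_hi <= hi:           # fully covered bin
            hits.extend(rid for rid, _ in cluster)
        elif b_hi > lo and b_lo < hi:           # boundary bin: evaluate records
            hits.extend(rid for rid, v in cluster if lo <= v < hi)
    return sorted(hits)
```

Only the boundary bins touch base data, which is why the index reads far less than a full column scan.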
Link!: Potential Field Guidance Algorithm for In-Flight Linking of Multi-Rotor Aircraft
NASA Technical Reports Server (NTRS)
Cooper, John R.; Rothhaar, Paul M.
2017-01-01
Link! is a multi-center NASA effort to study the feasibility of multi-aircraft aerial docking systems. In these systems, a group of vehicles physically link to each other during flight to form a larger ensemble vehicle with increased aerodynamic performance and mission utility. This paper presents a potential field guidance algorithm for a group of multi-rotor vehicles to link to each other during flight. The linking is done in pairs. Each vehicle first selects a mate. Then the potential field is constructed with three rules: move towards the mate, avoid collisions with non-mates, and stay close to the rest of the group. Once a pair links, they are then considered to be a single vehicle. After each pair is linked, the process repeats until there is only one vehicle left. The paper contains simulation results for a system of 16 vehicles.
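The three guidance rules can be illustrated with a minimal 2-D potential-field sketch. Everything below is an assumption for illustration: the gains (k_att, k_rep, k_coh), the safety radius, and the function names are ours, not values or code from the paper.

```python
import math

def potential_velocity(me, mate, others, k_att=1.0, k_rep=2.0, k_coh=0.2, d_safe=2.0):
    """Velocity command for one vehicle from a hypothetical potential field.

    Three terms mirror the paper's rules: attraction toward the chosen mate,
    repulsion from non-mates inside a safety radius, and weak cohesion
    toward the centroid of the whole group.
    """
    def sub(a, b): return (a[0] - b[0], a[1] - b[1])
    def add(a, b): return (a[0] + b[0], a[1] + b[1])
    def scale(v, s): return (v[0] * s, v[1] * s)
    def norm(v): return math.hypot(v[0], v[1])

    v = scale(sub(mate, me), k_att)                  # rule 1: move toward mate
    for o in others:                                  # rule 2: avoid non-mates
        d = norm(sub(me, o))
        if 0.0 < d < d_safe:
            v = add(v, scale(sub(me, o), k_rep * (d_safe - d) / d))
    pts = [me, mate] + list(others)
    centroid = (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
    v = add(v, scale(sub(centroid, me), k_coh))       # rule 3: stay with group
    return v
```

A linked pair would then be treated as one vehicle and the same field recomputed for the reduced group.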
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
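The shared-memory structure of such a parallel k-means can be sketched with a chunked assignment step followed by a reduction. This is a 1-D toy under our own assumptions (names, chunking scheme), not the paper's Java/transactional-memory implementation; note that CPython threads serialize on the GIL, so real speedup requires true cores as in the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def assign_chunk(points, centroids):
    """Map step: label each point with its nearest centroid (1-D toy data)."""
    return [min(range(len(centroids)), key=lambda k: (p - centroids[k]) ** 2)
            for p in points]

def kmeans_step(points, centroids, n_workers=4):
    """One parallel k-means iteration: chunked assignment, then reduction."""
    size = max(1, len(points) // n_workers)
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        labels = [l for part in pool.map(lambda c: assign_chunk(c, centroids), chunks)
                  for l in part]
    new_centroids = []
    for k in range(len(centroids)):            # reduction: recompute means
        members = [p for p, l in zip(points, labels) if l == k]
        new_centroids.append(sum(members) / len(members) if members else centroids[k])
    return labels, new_centroids
```

Repeated calls with perturbed starting centroids give exactly the kind of stability analysis the abstract describes.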
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and the single-thread processing method, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Testing and analysis on real datasets show that this method improves the query speed significantly.
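The divide-aggregate-integrate pattern can be sketched as follows. The record layout (x, y, t, value), the count-and-sum aggregate, and all names are our illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def cube_key(rec, dx, dy, dt):
    """Which spatiotemporal cube a record (x, y, t, value) falls in."""
    x, y, t, _ = rec
    return (int(x // dx), int(y // dy), int(t // dt))

def aggregate(records, dx, dy, dt, n_workers=4):
    """Divide the domain into cubes, aggregate each cube in parallel,
    then integrate the partial results into one dictionary.
    """
    cubes = defaultdict(list)
    for rec in records:                      # regional division
        cubes[cube_key(rec, dx, dy, dt)].append(rec)

    def agg_one(item):                       # per-cube aggregation
        key, recs = item
        return key, (len(recs), sum(r[3] for r in recs))  # count + value sum

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return dict(pool.map(agg_one, cubes.items()))     # integration
```

Because cubes are independent, each worker aggregates without locks, and the merge step is a plain dictionary build.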
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
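The load-balancing goal of grid grouping can be illustrated with a much simpler stand-in than the paper's graph partitioner: a greedy longest-processing-time (LPT) heuristic that assigns the heaviest grids first to the least-loaded group. This sketch ignores inter-grid communication costs, which the paper's technique also accounts for; all names are ours.

```python
import heapq

def group_grids(grid_weights, n_groups):
    """Greedy LPT load balancing of weighted grids onto processor groups.

    Returns {group_id: sorted list of grid ids}. Heaviest grids are placed
    first; each goes to the currently least-loaded group (min-heap).
    """
    heap = [(0.0, g, []) for g in range(n_groups)]
    heapq.heapify(heap)
    for gid, w in sorted(enumerate(grid_weights), key=lambda kv: -kv[1]):
        load, g, members = heapq.heappop(heap)   # least-loaded group
        members.append(gid)
        heapq.heappush(heap, (load + w, g, members))
    return {g: sorted(members) for _, g, members in heap}
```

With weights [5, 4, 3, 3, 2, 1] and two groups, LPT produces two groups of total weight 9 each, i.e. a perfectly balanced split for this toy input.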
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-07-01
In this study, we report the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized for computation, the present approach has the potential to be one of the most efficient procedures to treat chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl- + CH3Cl → ClCH3 + Cl-) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
Jafari, Nahid; Hearne, John; Churilov, Leonid
2013-11-10
A post-hoc individual patient matching procedure was recently proposed within the context of parallel group randomized clinical trials (RCTs) as a method for estimating treatment effect. In this paper, we consider a post-hoc individual patient matching problem within a parallel group RCT as a multi-objective decision-making problem focussing on the trade-off between the quality of individual matches and the overall percentage of matching. Using acute stroke trials as a context, we utilize exact optimization and simulation techniques to investigate a complex relationship between the overall percentage of individual post-hoc matching, the size of the respective RCT, and the quality of matching on variables highly prognostic for a good functional outcome after stroke, as well as the dispersion in these variables. It is empirically confirmed that a high percentage of individual post-hoc matching can only be achieved when the differences in prognostic baseline variables between individually matched subjects within the same pair are sufficiently large and that the unmatched subjects are qualitatively different to the matched ones. It is concluded that the post-hoc individual matching as a technique for treatment effect estimation in parallel-group RCTs should be exercised with caution because of its propensity to introduce significant bias and reduce validity. If used with appropriate caution and thorough evaluation, this approach can complement other viable alternative approaches for estimating the treatment effect. Copyright © 2013 John Wiley & Sons, Ltd.
Gholamzadeh Baeis, Mehdi; Amiri, Ghasem; Miladinia, Mojtaba
2017-01-01
This study examines the effect of the addition of IMOD, a novel multi-herbal drug, to the highly active anti-retroviral therapy (HAART) regimen on the immunological status of HIV-positive patients. A randomized two-parallel-group (HAART group versus HAART+IMOD group), pretest-posttest design was used. Sixty patients with indications for treatment with the HAART regimen participated. One week before and 2 days after the treatments, immunological parameters including total lymphocyte count (TLC) and CD4 cell count were assessed. The intervention group received the HAART regimen plus IMOD every day for 3 months. The control group received only the HAART regimen every day for 3 months. In the intervention group, a significant difference was observed in CD4 between before and after drug therapy (CD4 was increased). However, in the control group, the difference in CD4 was not significant before and after drug therapy. The difference in TLC was not significantly different between the two groups before and after therapy. Nevertheless, TLC was higher in the intervention group. IMOD (as a herbal drug) has been successfully added to the HAART regimen to improve the immunological status of HIV-positive patients.
Parallel and fault-tolerant algorithms for hypercube multiprocessors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykanat, C.
1988-01-01
Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multi-processor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube connected message-passing multi-processor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that hypercube topology is scalable for an FE class of problem. The SCG algorithm is also shown to be suitable for vectorization, and near supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
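One reason conjugate-gradient solvers scale well on hypercubes is that their global reductions (the dot products computed each iteration) map onto recursive dimension exchange, costing only log2(P) steps on P nodes. The sketch below simulates that exchange pattern serially; it is our illustration of the standard technique, not code from the dissertation.

```python
def hypercube_allreduce(values):
    """Simulate dimension-exchange allreduce (sum) on a hypercube.

    values[i] is node i's local partial sum; after log2(P) exchange steps,
    every node holds the global sum. Node i's neighbor in dimension d is
    i XOR 2^d.
    """
    p = len(values)
    assert p & (p - 1) == 0, "hypercube needs a power-of-two node count"
    vals = list(values)
    d = 1
    while d < p:                                         # one step per dimension
        vals = [vals[i] + vals[i ^ d] for i in range(p)]  # exchange with neighbor
        d *= 2
    return vals                                          # all nodes agree
```

Because every node participates in every step, there is no serial bottleneck, which is consistent with the near-linear speedup reported for the SCG solver.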
Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores
NASA Astrophysics Data System (ADS)
Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei
We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.
NASA Technical Reports Server (NTRS)
Kenny, R. J.; Greene, W. D.
2016-01-01
This presentation covers the overall scope, schedule, and activities associated with the NASA Marshall Space Flight Center (MSFC) involvement with the Combustion Stability Tool Development (CSTD) program. The CSTD program is funded by the Air Force Space & Missile Systems Center; it is approximately two years in duration; and it sponsors MSFC to design, fabricate, and execute multi-element hardware testing, support Air Force Research Laboratory (AFRL) single element testing, and execute testing of a small-scale, multi-element combustion chamber. Specific MSFC Engineering Directorate involvement, per CSTD-sponsored task, will be outlined. This presentation serves as a primer for the corresponding works that provide details of the technical work performed by individual groups within MSFC.
Lee, Mi Young; Choi, Dong Seop; Lee, Moon Kyu; Lee, Hyoung Woo; Park, Tae Sun; Kim, Doo Man; Chung, Choon Hee; Kim, Duk Kyu; Kim, In Joo; Jang, Hak Chul; Park, Yong Soo; Kwon, Hyuk Sang; Lee, Seung Hun; Shin, Hee Kang
2014-01-01
We studied the efficacy and safety of acarbose in comparison with voglibose in type 2 diabetes patients whose blood glucose levels were inadequately controlled with basal insulin alone or in combination with metformin (or a sulfonylurea). This study was a 24-week prospective, open-label, randomized, active-controlled multi-center study. Participants were randomized to receive either acarbose (n=59, 300 mg/day) or voglibose (n=62, 0.9 mg/day). The mean HbA1c at week 24 was significantly decreased approximately 0.7% from baseline in both acarbose (from 8.43% ± 0.71% to 7.71% ± 0.93%) and voglibose groups (from 8.38% ± 0.73% to 7.68% ± 0.94%). The mean fasting plasma glucose level and self-monitoring of blood glucose data from 1 hr before and after each meal were significantly decreased at week 24 in comparison to baseline in both groups. The levels 1 hr after dinner at week 24 were significantly decreased in the acarbose group (from 233.54 ± 69.38 to 176.80 ± 46.63 mg/dL) compared with the voglibose group (from 224.18 ± 70.07 to 193.01 ± 55.39 mg/dL). In conclusion, both acarbose and voglibose are efficacious and safe in patients with type 2 diabetes who are inadequately controlled with basal insulin. (ClinicalTrials.gov number, NCT00970528).
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise. However, training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP is used to transform the serial algorithm into a parallel version. A data-parallelism model is used for the transformation; the biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easily ported to multi-core platforms.
Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howison, Mark; Bethel, E. Wes; Childs, Hank
2012-01-01
With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
Kabiri, Golnoosh; Ziaei, Tayebe; Aval, Masumeh Rezaei; Vakili, Mohammad Ali
2017-09-15
Background Sexual puberty in adolescents occurs before their mental and emotional maturity and exposes them to high-risk sexual behaviors. Because sexual risk-taking occurs before adolescents become involved in a sexual relationship, this study was conducted to identify the effect of group counseling based on self-awareness skill on sexual risk-taking among female high school students in Gorgan, in order to suggest some preventative measures. Methods The present parallel study is a randomized field trial conducted on 96 female students in grades 10, 11 and 12 of high school, aged 14-18 years. Sampling was done based on a multi-stage process. In the first stage, through the randomized clustering approach, four centers among six health centers were selected. In the second stage, 96 samples were collected through consecutive sampling. Finally, the samples were divided into intervention and control groups (48 subjects each) through the simple randomized approach. It has to be noted that no blinding was done in the present study. The data were collected using a demographic specifications form and the Iranian Adolescents Risk-Taking Scale (IARS). The counseling sessions based on self-awareness skill were delivered to the intervention group in 60-min sessions over 7 weeks. The pretest was conducted for both groups, and the posttest was completed 1 week and 1 month after the intervention by the intervention and control groups. Finally, after loss to follow-up/dropout, a total of 80 subjects remained in the study; 42 subjects in the intervention group and 38 subjects in the control group. Data analyses were done using SPSS v.16 along with the Freidman non-parametric test and the Mann-Whitney and Wilcoxon tests.
Results The results showed that the sexual risk-taking mean scores in the intervention group (10.54 ± 15.64) were reduced at the 1-week (8.03 ± 12.82) and 1-month (4.91 ± 10.10) follow-ups after the intervention. This reduction was statistically significant (p = 14%). However, no statistically significant difference was observed in the control group. Conclusion Group counseling based on self-awareness skill decreased sexual risk-taking in female high school students. As prevention takes priority over treatment, this method could be proposed for the prevention of high-risk sexual behavior to healthcare centers, educational environments and non-governmental organizations (NGOs) interacting with adolescents.
2013-01-01
Background Dual sensory loss (DSL) has a negative impact on health and wellbeing, and its prevalence is expected to increase due to demographic aging. However, specialized care or rehabilitation programs for DSL are scarce. Until now, low vision rehabilitation has not sufficiently targeted concurrent impairments in vision and hearing. This study aims to 1) develop a DSL protocol (for occupational therapists working in low vision rehabilitation) which focuses on optimal use of the senses and teaches DSL patients and their communication partners to use effective communication strategies, and 2) describe the multicenter parallel randomized controlled trial (RCT) designed to test the effectiveness and cost-effectiveness of the DSL protocol. Methods/design To develop a DSL protocol, literature was reviewed and content was discussed with professionals in eye/ear care (interviews/focus groups) and DSL patients (interviews). A pilot study was conducted to test and confirm the DSL protocol. In addition, a two-armed international multi-center RCT will evaluate the effectiveness and cost-effectiveness of the DSL protocol compared to waiting list controls, in 124 patients in low vision rehabilitation centers in the Netherlands and Belgium. Discussion This study provides a treatment protocol for rehabilitation of DSL within low vision rehabilitation, which aims to be a valuable addition to general low vision rehabilitation care. Trial registration Netherlands Trial Register (NTR) identifier: NTR2843 PMID:23941667
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
A Method for Teaching English as a Second Language and its Evaluation.
ERIC Educational Resources Information Center
Rosenhouse, Judith
This paper describes the experience of two children, native-speakers of Hebrew, in a language center in England. The language center provides a total immersion program in English for a multi-lingual population of children aged 5 to 12 years. The small-group and individualized instruction, the instructional materials and facilities, and the close…
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, increasing convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid and distributed workstation types of computers; and (5) market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.
Experimental determination of pCo perturbation factors for plane-parallel chambers
NASA Astrophysics Data System (ADS)
Kapsch, R. P.; Bruggmoser, G.; Christ, G.; Dohm, O. S.; Hartmann, G. H.; Schüle, E.
2007-12-01
For plane-parallel chambers used in electron dosimetry, modern dosimetry protocols recommend a cross-calibration against a calibrated cylindrical chamber. The rationale for this is the unacceptably large (up to 3-4%) chamber-to-chamber variations of the perturbation factors (pwall)Co, which have been reported for plane-parallel chambers of a given type. In some recent publications, it was shown that this is no longer the case for modern plane-parallel chambers. The aims of the present study are to obtain reliable information about the variation of the perturbation factors for modern types of plane-parallel chambers, and, if this variation is found to be acceptably small, to determine type-specific mean values for these perturbation factors which can be used for absorbed dose measurements in electron beams using plane-parallel chambers. In an extensive multi-center study, the individual perturbation factors pCo (which are usually assumed to be equal to (pwall)Co) for a total of 35 plane-parallel chambers of the Roos type, 15 chambers of the Markus type and 12 chambers of the Advanced Markus type were determined. From a total of 188 cross-calibration measurements, variations of the pCo values for different chambers of the same type of at most 1.0%, 0.9% and 0.6% were found for the chambers of the Roos, Markus and Advanced Markus types, respectively. The mean pCo values obtained from all measurements are 1.0198 for the Roos type, 1.0175 for the Markus type and 1.0155 for the Advanced Markus type; the relative experimental standard deviation of the individual pCo values is less than 0.24% for all chamber types; the relative standard uncertainty of the mean pCo values is 1.1%.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Spatial Variations of DOM Compositions in the River with Multi-functional Weir
NASA Astrophysics Data System (ADS)
Yoon, S. M.; Choi, J. H.
2017-12-01
With the global trend to construct artificial impoundments over the last decades, a Large River Restoration Project was conducted in South Korea from 2010 to 2011. The project included enlargement of river channel capacity and construction of multi-functional weirs, which can alter the hydrological flow of the river and cause spatial variations of water quality indicators, especially DOM (Dissolved Organic Matter) compositions. In order to analyze the spatial variations of organic matter, water samples were collected longitudinally (5 points upstream from the weir), horizontally (left, center, right at each point) and vertically (1 m interval at each point). The specific UV-visible absorbance (SUVA) and fluorescence excitation-emission matrices (EEMs) have been used as rapid and non-destructive analytical methods for DOM compositions. In addition, parallel factor analysis (PARAFAC) has been adopted for extracting a set of representative fluorescence components from EEMs. It was assumed that autochthonous DOM would be dominant near the weir due to the stagnation of hydrological flow. However, the results showed that the values of the fluorescence index (FI) were 1.29-1.47, less than 2, indicating that DOM of allochthonous origin dominated in the water near the weir. PARAFAC analysis also showed a peak at 450 nm emission and < 250 nm excitation, which represents the humic substances group with terrestrial origins. There was no significant difference in the values of the biological index (BIX); however, values of the humification index (HIX) increased spatially toward the weir. From the results of the water sample analysis, the river with the multi-functional weir is influenced by allochthonous rather than autochthonous DOM and seems to accumulate humic substances near the weir.
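The optical indices used here can be computed directly from an EEM. The sketch below follows the common literature conventions (FI as the 470/520 nm emission ratio at 370 nm excitation; HIX as the 435-480 nm emission band over the 300-345 nm band at 254 nm excitation); the abstract does not state which variants were used, and the dictionary layout and 5 nm wavelength grid are our assumptions.

```python
def fluorescence_index(eem):
    """FI: emission intensity at 470 nm over 520 nm, excitation 370 nm.

    `eem` maps (excitation_nm, emission_nm) -> intensity. Low FI (well
    below ~1.9) is conventionally read as allochthonous (terrestrial) DOM.
    """
    return eem[(370, 470)] / eem[(370, 520)]

def humification_index(eem, ex=254):
    """HIX: summed emission 435-480 nm over 300-345 nm at 254 nm excitation,
    assuming intensities on a 5 nm emission grid. Higher HIX means more
    humified (terrestrially influenced) DOM.
    """
    upper = sum(eem[(ex, em)] for em in range(435, 481, 5))
    lower = sum(eem[(ex, em)] for em in range(300, 346, 5))
    return upper / lower
```

Applied along the sampling transect, a spatial rise in HIX toward the weir would reproduce the humic-accumulation trend the abstract reports.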
NASA Technical Reports Server (NTRS)
Engin, Doruk; Mathason, Brian; Stephen, Mark; Yu, Anthony; Cao, He; Fouron, Jean-Luc; Storm, Mark
2016-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters, allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are typically sampled at a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product because of the 40x longer range and faster along-track velocity compared with the airborne instrument. Initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope, and a 10 second integration time, a laser energy of 2 mJ is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power scaling efforts to enable space-based lidar measurement of CO2 concentrations. These include a multiple-aperture approach consisting of multi-element large-mode-area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate glass based planar waveguide amplifier (PWA). In this paper we present our laser amplifier design approaches and preliminary results.
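The space-based energy requirement follows from a simple link-budget scaling: the detected photon count grows with pulse energy, telescope area, and integration time, and falls with range squared. A toy sketch of that scaling only (a real link budget adds atmospheric transmission, detector efficiency, and noise terms, none of which are modeled here):

```python
import math

def required_pulse_energy(e_ref, r_ref, r_new, d_ref, d_new, t_ref, t_new):
    """Scale pulse energy so that the detected photon count, taken as
    proportional to E * telescope_area * integration_time / range**2,
    is preserved between a reference and a new configuration."""
    area = lambda d: math.pi * (d / 2.0) ** 2  # telescope aperture area
    return (e_ref * (r_new / r_ref) ** 2
            * (area(d_ref) / area(d_new)) * (t_ref / t_new))
```

For example, doubling the range alone quadruples the required energy, while doubling the telescope diameter alone cuts it to a quarter.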
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
2016-02-02
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
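The core of the center-selection step is a two-objective non-dominated sort over previously evaluated points. A minimal sketch of that idea, based on a reading of the abstract rather than the authors' implementation (the points and objectives below are hypothetical):

```python
import math

def dominates(a, b):
    """a dominates b when a is no worse in every objective (minimisation)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Points not dominated by any other point (the first sorted front)."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

def sop_objectives(x, fx, evaluated):
    """The two SOP objectives as described in the abstract: the expensive
    function value, and the negated minimum distance to previously
    evaluated points (negated so that both objectives are minimised)."""
    return (fx, -min(math.dist(x, y) for y in evaluated))
```

Centers would then be drawn front by front, skipping any center currently on the tabu list.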
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
NASA Astrophysics Data System (ADS)
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
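One of the four correlation functions named above, the two-point probability function, can be sketched in a few lines. The binary image below is a toy stand-in for a real pore-phase map, and only the horizontal direction is sampled:

```python
import numpy as np

def two_point_probability(img, max_r):
    """S2(r) along the horizontal axis: the probability that two pixels a
    distance r apart both lie in the pore phase (nonzero entries). During
    annealing-based reconstruction, pixel swaps are accepted or rejected
    according to how they move such statistics toward the target's."""
    img = np.asarray(img, dtype=bool)
    s2 = []
    for r in range(max_r + 1):
        a = img[:, : img.shape[1] - r]  # left pixel of each pair
        b = img[:, r:]                  # right pixel, offset by r
        s2.append(float(np.mean(a & b)))
    return s2
```

Note that S2(0) is simply the porosity, which gives a quick sanity check on any implementation.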
Timmermans, Catherine; Doffagne, Erik; Venet, David; Desmet, Lieven; Legrand, Catherine; Burzykowski, Tomasz; Buyse, Marc
2016-01-01
Data quality may impact the outcome of clinical trials; hence, there is a need to implement quality control strategies for the data collected. Traditional approaches to quality control have primarily used source data verification during on-site monitoring visits, but these approaches are hugely expensive as well as ineffective. There is growing interest in central statistical monitoring (CSM) as an effective way to ensure data quality and consistency in multicenter clinical trials. CSM with SMART™ uses advanced statistical tools that help identify centers with atypical data patterns which might be the sign of an underlying quality issue. This approach was used to assess the quality and consistency of the data collected in the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, involving 1495 patients across 232 centers in Japan. In the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, very few atypical data patterns were found among the participating centers, and none of these patterns were deemed to be related to a quality issue that could significantly affect the outcome of the trial. CSM can be used to provide a check of the quality of the data from completed multicenter clinical trials before analysis, publication, and submission of the results to regulatory agencies. It can also form the basis of a risk-based monitoring strategy in ongoing multicenter trials. CSM aims at improving data quality in clinical trials while also reducing monitoring costs.
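The underlying idea of central statistical monitoring, flagging centers whose data look atypical relative to all other centers, can be illustrated with a deliberately simplified outlier test. SMART itself uses far richer multivariate statistics; the z-score rule and the data below are only a sketch:

```python
import statistics

def flag_atypical_centers(center_values, z_thresh=3.0):
    """Flag centers whose mean reported value is an outlier relative to the
    distribution of all center means. A toy stand-in for CSM's tests."""
    means = {c: statistics.mean(v) for c, v in center_values.items()}
    overall = statistics.mean(means.values())
    spread = statistics.stdev(means.values())
    return [c for c, m in means.items() if abs(m - overall) / spread > z_thresh]
```

A flagged center is a prompt for targeted review, not proof of a quality issue, which matches how such findings were interpreted in the trial described above.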
Mar, Alan [Albuquerque, NM; Zutavern, Fred J [Albuquerque, NM; Loubriel, Guillermo [Albuquerque, NM
2007-02-06
An improved photoconductive semiconductor switch comprises multiple-line optical triggering of multiple, high-current parallel filaments between the switch electrodes. The switch can also have a multi-gap, interdigitated electrode for the generation of additional parallel filaments. Multi-line triggering can increase the switch lifetime at high currents by increasing the number of current filaments and reducing the current density at the contact electrodes in a controlled manner. Furthermore, the improved switch can mitigate the degradation of switching conditions with increased number of firings of the switch.
1960-01-01
This small group of unidentified officials is dwarfed by the gigantic size of the Saturn V first stage (S-1C) at the shipping area of the Manufacturing Engineering Laboratory at Marshall Space Flight Center in Huntsville, Alabama. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
Two Parallel Olfactory Pathways for Processing General Odors in a Cockroach
Watanabe, Hidehiro; Nishino, Hiroshi; Mizunami, Makoto; Yokohari, Fumio
2017-01-01
In animals, sensory processing via parallel pathways, including the olfactory system, is a common design. However, the mechanisms that parallel pathways use to encode highly complex and dynamic odor signals remain unclear. In the current study, we examined the anatomical and physiological features of parallel olfactory pathways in an evolutionarily basal insect, the cockroach Periplaneta americana. In this insect, the entire system for processing general odors, from olfactory sensory neurons to higher brain centers, is anatomically segregated into two parallel pathways. Two separate populations of secondary olfactory neurons, type1 and type2 projection neurons (PNs), with dendrites in distinct glomerular groups relay olfactory signals to segregated areas of higher brain centers. We conducted intracellular recordings, revealing olfactory properties and temporal patterns of both types of PNs. Generally, type1 PNs exhibit higher odor-specificities to nine tested odorants than type2 PNs. Cluster analyses revealed that odor-evoked responses were temporally complex and varied in type1 PNs, while type2 PNs exhibited phasic on-responses with either early or late latencies to an effective odor. The late responses are 30–40 ms later than the early responses. Simultaneous intracellular recordings from two different PNs revealed that a given odor activated both types of PNs with different temporal patterns, and latencies of early and late responses in type2 PNs might be precisely controlled. Our results suggest that the cockroach is equipped with two anatomically and physiologically segregated parallel olfactory pathways, which might employ different neural strategies to encode odor information. PMID:28529476
NASA Technical Reports Server (NTRS)
Kemp, E.; Jacob, J.; Rosenberg, R.; Jusem, J. C.; Emmitt, G. D.; Wood, S.; Greco, L. P.; Riishojgaard, L. P.; Masutani, M.; Ma, Z.;
2013-01-01
NASA Goddard Space Flight Center's Software Systems Support Office (SSSO) is participating in a multi-agency study of the impact of assimilating Doppler wind lidar observations on numerical weather prediction. Funded by NASA's Earth Science Technology Office, SSSO has worked with Simpson Weather Associates to produce time series of synthetic lidar observations mimicking the OAWL and WISSCR lidar instruments deployed on the International Space Station. In addition, SSSO has worked to assimilate a portion of these observations (those drawn from the NASA fvGCM Nature Run) into the NASA GEOS-DAS global weather prediction system in a series of Observing System Simulation Experiments (OSSEs). These OSSEs will complement parallel OSSEs prepared by the Joint Center for Satellite Data Assimilation and by NOAA's Atlantic Oceanographic and Meteorological Laboratory. In this talk, we will describe our procedure and provide available OSSE results.
Scalability and Portability of Two Parallel Implementations of ADI
NASA Technical Reports Server (NTRS)
Phung, Thanh; VanderWijngaart, Rob F.
1994-01-01
Two domain decompositions for the implementation of the NAS Scalar Penta-diagonal Parallel Benchmark on MIMD systems are investigated, namely transposition and multi-partitioning. Hardware platforms considered are the Intel iPSC/860 and Paragon XP/S-15, and clusters of SGI workstations on ethernet, communicating through PVM. It is found that the multi-partitioning strategy offers the kind of coarse granularity that allows scaling up to hundreds of processors on a massively parallel machine. Moreover, efficiency is retained when the code is ported verbatim (save message passing syntax) to a PVM environment on a modest size cluster of workstations.
CPU timing routines for a CONVEX C220 computer system
NASA Technical Reports Server (NTRS)
Bynum, Mary Ann
1989-01-01
The timing routines available on the CONVEX C220 computer system in the Structural Mechanics Division (SMD) at NASA Langley Research Center are examined. The function of the timing routines, the use of the timing routines in sequential, parallel, and vector code, and the interpretation of the results from the timing routines with respect to the CONVEX model of computing are described. The timing routines available on the SMD CONVEX fall into two groups. The first group includes standard timing routines generally available with UNIX 4.3 BSD operating systems, while the second group includes routines unique to the SMD CONVEX. The standard timing routines described in this report are /bin/csh time,/bin/time, etime, and ctime. The routines unique to the SMD CONVEX are getinfo, second, cputime, toc, and a parallel profiling package made up of palprof, palinit, and palsum.
NASA Astrophysics Data System (ADS)
Ma, Sangback
In this paper we compare various parallel preconditioners such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse) and pARMS (Parallel Algebraic Recursive Multilevel Solver) for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism available in the natural order, but the lengths of the wavefronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates compared with that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by a least squares method. Finally, ARMS is a preconditioner that recursively exploits the concept of independent sets, and pARMS is its parallel version. Experiments were conducted for Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large mesh sizes of up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives.
The results show that in general ILU(0) in the Multi-Color ordering and ILU(0) in the Wavefront ordering outperform the other methods, but for symmetric and nearly symmetric 5-point matrices Multi-Color Block SOR gives the best performance, except for a few cases with a small number of processors.
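The multi-color idea these preconditioners build on can be illustrated with the classic two-color (red-black) SOR sweep for a 2-D Poisson problem: points of one color have no same-color neighbors, so each half-sweep could update all of its points in parallel. A sequential sketch of the update order only (the paper's block variant and distributed-memory details are not reproduced):

```python
import numpy as np

def sor_red_black(u, f, h, omega=1.5, sweeps=50):
    """Red-black SOR for the 5-point discretization of -laplacian(u) = f.
    Boundary values of u are held fixed; interior points are updated in
    two same-color groups per sweep, each group independently updatable."""
    for _ in range(sweeps):
        for parity in (0, 1):  # red points, then black points
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == parity:
                        gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                     + u[i, j - 1] + u[i, j + 1]
                                     + h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])  # over-relaxation
    return u
```

In a parallel implementation each color's loop body becomes one fully concurrent update, with one communication step between the two half-sweeps.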
[Eye movement study in multiple object search process].
Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin
2017-04-01
The aim of this study was to investigate the regulation of search time and the characteristics of eye movement behavior in multi-object visual search. The experimental task was programmed on a computer and presented characters on a 24-inch display. The subjects were asked to search for three targets among the characters. The three target characters in the same group were highly similar to one another, while different groups of target characters and distraction characters had different degrees of similarity. We recorded search time and eye movement data throughout the experiment. The eye movement data showed that the number of fixation points was large when the target characters and distraction characters were similar. The subjects exhibited three visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern had the best search performance of the three; that is, subjects who used the parallel-serial search pattern found the targets in the shortest time. The order in which the targets were presented significantly affected search performance, as did the degree of similarity between target characters and distraction characters.
Ihm, Sang-Hyun; Jeon, Hui-Kyung; Chae, Shung Chull; Lim, Do-Sun; Kim, Kee-Sik; Choi, Dong-Ju; Ha, Jong-Won; Kim, Dong-Soo; Kim, Kye Hun; Cho, Myeong-Chan; Baek, Sang Hong
2013-01-01
Central blood pressure (BP) is pathophysiologically more important than peripheral BP for the pathogenesis of cardiovascular disease. Arterial stiffness is also a good predictor of cardiovascular morbidity and mortality. The effects of benidipine, a unique dual L-/T-type calcium channel blocker, on central BP have not been reported. This study aimed to compare the effects of benidipine and losartan on central BP and arterial stiffness in mild to moderate essential hypertensives. This 24-week, multi-center, open-label, randomized, active-drug-comparative, parallel-group study was designed as a non-inferiority study. The eligible patients (n = 200) were randomly assigned to receive benidipine (n = 101) or losartan (n = 99). Radial artery applanation tonometry and pulse wave analysis were used to measure central BP, pulse wave velocity (PWV) and augmentation index (AIx). We also measured metabolic and inflammatory markers. After 24 weeks, central BP decreased significantly from baseline by 16.8 ± 14.0/10.5 ± 9.2 mmHg (1 mmHg = 0.133 kPa) (systolic/diastolic BP; P < 0.001) in the benidipine group and 18.9 ± 14.7/12.1 ± 10.2 mmHg (P < 0.001) in the losartan group. Both the benidipine and losartan groups showed significantly lowered peripheral BP (P < 0.001) and AIx (P < 0.05), but there were no significant differences between the two groups. The mean aortic, brachial and femoral PWV did not change in either group after the 24-week treatment. There were no significant changes in blood metabolic or inflammatory biomarkers in either group. Benidipine is as effective as losartan in lowering central and peripheral BP and improving arterial stiffness.
Fathi, Yasamin; Faghih, Shiva; Zibaeenezhad, Mohammad Javad; Tabatabaei, Sayed Hamid Reza
2016-02-01
Controversy exists regarding whether increasing dairy intake without energy restriction would lead to weight loss. We aimed to compare the potential weight-reducing effects of kefir drink (a probiotic dairy product) and milk in a dairy-rich non-energy-restricted diet in overweight or obese premenopausal women. One hundred and forty-four subjects were assessed for eligibility in this single-center, multi-arm, parallel-group, randomized controlled trial. Of these, seventy-five eligible women aged 25-45 years were randomly assigned to three groups, labeled as control, milk, and kefir, to receive an outpatient dietary regimen for 8 weeks. Subjects in the control group received a diet providing a maintenance level of energy intake, containing 2 servings/day of low-fat dairy products, while those in the milk and kefir groups received a weight maintenance diet, containing 2 additional servings/day (a total of 4 servings/day) of dairy products from low-fat milk or commercial kefir drink, respectively. Anthropometric outcomes including weight, body mass index (BMI), and waist circumference (WC) were measured every 2 weeks. Fifty-eight subjects completed the study. Using analysis of covariance models in the intention-to-treat population (n = 75), we found that at 8 weeks, subjects in the kefir and milk groups had significantly greater reductions in weight, BMI, and WC compared to those in the control group (all p < 0.01). However, no such significant differences were found between the kefir and milk groups. Kefir drink leads to a similar weight loss, compared with milk, in a dairy-rich non-energy-restricted diet in overweight or obese premenopausal women. However, further studies are warranted.
Parallel LC circuit model for multi-band absorption and preliminary design of radiative cooling.
Feng, Rui; Qiu, Jun; Liu, Linhua; Ding, Weiqiang; Chen, Lixue
2014-12-15
We perform a comprehensive analysis of multi-band absorption by exciting magnetic polaritons in the infrared region. Based on the independent properties of the magnetic polaritons, we propose a parallel inductance and capacitance (PLC) circuit model to explain and predict the multi-band resonant absorption peaks, which is fully validated using a multi-sized structure with an identical dielectric spacing layer and a multilayer structure with the same strip width. More importantly, we present the application of the PLC circuit model to the preliminary design of a radiative cooling structure, realized by merging several close peaks together. This omnidirectional and polarization-insensitive structure is a good candidate for radiative cooling applications.
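In such a model, each branch contributes an absorption peak at its LC resonance, and close peaks merge into a broad band. A minimal sketch with hypothetical circuit values (the nontrivial part, mapping strip geometry to effective L and C, is not reproduced here):

```python
import math

def resonance_frequency(L, C):
    """Resonance of one LC branch: f = 1 / (2*pi*sqrt(L*C)). In a parallel
    LC picture each strip size contributes its own branch, hence its own
    absorption peak; merging close peaks broadens the absorption band."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical per-branch inductances and capacitances for three strip sizes.
branches = [(1.0e-12, 2.0e-15), (1.3e-12, 2.0e-15), (1.7e-12, 2.0e-15)]
peaks = [resonance_frequency(L, C) for L, C in branches]
```

Because the branches are taken as independent, each peak can be tuned by resizing one strip without shifting the others, which is what makes the design procedure tractable.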
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahara, Shinya; Kawai, Nobuyuki; Sato, Morio, E-mail: morisato@mail.wakayama-med.ac.jp
Purpose: To compare the efficacy of transcatheter arterial chemoembolization (TACE) using multiple anticancer drugs (epirubicin, cisplatin, mitomycin C, and 5-fluorouracil: Multi group) with TACE using epirubicin alone (EP group) for hepatocellular carcinoma (HCC). Materials and Methods: The study design was a single-center, prospective, randomized controlled trial. Patients with unresectable HCC confined to the liver, unsuitable for radiofrequency ablation, were assigned to the Multi group or the EP group. We assessed radiographic response as the primary endpoint; secondary endpoints were progression-free survival (PFS), safety, and hepatic branch artery abnormality (Grade I, no damage or mild vessel wall irregularity; Grade II, overt stenosis; Grade III, occlusion; Grades II and III indicated significant hepatic artery damage). A total of 51 patients were enrolled: 24 in the Multi group vs. 27 in the EP group. Results: No significant difference in HCC patient background was found between the groups. Radiographic response, PFS, and 1- and 2-year overall survival of the Multi vs. EP group were 54% vs. 48%, 6.1 months vs. 8.7 months, and 95% and 65% vs. 85% and 76%, respectively, with no significant difference. Significantly greater Grade 3 transaminase elevation was found in the Multi group (p = 0.023). Hepatic artery abnormality was observed in 34% of the Multi group and in 17.1% of the EP group (p = 0.019). Conclusion: TACE with multiple anticancer drugs was tolerable but appeared not to contribute to an increase in radiographic response or PFS, and caused significantly more hepatic arterial abnormalities than TACE with epirubicin alone.
An embedded multi-core parallel model for real-time stereo imaging
NASA Astrophysics Data System (ADS)
He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu
2018-04-01
Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC computers. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computing load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
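A two-stage pipeline of the kind described can be sketched with two threads exchanging messages through a bounded queue. Python threads stand in for the paper's DSP cores here, so this mirrors the model only in spirit:

```python
import queue
import threading

def pipeline(items, stage1, stage2):
    """Two-stage message-passing pipeline: stage 1 and stage 2 run in
    separate threads and overlap in time. The bounded queue provides both
    the message channel and back-pressure between the stages."""
    q = queue.Queue(maxsize=4)
    results = []

    def producer():
        for item in items:
            q.put(stage1(item))  # stage 1 work, then hand off
        q.put(None)              # sentinel: no more messages

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(stage2(item))  # stage 2 work

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start(); t1.join(); t2.join()
    return results
```

With balanced stage costs, the steady-state throughput approaches that of the slower stage alone, which is the point of splitting the imaging chain into two pipeline stages.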
Pladevall, Manel; Brotons, Carlos; Gabriel, Rafael; Arnau, Anna; Suarez, Carmen; de la Figuera, Mariano; Marquez, Emilio; Coca, Antonio; Sobrino, Javier; Divine, George; Heisler, Michele; Williams, L Keoki
2010-01-01
Background Medication non-adherence is common and results in preventable disease complications. This study assesses the effectiveness of a multifactorial intervention to improve both medication adherence and blood pressure control and to reduce cardiovascular events. Methods and Results In this multi-center, cluster-randomized trial, physicians from hospital-based hypertension clinics and primary care centers across Spain were randomized to receive and provide the intervention to their high-risk patients. Eligible patients were ≥50 years of age, had uncontrolled hypertension, and had an estimated 10-year cardiovascular risk greater than 30%. Physicians randomized to the intervention group counted patients’ pills, designated a family member to support adherence behavior, and provided educational information to patients. The primary outcome was blood pressure control at 6 months. Secondary outcomes included both medication adherence and a composite end-point of all cause mortality and cardiovascular-related hospitalizations. Seventy-nine physicians and 877 patients participated in the trial. The mean duration of follow-up was 39 months. Intervention patients were less likely to have an uncontrolled systolic blood pressure (odds ratio 0.62; 95% confidence interval [CI] 0.50–0.78) and were more likely to be adherent (OR 1.91; 95% CI 1.19–3.05) when compared with control group patients at 6 months. After five years 16% of the patients in the intervention group and 19% in the control group met the composite end-point (hazard ratio 0.97; 95% CI 0.67–1.39). Conclusions A multifactorial intervention to improve adherence to antihypertensive medication was effective in improving both adherence and blood pressure control, but it did not appear to improve long-term cardiovascular events. PMID:20823391
USDA-ARS?s Scientific Manuscript database
The Comprehensive Assessment of the Long-term Effects of Reducing Intake of Energy Phase 2 (CALERIE) study is a systematic investigation of sustained 25% calorie restriction (CR) in non-obese humans. CALERIE is a multicenter (3 clinical sites, one coordinating center), parallel group, randomized con...
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics-card-based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented to accelerate computationally intensive problems in various computational science fields, including bioinformatics, in which big-data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both the Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. The block-based approach, developed in this study to better utilize the CUDA architecture, parallelizes over smaller parts of the fMRI time courses obtained by sliding windows. Experimental results showed that the proposed parallel designs enabled by GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations, ranging from 18.5× to 157× speed-up once the thread-based and block-based approaches were combined. These results show that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly, making them more practical for multi-subject studies and more dynamic analyses.
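The sliding-window computation at the heart of DFC analysis is embarrassingly parallel across windows, which is what both the thread-based and block-based designs exploit. A serial sketch of the windowed Pearson correlation between two time courses (synthetic data, not the study's pipeline):

```python
import numpy as np

def sliding_window_correlation(x, y, win):
    """Dynamic functional connectivity between two time courses: the
    Pearson correlation inside each sliding window. Every window is
    independent of the others, so a parallel version assigns windows
    (or blocks of windows) to separate threads or GPU blocks."""
    n = len(x) - win + 1
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(n)])
```

For region pairs rather than a single pair, the same loop runs over every pair of time courses, multiplying the available parallelism.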
NASA Astrophysics Data System (ADS)
Lohn, Stefan B.; Dong, Xin; Carminati, Federico
2012-12-01
Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Improved performance can no longer be obtained by increasing clock frequency, because the technical limits have almost been reached; instead, parallel execution must be used to gain performance. Resources like main memory, the cache hierarchy, the bandwidth of the memory bus, and the links between cores and sockets are not improving as fast. Hence, parallelism can only yield performance gains if memory usage is optimized and communication between threads is minimized. Moreover, concurrent programming has become a domain for experts: implementing multi-threading is error-prone and labor-intensive. A full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation, addressing the problems described above and providing a straightforward way of parallelization with almost no interference between threads. This makes the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety is introduced to bring the original sequential, thread-unaware source code into a position to utilize multi-threading. Further investigations then identify candidate classes that are useful to share among threads. In a second step, the transformation changes the code to share these classes, and finally it is verified that no invalid interference between threads remains.
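One standard way to introduce unconditional thread-safety is to turn shared mutable globals into per-thread storage, so threads cannot interfere by construction. A sketch of that idea in Python (illustrating the principle only, not the actual C++ transformation applied to AliRoot):

```python
import threading

# Before: a module-level mutable global shared by all threads (unsafe).
# After the kind of transformation described: per-thread storage, so each
# thread sees and mutates only its own copy of the state.
_state = threading.local()

def get_buffer():
    """Return this thread's private buffer, creating it on first use."""
    if not hasattr(_state, "buf"):
        _state.buf = []
    return _state.buf
```

The cost is duplicated state per thread, which is why a later step selects which classes are actually safe and worthwhile to share among threads.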
A multi-satellite orbit determination problem in a parallel processing environment
NASA Technical Reports Server (NTRS)
Deakyne, M. S.; Anderle, R. J.
1988-01-01
The Engineering Orbit Analysis Unit at GE Valley Forge used an Intel Hypercube parallel processor to investigate the performance of parallel processors, and to gain experience with them, on a multi-satellite orbit determination problem. A general study was selected in which major blocks of computation for the multi-satellite orbit computations were used as the units assigned to the various processors on the Hypercube, so that problems encountered or successes achieved in addressing the orbit determination problem would be more likely to transfer to other parallel processors. The prime objective was to study the algorithm to allow processing of observations later in time than those employed in the state update. Expertise in ephemeris determination was exploited in addressing these problems, and the facility was used to bring a realism to the study that would highlight problems which might not otherwise be anticipated. Secondary objectives were to gain experience with a non-trivial problem in a parallel processing environment; to explore the necessary interplay of serial and parallel sections of the algorithm in terms of timing studies; and to explore granularity (coarse versus fine grain), in particular the upper limit above which there is a risk of starvation, with the majority of nodes idle, and the lower limit below which the overhead of splitting the problem may cost more work and communication time than it saves.
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), the search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and the other existing algorithms in generating OGRs for optical WDM systems. PHMOBA also has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, compared with 85% for the original MOBA. Finally, the implications for further research are discussed.
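A Golomb ruler is defined by the property that all pairwise differences between its marks are distinct; the shortest such ruler for a given number of marks is the OGR that the abstract's algorithms search for. A minimal validity check (illustrative only, not the proposed MOBA/PHMOBA):

```python
from itertools import combinations

def is_golomb_ruler(marks):
    # All pairwise differences between marks must be distinct.
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# A known optimal (shortest) 5-mark Golomb ruler, of length 11:
ogr5 = (0, 1, 4, 9, 11)
```

The search space grows rapidly with the number of marks, which is why heuristic and parallel approaches such as those above become attractive beyond small orders.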
Neural simulations on multi-core architectures.
Eichner, Hubert; Klug, Tobias; Borst, Alexander
2009-01-01
Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.
Multi-Probe SPM using Interference Patterns for a Parallel Nano Imaging
NASA Astrophysics Data System (ADS)
Koyama, Hirotaka; Oohira, Fumikazu; Hosogi, Maho; Hashiguchi, Gen
This paper proposes a new multi-probe composition using optical interference patterns for parallel nano-imaging over a large scanning area. We achieved large-scale integration of 50,000 probes fabricated with MEMS technology and measured the optical interference patterns with a CCD, which was difficult with a conventional single scanning probe. In this research, the multi-probes are made of Si3N4 by a MEMS process and are joined to a Pyrex glass substrate by anodic bonding. We designed, fabricated, and evaluated the characteristics of the probe. In addition, we changed the probe shape to decrease the warpage of the Si3N4 probe. We used supercritical drying to avoid stiction of the Si3N4 probe to the glass surface and fabricated four types of probe shapes without stiction. We captured interference patterns with the CCD, measured their positions, calculated the probe height from the interference displacement, and compared the result with the theoretical deflection curve. The interference patterns matched the theoretical deflection curve. We found that this multi-probe chip using interference patterns is effective for parallel nano-imaging measurements.
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits two levels of parallelism: each blade row is run in parallel, and each blade row grid is decomposed into several domains that are also run in parallel. Twenty processors are used for the 4-blade-row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) so that the solution in the entire machine is no longer changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee that the parallelization was done correctly. The domain decomposition is done only in the axial direction, since the number of points axially is much larger than in the other two directions. The code uses MPI for message passing.
The parallel speed-up of the solver portion (no I/O or body force calculation) is reported for a grid with 227 points axially.
A high performance parallel algorithm for 1-D FFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, R.C.; Gustavson, F.G.; Zubair, M.
1994-12-31
In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT-based kernel on a distributed memory parallel machine, the IBM scalable parallel system SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
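The multi-dimensional formulation referred to here is, in spirit, the classic four-step factorization: a length-N FFT is computed as column FFTs, a twiddle-factor multiply, row FFTs, and a transpose, a structure that maps naturally onto distributed memory. A small single-node sketch (assuming NumPy; not the authors' SP1 code):

```python
import numpy as np

def four_step_fft(x, n1, n2):
    # 1-D FFT of length n1*n2 via a multi-dimensional (four-step) formulation:
    # column FFTs, twiddle multiply, row FFTs, then a transpose.
    A = np.asarray(x, dtype=complex).reshape(n1, n2)
    B = np.fft.fft(A, axis=0)                         # n2 FFTs of length n1
    p = np.arange(n1)[:, None]
    k = np.arange(n2)[None, :]
    C = B * np.exp(-2j * np.pi * p * k / (n1 * n2))   # twiddle factors
    D = np.fft.fft(C, axis=1)                         # n1 FFTs of length n2
    return D.T.ravel()                                # output in natural order
```

On a distributed machine each node holds a block of rows or columns, so communication is concentrated in the transpose step, which is the cost the multi-dimensional formulation reduces.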
Lee, Hee Yeon; Lee, Kyung Hee; Kim, Bong-Seog; Song, Hong Suk; Yang, Sung Hyun; Kim, Joon Hee; Kim, Yeul Hong; Kim, Jong Gwang; Kim, Sang-We; Kim, Dong-Wan; Kim, Si-Young; Park, Hee Sook
2014-01-01
Purpose This study was conducted to evaluate the efficacy and safety of azasetron compared to ondansetron in the prevention of delayed chemotherapy-induced nausea and vomiting. Materials and Methods This study was a multi-center, prospective, randomized, double-dummy, double-blind and parallel-group trial involving 12 institutions in Korea between May 2005 and December 2005. A total of 265 patients with moderately and highly emetogenic chemotherapy were included and randomly assigned to either the azasetron or ondansetron group. All patients received azasetron (10 mg intravenously) and dexamethasone (20 mg intravenously) on day 1 and dexamethasone (4 mg orally every 12 hours) on days 2-4. The azasetron group received azasetron (10 mg orally) with placebo of ondansetron (orally every 12 hours), and the ondansetron group received ondansetron (8 mg orally every 12 hours) with placebo of azasetron (orally) on days 2-6. Results Over days 2-6, the effective ratio of complete response in the azasetron and ondansetron groups was 45% and 54.5%, respectively (95% confidence interval, -21.4 to 2.5%). Thus, the non-inferiority of azasetron compared with ondansetron in delayed chemotherapy-induced nausea and vomiting was not proven in the present study. All treatments were well tolerated and no unexpected drug-related adverse events were reported. The most common adverse events related to the treatment were constipation and hiccups, and there were no differences in the overall incidence of adverse events. Conclusion In the present study, azasetron showed inferiority in the control of delayed chemotherapy-induced nausea and vomiting compared with ondansetron whereas safety profiles were similar between the two groups. PMID:24520219
Botanicals and Hepatotoxicity.
Roytman, Marina M; Poerzgen, Peter; Navarro, Victor
2018-06-19
The use of botanicals, often in the form of multi-ingredient herbal dietary supplements (HDS), has grown tremendously in the past three decades despite their unproven efficacy. This is paralleled by an increase in dietary supplement-related health complications, notably hepatotoxicity. This article reviews the demographics and motivations of dietary supplement (DS) consumers and the regulatory framework for DS in the US and other developed countries. It examines in detail three groups of multi-ingredient HDS associated with hepatotoxicity: OxyElite Pro (two formulations), green tea extract-based DS, and "designer anabolic steroids." These examples illustrate the difficulties in identifying and adjudicating causality of suspect compound(s) of multi-ingredient HDS-associated liver injury in the clinical setting. The article outlines future directions for further study of HDS-associated hepatotoxicity as well as measures to safeguard the consumer against it. © 2018, The American Society for Clinical Pharmacology and Therapeutics.
Numerical and Analytical Model of an Electrodynamic Dust Shield for Solar Panels on Mars
NASA Technical Reports Server (NTRS)
Calle, C. I.; Linell, B.; Chen, A.; Meyer, J.; Clements, S.; Mazumder, M. K.
2006-01-01
Masuda and collaborators at the University of Tokyo developed a method to confine and transport particles called the electric curtain in which a series of parallel electrodes connected to an AC source generates a traveling wave that acts as a contactless conveyor. The curtain electrodes can be excited by a single-phase or a multi-phase AC voltage. A multi-phase curtain produces a non-uniform traveling wave that provides controlled transport of those particles [1-6]. Multi-phase electric curtains from two to six phases have been developed and studied by several research groups [7-9]. We have developed an Electrodynamic Dust Shield prototype using three-phase AC voltage electrodes to remove dust from surfaces. The purpose of the modeling work presented here is to research and to better understand the physics governing the electrodynamic shield, as well as to advance and to support the experimental dust shield research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Matthew R
2014-01-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
2013-01-01
Background Episodic cluster headache (ECH) is a primary headache disorder that severely impairs patients' quality of life. The first-line agent for initiating a prophylactic treatment is verapamil. Because of its delayed onset of efficacy and the slow dose titration necessary for tolerability, clinicians frequently add prednisone to the initial prophylactic treatment of a cluster episode. This treatment strategy is thought to effectively reduce the number and intensity of cluster attacks at the beginning of a cluster episode (before verapamil becomes effective). This study will assess the efficacy and safety of oral prednisone as an add-on therapy to verapamil and compare it to monotherapy with verapamil in the initial prophylactic treatment of a cluster episode. Methods and design PredCH is a prospective, randomized, double-blind, placebo-controlled trial with parallel study arms. Eligible patients with episodic cluster headache will be randomized to a treatment intervention with prednisone or to a placebo arm. The multi-center trial will be conducted in eight German headache clinics that specialize in the treatment of ECH. Discussion PredCH is designed to assess whether oral prednisone added to the first-line agent verapamil helps reduce the number and intensity of cluster attacks at the beginning of a cluster episode as compared to monotherapy with verapamil. Trial registration German Clinical Trials Register DRKS00004716 PMID:23889923
Parallel Work of CO2 Ejectors Installed in a Multi-Ejector Module of Refrigeration System
NASA Astrophysics Data System (ADS)
Bodys, Jakub; Palacz, Michal; Haida, Michal; Smolka, Jacek; Nowak, Andrzej J.; Banasiak, Krzysztof; Hafner, Armin
2016-09-01
A performance analysis of fixed ejectors installed in a multi-ejector module of a CO2 refrigeration system is presented in this study. The serial and parallel work of the four fixed-geometry units that compose the multi-ejector pack was analysed. The numerical simulations were performed using a validated Homogeneous Equilibrium Model (HEM) and the computational tool ejectorPL, for typical transcritical parameters at the motive nozzle, in all the tests. A wide range of operating conditions for supermarket applications in three different European climate zones was taken into consideration. The obtained results show high and stable performance of all the ejectors in the multi-ejector pack.
NASA Astrophysics Data System (ADS)
Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.
2015-12-01
We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption functions to terminate simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions.
Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration and in many cases linear or near linear speedups are observed.
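A serial sketch of the two ideas combined in this work, DDS perturbation and deterministic model pre-emption, assuming an additive non-negative objective (e.g., a sum of squared errors); function and parameter names are illustrative, not from the Ostrich package:

```python
import random, math

def dds_preemptive(obj_terms, x0, bounds, budget, r=0.2, seed=0):
    # DDS: perturb a randomly chosen, shrinking subset of decision variables.
    # Pre-emption: abandon an evaluation as soon as its partial objective sum
    # already exceeds the best total found so far (valid because the terms
    # are additive and non-negative, e.g. per-timestep squared errors).
    rng = random.Random(seed)
    best_x, best_f = list(x0), sum(t(x0) for t in obj_terms)
    for i in range(1, budget + 1):
        p = 1.0 - math.log(i) / math.log(budget)    # DDS selection probability
        cand = list(best_x)
        dims = [d for d in range(len(cand)) if rng.random() < p] or \
               [rng.randrange(len(cand))]
        for d in dims:
            lo, hi = bounds[d]
            cand[d] = min(hi, max(lo, cand[d] + rng.gauss(0, r * (hi - lo))))
        f, preempted = 0.0, False
        for t in obj_terms:                          # incremental evaluation
            f += t(cand)
            if f > best_f:                           # pre-empt: cannot improve
                preempted = True
                break
        if not preempted and f <= best_f:
            best_x, best_f = cand, f
    return best_x, best_f
```

In the asynchronous parallel versions described above, each worker runs this evaluate-and-pre-empt loop independently, which is what allows pre-emption and parallelism to be combined.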
NASA Technical Reports Server (NTRS)
Seevers, P. M.; Lewis, D. T.; Drew, J. V.
1974-01-01
Interpretations of imagery from the Earth Resources Technology Satellite (ERTS-1) indicate that soil associations and attendant range sites can be identified on the basis of vegetation and topography using multi-temporal imagery. Optical density measurements of imagery from the visible red band of the multispectral scanner (MSS band 5) obtained during the growing season were related to field measurements of vegetative biomass, a factor that closely parallels range condition class on specific range sites. ERTS-1 imagery also permitted inventory and assessment of center-pivot irrigation systems in the Sand Hills region in relation to soil and topographic conditions and energy requirements.
Banana regime pressure anisotropy in a bumpy cylinder magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Perciante, A.L.; Callen, J.D.; Shaing, K.C.
The pressure anisotropy is calculated for a plasma in a bumpy cylindrical magnetic field in the low-collisionality (banana) regime for small magnetic-field modulations (ε ≡ δB/2B ≪ 1). Solutions are obtained by integrating the drift-kinetic equation along field lines in steady state. A closure for the local value of the parallel viscous force B·∇·π‖ is then calculated and is shown to exceed the flux-surface-averaged parallel viscous force by a factor of O(1/ε). A high-frequency limit (ω ≫ ν) for the pressure anisotropy is also determined, and the calculation is then extended to include the full frequency dependence by using an expansion in Cordey eigenfunctions.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial lessons learned from parallelizing the GDE algorithm. Directions for future work are outlined.
Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei
2016-11-10
Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. The performance of this optical mesh network is then analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the validity of the results for the average bit error rate (ABER) and outage probability. The numerical results prove that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when using the multi-hop parallel network in FSO links. Furthermore, using a larger number of hops and cooperative paths can improve the quality of the communication.
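Under best-path selection among independent parallel paths, the CDF of the selected (best) channel is simply the product of the per-path CDFs, which is the starting point for the ABER and outage analysis described above. A small illustrative helper (a generic property of selection combining, not the paper's specific GG/misalignment model):

```python
def best_path_cdf(path_cdfs, x):
    # Best-path selection picks the path with the largest channel gain.
    # For independent paths, P(max gain <= x) = product of per-path CDFs at x.
    out = 1.0
    for F in path_cdfs:
        out *= F(x)
    return out
```

Outage probability at a threshold then follows directly by evaluating this product CDF at the threshold gain.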
Multi-channel temperature measurement system for automotive battery stack
NASA Astrophysics Data System (ADS)
Lewczuk, Radoslaw; Wojtkowski, Wojciech
2017-08-01
A multi-channel temperature measurement system for monitoring an automotive battery stack is presented in this paper. The system is a complete battery temperature measurement solution for hybrid/electric vehicles. It incorporates multi-channel temperature measurements with digital temperature sensors communicating over 1-Wire buses, with an individual 1-Wire bus for each sensor so that measurements are made in parallel rather than sequentially, and an FPGA device that collects data from the sensors and translates it into CAN bus frames. The CAN bus is used for communication with the car's Battery Management System through an additional CAN bus controller that communicates with the FPGA device over an SPI bus. The described system can measure up to 12 temperatures in parallel and can easily be extended in the future if needed. The structure of the system as well as the particular devices are described in the paper. Selected results of experimental investigations, which show proper operation of the system, are presented as well.
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. The model applies a semi-implicit semi-Lagrangian (SISL) scheme to solve the dynamic wave equations and, with the assistance of the multi-mesh method, adaptively uses the dynamic wave equation only in areas of deep inundation. The model therefore achieves a balance between accuracy and computational cost.
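The CFL restriction motivating the adaptive meshes can be illustrated with the usual explicit shallow-water bound, dt ≤ C·Δx / max(|u| + √(gh)): halving the cell size halves the admissible time step, which is why uniformly high resolution is so expensive. A generic helper (illustrative only, not the paper's solver):

```python
import math

def cfl_timestep(dx, velocities, g, depths, courant=0.9):
    # Largest stable explicit time step for a shallow-water-type scheme:
    # dt <= C * dx / max(|u| + sqrt(g*h)) over all cells.
    wave_speed = max(abs(u) + math.sqrt(g * h)
                     for u, h in zip(velocities, depths))
    return courant * dx / wave_speed
```

Semi-implicit semi-Lagrangian schemes like the one used here relax this restriction, and the multi-mesh strategy keeps the fine (small-dx) meshes active only where the flood wave actually is.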
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
NASA Astrophysics Data System (ADS)
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
An imaging spectrometer acquires a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful for color and spectral measurements, true-color image synthesis, military reconnaissance and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to data from the HyperSpectral Imager of the Chinese `HJ-1' satellite. The results show that the method based on multi-core parallel computing manages the multi-core CPU hardware resources competently and significantly improves the efficiency of the spectrum reconstruction processing. If the technology is applied to parallel computing on a workstation with more cores, it will be possible to complete real-time processing of Fourier transform imaging spectrometer data with a single computer.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. 
Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
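The load-prediction scheduling idea above, splitting work between CPU and GPU in proportion to their predicted throughput so that both devices finish at roughly the same time, can be sketched as follows; the device names and rates are illustrative (the rates in the test simply reuse the abstract's 3.9× and 16.8× speedup figures), not the paper's implementation:

```python
def split_workload(total_steps, rates):
    # Load-prediction scheduling sketch: divide work among devices in
    # proportion to their measured throughput (work units per second),
    # so all devices finish at roughly the same time.
    total_rate = sum(rates.values())
    shares = {dev: int(total_steps * r / total_rate) for dev, r in rates.items()}
    # Hand any rounding remainder to the fastest device.
    fastest = max(rates, key=rates.get)
    shares[fastest] += total_steps - sum(shares.values())
    return shares
```

A dynamic scheduler would re-measure the rates periodically and recompute the split, so the partition tracks the actual relative performance of the CPU cores and the GPU.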
Breck, Andrew; Goodman, Ken; Dunn, Lillian; Stephens, Robert L; Dawkins, Nicola; Dixon, Beth; Jernigan, Jan; Kakietek, Jakub; Lesesne, Catherine; Lessard, Laura; Nonas, Cathy; O'Dell, Sarah Abood; Osuji, Thearis A; Bronson, Bernice; Xu, Ye; Kettel Khan, Laura
2014-10-16
This article describes the multi-method cross-sectional design used to evaluate New York City Department of Health and Mental Hygiene's regulations of nutrition, physical activity, and screen time for children aged 3 years or older in licensed group child care centers. The Center Evaluation Component collected data from a stratified random sample of 176 licensed group child care centers in New York City. Compliance with the regulations was measured through a review of center records, a facility inventory, and interviews of center directors, lead teachers, and food service staff. The Classroom Evaluation Component included an observational and biometric study of a sample of approximately 1,400 children aged 3 or 4 years attending 110 child care centers and was designed to complement the center component at the classroom and child level. The study methodology detailed in this paper may aid researchers in designing policy evaluation studies that can inform other jurisdictions considering similar policies.
Saturn V First Stage (S-1C) At MSFC
NASA Technical Reports Server (NTRS)
1960-01-01
This small group of unidentified officials is dwarfed by the gigantic size of the Saturn V first stage (S-1C) at the shipping area of the Manufacturing Engineering Laboratory at Marshall Space Flight Center in Huntsville, Alabama. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito
2016-11-15
A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing petaflop-class many-core supercomputers are presented. Some improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
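A dual-level hierarchical scheme of this kind can be pictured as splitting the MPI ranks into groups: coarse work units are distributed over groups, fine-grained work over the ranks within each group. A toy Python sketch of the bookkeeping (illustrative only; the real RI-MP2 code distributes integral batches over MPI communicators, and every name here is hypothetical):

```python
def two_level_layout(nranks, ngroups):
    """Assign each rank a (group, local) coordinate for a dual-level
    hierarchy: ranks are packed into equal-sized contiguous groups."""
    assert nranks % ngroups == 0
    per = nranks // ngroups
    return {r: (r // per, r % per) for r in range(nranks)}

def my_work(rank, layout, coarse_items, fine_items):
    """Round-robin the coarse items over groups and the fine items over
    local ranks; a rank works on its slice of both levels."""
    g, l = layout[rank]
    ngroups = max(grp for grp, _ in layout.values()) + 1
    per = len(layout) // ngroups
    coarse = [c for i, c in enumerate(coarse_items) if i % ngroups == g]
    fine = [f for i, f in enumerate(fine_items) if i % per == l]
    return coarse, fine
```

The point of the two levels is that communication-heavy reductions stay inside a group (node or rack), which is what allows scaling past 10,000 processes without all-to-all traffic over the full machine.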
Furumatsu, T; Kodama, Y; Fujii, M; Tanaka, T; Hino, T; Kamatsuki, Y; Yamada, K; Miyazawa, S; Ozaki, T
2017-05-01
Injuries to the medial meniscus (MM) posterior root lead to accelerated cartilage degeneration of the knee. Anatomic placement of the MM posterior root attachment is considered critical in transtibial pullout repair of the medial meniscus posterior root tear (MMPRT). However, tibial tunnel creation at the anatomic attachment of the MM posterior root is technically difficult using a conventional aiming device. The aim of this study was to compare two aiming guides. We hypothesized that a newly developed, purpose-designed guide would create the tibial tunnel at a more adequate position than a conventional device. Twenty-six patients underwent transtibial pullout repairs. Tibial tunnel creation was performed using the Multi-use guide (8 cases) or the PRT guide, which had a narrow twisting/curving shape (18 cases). Three-dimensional computed tomography images of the tibial surface were evaluated postoperatively using Tsukada's measurement method. The expected anatomic center of the MM posterior root attachment and the tibial tunnel center were evaluated using the percentage-based posterolateral location on the tibial surface. The percentage distance between the anatomic center and the tunnel center was calculated. The anatomic center of the MM posterior root footprint was located at a position 78.5% posterior and 39.4% lateral. Both tunnels were anteromedial, but the tibial tunnel center was located at a more favorable position in the PRT group: the percentage distance was significantly smaller in the PRT guide group (8.7%) than in the Multi-use guide group (13.1%). The PRT guide may offer a significant advantage in achieving a more anatomic tibial tunnel location in MMPRT pullout repair. Level of evidence: III. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Chen, Shicai; Shi, Song; Xia, Yanghui; Liu, Fei; Chen, Donghui; Zhu, Minhui; Li, Meng; Zheng, Hongliang
2014-01-01
To investigate changes in S3 sleep and the apnea hypopnea index (AHI), SpO2 desaturation and CT90, and to determine changes in the degree of airway collapse and in the cross-sectional area of the retropalatal and lingual regions in obstructive sleep apnea hypopnea syndrome patients. All subjects underwent overnight polysomnography and were evaluated using Müller's test and magnetic resonance imaging at baseline and at 3 and 12 months following surgery. The mean S3 scores in patients receiving uvulopalatopharyngoplasty combined with genioglossus advancement (UPPP-GA) or UPPP combined with tongue base advancement using the Repose™ system (UPPP-TBA) noticeably increased. Marked improvement was seen in the mean AHI, LSO2, and CT90 scores 3 and 12 months following surgery compared to baseline. Airway collapse of 25-50% was observed in the greatest proportion of patients undergoing surgery at the tongue base. UPPP-GA and UPPP-TBA more effectively improve S3 sleep and mean AHI, LSO2, and CT90 scores. In addition, they effectively alleviate airway obstruction by improving the cross-sectional area of these regions. © 2014 S. Karger AG, Basel.
User-centered design of multi-gene sequencing panel reports for clinicians.
Cutting, Elizabeth; Banchero, Meghan; Beitelshees, Amber L; Cimino, James J; Fiol, Guilherme Del; Gurses, Ayse P; Hoffman, Mark A; Jeng, Linda Jo Bone; Kawamoto, Kensaku; Kelemen, Mark; Pincus, Harold Alan; Shuldiner, Alan R; Williams, Marc S; Pollin, Toni I; Overby, Casey Lynnette
2016-10-01
The objective of this study was to develop a high-fidelity prototype for delivering multi-gene sequencing panel (GS) reports to clinicians that simulates the user experience of a final application. The delivery and use of GS reports can occur within complex and high-paced healthcare environments. We employed a user-centered software design approach in a focus group setting in order to facilitate gathering rich contextual information from a diverse group of stakeholders potentially impacted by the delivery of GS reports relevant to two precision medicine programs at the University of Maryland Medical Center. Responses from focus group sessions were transcribed, coded and analyzed by two team members. Notification mechanisms and information resources preferred by participants from our first phase of focus groups were incorporated into scenarios and the design of a software prototype for delivering GS reports. The goal of our second phase of focus groups, to gain input on the prototype software design, was accomplished by conducting task walkthroughs with GS reporting scenarios. Preferences for notification, content and consultation from genetics specialists appeared to depend upon familiarity with scenarios for ordering and delivering GS reports. Despite familiarity with some aspects of the scenarios we proposed, many of our participants agreed that they would likely seek consultation from a genetics specialist after viewing the test reports. In addition, participants offered design and content recommendations. Findings illustrated a need to support customized notification approaches, user-specific information, and access to genetics specialists with GS reports. These design principles can be incorporated into software applications that deliver GS reports. Our user-centered approach to conducting this assessment and the specific input we received from clinicians may also be relevant to others working on similar projects. Copyright © 2016 Elsevier Inc. All rights reserved.
Applicability Evaluation of Simplified Cognitive Behavioral Therapy.
Zhang, Li; Zhu, Zhipei; Fang, Fang; Shen, Yuan; Liu, Na; Li, Chunbo
2018-04-25
We have developed a structured cognitive behavioral therapy manual for anxiety disorder in China, and the present study evaluated the applicability of simplified cognitive behavioral therapy (SCBT) based on our previous research. To evaluate the applicability of SCBT for generalized anxiety disorder (GAD), a multi-center controlled clinical trial was conducted on patients with GAD at institutions specializing in mental health and at psychiatry units in general hospitals. The participants were divided into 3 groups: an SCBT group, an SCBT with medication group and a medication group. The drop-out rates of the three groups, the therapy satisfaction of patients who received SCBT, and the therapists' evaluations of SCBT were compared. (1) There was no significant difference among the drop-out rates in the three groups. (2) Only the duration and number of therapy sessions were significantly different between the two groups of patients who received SCBT, and the therapy satisfaction of the SCBT group was higher than that of the SCBT with medication group. (3) Eighteen therapists who conducted the SCBT indicated that the manual was easy to comprehend and operate, and that this therapy could achieve the therapy goals. The applicability of SCBT for patients with GAD is relatively high, and it is hoped that SCBT can become a psychological treatment applied in medical institutions at all levels.
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
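The k-eigenvalue computation that MC++ performs can be illustrated with a toy Monte Carlo power iteration. The sketch below (hypothetical Python, not MC++ code; an infinite-medium, one-group model with invented parameters) estimates k per cycle as the ratio of fission neutrons produced to source neutrons:

```python
import random

def k_cycle(nsrc, p_fission, nu, rng):
    """One Monte Carlo power-iteration cycle in an infinite medium:
    every source neutron is absorbed; with probability p_fission the
    absorption is a fission yielding `nu` new neutrons on average."""
    produced = 0
    for _ in range(nsrc):
        if rng.random() < p_fission:
            # Sample an integer neutron count with mean nu (e.g. 2 or 3).
            produced += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
    return produced / nsrc

def k_eigenvalue(nsrc=20000, cycles=30, skip=5, p_fission=0.4, nu=2.5, seed=1):
    """Average k over active cycles, discarding the first `skip` cycles
    as power iteration would while the source converges."""
    rng = random.Random(seed)
    ks = [k_cycle(nsrc, p_fission, nu, rng) for _ in range(cycles)]
    return sum(ks[skip:]) / (cycles - skip)
```

For this toy model the analytic answer is k = p_fission · ν = 1.0, which the cycle average approaches; the real code adds geometry, cross sections, and parallel tallies on top of the same iteration structure.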
Xie, Ying; He, Yu-Bin; Zhang, Shi-Xin; Pan, Ai-Qun; Zhang, Jun; Guan, Xiao-Hong; Wang, Jin-Xue; Guo, Wen-Sheng
2014-09-01
To evaluate the efficacy and safety of Jiangzhi Tongluo Soft Capsule (JTSC) combined with Atorvastatin Calcium Tablet (ACT) versus ACT alone in the treatment of combined hyperlipidemia. A randomized, double-blinded, parallel-control, multi-center clinical research design was adopted. Totally 138 combined hyperlipidemia patients were randomly assigned to the combined treatment group (A) or the atorvastatin treatment group (B) by random digit table, 69 in each group. All patients took ACT 20 mg per day. Patients in the A group took JTSC 100 mg each time, 3 times per day. Those in the B group took a JTSC simulated agent, 100 mg each time, 3 times per day. The treatment period for all was 8 weeks. Serum levels of triglyceride (TG), total cholesterol (TC), low density lipoprotein cholesterol (LDL-C), and high density lipoprotein cholesterol (HDL-C) were observed before treatment and at weeks 4 and 8 after treatment, and safety was assessed as well. At weeks 4 and 8 after treatment, serum TG had decreased by 26.69% and 33.29% respectively in the A group (both P < 0.01), and by 25.7% and 22.98% respectively in the B group (both P < 0.01). At week 8 the decrease in serum TG was significantly greater in the A group than in the B group (P < 0.05). Compared with before treatment, serum LDL-C and TC levels decreased significantly in the two groups (all P < 0.01), with no statistically significant between-group difference in these decreases (P > 0.05). At week 8 the serum HDL-C level showed an increasing tendency in the two groups. No obvious increase in peptase or creatase occurred in either group after treatment. JTSC combined with ACT safely lowered the serum TG level of combined hyperlipidemia patients.
Is the thumb a fifth finger? A study of digit interaction during force production tasks
Olafsdottir, Halla; Zatsiorsky, Vladimir M.; Latash, Mark L.
2010-01-01
We studied indices of digit interaction in single- and multi-digit maximal voluntary contraction (MVC) tests when the thumb acted either in parallel or in opposition to the fingers. The peak force produced by the thumb was much higher when the thumb acted in opposition to the fingers and its share of the total force in the five-digit MVC test increased dramatically. The fingers showed relatively similar peak forces and unchanged sharing patterns in the four-finger MVC task when the thumb acted in parallel and in opposition to the fingers. Enslaving during one-digit tasks showed relatively mild differences between the two conditions, while the differences became large when enslaving was quantified for multi-digit tasks. Force deficit was pronounced when the thumb acted in parallel to the fingers; it showed a monotonic increase with the number of explicitly involved digits up to four digits and then a drop when all five digits were involved. Force deficit all but disappeared when the thumb acted in opposition to the fingers. However, for both thumb positions, indices of digit interaction were similar for groups of digits that did or did not include the thumb. These results suggest that, given a certain hand configuration, the central nervous system treats the thumb as a fifth finger. They provide strong support for the hypothesis that indices of digit interaction reflect neural factors, not the peripheral design of the hand. An earlier formal model was able to account for the data when the thumb acted in parallel to the fingers. However, it failed for the data with the thumb acting in opposition to the fingers. PMID:15322785
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
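When there is no spatial organizing principle to exploit, a common fallback for decomposing entities onto processors is stable hash-based partitioning. A minimal Python sketch of that idea (hypothetical; this is not the actual scheme used by the Parallel Particle Data Model):

```python
from hashlib import blake2b

def owner(entity_id, nprocs):
    """Stable hash-based partitioning: an entity's owning processor is a
    pure function of its id, so any process can locate it without a map."""
    digest = blake2b(str(entity_id).encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % nprocs

def partition(entities, nprocs):
    """Bucket entities by owner; load balance comes from hash uniformity
    rather than from any spatial locality."""
    parts = [[] for _ in range(nprocs)]
    for e in entities:
        parts[owner(e, nprocs)].append(e)
    return parts
```

The trade-off is that interacting entities land on arbitrary processors, so communication volume, not load imbalance, becomes the scaling concern.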
WDM mid-board optics for chip-to-chip wavelength routing interconnects in the H2020 ICT-STREAMS
NASA Astrophysics Data System (ADS)
Kanellos, G. T.; Pleros, N.
2017-02-01
Multi-socket server boards have emerged to increase the processing power density at the board level and to further flatten data center networks beyond leaf-spine architectures. Scaling the number of processors per board, however, challenges current electronic technologies, as it requires high-bandwidth interconnects and high-throughput switches with an increased number of ports that are currently unavailable. On-board optical interconnection has proved its potential to efficiently satisfy the bandwidth needs, but its use has been limited to parallel links without performing any smart routing functionality. With CWDM optical interconnects already a commodity, cyclical wavelength routing, proposed to fit datacom needs for rack-to-rack and board-to-board communication, now becomes a promising on-board routing platform. ICT-STREAMS is a European research project that aims to combine WDM parallel on-board transceivers with a cyclical AWGR in order to create a new board-level, chip-to-chip interconnection paradigm that will leverage WDM parallel transmission into a powerful wavelength routing platform capable of interconnecting multiple processors with unprecedented bandwidth and throughput capacity. Direct, any-to-any, on-board interconnection of multiple processors will significantly contribute to further flattening data centers and facilitating east-west communication. In the present communication, we present the ICT-STREAMS on-board wavelength routing architecture for multiple chip-to-chip interconnections and evaluate the overall system performance in terms of throughput and latency for several schemes and traffic profiles. We also review recent advances in the ICT-STREAMS platform's key enabling technologies, which span from Si in-plane lasers and polymer-based electro-optical circuit boards to silicon photonics transceivers and photonic-crystal amplifiers.
Three dimensional simulations of viscous folding in diverging microchannels
NASA Astrophysics Data System (ADS)
Xu, Bingrui; Chergui, Jalel; Shin, Seungwon; Juric, Damir
2016-11-01
Three-dimensional simulations of the viscous folding in diverging microchannels reported by Cubaud and Mason are performed using the parallel code BLUE for multi-phase flows. The more viscous liquid L1 is injected into the channel from the center inlet and the less viscous liquid L2 from two side inlets. Liquid L1 takes the form of a thin filament due to hydrodynamic focusing in the long channel that leads to the diverging region. The thread then becomes unstable to a folding instability due to the longitudinal compressive stress applied to it by the diverging flow of liquid L2. We performed a parameter study in which the flow rate ratio, the viscosity ratio, the Reynolds number, and the shape of the channel were varied relative to a reference model. In our simulations, the cross section of the thread produced by focusing is elliptical rather than circular. The initial folding axis can be either parallel or perpendicular to the narrow dimension of the chamber. In the former case, the folding slowly transforms via twisting to perpendicular folding, or it may remain parallel. The direction of folding onset is determined by the velocity profile and the elliptical shape of the thread cross section in the channel that feeds the diverging part of the cell.
Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite
Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai
2013-04-01
The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
Protocol: a multi-level intervention program to reduce stress in 9-1-1 telecommunicators.
Meischke, Hendrika; Lilly, Michelle; Beaton, Randal; Calhoun, Rebecca; Tu, Ann; Stangenes, Scott; Painter, Ian; Revere, Debra; Baseman, Janet
2018-05-02
Nationwide, emergency response systems depend on 9-1-1 telecommunicators to prioritize, triage, and dispatch assistance to those in distress. 9-1-1 call center telecommunicators (TCs) are challenged by acute and chronic workplace stressors: tense interactions with citizen callers in crisis; overtime; shift-work; ever-changing technologies; and negative work culture, including co-worker conflict. This workforce is also subject to routine exposures to secondary traumatization while handling calls involving emergency situations and while making time urgent, high stake decisions over the phone. Our study aims to test the effectiveness of a multi-part intervention to reduce stress in 9-1-1 TCs through an online mindfulness training and a toolkit containing workplace stressor reduction resources. The study employs a randomized controlled trial design with three data collection points. The multi-part intervention includes an individual-level online mindfulness training and a call center-level organizational stress reduction toolkit. 160 TCs will be recruited from 9-1-1 call centers, complete a baseline survey at enrollment, and are randomly assigned to an intervention or a control group. Intervention group participants will start a 7-week online mindfulness training developed in-house and tailored to 9-1-1 TCs and their call center environment; control participants will be "waitlisted" and start the training after the study period ends. Following the intervention group's completion of the mindfulness training, all participants complete a second survey. Next, the online toolkit with call-center wide stress reduction resources is made available to managers of all participating call centers. After 3 months, a third survey will be completed by all participants. The primary outcome is 9-1-1 TCs' self-reported symptoms of stress at three time points as measured by the C-SOSI (Calgary Symptoms of Stress Inventory). 
Secondary outcomes will include: perceptions of social work environment (measured by metrics of social support and network conflict); mindfulness; and perceptions of social work environment and mindfulness as mediators of stress reduction. This study will evaluate the effectiveness of an online mindfulness training and call center-wide stress reduction toolkit in reducing self-reported stress in 9-1-1 TCs. The results of this study will add to the growing body of research on worksite stress reduction programs. ClinicalTrials.gov Registration Number: NCT02961621. Registered on November 7, 2016 (retrospectively registered).
Fujitani, Yoshio; Fujimoto, Shimpei; Takahashi, Kiyohito; Satoh, Hiroaki; Hirose, Takahisa; Hiyoshi, Toru; Ai, Masumi; Okada, Yosuke; Gosho, Masahiko; Mita, Tomoya; Watada, Hirotaka
2016-11-01
To compare the efficacy on glycemic parameters between a 12-week administration of once-daily linagliptin and thrice-daily voglibose in Japanese patients with type 2 diabetes. In a multi-center, randomized, parallel-group study, 382 patients with diabetes were randomized to the linagliptin group (n=192) or the voglibose group (n=190). A meal tolerance test was performed at weeks 0 and 12. Primary outcomes were the change from baseline to week 12 in serum glucose levels at 2h during the meal tolerance test, HbA1c levels, and serum fasting glucose levels, which were compared between the 2 groups. Whereas changes in serum glucose levels at 2h during the meal tolerance test did not differ between the groups, the mean change in HbA1c levels from baseline to week 12 in the linagliptin group (-0.5±0.5% [-5.1±5.4mmol/mol]) was significantly larger than in the voglibose group (-0.2±0.5% [-2.7±5.4mmol/mol]). In addition, there was significant difference in changes in serum fasting glucose levels (-0.51±0.95mmol/L in the linagliptin group vs. -0.18±0.92mmol/L in the voglibose group, P<0.001). The incidences of hypoglycemia, serious adverse events (AEs), and discontinuations due to AEs were low and similar in both groups. However, gastrointestinal AEs were significantly lower in the linagliptin group (1.05% vs. 5.85%; P=0.01). These data suggested that linagliptin monotherapy had a stronger glucose-lowering effect than voglibose monotherapy with respect to HbA1c and serum fasting glucose levels, but not serum glucose levels 2h after the start of the meal tolerance test. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Costello, John M; Dunbar-Masterson, Carolyn; Allan, Catherine K; Gauvreau, Kimberlee; Newburger, Jane W; McGowan, Francis X; Wessel, David L; Mayer, John E; Salvin, Joshua W; Dionne, Roger E; Laussen, Peter C
2014-07-01
We sought to determine whether empirical nesiritide or milrinone would improve the early postoperative course after Fontan surgery. We hypothesized that compared with milrinone or placebo, patients assigned to receive nesiritide would have improved early postoperative outcomes. In a single-center, randomized, double-blinded, placebo-controlled, multi-arm parallel-group clinical trial, patients undergoing primary Fontan surgery were assigned to receive nesiritide, milrinone, or placebo. A loading dose of study drug was administered on cardiopulmonary bypass followed by a continuous infusion for ≥12 hours and ≤5 days after cardiac intensive care unit admission. The primary outcome was days alive and out of the hospital within 30 days of surgery. Secondary outcomes included measures of cardiovascular function, renal function, resource use, and adverse events. Among 106 enrolled subjects, 35, 36, and 35 were randomized to the nesiritide, milrinone, and placebo groups, respectively, and all were analyzed based on intention to treat. Demographics, patient characteristics, and operative factors were similar among treatment groups. No significant treatment group differences were found for median days alive and out of the hospital within 30 days of surgery (nesiritide, 20 [minimum to maximum, 0-24]; milrinone, 18 [0-23]; placebo, 20 [0-23]; P=0.38). Treatment groups did not significantly differ in cardiac index, arrhythmias, peak lactate, inotropic scores, urine output, duration of mechanical ventilation, intensive care or chest tube drainage, or adverse events. Compared with placebo, empirical perioperative nesiritide or milrinone infusions are not associated with improved early clinical outcomes after Fontan surgery. http://www.clinicaltrials.gov. Unique identifier: NCT00543309. © 2014 American Heart Association, Inc.
Array-based Hierarchical Mesh Generation in Parallel
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2015-11-03
In this paper, we describe an array-based hierarchical mesh generation capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial mesh that can be used for a number of purposes, from multi-level methods to generating large meshes. The capability is developed under the parallel mesh framework "Mesh Oriented dAtaBase", a.k.a. MOAB. We describe the underlying data structures and algorithms to generate such hierarchies and present numerical results for computational efficiency and mesh quality. In conclusion, we also present results to demonstrate the applicability of the developed capability to a multigrid finite-element solver.
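Uniform refinement of an unstructured triangle mesh, the operation at the heart of such a hierarchy, splits each triangle into four children through its edge midpoints, so level L holds 4^L times the coarse triangle count and the levels nest by construction. A minimal Python sketch (illustrative only; MOAB's actual array-based storage is not reproduced here):

```python
def refine(vertices, triangles):
    """One level of uniform refinement: split each triangle into four
    via its edge midpoints, sharing each midpoint between neighbors."""
    verts = list(vertices)
    midpoint = {}

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint:
            ax, ay = verts[a]
            bx, by = verts[b]
            verts.append(((ax + bx) / 2, (ay + by) / 2))
            midpoint[key] = len(verts) - 1
        return midpoint[key]

    tris = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, tris

def hierarchy(vertices, triangles, levels):
    """Nested hierarchy of uniformly refined meshes, coarsest first."""
    meshes = [(vertices, triangles)]
    for _ in range(levels):
        meshes.append(refine(*meshes[-1]))
    return meshes
```

Starting from a single triangle, two levels yield 4 and then 16 triangles, with midpoints deduplicated so neighboring children share vertices, which is exactly the nesting a geometric multigrid solver needs.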
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while also supporting DMA functionality, allowing for parallel message-passing.
A fast ultrasonic simulation tool based on massively parallel implementations
NASA Astrophysics Data System (ADS)
Lambert, Jason; Rougeron, Gilles; Lacassagne, Lionel; Chatillon, Sylvain
2014-02-01
This paper presents a CIVA-optimized ultrasonic inspection simulation tool which takes advantage of the power of massively parallel architectures: graphics processing units (GPU) and multi-core general purpose processors (GPP). This tool is based on the classical approach used in CIVA: the interaction model is based on Kirchhoff, and the ultrasonic field around the defect is computed by the pencil method. The model has been adapted and parallelized for both architectures. At this stage, the configurations addressed by the tool are: multi- and mono-element probes, planar specimens made of simple isotropic materials, and planar rectangular defects or side-drilled holes of small diameter. Validations of the model accuracy and performance measurements are presented.
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples.
Lin, Jake; Kramna, Lenka; Autio, Reija; Hyöty, Heikki; Nykter, Matti; Cinek, Ondrej
2017-05-15
Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de-novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table. The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hits tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.
ERIC Educational Resources Information Center
Allen, Kimberly M.
2017-01-01
This quasi-experimental pretest/posttest control group study design evaluated the effects of a multi-component educational intervention on the well-being of older adults. Participants were members of a senior community center in the Midwestern United States. The sample consisted of 45 participants assigned to either a treatment or control group.…
Moxibustion for cancer-related fatigue: study protocol for a randomized controlled trial.
Kim, Mikyung; Kim, Jung-Eun; Lee, Hye-Yoon; Kim, Ae-Ran; Park, Hyo-Ju; Kwon, O-Jin; Kim, Eun-Jung; Park, Yeon-Cheol; Seo, Byung-Kwan; Cho, Jung Hyo; Kim, Joo-Hee
2017-07-05
Cancer-related fatigue is one of the most common symptoms experienced by cancer patients, and it diminishes their quality of life. However, there is currently no confirmed standard treatment for cancer-related fatigue, and thus, many patients who suffer cancer-related fatigue seek complementary and alternative medicines such as moxibustion. Moxibustion is one of the most popular therapies in traditional Korean medicine used to manage fatigue. Recent studies have also demonstrated that moxibustion is effective for treating chronic fatigue. However, there is insufficient evidence supporting the effect of moxibustion against cancer-related fatigue. The aim of this study is to assess the efficacy and safety of moxibustion treatment for cancer-related fatigue. A multi-center, three-armed parallel, randomized controlled trial will be conducted. Ninety-six patients with cancer-related fatigue will be recruited from three clinical research centers. They will be randomly allocated to one of three groups in a 1:1:1 ratio. The moxibustion group will receive moxibustion treatment at CV8, CV12, LI4 and ST36. The sham moxibustion group will receive sham moxibustion at non-acupoints. Both the moxibustion and sham moxibustion groups will receive 30-min treatments twice a week for 8 weeks. The usual care group will not receive moxibustion treatment. All participants will be educated via a brochure on how to manage cancer-related fatigue in daily life. The outcome measurements will be evaluated at baseline, week 5, week 9, and week 13 by assessors who are blinded to the group allocation. The primary outcome measure will be the mean change in the average scores of the Brief Fatigue Inventory before and after treatments between groups. 
The secondary outcome measures will be the mean difference in changes from baseline of the Brief Fatigue Inventory, functional assessments of cancer therapy-fatigue, European Organization for Research and Treatment of Cancer Quality of Life Questionnaire C-30 scores, and Montreal Cognitive Assessment scores between groups. Safety will be assessed by monitoring adverse events at each visit. The results of this study will provide evidence to confirm whether moxibustion can be used as a therapeutic option for treating cancer-related fatigue. Clinical Research Information Service KCT0002170 . Registered 16 December 2016.
Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael
2012-06-01
We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
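The cross-channel joint soft-thresholding step at the heart of the reconstruction above can be sketched in plain Python. This is a minimal illustration of the general technique, not the authors' implementation; the function name and the per-location list-of-channels representation are assumptions.

```python
import math

def joint_soft_threshold(coeffs, lam):
    """Jointly soft-threshold one wavelet coefficient across channels.

    coeffs: per-channel coefficient values at one wavelet location
    lam:    threshold level
    The joint magnitude (l2 norm across channels) is shrunk toward zero,
    so a coefficient survives only if it is significant in some channel --
    this is the cross-channel joint-sparsity idea.
    """
    mag = math.sqrt(sum(c * c for c in coeffs))
    if mag <= lam:
        return [0.0 for _ in coeffs]
    scale = (mag - lam) / mag  # shrink all channels by the same factor
    return [c * scale for c in coeffs]
```

In an iterative soft-thresholding loop this operator would be applied to every wavelet location after each data-consistency (SPIRiT) step.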
The Crabapple Experience: Insights from Program Evaluations.
ERIC Educational Resources Information Center
Elmore, Randy; Wisenbaker, Joe
2000-01-01
An evaluation of a Georgia middle school's multi-age grouping program revealed significant progress regarding student self-esteem, achievement, community building, and teacher collaboration. The Crabapple experience illustrates how one model of student-centered, developmentally appropriate, and integrated learning can benefit middle-level…
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus layer count also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study shows the outstanding performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model achieves a much higher speedup ratio and efficiency.
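The pipeline parallel mode described above can be sketched generically: each stage runs in its own thread and passes items downstream through a queue, so stage k can work on layer i+1 while stage k+1 works on layer i. This is a minimal, assumed sketch of the pattern, not the paper's slicing code; the stage functions here stand in for the real facet-intersection and contour-building steps.

```python
import queue
import threading

SENTINEL = None  # marks end of the work stream

def stage(fn, q_in, q_out):
    # Generic pipeline stage: apply fn to each item until the sentinel arrives.
    while True:
        item = q_in.get()
        if item is SENTINEL:
            q_out.put(SENTINEL)
            break
        q_out.put(fn(item))

def run_pipeline(items, stage_fns):
    # Chain the stages with FIFO queues; one thread per stage, so all
    # stages overlap in time on different items (layers).
    qs = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stage_fns)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)
    qs[0].put(SENTINEL)
    out = []
    while True:
        r = qs[-1].get()
        if r is SENTINEL:
            break
        out.append(r)
    for t in threads:
        t.join()
    return out
```

Because each stage is a single thread reading a FIFO queue, output order matches input order, which matters when layers must be written in sequence.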
A Tutorial on Parallel and Concurrent Programming in Haskell
NASA Astrophysics Data System (ADS)
Peyton Jones, Simon; Singh, Satnam
This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs, with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
Information Infrastructure Technology and Applications (IITA) Program: Annual K-12 Workshop
NASA Technical Reports Server (NTRS)
Hunter, Paul; Likens, William; Leon, Mark
1995-01-01
The purpose of the K-12 workshop is to stimulate cross-pollination of inter-center activity and introduce the regional centers to cutting-edge K-12 activities. The format of the workshop consists of project presentations, working groups, and working group reports, all contained in a three-day period. The agenda is aggressive and demanding. The K-12 Education Project is a multi-center activity managed by the Information Infrastructure Technology and Applications (IITA)/K-12 Project Office at the NASA Ames Research Center (ARC). This workshop is conducted in support of executing the K-12 Education element of the IITA Project. The IITA/K-12 Project funds activities that use the National Information Infrastructure (NII) (e.g., the Internet) to foster reform and restructuring in mathematics, science, computing, engineering, and technical education.
Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu
2012-12-01
Membrane proteins are encoded by ~30% of the genes in the genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from the primary sequence, considering the extreme difficulty of membrane protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed with statistical machine learning algorithms on serial combinations of multi-view features, i.e., different feature vectors are simply concatenated to form a super feature vector. However, such simple combination of features simultaneously increases information redundancy, which can in turn deteriorate the final prediction accuracy. This explains why prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. Experimental results with different machine learning algorithms on benchmark membrane protein subcellular localization datasets demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems.
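The contrast between serial and parallel fusion can be sketched as follows: serial fusion concatenates two feature views (growing the dimensionality), while parallel fusion pairs the two views as the real and imaginary parts of a single complex-valued vector, which is the usual formulation of parallel feature fusion. This is a hedged, generic sketch of that idea, not the paper's code; the function names and zero-padding convention are assumptions.

```python
def serial_fuse(u, v):
    # Serial strategy: concatenate the views.
    # Dimensionality grows to len(u) + len(v) and redundancy accumulates.
    return list(u) + list(v)

def parallel_fuse(u, v):
    # Parallel strategy: embed the two views as real/imaginary parts of one
    # complex vector; the shorter view is zero-padded first, so the fused
    # dimensionality is only max(len(u), len(v)).
    n = max(len(u), len(v))
    u = list(u) + [0.0] * (n - len(u))
    v = list(v) + [0.0] * (n - len(v))
    return [complex(a, b) for a, b in zip(u, v)]
```

A dimensionality-reduction step such as GPCA would then operate directly on the complex-valued fused vectors.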
The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou
2012-01-01
Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP faces two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy is luckily found by the searching procedure, the correct protein structure is not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle these two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of the heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads run in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligence of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligence embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise.
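The overall scheme, one searching thread per energy function, with the final conformation chosen jointly, can be sketched generically. This is a toy illustration of the framework only: the real system uses ant colony and Metropolis search over conformations, whereas here the candidates and energy functions are trivial stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def search(energy, candidates):
    # One searching thread: scan its candidates and keep the conformation
    # with the lowest value of *its own* energy function.
    return min(candidates, key=energy)

def parallel_metaheuristic(energies, candidates):
    # Run one searching thread per (possibly inaccurate) energy function,
    # then pick the winner the ensemble of energies jointly scores best.
    with ThreadPoolExecutor(max_workers=len(energies)) as pool:
        winners = list(pool.map(lambda e: search(e, candidates), energies))
    return min(winners, key=lambda c: sum(e(c) for e in energies))
```

Hybridizing several imperfect objectives this way means no single inaccurate energy function alone decides the predicted structure.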
Recognition as a patient-centered medical home: fundamental or incidental?
Dohan, Daniel; McCuistion, Mary Honodel; Frosch, Dominick L; Hung, Dorothy Y; Tai-Seale, Ming
2013-01-01
Little is known about reasons why a medical group would seek recognition as a patient-centered medical home (PCMH). We examined the motivations for seeking recognition in one group and assessed why the group allowed recognition to lapse 3 years later. As part of a larger mixed methods case study, we conducted 38 key informant interviews with executives, clinicians, and front-line staff. Interviews were conducted according to a guide that evolved during the project and were audio-recorded and fully transcribed. Transcripts were analyzed and thematically coded. PCMH principles were consistent with the organization's culture and mission, which valued innovation and putting patients first. Motivations for implementing specific PCMH components varied; some components were seen as part of the organization's patient-centered culture, whereas others helped the practice compete in its local market. Informants consistently reported that National Committee for Quality Assurance recognition arose incidentally because of a 1-time incentive from a local group of large employers and because the organization decided to allocate some organizational resources to respond to the complex reporting requirements for about one-half of its clinics. Becoming patient centered and seeking recognition as such ran along separate but parallel tracks within this organization. As the Affordable Care Act continues to focus attention on primary care redesign, this apparent disconnect should be borne in mind.
Salient contour extraction from complex natural scene in night vision image
NASA Astrophysics Data System (ADS)
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa
2014-03-01
The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The key idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that center-surround pairs whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast-weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve enhancement of contour response, connection of discontinuous contours, and further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets of multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.
Shah, Dipali Yogesh; Wadekar, Swati Ishwara; Dadpe, Ashwini Manish; Jadhav, Ganesh Ranganath; Choudhary, Lalit Jayant; Kalra, Dheeraj Deepak
2017-01-01
The purpose of this study was to compare and evaluate the shaping ability of the ProTaper (PT) and Self-Adjusting File (SAF) systems using cone-beam computed tomography (CBCT) to assess their performance in oval-shaped root canals. Sixty-two mandibular premolars with single oval canals were divided into two experimental groups (n = 31) according to the systems used: Group I - PT and Group II - SAF. Canals were evaluated before and after instrumentation using CBCT to assess the centering ratio and canal transportation at three levels. Data were statistically analyzed using one-way analysis of variance, post hoc Tukey's test, and t-test. The SAF showed better centering ability and less canal transportation than the PT only in the buccolingual plane at the 6 and 9 mm levels. The shaping ability of the PT was best in the apical third in both planes. The SAF had statistically significantly better centering and less canal transportation in the buccolingual plane as compared to the mesiodistal plane at the middle and coronal levels. The SAF produced significantly less transportation and remained more centered than the PT at the middle and coronal levels in the buccolingual plane of oval canals. In the mesiodistal plane, the performance of the two systems was comparable.
NASA Technical Reports Server (NTRS)
Fischer, James R.; Grosch, Chester; Mcanulty, Michael; Odonnell, John; Storey, Owen
1987-01-01
NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relation of theory to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.
Alternative Fuels Data Center: Electric Vehicle Charging for Multi-Unit Dwellings
Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao V; Berndt, Markus; Coon, Ethan
2017-04-13
Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. On-node parallelism is supported either through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.
Topical perspective on massive threading and parallelism.
Farber, Robert M
2011-09-01
Unquestionably, computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases, three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as in the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts -- be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world -- is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism, with some insight into the future. Published by Elsevier Inc.
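The caveat that "not all computational problems will scale to massive parallelism" is exactly Amdahl's law: the serial fraction of a program bounds its speedup regardless of thread count. A one-line sketch (the function name is an assumption):

```python
def amdahl_speedup(parallel_fraction, n_threads):
    # Amdahl's law: with a fraction p of the work parallelizable over
    # n threads, speedup = 1 / ((1 - p) + p / n). The serial fraction
    # (1 - p) caps the speedup at 1 / (1 - p) no matter how large n is.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_threads)
```

For example, a program that is 95% parallelizable can never exceed a 20x speedup, even on a machine with millions of hardware threads, which is why algorithm restructuring, not just more cores, is often required.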
Large Scale Document Inversion using a Multi-threaded Computing System
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2018-01-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a lot of information has been flooded into the digital domain around the world. Huge volume of data, such as digital libraries, social networking services, e-commerce product data, and reviews, etc., is produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstract and e-commerce product reviews. CCS Concepts •Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
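The core of document inversion, mapping each document to (term, doc id) postings in parallel and merging them into a term-to-documents index, can be sketched in plain Python. This is a minimal CPU-thread analogue of the idea, not the authors' GPU/CUDA SPMD implementation; the function names, tokenization, and worker count are assumptions.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def invert_one(doc_id_text):
    # Map step (runs per document, in parallel): emit (term, doc_id) pairs.
    doc_id, text = doc_id_text
    return [(term, doc_id) for term in set(text.lower().split())]

def build_inverted_index(docs, workers=4):
    # Reduce step: merge per-document postings into term -> sorted doc ids.
    index = defaultdict(set)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for pairs in pool.map(invert_one, enumerate(docs)):
            for term, doc_id in pairs:
                index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}
```

In the GPU setting the same map step is executed by thousands of threads at once, with a hash table standing in for the merge.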
NASA Technical Reports Server (NTRS)
Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)
1993-01-01
The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.
Aerial views of construction on the RLV hangar at the Shuttle Landing Facility
NASA Technical Reports Server (NTRS)
1999-01-01
Looking southwest, this view shows ongoing construction of a multi-purpose hangar, which is part of the $8 million Reusable Launch Vehicle (RLV) Support Complex at Kennedy Space Center. Edging the construction is Sharkey Road, which parallels the landing strip of the Shuttle Landing Facility nearby. The RLV complex will include facilities for related ground support equipment and administrative/technical support. It will be available to accommodate the Space Shuttle; the X-34 RLV technology demonstrator; the L-1011 carrier aircraft for Pegasus and X-34; and other RLV and X-vehicle programs. The complex is jointly funded by the Spaceport Florida Authority, NASA's Space Shuttle Program and KSC. The facility will be operational in early 2000.
Anand, Rishi; Gorev, Maxim V; Poghosyan, Hermine; Pothier, Lindsay; Matkins, John; Kotler, Gregory; Moroz, Sarah; Armstrong, James; Nemtsov, Sergei V; Orlov, Michael V
2016-08-01
To compare the efficacy and accuracy of rotational angiography with three-dimensional reconstruction (3DATG) image merged with electro-anatomical mapping (EAM) vs. CT-EAM. A prospective, randomized, parallel, two-center study conducted in 36 patients (25 men, age 65 ± 10 years) undergoing AF ablation (33 % paroxysmal, 67 % persistent) guided by 3DATG (group 1) vs. CT (group 2) image fusion with EAM. 3DATG was performed on the Philips Allura Xper FD 10 system. Procedural characteristics including time, radiation exposure, outcome, and navigation accuracy were compared between the two groups. There was no significant difference between the groups in total procedure duration or time spent on various procedural steps. Minor differences in procedural characteristics were present between the two centers. Segmentation and fusion time for 3DATG-EAM or CT-EAM was short and similar at both centers. Accuracy of navigation guided by either method was high and did not depend on left atrial size. Maintenance of sinus rhythm did not differ between the two groups up to 24 months of follow-up. This study did not find superiority of the 3DATG-EAM image merge to guide AF ablation when compared to CT-EAM fusion. Both merging techniques result in similar navigation accuracy.
Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Rizk, Yehia M.
1999-01-01
The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable for distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel.
The total volume of grid points in each group is approximately balanced. A proper number of threads is initially allocated to each group, and in subsequent iterations during the run-time, the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in OVERFLOW.
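The grouping strategy described above, balancing the total grid-point count across groups, can be illustrated with a simple greedy heuristic (largest zone first, into the lightest group). This is a generic sketch, not the actual OVERFLOW_MLP algorithm; zone sizes are made up.

```python
def group_zones(zone_sizes, n_groups):
    # zone_sizes: grid-point count per zone; returns zone indices per group
    groups = [[] for _ in range(n_groups)]
    loads = [0] * n_groups
    for zone, size in sorted(enumerate(zone_sizes), key=lambda z: -z[1]):
        g = loads.index(min(loads))   # lightest group so far
        groups[g].append(zone)
        loads[g] += size
    return groups, loads

sizes = [90, 70, 40, 40, 30, 20, 10]   # hypothetical zone sizes
groups, loads = group_zones(sizes, 3)
print(loads)  # -> [100, 100, 100] for this toy case
```

Thread counts per group would then be allocated roughly in proportion to these loads and adjusted at run time.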
NASA Technical Reports Server (NTRS)
1994-01-01
CESDIS, the Center of Excellence in Space Data and Information Sciences, was established jointly by NASA, Universities Space Research Association (USRA), and the University of Maryland in 1988 to focus on the design of advanced computing techniques and data systems to support NASA Earth and space science research programs. CESDIS is operated by USRA under contract to NASA. The Director, Associate Director, Staff Scientists, and administrative staff are located on-site at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The primary CESDIS mission is to strengthen the connection between computer science and engineering research programs at colleges and universities and NASA groups working with computer applications in Earth and space science. Research areas of primary interest at CESDIS include: 1) High performance computing, especially software design and performance evaluation for massively parallel machines; 2) Parallel input/output and data storage systems for high performance parallel computers; 3) Database and intelligent data management systems for parallel computers; 4) Image processing; 5) Digital libraries; and 6) Data compression. CESDIS funds multiyear projects at U.S. universities and colleges. Proposals are accepted in response to calls for proposals and are selected on the basis of peer reviews. Funds are provided to support faculty and graduate students working at their home institutions. Project personnel visit Goddard during academic recess periods to attend workshops, present seminars, and collaborate with NASA scientists on research projects. Additionally, CESDIS takes on specific research tasks of shorter duration for computer science research requested by NASA Goddard scientists.
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performances as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture, the basic API, discuss its advantages over previous approaches, present example configurations and usage scenarios as well as scalability results.
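One decomposition mode such frameworks distribute is sort-first rendering: the screen is split into tiles, each tile is rendered by a separate resource, and the tiles are composited into one frame. The sketch below is a toy stand-in using threads and integer "pixels"; Equalizer's actual API and GPU work are not represented.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILES = 8, 4, 4

def render_tile(tile_idx):
    # each worker renders one vertical strip of the screen
    x0 = tile_idx * (WIDTH // TILES)
    # fake shading: a pixel's value encodes its x coordinate
    return [[x for x in range(x0, x0 + WIDTH // TILES)] for _ in range(HEIGHT)]

def render_frame():
    with ThreadPoolExecutor(max_workers=TILES) as pool:
        tiles = list(pool.map(render_tile, range(TILES)))
    # compositing step: stitch the strips back together left to right
    return [sum((tile[row] for tile in tiles), []) for row in range(HEIGHT)]

frame = render_frame()
print(frame[0])  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

In a real cluster each tile would be drawn on a different GPU and the composited image gathered over the network.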
Pohlig, Gabriele; Bernhard, Sonja C; Blum, Johannes; Burri, Christian; Mpanya, Alain; Lubaki, Jean-Pierre Fina; Mpoto, Alfred Mpoo; Munungu, Blaise Fungula; N'tombe, Patrick Mangoni; Deo, Gratias Kambau Manesa; Mutantu, Pierre Nsele; Kuikumbi, Florent Mbo; Mintwo, Alain Fukinsia; Munungi, Augustin Kayeye; Dala, Amadeu; Macharia, Stephen; Bilenge, Constantin Miaka Mia; Mesu, Victor Kande Betu Ku; Franco, Jose Ramon; Dituvanga, Ndinga Dieyi; Tidwell, Richard R; Olson, Carol A
2016-02-01
Sleeping sickness (human African trypanosomiasis [HAT]) is a neglected tropical disease with limited treatment options that currently require parenteral administration. In previous studies, orally administered pafuramidine was well tolerated in healthy patients (for up to 21 days) and stage 1 HAT patients (for up to 10 days), and demonstrated efficacy comparable to that of pentamidine. This was a Phase 3, multi-center, randomized, open-label, parallel-group, active control study where 273 male and female patients with first stage Trypanosoma brucei gambiense HAT were treated at six sites: one trypanosomiasis reference center in Angola, one hospital in South Sudan, and four hospitals in the Democratic Republic of the Congo between August 2005 and September 2009 to support the registration of pafuramidine for treatment of first stage HAT in collaboration with the United States Food and Drug Administration. Patients were treated with either 100 mg of pafuramidine orally twice a day for 10 days or 4 mg/kg pentamidine intramuscularly once daily for 7 days to assess the efficacy and safety of pafuramidine versus pentamidine. Pregnant and lactating women as well as adolescents were included. The primary efficacy endpoint was the combined rate of clinical and parasitological cure at 12 months. The primary safety outcome was the frequency and severity of adverse events. The study was registered on the International Clinical Trials Registry Platform at www.clinicaltrials.gov with the number ISRCTN85534673. The overall cure rate at 12 months was 89% in the pafuramidine group and 95% in the pentamidine group; pafuramidine was non-inferior to pentamidine as the upper bound of the 95% confidence interval did not exceed 15%. The safety profile of pafuramidine was superior to that of pentamidine; however, 3 patients in the pafuramidine group had glomerulonephritis or nephropathy approximately 8 weeks post-treatment. Two of these events were judged as possibly related to pafuramidine.
Despite good tolerability observed in preceding studies, the development program for pafuramidine was discontinued due to delayed post-treatment toxicity.
Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael
2018-03-09
To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information, which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal-to-noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
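The rank criterion at the heart of this approach can be illustrated on a toy one-dimensional problem: two "channels" observe the same smooth signal, one with an unknown sample delay, and the delay estimate is the candidate that makes the stacked calibration matrix closest to rank one (smallest tail singular value). This sketch uses a grid search rather than the paper's Gauss-Newton solver, and the signal is synthetic.

```python
import numpy as np

t = np.linspace(-3, 3, 201)
signal = lambda x: np.exp(-x**2)
true_delay = 0.30
ch1 = signal(t)                 # reference channel
ch2 = signal(t - true_delay)    # channel acquired with a timing delay

def rank_cost(delta):
    # undo a candidate delay on channel 2 via linear interpolation
    ch2_corr = np.interp(t, t - delta, ch2)
    calib = np.vstack([ch1, ch2_corr])
    s = np.linalg.svd(calib, compute_uv=False)
    return s[1]                 # tail singular value: ~0 when channels agree

candidates = np.arange(0.0, 0.6, 0.01)
best = candidates[np.argmin([rank_cost(d) for d in candidates])]
print(round(best, 2))  # recovers the true delay (~0.30)
```

The real method operates on a multi-channel k-space calibration matrix and minimizes its rank surrogate over the per-axis delays simultaneously.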
Ahrens, Birgit; Hellmuth, Christian; Haiden, Nadja; Olbertz, Dirk; Hamelmann, Eckard; Vusurovic, Milica; Fleddermann, Manja; Roehle, Robert; Knoll, Anette; Koletzko, Berthold; Wahn, Ulrich; Beyer, Kirsten
2018-05-01
A high protein content of nonhydrolyzed infant formula exceeding metabolic requirements can induce rapid weight gain and obesity. Hydrolyzed formula with too low a protein (LP) content may result in inadequate growth. The aim of this study was to investigate the noninferiority of partially and extensively hydrolyzed formulas (pHF, eHF) with a lower hydrolyzed protein content than conventional, regularly used formulas, with or without synbiotics, for normal growth of healthy term infants. In a European multi-center, parallel, prospective, controlled, double-blind trial, 402 formula-fed infants were randomly assigned to four groups: LP formulas (1.9 g protein/100 kcal) as pHF with or without synbiotics, LP-eHF formula with synbiotics, or regular-protein eHF (2.3 g protein/100 kcal). One hundred and one breast-fed infants served as an observational reference group. As the primary endpoint, noninferiority of daily weight gain during the first 4 months of life was investigated, comparing the LP group to the regular-protein eHF group. A comparison of daily weight gain in infants receiving LPpHF (2.15 g/day CI -0.18 to inf.) with infants receiving regular-protein eHF showed noninferior weight gain (-3.5 g/day margin; per protocol [PP] population). Noninferiority was also confirmed for the other tested LP formulas. Likewise, analysis of metabolic parameters and plasma amino acid concentrations demonstrated a safe and balanced nutritional composition. Energetic efficiency for growth (weight) was slightly higher with LPeHF and synbiotics compared with LPpHF and synbiotics. All tested hydrolyzed LP formulas allowed normal weight gain without being inferior to regular-protein eHF in the first 4 months of life. This trial was registered at clinicaltrials.gov, NCT01143233.
Okely, Anthony D; Collins, Clare E; Morgan, Philip J; Jones, Rachel A; Warren, Janet M; Cliff, Dylan P; Burrows, Tracy L; Colyvas, Kim; Steele, Julie R; Baur, Louise A
2010-09-01
To evaluate whether a child-centered physical activity program, combined with a parent-centered dietary program, was more efficacious than each treatment alone in preventing unhealthy weight gain in overweight children. An assessor-blinded randomized controlled trial involving 165 overweight/obese 5.5- to 9.9-year-old children. Participants were randomly assigned to 1 of 3 interventions: a parent-centered dietary program (Diet); a child-centered physical activity program (Activity); or a combination of both (Diet+Activity). All groups received 10 weekly face-to-face sessions followed by 3 monthly relapse-prevention phone calls. Analysis was by intention-to-treat. The primary outcome was change in body mass index z-score at 6 and 12 months (n=114 and 106, respectively). Body mass index z-scores were reduced at 12 months in all groups, with the Diet (mean [95% confidence interval]) (-0.39 [-0.51 to -0.27]) and Diet+Activity (-0.32 [-0.36 to -0.23]) groups showing a greater reduction than the Activity group (-0.17 [-0.28 to -0.06]) (P=.02). Changes in other outcomes (waist circumference and metabolic profile) were not statistically significant among groups. Relative body weight decreased at 6 months and was sustained at 12 months through treatment with a child-centered physical activity program, a parent-centered dietary program, or both. The greatest effect was achieved when a parent-centered dietary component was included. Copyright (c) 2010 Mosby, Inc. All rights reserved.
Highly efficient spatial data filtering in parallel using the opensource library CPPPO
NASA Astrophysics Data System (ADS)
Municchi, Federico; Goniva, Christoph; Radl, Stefan
2016-10-01
CPPPO is a compilation of parallel data processing routines developed with the aim of creating a library for "scale bridging" (i.e. connecting different scales by means of closure models) in a multi-scale approach. CPPPO features a number of parallel filtering algorithms designed for use with structured and unstructured Eulerian meshes, as well as Lagrangian data sets. In addition, data can be processed on the fly, allowing the collection of relevant statistics without saving individual snapshots of the simulation state. Our library is provided with an interface to the widely-used CFD solver OpenFOAM®, and can be easily connected to any other software package via interface modules. Also, we introduce a novel, extremely efficient approach to parallel data filtering, and show that our algorithms scale super-linearly on multi-core clusters. Furthermore, we provide a guideline for choosing the optimal Eulerian cell selection algorithm depending on the number of CPU cores used. Finally, we demonstrate the accuracy and the parallel scalability of CPPPO in a showcase focusing on heat and mass transfer from a dense bed of particles.
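The "on the fly" processing idea above can be sketched with a single-pass streaming accumulator: statistics are updated as each sample arrives, so no simulation snapshot needs to be stored. Welford's algorithm below is an illustrative stand-in, not CPPPO's actual filtering kernels.

```python
class RunningStats:
    """Single-pass (streaming) mean and sample variance via Welford's algorithm."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for sample in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(sample)   # one pass over the simulation output
print(stats.mean, stats.variance)  # mean 5.0, sample variance ~4.57
```

In a parallel setting each rank would keep its own accumulator and the per-rank (n, mean, m2) triples would be merged at the end, which is exactly what avoids storing snapshots.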
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
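A cumulative dose-volume histogram (DVH), the per-structure statistic the system reports, can be sketched as the fraction of structure volume receiving at least each dose level. The sketch below uses a synthetic dose grid; the system's DICOM parsing and real dose matrices are omitted.

```python
import numpy as np

def cumulative_dvh(doses, bin_edges):
    # fraction of the structure volume receiving at least each dose level
    doses = np.asarray(doses, dtype=float)
    return np.array([(doses >= d).mean() for d in bin_edges])

structure_dose = [10, 20, 30, 40, 50, 60, 70, 80]   # Gy, one value per voxel
levels = [0, 25, 45, 65]                             # Gy, query dose levels
dvh = cumulative_dvh(structure_dose, levels)
print(dvh)  # -> [1.   0.75 0.5  0.25]
```

Protocol-compliance reports can then be reduced to checks such as "V45 ≤ 50% of the structure" evaluated directly on these curves.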
Multi-resonant electromagnetic shunt in base isolation for vibration damping and energy harvesting
NASA Astrophysics Data System (ADS)
Pei, Yalu; Liu, Yilun; Zuo, Lei
2018-06-01
This paper investigates multi-resonant electromagnetic shunts applied to base isolation for dual-function vibration damping and energy harvesting. Two multi-mode shunt circuit configurations, namely parallel and series, are proposed and optimized based on the H2 criteria. The root-mean-square (RMS) value of the relative displacement between the base and the primary structure is minimized. Practically, this will improve the safety of base-isolated buildings subjected to broad-bandwidth ground acceleration. Case studies of a base-isolated building are conducted in both the frequency and time domains to investigate the effectiveness of multi-resonant electromagnetic shunts under recorded earthquake signals. The results show that both multi-mode shunt circuits outperform traditional single mode shunt circuits by suppressing the first and the second vibration modes simultaneously. Moreover, for the same stiffness ratio, the parallel shunt circuit is more effective at harvesting energy and suppressing vibration, and can more robustly handle parameter mistuning than the series shunt circuit. Furthermore, this paper discusses experimental validation of the effectiveness of multi-resonant electromagnetic shunts for vibration damping and energy harvesting on a scaled-down base isolation system.
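The H2 criterion used above can be illustrated numerically: for broadband (white-noise) base excitation, the RMS response is proportional to the square root of the integral of |H(jω)|² over frequency. The sketch below evaluates this for a single-mode oscillator to show the damping trend; the paper's two-mode shunt circuits and their optimization are not modeled here.

```python
import numpy as np

def rms_response(wn=2 * np.pi, zeta=0.05, w_max=500.0, n=50001):
    # |H(jw)|^2 for a 1-DOF oscillator H(s) = 1 / (s^2 + 2*zeta*wn*s + wn^2),
    # integrated over frequency with a simple Riemann sum
    w = np.linspace(1e-3, w_max, n)
    h2 = 1.0 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)
    return np.sqrt(np.sum(h2) * (w[1] - w[0]) / np.pi)

# more effective damping -> lower RMS, the trend the shunt circuits exploit
print(rms_response(zeta=0.05) > rms_response(zeta=0.20))  # True
```

A multi-resonant shunt effectively raises the damping seen by several structural modes at once, shrinking the corresponding area under |H(jω)|².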
Shin, Sang Soo; Shin, Young-Jeon
2016-01-01
With an increasing number of studies highlighting regional social capital (SC) as a determinant of health, many studies use multi-level analysis with merged and averaged scores of community residents' survey responses calculated from community SC data. Sufficient examination is required to validate whether the merged and averaged data can represent the community. Therefore, this study analyzes the validity of the selected indicators and their applicability in multi-level analysis. Within and between analysis (WABA) was performed after creating community variables from the merged and averaged data of community residents' responses in the 2013 Community Health Survey in Korea, using subjective self-rated health assessment as a dependent variable. Further analysis was performed following the model suggested by the WABA result. Both the E-test results (1) and the WABA results (2) revealed that single-level analysis needs to be performed using the qualitative SC variable with cluster mean centering. Through single-level multivariate regression analysis, qualitative SC with cluster mean centering showed a positive effect on self-rated health (0.054, p<0.001), although there was no substantial difference in comparison to analysis using SC variables without cluster mean centering or multi-level analysis. As variation in qualitative SC was larger within communities than between communities, we validate that relational analysis of individual self-rated health can be performed within the group, using cluster mean centering. Other tests besides the WABA can be performed in the future to confirm the validity of using community variables and their applicability in multi-level analysis.
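The cluster-mean-centering step referred to above can be sketched simply: each respondent's score is centered on the mean of their own community, isolating within-community variation from between-community differences. The data and community labels below are made up for illustration.

```python
from collections import defaultdict

def cluster_mean_center(values, clusters):
    # subtract each cluster's own mean from its members' values
    sums, counts = defaultdict(float), defaultdict(int)
    for v, c in zip(values, clusters):
        sums[c] += v
        counts[c] += 1
    means = {c: sums[c] / counts[c] for c in sums}
    return [v - means[c] for v, c in zip(values, clusters)]

sc_scores = [3.0, 5.0, 2.0, 4.0]       # hypothetical social capital scores
community = ["A", "A", "B", "B"]        # community of each respondent
print(cluster_mean_center(sc_scores, community))  # -> [-1.0, 1.0, -1.0, 1.0]
```

Note that the community means (4.0 for A, 3.0 for B) are removed entirely, which is why centered scores capture only within-community contrasts.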
Halvade-RNA: Parallel variant calling from transcriptomic data using MapReduce.
Decap, Dries; Reumers, Joke; Herzeel, Charlotte; Costanza, Pascal; Fostier, Jan
2017-01-01
Given the current cost-effectiveness of next-generation sequencing, the amount of DNA-seq and RNA-seq data generated is ever increasing. One of the primary objectives of NGS experiments is calling genetic variants. While highly accurate, most variant calling pipelines are not optimized to run efficiently on large data sets. However, as variant calling in genomic data has become common practice, several methods have been proposed to reduce runtime for DNA-seq analysis through the use of parallel computing. Determining the effectively expressed variants from transcriptomics (RNA-seq) data has only recently become possible, and as such does not yet benefit from efficiently parallelized workflows. We introduce Halvade-RNA, a parallel, multi-node RNA-seq variant calling pipeline based on the GATK Best Practices recommendations. Halvade-RNA makes use of the MapReduce programming model to create and manage parallel data streams on which multiple instances of existing tools such as STAR and GATK operate concurrently. Whereas the single-threaded processing of a typical RNA-seq sample requires ∼28 h, Halvade-RNA reduces this runtime to ∼2 h using a small cluster with two 20-core machines. Even on a single, multi-core workstation, Halvade-RNA can significantly reduce runtime compared to using multi-threading, thus providing more cost-effective processing of RNA-seq data. Halvade-RNA is written in Java and uses the Hadoop MapReduce 2.0 API. It supports a wide range of distributions of Hadoop, including Cloudera and Amazon EMR.
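The MapReduce pattern Halvade-RNA applies can be sketched as follows: reads are split into chunks, each chunk is "mapped" (aligned and variant-called) in parallel, and per-chunk results are reduced into one variant set. The real tool invocations (STAR, GATK) are replaced here by a toy variant counter, and the pipeline runs on local processes rather than Hadoop.

```python
from collections import Counter
from multiprocessing import Pool

def map_chunk(reads):
    # stand-in for per-chunk alignment + variant calling
    return Counter(read for read in reads if read.startswith("var"))

def run_pipeline(reads, workers=2):
    # map step: process chunks concurrently
    chunks = [reads[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)
    # reduce step: merge per-chunk variant counts
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    reads = ["varA", "ref1", "varA", "varB", "ref2", "varB"]
    print(run_pipeline(reads))  # Counter({'varA': 2, 'varB': 2})
```

The key property, mirrored from the paper, is that the expensive map step parallelizes freely across chunks while the reduce step stays cheap.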
Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
An effective latency-hiding mechanism is presented in the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the tradeoff. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering over 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on our system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular simulator in Java. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.
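The latency-hiding idea can be sketched conceptually: while the current tile of agents is being computed, the data transfer for the next tile is already in flight on a background thread, so communication cost overlaps with computation instead of adding to it. Real inter-GPU/MPI transfers are mocked here with sleeps; this is a generic double-buffering sketch, not the paper's implementation.

```python
import threading
import time

def fetch(tile):              # stands in for an MPI or GPU transfer
    time.sleep(0.05)
    return [x + 1 for x in tile]

def compute(data):            # stands in for the per-tile agent update
    time.sleep(0.05)
    return sum(data)

def process_overlapped(tiles):
    results, pending = [], {}

    def prefetch(i):
        pending[i] = fetch(tiles[i])

    t = threading.Thread(target=prefetch, args=(0,))
    t.start()
    for i in range(len(tiles)):
        t.join()                        # wait for tile i's data to arrive
        data = pending.pop(i)
        if i + 1 < len(tiles):          # launch the next transfer first...
            t = threading.Thread(target=prefetch, args=(i + 1,))
            t.start()
        results.append(compute(data))   # ...so it overlaps this computation
    return results

print(process_overlapped([[1, 2], [3, 4]]))  # -> [5, 9]
```

With N tiles, total time approaches N × max(transfer, compute) instead of N × (transfer + compute), which is the trade-off continuum the analytical model characterizes.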
Multi-water-bag models of ion temperature gradient instability in cylindrical geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulette, David; Besse, Nicolas
2013-05-15
Ion temperature gradient instabilities play a major role in the understanding of anomalous transport in core fusion plasmas. In the considered cylindrical geometry, ion dynamics is described using a drift-kinetic multi-water-bag model for the parallel velocity dependency of the ion distribution function. In a first stage, global linear stability analysis is performed. From the obtained normal modes, parametric dependencies of the main spectral characteristics of the instability are then examined. Comparison of the multi-water-bag results with a reference continuous Maxwellian case allows us to evaluate the effects of discrete parallel velocity sampling induced by the multi-water-bag model. Differences between the global model and local models considered in previous works are discussed. Using results from linear, quasilinear, and nonlinear numerical simulations, an analysis of the first stage saturation dynamics of the instability is proposed, where the divergence between the three models is examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites lead to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
Implementation and performance of parallel Prolog interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, S.; Kale, L.V.; Balkrishna, R.
1988-01-01
In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines, including shared memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.
Polarized Infrared Absorption of Dipole Centers in Cadmium Halide and PbI2 Crystals
NASA Astrophysics Data System (ADS)
Terakami, Mitsushi; Nakagawa, Hideyuki
2004-03-01
Polarized infrared (IR) absorption measurements on CN- or OH- centers in cadmium halide and PbI2 crystals were carried out at 6 K with a high spectral resolution of 0.025 cm-1 at 2000 cm-1 by using a FTIR spectrometer. Several sharp absorption lines with widths less than 0.1 cm-1 are observed in the energy region of the stretching vibration, i.e. 2000 to 2250 cm-1 for CN- and 2500 to 4500 cm-1 for OH-. These lines are classified into several groups attributed to (1) an isolated center simply substituted for a halogen ion, (2) an interstitial center located between the cadmium and halogen ion sheets and (3) a coupled center with an anion vacancy or a host metal ion. Almost all of the dipole axes (bond axes) of the CN- ions doped in MI2 (M = Cd or Pb) are parallel to the crystal c-axes, while those of the isolated and coupled CN- centers in CdX2 (X = Cl or Br) lean away from the direction of the c-axis. Most of the OH- ions doped in CdX2 (X = Cl, Br or I) and PbI2 are arranged in the halogen-ion planes with their dipole axes parallel to the crystal c-axes. The first overtone yields values of χe and ωeχe for CN- and OH- in CdX2 and PbI2. These values explain well the isotope shift of the main stretching band in CdX2 and PbI2.
Adherence predictors in an Internet-based Intervention program for depression.
Castro, Adoración; López-Del-Hoyo, Yolanda; Peake, Christian; Mayoral, Fermín; Botella, Cristina; García-Campayo, Javier; Baños, Rosa María; Nogueira-Arjona, Raquel; Roca, Miquel; Gili, Margalida
2018-05-01
Internet-delivered psychotherapy has been demonstrated to be effective in the treatment of depression. Nevertheless, studies of adherence to this type of treatment have reported divergent results. The main objective of this study is to analyze predictors of adherence in a primary care Internet-based intervention for depression in Spain. A multi-center, three-arm, parallel, randomized controlled trial was conducted with 194 depressive patients, who were allocated to a self-guided or support-guided intervention. Sociodemographic and clinical characteristics were gathered using a case report form. The Mini International Neuropsychiatric Interview was used to diagnose major depression. The Beck Depression Inventory was used to assess depression severity. A visual analogue scale assessed the respondents' self-rated health, and the Short Form Health Survey was used to measure health-related quality of life. Age was a predictor variable for both intervention groups (with and without therapist support). Perceived health was a negative predictor of adherence for the self-guided intervention when change in depression severity was included in the model. Change in depression severity was a predictor of adherence in the support-guided intervention. Our findings demonstrate that, in our sample, there are differences in sociodemographic and clinical variables between active and dropout participants, and we provide adherence predictors for each intervention condition of this Internet-based program for depression (self-guided and support-guided). Further research in this area is essential to improve tailored interventions and to identify which patient groups can benefit from these interventions.
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
Harvey, John J; Chester, Stephanie; Burke, Stephen A; Ansbro, Marisela; Aden, Tricia; Gose, Remedios; Sciulli, Rebecca; Bai, Jing; DesJardin, Lucy; Benfer, Jeffrey L; Hall, Joshua; Smole, Sandra; Doan, Kimberly; Popowich, Michael D; St George, Kirsten; Quinlan, Tammy; Halse, Tanya A; Li, Zhen; Pérez-Osorio, Ailyn C; Glover, William A; Russell, Denny; Reisdorf, Erik; Whyte, Thomas; Whitaker, Brett; Hatcher, Cynthia; Srinivasan, Velusamy; Tatti, Kathleen; Tondella, Maria Lucia; Wang, Xin; Winchell, Jonas M; Mayer, Leonard W; Jernigan, Daniel; Mawle, Alison C
2016-02-01
In this study, a multicenter evaluation of the Life Technologies TaqMan® Array Card (TAC) with 21 custom viral and bacterial respiratory assays was performed on the Applied Biosystems ViiA™ 7 Real-Time PCR System. The goal of the study was to demonstrate the analytical performance of this platform when compared to identical individual pathogen-specific laboratory developed tests (LDTs) designed at the Centers for Disease Control and Prevention (CDC), equivalent LDTs provided by state public health laboratories, or to three different commercial multi-respiratory panels. CDC and Association of Public Health Laboratories (APHL) LDTs had similar analytical sensitivities for viral pathogens, while several of the bacterial pathogen APHL LDTs demonstrated sensitivities one log higher than the corresponding CDC LDT. When compared to CDC LDTs, TAC assays were generally one to two logs less sensitive depending on the site performing the analysis. Finally, TAC assays were generally more sensitive than their counterparts in three different commercial multi-respiratory panels. TAC technology allows users to spot customized assays and design the TAC layout, simplify assay setup, conserve specimen, dramatically reduce contamination potential, and, as demonstrated in this study, analyze multiple samples in parallel with good reproducibility between instruments and operators.
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important as it not only predicts the properties of physically realizable novel materials, but can also accelerate advanced device designs. This work describes a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s∗ tight-binding approach to describe multi-million-atom structures and simulates their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on recent clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.
Vitamin E tocotrienol supplementation improves lipid profiles in chronic hemodialysis patients.
Daud, Zulfitri A Mat; Tubie, Boniface; Sheyman, Marina; Osia, Robert; Adams, Judy; Tubie, Sharon; Khosla, Pramod
2013-01-01
Chronic hemodialysis patients experience accelerated atherosclerosis contributed to by dyslipidemia, inflammation, and an impaired antioxidant system. Vitamin E tocotrienols possess anti-inflammatory and antioxidant properties. However, the impact of dietary intervention with Vitamin E tocotrienols is unknown in this population. A randomized, double-blind, placebo-controlled, parallel trial was conducted in 81 patients undergoing chronic hemodialysis. Subjects were provided daily with capsules containing either vitamin E tocotrienol-rich fraction (TRF) (180 mg tocotrienols, 40 mg tocopherols) or placebo (0.48 mg tocotrienols, 0.88 mg tocopherols). Endpoints included measurements of inflammatory markers (C-reactive protein and interleukin 6), oxidative status (total antioxidant power and malondialdehyde), lipid profiles (plasma total cholesterol, triacylglycerols, and high-density lipoprotein cholesterol), as well as cholesteryl-ester transfer protein activity and apolipoprotein A1. TRF supplementation did not impact any nutritional, inflammatory, or oxidative status biomarkers over time when compared with the baseline within the group (one-way repeated measures analysis of variance) or when compared with the placebo group at a particular time point (independent t-test). However, the TRF supplemented group showed improvement in lipid profiles after 12 and 16 weeks of intervention when compared with placebo at the respective time points. Normalized plasma triacylglycerols (cf baseline) in the TRF group were reduced by 33 mg/dL (P=0.032) and 36 mg/dL (P=0.072) after 12 and 16 weeks of intervention but no significant improvement was seen in the placebo group. Similarly, normalized plasma high-density lipoprotein cholesterol was higher (P<0.05) in the TRF group as compared with placebo at both week 12 and week 16. 
The changes in the TRF group at week 12 and week 16 were associated with higher plasma apolipoprotein A1 concentration (P<0.02) and lower cholesteryl-ester transfer protein activity (P<0.001). TRF supplementation improved lipid profiles in this study of maintenance hemodialysis patients. A multi-centered trial is warranted to confirm these observations.
Center Director Bridges visits Disability Awareness and Action working Group Technology Fair
NASA Technical Reports Server (NTRS)
1999-01-01
Center Director Roy Bridges stops at the Stewart Eye Institute table at the Disability Awareness and Action Working Group (DAAWG) 1999 Technology Fair being held Oct. 20-21 at Kennedy Space Center. Behind Bridges is Sterling Walker, director of Engineering Development at KSC and chairman of DAAWG. At the near right are George and Marian Hall, who are with the Institute. At the left is Nancie Strott, a multi-media specialist with Dynacs and chairperson of the Fair. The Fair is highlighting vendors demonstrating mobility, hearing, vision and silent disability assistive technology. The purpose is to create an awareness of the types of technology currently available to assist people with various disabilities in the workplace. The theme is that of this year's National Disability Employment Awareness Month, 'Opening Doors to Ability.' Some of the vendors participating are Canine Companions for Independence, Goodwill Industries, Accessible Structures, Division of Blind Services, Space Coast Center for Independent Living, KSC Fitness Center and Delaware North Parks Services.
Center Director Bridges visits Disability Awareness and Action working Group Technology Fair
NASA Technical Reports Server (NTRS)
1999-01-01
Center Director Roy Bridges stops to talk to one of the vendors at the Disability Awareness and Action Working Group (DAAWG) Technology Fair being held Oct. 20-21 at Kennedy Space Center. With him at the far left is Sterling Walker, director of Engineering Development at KSC and chairman of DAAWG, and Nancie Strott, a multi-media specialist with Dynacs and chairperson of the Fair; at the right is Carol Cavanaugh, with KSC Public Services. The Fair is highlighting vendors demonstrating mobility, hearing, vision and silent disability assistive technology. The purpose is to create an awareness of the types of technology currently available to assist people with various disabilities in the workplace. The theme is that of this year's National Disability Employment Awareness Month, 'Opening Doors to Ability.' Some of the vendors participating are Canine Companions for Independence, Goodwill Industries, Accessible Structures, Division of Blind Services, Space Coast Center for Independent Living, KSC Fitness Center and Delaware North Parks Services.
Center Director Bridges visits Disability Awareness and Action working Group Technology Fair
NASA Technical Reports Server (NTRS)
1999-01-01
Center Director Roy Bridges stops to pet one of the dogs that serves with Canine Companions for Independence, a vendor displaying its capabilities at the Disability Awareness and Action Working Group (DAAWG) 1999 Technology Fair being held Oct. 20-21 at Kennedy Space Center. Standing at the right is Carol Cavanaugh, with KSC Public Services; behind Bridges is Nancie Strott (left), a multi-media specialist with Dynacs and chairperson of the Fair, and Sterling Walker (right), director of Engineering Development and chairman of DAAWG. The Fair is highlighting vendors demonstrating mobility, hearing, vision and silent disability assistive technology. The purpose is to create an awareness of the types of technology currently available to assist people with various disabilities in the workplace. The theme is that of this year's National Disability Employment Awareness Month, 'Opening Doors to Ability.' Some of the other vendors participating are Goodwill Industries, Accessible Structures, Division of Blind Services, Space Coast Center for Independent Living, KSC Fitness Center and Delaware North Parks Services.
Embodied and Distributed Parallel DJing.
Cappelen, Birgitta; Andersson, Anders-Petter
2016-01-01
Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or to groups of persons with special needs performing in traditional ways. The latter might be people with disabilities performing as musicians on traditional instruments, or as actors in theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health-promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contributions, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments Empowering Multi-Sensorial Things.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data. PMID:22163811
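The library-instrumentation side of a run-time monitor can be sketched compactly. The snippet below is an illustrative Python stand-in, not the RTM's actual implementation: it wraps functions of interest, records call counts and cumulative wall time, and exposes a trivial "analysis" step that a real monitor would feed into its resource-optimization logic. All names here are invented for the sketch.

```python
import time
from collections import defaultdict

class RunTimeMonitor:
    # Minimal sketch of library-level instrumentation: wrap functions
    # of interest and record call counts and cumulative wall time,
    # standing in for the RTM's counters and analysis stage.
    def __init__(self):
        self.calls = defaultdict(int)
        self.elapsed = defaultdict(float)

    def instrument(self, fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                self.calls[fn.__name__] += 1
                self.elapsed[fn.__name__] += time.perf_counter() - start
        return wrapper

    def hottest(self):
        # The analysis step: report the function consuming the most
        # time, which a real RTM would use to re-tune resources.
        return max(self.elapsed, key=self.elapsed.get) if self.elapsed else None
```

In use, `monitor.instrument` replaces the original function reference, so existing call sites are monitored transparently, which is the essential property of library instrumentation.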
Zhang, Lina; Zhang, Zhiqin; Chen, Yangmei; Qin, Xinyue; Zhou, Huadong; Zhang, Chaodong; Sun, Hongbin; Tang, Ronghua; Zheng, Jinou; Yi, Lin; Deng, Liying; Li, Jinfang
2013-08-01
Rasagiline mesylate is a highly potent, selective and irreversible monoamine oxidase type B (MAO-B) inhibitor and is effective as monotherapy or as an adjunct to levodopa for patients with Parkinson's disease (PD). However, few studies have evaluated the efficacy and safety of rasagiline in the Chinese population. This study was designed to investigate the safety and efficacy of rasagiline as adjunctive therapy to levodopa treatment in Chinese PD patients. This was a randomized, double-blind, placebo-controlled, parallel-group, multi-centre trial conducted over a 12-week period that enrolled 244 PD patients with motor fluctuations. Participants were randomly assigned to oral rasagiline mesylate (1 mg) or placebo, once daily. Altogether, 219 patients completed the trial. Rasagiline showed significantly greater efficacy compared with placebo. During the treatment period, the primary efficacy variable, mean adjusted total daily off time, decreased from baseline by 1.7 h in patients treated with 1.0 mg/d rasagiline compared to placebo (p < 0.05). Scores on the Unified Parkinson's Disease Rating Scale also improved during rasagiline treatment. Rasagiline was well tolerated. This study demonstrated that rasagiline mesylate is effective and well tolerated as an adjunct to levodopa treatment in Chinese PD patients with motor fluctuations.
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to carry out objective function evaluations in a distributed-processor environment, which greatly improves the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
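The master-slave evaluation scheme is the part of such a design that parallelizes most naturally: each candidate remediation design is an independent objective-function evaluation. The sketch below is a hedged Python illustration using `multiprocessing` in place of MPI; the two toy objectives are invented placeholders for what would really be MODFLOW/MT3DMS simulation calls.

```python
from multiprocessing import Pool

def evaluate_objectives(design):
    # Hypothetical stand-in for the expensive simulation-based
    # objectives (e.g., remediation cost vs. residual contaminant
    # mass); a real NPTSGA run would call MODFLOW/MT3DMS here.
    cost = sum(rate * 2.0 for rate in design)    # pumping-cost proxy
    residual = 100.0 / (1.0 + sum(design))       # cleanup proxy
    return (cost, residual)

def evaluate_population(population, workers=4):
    # Master-slave scheme: the master farms out one candidate
    # solution per task; slaves evaluate objectives independently.
    with Pool(processes=workers) as pool:
        return pool.map(evaluate_objectives, population)
```

Since evaluations dominate the run time and do not interact, the speedup of such a scheme is limited mainly by load imbalance among slaves and the cost of distributing candidate designs.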
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
Besnier, Francois; Glover, Kevin A.
2013-01-01
This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared computing-time performance for this example data set on two computer architectures and showed that use of these functions can result in several-fold improvements in computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
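The job-farming idea behind such a wrapper is simple: enumerate every (K, replicate) combination and let idle workers pull jobs from the queue. The package itself is R; the sketch below expresses the same scheduling pattern in Python, and the function names and job parameters are invented for illustration (a real runner would launch the STRUCTURE binary with `subprocess.run`).

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_structure_job(job):
    # In real use this would launch the STRUCTURE binary via
    # subprocess.run([...]) for the given K value and replicate
    # number; the label returned here is a placeholder so the
    # scheduling logic can be demonstrated without STRUCTURE.
    k, rep = job
    return f"K={k} rep={rep} done"

def distribute_jobs(k_values, replicates, workers=2):
    # Every (K, replicate) combination is an independent job, so
    # idle workers pick up the next job as soon as they finish one.
    jobs = list(product(k_values, replicates))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_structure_job, jobs))
```

Because each STRUCTURE run is independent, this is embarrassingly parallel and the several-fold speedups reported above follow directly from the worker count, minus scheduling overhead.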
Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aktulga, Hasan Metin; Coffman, Paul; Shan, Tzu-Ray
2015-12-01
Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on a state-of-the-art multi-core architecture, Mira, an IBM BlueGene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.
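The two-level decomposition that defines MPI+OpenMP hybrid parallelism (processes across nodes, threads within a process) can be mimicked in a short sketch. This is not ReaxC-OMP code: it is a hedged Python illustration where `multiprocessing` plays the MPI role and a thread pool plays the OpenMP role, applied to a trivial reduction instead of force evaluation.

```python
from multiprocessing import Pool
from concurrent.futures import ThreadPoolExecutor

def thread_partial(bounds):
    # Innermost work unit, analogous to one OpenMP thread's chunk.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def rank_work(args):
    # One "MPI rank": splits its slice across threads, mimicking
    # the OpenMP parallel regions inside each process.
    lo, hi, nthreads = args
    step = max(1, (hi - lo) // nthreads)
    chunks = [(s, min(s + step, hi)) for s in range(lo, hi, step)]
    with ThreadPoolExecutor(max_workers=nthreads) as ex:
        return sum(ex.map(thread_partial, chunks))

def hybrid_sum(n, nprocs=2, nthreads=2, use_processes=True):
    # Outer decomposition across processes (the "MPI" level); a
    # serial fallback is kept for portability of the sketch.
    step = n // nprocs
    slices = [(r * step, n if r == nprocs - 1 else (r + 1) * step, nthreads)
              for r in range(nprocs)]
    if use_processes:
        with Pool(nprocs) as pool:
            return sum(pool.map(rank_work, slices))
    return sum(map(rank_work, slices))
```

The practical benefit mirrored here is the one the paper exploits: fewer processes per node means fewer replicated data structures (e.g., neighbor lists), while threads still use all cores.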
Groups in the radiative transfer theory
NASA Astrophysics Data System (ADS)
Nikoghossian, Arthur
2016-11-01
The paper presents a group-theoretical description of radiative transfer in inhomogeneous and multi-component atmospheres with plane-parallel geometry. It summarizes and generalizes the results obtained recently by the author for some standard transfer problems of astrophysical interest, allowing for the angle and frequency distributions of the radiation field. We introduce the concept of composition groups for media with different optical and physical properties. Group representations are derived for the two possible cases of illumination of a composite finite atmosphere. An algorithm for determining the reflectance and transmittance of inhomogeneous and multi-component atmospheres is described. The group theory is also applied to determining the field of radiation inside an inhomogeneous atmosphere. The concept of a group of optical depth translations is introduced. The developed theory is illustrated with the problem of radiation diffusion with partial frequency redistribution, assuming that the inhomogeneity is due to a depth variation of the scattering coefficient. It is shown that once the reflectance and transmittance of a medium are determined, the internal field of radiation in the source-free atmosphere is found without solving any new equations. The transfer problems for a semi-infinite atmosphere and an atmosphere with internal sources of energy are discussed. The developed theory allows the derivation of summation laws for the mean number of scattering events undergone by photons in the course of diffusion in the atmosphere.
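As a concrete illustration of the composition idea, consider the textbook "adding" relations for two homogeneous layers with symmetric reflectances $R_1, R_2$ and transmittances $T_1, T_2$. These formulas are a standard special case quoted here for orientation, not taken from the paper; for non-symmetric inhomogeneous layers the reflectances for illumination from each side must be distinguished:

\[
T_{12} = \frac{T_1 T_2}{1 - R_1 R_2}, \qquad
R_{12} = R_1 + \frac{T_1^{2} R_2}{1 - R_1 R_2}.
\]

The factor $(1 - R_1 R_2)^{-1}$ resums the geometric series of multiple inter-reflections between the two layers, and repeated application of these relations is what gives layer composition its group-like structure.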
Multi-Institution Research Centers: Planning and Management Challenges
ERIC Educational Resources Information Center
Spooner, Catherine; Lavey, Lisa; Mukuka, Chilandu; Eames-Brown, Rosslyn
2016-01-01
Funding multi-institution centers of research excellence (CREs) has become a common means of supporting collaborative partnerships to address specific research topics. However, there is little guidance for those planning or managing a multi-institution CRE, which faces specific challenges not faced by single-institution research centers. We…
Jeong, Ji Yun; Jeon, Jae-Han; Bae, Kwi-Hyun; Choi, Yeon-Kyung; Park, Keun-Gyu; Kim, Jung-Guk; Won, Kyu Chang; Cha, Bong Soo; Ahn, Chul Woo; Kim, Dong Won; Lee, Chang Hee; Lee, In-Kyu
2018-01-17
This study was performed to determine the effectiveness of the Smart Care service on glucose control based on telemedicine and telemonitoring compared with conventional treatment in patients with type 2 diabetes. This 24-week prospective multi-center randomized controlled trial involved 338 adult patients with type 2 diabetes at four university hospitals in South Korea. The patients were randomly assigned to a control group (group A, n = 113), a telemonitoring group (group B, n = 113), or a telemedicine group (group C, n = 112). Patients in the telemonitoring group visited the outpatient clinic regularly, accompanied by an additional telemonitoring service that included remote glucose monitoring with automated patient decision support by text. Remote glucose monitoring was identical in the telemedicine group, but assessment by outpatient visits was replaced by video conferencing with an endocrinologist. The adjusted net reductions in HbA1c concentration after 24 weeks were similar in the conventional, telemonitoring, and telemedicine groups (-0.66% ± 1.03% vs. -0.66% ± 1.09% vs. -0.81% ± 1.05%; p > 0.05 for each pairwise comparison). Fasting glucose concentrations were lower in the telemonitoring and telemedicine groups than in the conventional group. Rates of hypoglycemia were lower in the telemedicine group than in the other two groups, and compliance with medication was better in the telemonitoring and telemedicine than in the conventional group. No serious adverse events were associated with telemedicine. Telehealthcare was as effective as conventional care at improving glycemia in patients with type 2 diabetes without serious adverse effects.
Shah, Dipali Yogesh; Wadekar, Swati Ishwara; Dadpe, Ashwini Manish; Jadhav, Ganesh Ranganath; Choudhary, Lalit Jayant; Kalra, Dheeraj Deepak
2017-01-01
Context and Aims: The purpose of this study was to compare and evaluate the shaping ability of ProTaper (PT) and Self-Adjusting File (SAF) systems using cone-beam computed tomography (CBCT) to assess their performance in oval-shaped root canals. Materials and Methods: Sixty-two mandibular premolars with single oval canals were divided into two experimental groups (n = 31) according to the system used: Group I, PT; Group II, SAF. Canals were evaluated before and after instrumentation using CBCT to assess centering ratio and canal transportation at three levels. Data were statistically analyzed using one-way analysis of variance, post hoc Tukey's test, and t-test. Results: The SAF showed better centering ability and less canal transportation than the PT only in the buccolingual plane at the 6 and 9 mm levels. The shaping ability of the PT was best in the apical third in both planes. The SAF had statistically significantly better centering and less canal transportation in the buccolingual as compared to the mesiodistal plane at the middle and coronal levels. Conclusions: The SAF produced significantly less transportation and remained more centered than the PT at the middle and coronal levels in the buccolingual plane of oval canals. In the mesiodistal plane, the performance of the two systems was comparable. PMID:28855757
Real-time multi-mode neutron multiplicity counter
Rowland, Mark S; Alvarez, Raymond A
2013-02-26
Embodiments are directed to a digital data acquisition method that collects data regarding nuclear fission at high rates and performs real-time preprocessing of large volumes of data into directly useable forms for use in a system that performs non-destructive assaying of nuclear material and assemblies for mass and multiplication of special nuclear material (SNM). Pulses from a multi-detector array are fed in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel, to reduce the latency associated with current shift-register systems. The word is read at regular intervals, all bits simultaneously, with no manipulation. The word is passed to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup. The word is used simultaneously in several internal processing schemes that assemble the data in a number of more directly useable forms. The detector includes a multi-mode counter that executes a number of different count algorithms in parallel to determine different attributes of the count data.
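The word-level data path described above (one detector channel per bit, words sampled at regular intervals, several counting schemes run on the same words) can be modeled in a few lines. The sketch below is a hedged software analogue of the hardware scheme; channel counts and function names are illustrative, not taken from the patent.

```python
def pack_word(hits, n_detectors=32):
    # Each detector channel is tied to one bit of the word; `hits`
    # is the set of channels that pulsed during this interval.
    word = 0
    for ch in hits:
        if 0 <= ch < n_detectors:
            word |= 1 << ch
    return word

def multiplicity(word):
    # Number of detectors that fired in the interval = set bits.
    return bin(word).count("1")

def multiplicity_histogram(words, n_detectors=32):
    # One of several parallel counting schemes the hardware could
    # run: a histogram of detectors fired per readout interval.
    hist = [0] * (n_detectors + 1)
    for w in words:
        hist[multiplicity(w)] += 1
    return hist
```

Reading all bits of the word simultaneously, as the hardware does, is what removes the pulse-pileup and shift-register latency problems: two pulses in the same interval simply set two different bits.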
NASA Astrophysics Data System (ADS)
Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain
2014-11-01
Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
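The core difficulty of load balancing a multi-block grid is assigning blocks of unequal size to nodes so no node dominates the step time. A common baseline is the greedy longest-processing-time heuristic, sketched below; this is an illustrative stand-in, not the solver's algorithm, and the additional meta-block step (splitting oversized blocks before assignment) is deliberately omitted.

```python
import heapq

def balance_blocks(block_sizes, n_nodes):
    # Greedy longest-processing-time assignment: sort blocks by
    # size, always give the next block to the least-loaded node.
    # A "meta-block" scheme would first split oversized blocks so
    # that no single block exceeds the target load per node.
    loads = [(0, node, []) for node in range(n_nodes)]
    heapq.heapify(loads)
    for idx, size in sorted(enumerate(block_sizes),
                            key=lambda p: -p[1]):
        load, node, blocks = heapq.heappop(loads)
        blocks.append(idx)
        heapq.heappush(loads, (load + size, node, blocks))
    return sorted((node, load, blocks) for load, node, blocks in loads)
```

The virtual-partitioning idea follows naturally: because splitting happens internally, the user-visible grid topology can stay unpartitioned while the runtime works with as many sub-blocks as the node count requires.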
SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.
Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi
2018-01-01
Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) is a scalable and programmable parallel simulation platform that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution is a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities.
Self-regulated learning and self-directed study in a pre-college sample
Abar, Beau; Loken, Eric
2009-01-01
Self-regulated learning (SRL) is a multi-dimensional construct that has been difficult to operationalize using traditional, variable-centered methodologies. The current paper takes a person-centered approach to the study of SRL in a sample of 205 high-school students. Using latent profile analysis on self-reports of seven aspects of SRL, three groups were identified: high SRL, low SRL, and average SRL. Student self-reports of goal orientation were used as validation for the profile solution, with the high academic self-regulation group reporting the highest levels of mastery orientation while the low self-regulation group reported the highest levels of avoidant orientation. Profiles were also compared on independently collected behavioral measures of study behaviors, with the highly self-regulated group tending to study more material and for a longer time than less self-regulated individuals. PMID:20161484
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the demands of real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers at each frequency segment, calculated with multiple processes, are then combined into the whole frequency response. Numerical experiments show that the parallel algorithm can improve the efficiency of acoustic simulation for complex scenes.
Stakeholder engagement in dredged material management decisions.
Collier, Zachary A; Bates, Matthew E; Wood, Matthew D; Linkov, Igor
2014-10-15
Dredging and disposal issues often become controversial with local stakeholders because of their competing interests. These interests tend to manifest themselves in stakeholders holding onto entrenched positions, and deadlock can result without a methodology to move the stakeholder group past the status quo. However, these situations can be represented as multi-stakeholder, multi-criteria decision problems. In this paper, we describe a case study in which multi-criteria decision analysis was implemented in a multi-stakeholder setting in order to generate recommendations on dredged material placement for Long Island Sound's Dredged Material Management Plan. A working-group of representatives from various stakeholder organizations was formed and consulted to help prioritize sediment placement sites for each dredging center in the region by collaboratively building a multi-criteria decision model. The resulting model framed the problem as several alternatives, criteria, sub-criteria, and metrics relevant to stakeholder interests in the Long Island Sound region. An elicitation of values, represented as criteria weights, was then conducted. Results show that in general, stakeholders tended to agree that all criteria were at least somewhat important, and on average there was strong agreement on the order of preferences among the diverse groups of stakeholders. By developing the decision model iteratively with stakeholders as a group and soliciting their preferences, the process sought to increase stakeholder involvement at the front-end of the prioritization process and lead to increased knowledge and consensus regarding the importance of site-specific criteria. Published by Elsevier B.V.
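The prioritization approach described above (criteria, weights elicited from stakeholders, alternatives scored and ranked) can be sketched as a simple weighted-sum multi-criteria model. The site names, criteria, and weights below are invented for illustration and are not taken from the Long Island Sound study:

```python
# Hypothetical weighted-sum multi-criteria scoring -- a simplified sketch of
# the kind of decision model described above; all names and numbers here are
# invented for illustration.

def rank_alternatives(scores, weights):
    """scores: {alternative: {criterion: value in [0, 1]}};
    weights: {criterion: weight}. Returns alternatives best-first."""
    total_w = sum(weights.values())
    norm = {c: w / total_w for c, w in weights.items()}   # normalize weights
    overall = {
        alt: sum(norm[c] * v for c, v in crit.items())
        for alt, crit in scores.items()
    }
    return sorted(overall, key=overall.get, reverse=True)

sites = {
    "Site A": {"cost": 0.8, "environment": 0.4, "capacity": 0.9},
    "Site B": {"cost": 0.5, "environment": 0.9, "capacity": 0.6},
}
ranking = rank_alternatives(sites, {"cost": 1.0, "environment": 2.0, "capacity": 1.0})
```

Eliciting the weights from stakeholders, as in the case study, is what turns this simple arithmetic into a group decision process.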
Mueller-Stierlin, Annabel Sandra; Helmbrecht, Marina Julia; Herder, Katrin; Prinz, Stefanie; Rosenfeld, Nadine; Walendzik, Julia; Holzmann, Marco; Dinc, Uemmueguelsuem; Schützwohl, Matthias; Becker, Thomas; Kilian, Reinhold
2017-08-01
The Network for Mental Health (NWpG-IC) is an integrated mental health care program implemented in 2009 by cooperation between health insurance companies and community mental health providers in Germany. Meanwhile about 10,000 patients have been enrolled. This is the first study evaluating the effectiveness of the program in comparison to standard mental health care in Germany. In a parallel-group controlled trial over 18 months conducted in five regions across Germany, a total of 260 patients enrolled in NWpG-IC and 251 patients in standard mental health care (TAU) were recruited between August 2013 and November 2014. The NWpG-IC patients had access to special services such as community-based multi-professional teams, case management, crisis intervention and family-oriented psychoeducation in addition to standard mental health care. The primary outcome empowerment (EPAS) and the secondary outcomes quality of life (WHO-QoL-BREF), satisfaction with psychiatric treatment (CSQ-8), psychosocial and clinical impairment (HoNOS) and information about mental health service needs (CAN) were measured four times at 6-month intervals. Linear mixed-effect regression models were used to estimate the main effects and interaction effects of treatment, time and primary diagnosis. Due to the non-randomised group assignment, propensity score adjustment was used to control the selection bias. NWpG-IC and TAU groups did not differ with respect to most primary and secondary outcomes in our participating patients who showed a broad spectrum of psychiatric diagnoses and illness severities. However, a significant improvement in terms of patients' satisfaction with psychiatric care and their perception of treatment participation in favour of the NWpG-IC group was found. 
Providing integrated mental health care for unspecific target groups of mentally ill patients increases treatment participation and service satisfaction but does not appear to enhance the overall outcomes of mental health care in Germany. The implementation of strategies to improve the needs orientation of the NWpG-IC should be considered. German Clinical Trial Register DRKS00005111, registered 26 July 2013.
NASA Technical Reports Server (NTRS)
Houck, J. A.; Markos, A. T.
1980-01-01
This paper describes the work being done at the National Aeronautics and Space Administration's (NASA) Langley Research Center on the development of a multi-media crew-training program for the Terminal Configured Vehicle (TCV) Mission Simulator. Brief descriptions of the goals and objectives of the TCV Program and of the TCV Mission Simulator are presented. A detailed description of the training program is provided along with a description of the performance of the first group of four commercial pilots to be qualified in the TCV Mission Simulator.
A high-order language for a system of closely coupled processing elements
NASA Technical Reports Server (NTRS)
Feyock, S.; Collins, W. R.
1986-01-01
The research reported in this paper was occasioned by the requirements of the Real-Time Digital Simulator (RTDS) project under way at NASA Lewis Research Center. The RTDS simulation scheme employs a network of CPUs running lock-step cycles in the parallel computation of jet airplane simulations. The need for a high-order language (HOL) that would allow non-experts to write simulation applications and that could be implemented on a possibly varying network can best be fulfilled by using the programming language Ada. We describe how the simulation problems can be modeled in Ada, how to map a single, multi-processing Ada program into code for individual processors regardless of network reconfiguration, and why some Ada language features are particularly well-suited to network simulations.
Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python
Laura, Jason R.; Rey, Sergio J.
2017-01-01
Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
Firouzi, Somayyeh; Majid, Hazreen Abdul; Ismail, Amin; Kamaruddin, Nor Azmi; Barakatun-Nisak, Mohd-Yusof
2017-06-01
Evidence of a possible connection between gut microbiota and several physiological processes linked to type 2 diabetes is increasing. However, the effect of multi-strain probiotics in people with type 2 diabetes remains unclear. This study investigated the effect of a multi-strain microbial cell preparation (also referred to as multi-strain probiotics) on glycemic control and other diabetes-related outcomes in people with type 2 diabetes. A randomized, double-blind, parallel-group, controlled clinical trial. Diabetes clinic of a teaching hospital in Kuala Lumpur, Malaysia. A total of 136 participants with type 2 diabetes, aged 30-70 years, were recruited and randomly assigned to receive either probiotics (n = 68) or placebo (n = 68) for 12 weeks. Primary outcomes were glycemic control-related parameters, and secondary outcomes were anthropometric variables, lipid profile, blood pressure and high-sensitivity C-reactive protein. The Lactobacillus and Bifidobacterium quantities were measured before and after intervention as an indicator of successful passage of the supplement through the gastrointestinal tract. Intention-to-treat (ITT) analysis was performed on all participants, while per-protocol (PP) analysis was performed on those participants who had successfully completed the trial with a good compliance rate. With respect to primary outcomes, glycated hemoglobin decreased by 0.14 % in the probiotics group and increased by 0.02 % in the placebo group in the PP analysis (p < 0.05, small effect size of 0.050), while these changes were not significant in the ITT analysis. Fasting insulin increased by 1.8 µU/mL in the placebo group and decreased by 2.9 µU/mL in the probiotics group in the PP analysis. These changes were significant between groups in both analyses (p < 0.05, medium effect size of 0.062 in the PP analysis and small effect size of 0.033 in the ITT analysis). Secondary outcomes did not change significantly. Probiotics successfully passed through the gastrointestinal tract.
Probiotics modestly improved HbA1c and fasting insulin in people with type 2 diabetes.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Notice Correction; A Multi-Center International Hospital-Based Case-Control Study of Lymphoma in Asia (AsiaLymph) (NCI) The Federal... project titled, ``A multi-center international hospital-based case-control study of lymphoma in Asia (Asia...
Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors
NASA Astrophysics Data System (ADS)
Yi, Hongsuk
2014-03-01
Many modern processors are capable of exploiting data-level parallelism through the use of single instruction, multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high-performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multi-level parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementation on the Xeon Phi is clearly superior to that on the x86 CPU architecture.
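As a rough analogy to the SIMD execution discussed above, the same arithmetic can be written one element at a time or over whole arrays; NumPy dispatches the array form to compiled kernels that can use vector registers. The pair-force formula below is a Lennard-Jones-style stand-in chosen for brevity, not the Tersoff potential used in the paper:

```python
import numpy as np

# Data-level parallelism sketch: a scalar loop vs. a vectorized array form.
# Illustration only -- the force law is a Lennard-Jones-style stand-in, not
# the Tersoff potential of the paper.

def lj_force_scalar(r):
    # Pair-force magnitude computed one distance at a time
    return [24.0 * (2.0 / d**13 - 1.0 / d**7) for d in r]

def lj_force_vector(r):
    # The same arithmetic applied to the whole array at once
    r = np.asarray(r)
    return 24.0 * (2.0 / r**13 - 1.0 / r**7)

distances = [1.0, 1.2, 1.5]
assert np.allclose(lj_force_scalar(distances), lj_force_vector(distances))
```

The point of the analogy: keeping the inner loop free of branches and data dependencies, as in `lj_force_vector`, is what lets SIMD hardware (or a vectorizing compiler) process many lanes per instruction.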
The Goddard Space Flight Center Program to develop parallel image processing systems
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1972-01-01
Parallel image processing which is defined as image processing where all points of an image are operated upon simultaneously is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.
Laser Amplifier Development for the Remote Sensing of CO2 from Space
NASA Technical Reports Server (NTRS)
Yu, Anthony W.; Abshire, James B.; Storm, Mark; Betin, Alexander
2015-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are sampled at typically a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product due to the approximately 40x longer range and faster along-track velocity compared to the airborne instrument. Initial link-budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope and a 10 second integration time, approximately 2 mJ of laser energy is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power-scaling efforts to enable space-based lidar measurement of CO2 concentrations. These include a multiple-aperture approach consisting of multi-element large-mode-area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate glass based planar waveguide amplifier (PWA). In this paper we present our laser amplifier design approaches and preliminary results.
Riegman, Peter H J; de Jong, Bas W D; Llombart-Bosch, Antonio
2010-04-01
Today's translational cancer research increasingly depends on international multi-center studies. Biobanking infrastructure or comprehensive sample exchange platforms to enable networking of clinical cancer biobanks are instrumental to facilitate communication, uniform sample quality, and rules for exchange. The Organization of European Cancer Institutes (OECI) Pathobiology Working Group supports European biobanking infrastructure by maintaining the OECI-TuBaFrost exchange platform and organizing regular meetings. This platform originated from a European Commission project and is updated with knowledge from ongoing and new biobanking projects. This overview describes how European biobanking projects that have a large impact on clinical biobanking, including EuroBoNeT, SPIDIA, and BBMRI, contribute to the update of the OECI-TuBaFrost exchange platform. Combining the results of these European projects enabled the creation of an open (upon valid registration only) catalogue view of cancer biobanks and their available samples to initiate research projects. In addition, closed environments supporting active projects could be developed together with the latest views on quality, access rules, ethics, and law. With these contributions, the OECI Pathobiology Working Group contributes to and stimulates a professional attitude within biobanks at the European comprehensive cancer centers. Improving the fundamentals of cancer sample exchange in Europe stimulates the performance of large multi-center studies, resulting in experiments with the desired statistical significance outcome. With this approach, future innovation in cancer patient care can be realized faster and more reliably.
Mendelow, A. David; Rowan, Elise N.; Francis, Richard; McColl, Elaine; McNamee, Paul; Chambers, Iain R.; Unterberg, Andreas; Boyers, Dwayne; Mitchell, Patrick M.
2015-01-01
Abstract Intraparenchymal hemorrhages occur in a proportion of severe traumatic brain injury (TBI) patients, but the role of surgery in their treatment is unclear. This international multi-center, patient-randomized, parallel-group trial compared early surgery (hematoma evacuation within 12 h of randomization) with initial conservative treatment (subsequent evacuation allowed if deemed necessary). Patients were randomized using an independent randomization service within 48 h of TBI. Patients were eligible if they had no more than two intraparenchymal hemorrhages of 10 mL or more and did not have an extradural or subdural hematoma that required surgery. The primary outcome measure was the traditional dichotomous split of the Glasgow Outcome Scale obtained by postal questionnaires sent directly to patients at 6 months. The trial was halted early by the UK funding agency (NIHR HTA) for failure to recruit sufficient patients from the UK (trial registration: ISRCTN19321911). A total of 170 patients were randomized from 31 of 59 registered centers worldwide. Of 82 patients randomized to early surgery with complete follow-up, 30 (37%) had an unfavorable outcome. Of 85 patients randomized to initial conservative treatment with complete follow-up, 40 (47%) had an unfavorable outcome (odds ratio, 0.65; 95% confidence interval (CI), 0.35-1.21; p=0.17), with an absolute benefit of 10.5% (CI, −4.4 to 25.3%). There were significantly more deaths in the first 6 months in the initial conservative treatment group (33% vs. 15%; p=0.006). The 10.5% absolute benefit with early surgery was consistent with the initial power calculation. However, with the low sample size resulting from the premature termination, we cannot exclude the possibility that this could be a chance finding. A further trial is required urgently to assess whether this encouraging signal can be confirmed. PMID:25738794
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
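For reference, the direct-method GSSA that the paper accelerates can be sketched serially for the simplest possible model, first-order decay A → B with rate k; the GPU warp-level data structures and multi-trajectory scheduling themselves are not reproduced here:

```python
import random

# Minimal serial Gillespie SSA (direct method) for the reaction A -> B with
# rate constant k -- a reference sketch of the algorithm the paper
# accelerates on GPUs, not the GPU implementation itself.

def gillespie_decay(n_a, k, t_end, seed=42):
    """Simulate A -> B; return (times, counts) trajectory of species A."""
    rng = random.Random(seed)                 # seeded for reproducibility
    t, times, counts = 0.0, [0.0], [n_a]
    while n_a > 0:
        propensity = k * n_a                  # total reaction propensity
        t += rng.expovariate(propensity)      # exponential waiting time
        if t > t_end:
            break
        n_a -= 1                              # fire the single reaction
        times.append(t)
        counts.append(n_a)
    return times, counts

times, counts = gillespie_decay(n_a=100, k=0.5, t_end=50.0)
```

With more than one reaction channel, the direct method additionally draws a second random number to pick which reaction fires in proportion to its propensity; it is that per-step sampling that the paper maps onto warps.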
Multi-stage separations based on dielectrophoresis
Mariella, Jr., Raymond P.
2004-07-13
A system utilizing multi-stage traps based on dielectrophoresis. Traps with electrodes arranged transverse to the flow and traps with electrodes arranged parallel to the flow with combinations of direct current and alternating voltage are used to trap, concentrate, separate, and/or purify target particles.
NASA Technical Reports Server (NTRS)
Waller, Marvin C. (Editor); Scanlon, Charles H. (Editor)
1996-01-01
A Government and Industry workshop on Flight-Deck-Centered Parallel Runway Approaches in Instrument Meteorological Conditions (IMC) was conducted October 29, 1996 at the NASA Langley Research Center. This document contains the slides and records of the proceedings of the workshop. The purpose of the workshop was to disclose to the National airspace community the status of ongoing NASA R&D to address the closely spaced parallel runway problem in IMC and to seek advice and input on direction of future work to assure an optimized research approach. The workshop also included a description of a Paired Approach Concept which is being studied at United Airlines for application at the San Francisco International Airport.
Parallel software tools at Langley Research Center
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Tennille, Geoffrey M.; Lakeotes, Christopher D.; Randall, Donald P.; Arthur, Jarvis J.; Hammond, Dana P.; Mall, Gerald H.
1993-01-01
This document gives a brief overview of parallel software tools available on the Intel iPSC/860 parallel computer at Langley Research Center. It is intended to provide a source of information that is somewhat more concise than vendor-supplied material on the purpose and use of various tools. Each of the chapters on tools is organized in a similar manner covering an overview of the functionality, access information, how to effectively use the tool, observations about the tool and how it compares to similar software, known problems or shortfalls with the software, and reference documentation. It is primarily intended for users of the iPSC/860 at Langley Research Center and is appropriate for both the experienced and novice user.
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2016-08-18
In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDE's using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes such as in multigrid solvers/preconditioners, to do solution convergence and verification studies and to improve overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and in memory refinement). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection in order to minimize and provide more control on geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al. (2004)). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
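The uniform refinement step at the heart of such nested hierarchies can be sketched for a 2D triangle mesh: each triangle is split 1-to-4 by inserting edge midpoints. This is a minimal serial illustration, not MOAB's array-based data structures or parallel machinery:

```python
# Uniform 1-to-4 refinement of a 2D triangle mesh -- a minimal serial
# sketch of nested-hierarchy generation; MOAB's actual array-based storage
# and parallel algorithms are not reproduced here.

def refine_uniform(vertices, triangles):
    """Split each triangle into 4 by inserting edge midpoints.
    vertices: list of (x, y); triangles: list of (i, j, k) indices."""
    verts = list(vertices)
    midpoint_of = {}                    # edge (lo, hi) -> new vertex index

    def midpoint(i, j):
        key = (min(i, j), max(i, j))    # shared edges get one midpoint
        if key not in midpoint_of:
            (xa, ya), (xb, yb) = verts[i], verts[j]
            verts.append(((xa + xb) / 2.0, (ya + yb) / 2.0))
            midpoint_of[key] = len(verts) - 1
        return midpoint_of[key]

    fine = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # three corner triangles plus the central one
        fine += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, fine

verts, tris = refine_uniform([(0, 0), (1, 0), (0, 1)], [(0, 1, 2)])
```

Applying `refine_uniform` repeatedly yields the nested coarse-to-fine hierarchy that multigrid solvers traverse; the midpoint dictionary plays the role of the shared-entity resolution that a parallel implementation must do across processor boundaries.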
pyPaSWAS: Python-based multi-core CPU and GPU sequence alignment.
Warris, Sven; Timal, N Roshan N; Kempenaar, Marcel; Poortinga, Arne M; van de Geest, Henri; Varbanescu, Ana L; Nap, Jan-Peter
2018-01-01
Our previously published CUDA-only application PaSWAS for Smith-Waterman (SW) sequence alignment of any type of sequence on NVIDIA-based GPUs is platform-specific and was therefore adopted less widely than it could have been. The OpenCL language is supported more widely and allows use on a variety of hardware platforms. Moreover, there is a need to promote the adoption of parallel computing in bioinformatics by making its use and extension simpler through more and better application of high-level languages commonly used in bioinformatics, such as Python. The novel application pyPaSWAS presents the parallel SW sequence alignment code fully packed in Python. It is a generic SW implementation running on several hardware platforms with multi-core systems and/or GPUs that provides accurate sequence alignments that can also be inspected for alignment details. Additionally, pyPaSWAS supports the affine gap penalty. Python libraries are used for automated system configuration, I/O and logging. This way, the Python environment will stimulate further extension and use of pyPaSWAS. pyPaSWAS presents an easy Python-based environment for accurate and retrievable parallel SW sequence alignments on GPUs and multi-core systems. The strategy of integrating Python with high-performance parallel compute languages to create a developer- and user-friendly environment should be considered for other computationally intensive bioinformatics algorithms.
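The Smith-Waterman recurrence that pyPaSWAS parallelizes can be sketched serially. For brevity this sketch uses a linear gap penalty rather than the affine gaps pyPaSWAS supports, and it is plain scalar Python rather than the tool's OpenCL kernels:

```python
# Minimal serial Smith-Waterman (local alignment) with a linear gap
# penalty -- a reference sketch of the recurrence pyPaSWAS parallelizes,
# not the pyPaSWAS implementation (which also supports affine gaps).

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]     # DP matrix, border is zero
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment: scores are clamped at zero
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

score = smith_waterman_score("ACACACTA", "AGCACACA")   # classic example -> 12
```

GPU implementations exploit the fact that all cells on the same anti-diagonal of `h` are independent and can be computed in parallel; the serial double loop above hides that structure.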
Kimoto, Suguru; Kawai, Yasuhiko; Gunji, Atsuko; Kondo, Hisatomo; Nomura, Taro; Murakami, Tomohiko; Tsuboi, Akito; Hong, Guang; Minakuchi, Shunsuke; Sato, Yusuke; Ohwada, Gaku; Suzuki, Tetsuya; Kimoto, Katsuhiko; Hoshi, Noriyuki; Saita, Makiko; Yoneyama, Yoshikazu; Sato, Yohei; Morokuma, Masakazu; Okazaki, Joji; Maeda, Takeshi; Nakai, Kenichiro; Ichikawa, Tetsuo; Nagao, Kan; Fujimoto, Keiko; Murata, Hiroshi; Kurogi, Tadafumi; Yoshida, Kazuhiro; Nishimura, Masahiro; Nishi, Yasuhiro; Murakami, Mamoru; Hosoi, Toshio; Hamada, Taizo
2016-10-18
Denture adhesives, characterized as medical products in 1935 by the American Dental Association, have been considered useful adjuncts for improving denture retention and stability. However, many dentists in Japan are hesitant to acknowledge denture adhesives in daily practice because of the stereotype that dentures should be inherently stable, without the aid of adhesives. The aim of this study is to verify the efficacy of denture adhesives to establish guidelines for Japanese users. The null hypothesis is that the application of denture adhesives, including the cream and powder types, or a control (isotonic sodium chloride solution) would not produce different outcomes nor would they differentially improve the set outcomes between baseline and day 4 post-application. This ten-center, randomized controlled trial with parallel groups is ongoing. Three hundred edentulous patients with complete dentures will be allocated to three groups (cream-type adhesive, powder-type adhesive, and control groups). The participants will wear their dentures with the denture adhesive for 4 days, including during eight meals (three breakfasts, two lunches, and three dinners). The baseline measurements and final measurements for the denture adhesives will be performed on the first day and after breakfast on the fourth day. The primary outcome is a general satisfaction rating for the denture. The secondary outcomes are denture satisfaction ratings for various denture functions, occlusal bite force, resistance to dislodgement, masticatory performance, perceived chewing ability, and oral health-related quality of life. Between-subjects comparisons among the three groups and within-subjects comparisons of the pre- and post-intervention measurements will be performed. Furthermore, a multiple regression analysis will be performed. The main analyses will be based on the intention-to-treat principle. 
A sample size of 100 subjects per group, including an assumed dropout rate of 10 %, will be required to achieve 80 % power with a 5 % alpha level. This randomized clinical trial will provide information about denture adhesives to complete denture wearers, prosthodontic educators, and dentists in Japan. We believe this new evidence on denture adhesive use from Japan will aid dentists in their daily practice even in other countries. ClinicalTrials.gov NCT01712802 . Registered on 17 October 2012.
[Pancreatoduodenectomy: learning curve within single multi-field center].
Kaprin, A D; Kostin, A A; Nikiforov, P V; Egorov, V I; Grishin, N A; Lozhkin, M V; Petrov, L O; Bykasov, S A; Sidorov, D V
2018-01-01
To analyze the learning curve using the immediate results of pancreatoduodenectomy at a multi-field oncology institute. Over the period 2010-2016, 120 pancreatoduodenal resections were consecutively performed at the Abdominal Oncology Department of the Herzen Moscow Oncology Research Institute. All patients were divided into two groups: the first 60 procedures (group A) and the subsequent 60 operations (group B). Notably, the first 60 operations were performed within the first 4.5 years of the study period, and the next 60 within the remaining 2.5 years. Learning curves showed significantly different intraoperative blood loss (1100 ml vs. 725 ml), surgery time (589 min vs. 513 min) and postoperative hospital stay (15 days vs. 13 days), with gradual improvement of these values in group B. The incidence of a negative resection margin (R0) also improved significantly over the last 60 operations (70% vs. 92%, respectively). Although pancreatoduodenectomy is one of the most difficult interventions in abdominal surgery, the learning curve will differ from one surgeon to another.
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.
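The underlying statistic is straightforward to state in serial form; the sketch below illustrates sample autocorrelation only, and is not VTK's engine or its C++ API:

```python
# Serial sample autocorrelation -- the statistic the parallel VTK engine
# computes; an illustration only, not VTK's implementation or API.

def autocorrelation(x, lag):
    """Sample autocorrelation of sequence x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

signal = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0]
r0 = autocorrelation(signal, 0)   # lag 0 is always 1.0 by construction
r1 = autocorrelation(signal, 1)   # smooth signal -> strong lag-1 correlation
```

A parallel engine distributes the sums for the mean, variance, and lagged covariance across ranks and reduces the partial aggregates, which is why the computation scales well.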
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to topics such as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solution of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
Myer, Gregory D; Wordeman, Samuel C; Sugimoto, Dai; Bates, Nathaniel A; Roewer, Benjamin D; Medina McKeon, Jennifer M; DiCesare, Christopher A; Di Stasi, Stephanie L; Barber Foss, Kim D; Thomas, Staci M; Hewett, Timothy E
2014-05-01
Multi-center collaborations provide a powerful alternative to overcome the inherent limitations of single-center investigations. Specifically, multi-center projects can support large-scale prospective, longitudinal studies that investigate relatively uncommon outcomes, such as anterior cruciate ligament injury. This project was conceived to assess within- and between-center reliability of an affordable, clinical nomogram utilizing two-dimensional video methods to screen for risk of knee injury. The authors hypothesized that the two-dimensional screening methods would provide good-to-excellent reliability within and between institutions for assessment of frontal and sagittal plane biomechanics. Nineteen female high school athletes participated. Two-dimensional video kinematics of the lower extremity during a drop vertical jump task were collected on all 19 study participants at each of the three facilities. Within-center and between-center reliability were assessed with intra- and inter-class correlation coefficients. Within-center reliability of the clinical nomogram variables was consistently excellent, but between-center reliability was fair-to-good. The within-center intra-class correlation coefficient for all nomogram variables combined was 0.98, while the combined between-center inter-class correlation coefficient was 0.63. Injury risk screening protocols were reliable within and repeatable between centers. These results demonstrate the feasibility of multi-site biomechanical studies and establish a framework for further dissemination of injury risk screening algorithms. Specifically, multi-center studies may allow for further validation and optimization of two-dimensional video screening tools. Level of evidence: 2b.
2013-01-01
Background Pressure ulcers are considered an important issue, mainly affecting immobilized older patients. These pressure ulcers increase the care burden for the professional health service staff as well as pharmaceutical expenditure. There are a number of studies on the effectiveness of different products used for the prevention of pressure ulcers; however, most of these studies were carried out at a hospital level, basically using hyperoxygenated fatty acids (HOFA). There are no studies focused specifically on the use of olive-oil-based products, and this research is therefore intended to find the most cost-effective treatment and establish an alternative treatment. Methods/design The main objective is to assess the effectiveness of olive oil, comparing it with HOFA, to treat immobilized patients at home who are at risk of pressure ulcers. As a secondary objective, the cost-effectiveness balance of this new application with regard to the HOFA will be assessed. The study is designed as a noninferiority, triple-blinded, parallel, multi-center, randomized clinical trial. The scope of the study is the population attending primary health centers in Andalucía (Spain) in the regional areas of Malaga, Granada, Seville, and Cadiz. Immobilized patients at risk of pressure ulcers will be targeted. The target group will be treated by application of an olive-oil-based formula, whereas the control group will be treated by application of HOFA. The follow-up period will be 16 weeks. The main variable will be the presence of pressure ulcers in the patient. Secondary variables include sociodemographic and clinical information, caregiver information, and whether technical support exists. Statistical analysis will include the Kolmogorov-Smirnov test, symmetry and kurtosis analysis, bivariate analysis using the Student’s t and chi-squared tests as well as the Wilcoxon and Mann-Whitney U tests, ANOVA, and multivariate logistic regression analysis.
Discussion The regular use of olive-oil-based formulas should be effective in preventing pressure ulcers in immobilized patients, thus leading to a more cost-effective product and an alternative treatment. Trial registration Clinicaltrials.gov identifier: NCT01595347. PMID:24152576
Lupiáñez-Pérez, Inmaculada; Morilla-Herrera, Juan Carlos; Ginel-Mendoza, Leovigildo; Martín-Santos, Francisco Javier; Navarro-Moya, Francisco Javier; Sepúlveda-Guerra, Rafaela Pilar; Vázquez-Cerdeiros, Rosa; Cuevas-Fernández-Gallego, Magdalena; Benítez-Serrano, Isabel María; Lupiáñez-Pérez, Yolanda; Morales-Asencio, José Miguel
2013-10-23
Pressure ulcers are considered an important issue, mainly affecting immobilized older patients. These pressure ulcers increase the care burden for the professional health service staff as well as pharmaceutical expenditure. There are a number of studies on the effectiveness of different products used for the prevention of pressure ulcers; however, most of these studies were carried out at a hospital level, basically using hyperoxygenated fatty acids (HOFA). There are no studies focused specifically on the use of olive-oil-based products, and this research is therefore intended to find the most cost-effective treatment and establish an alternative treatment. The main objective is to assess the effectiveness of olive oil, comparing it with HOFA, to treat immobilized patients at home who are at risk of pressure ulcers. As a secondary objective, the cost-effectiveness balance of this new application with regard to the HOFA will be assessed. The study is designed as a noninferiority, triple-blinded, parallel, multi-center, randomized clinical trial. The scope of the study is the population attending primary health centers in Andalucía (Spain) in the regional areas of Malaga, Granada, Seville, and Cadiz. Immobilized patients at risk of pressure ulcers will be targeted. The target group will be treated by application of an olive-oil-based formula, whereas the control group will be treated by application of HOFA. The follow-up period will be 16 weeks. The main variable will be the presence of pressure ulcers in the patient. Secondary variables include sociodemographic and clinical information, caregiver information, and whether technical support exists. Statistical analysis will include the Kolmogorov-Smirnov test, symmetry and kurtosis analysis, bivariate analysis using the Student's t and chi-squared tests as well as the Wilcoxon and Mann-Whitney U tests, ANOVA, and multivariate logistic regression analysis.
The regular use of olive-oil-based formulas should be effective in preventing pressure ulcers in immobilized patients, thus leading to a more cost-effective product and an alternative treatment. Clinicaltrials.gov identifier: NCT01595347.
Randomized control trial of topical clonidine for treatment of painful diabetic neuropathy
Campbell, Claudia M.; Kipnes, Mark S.; Stouch, Bruce C.; Brady, Kerrie L.; Kelly, Margaret; Schmidt, William K.; Petersen, Karin L.; Rowbotham, Michael C.; Campbell, James N.
2012-01-01
A length-dependent neuropathy with pain in the feet is a common complication of diabetes (painful diabetic neuropathy, PDN). It was hypothesized that pain may arise from sensitized-hyperactive cutaneous nociceptors, and that this abnormal signaling may be reduced by topical administration of the α2-adrenergic agonist, clonidine, to the painful area. This was a randomized, double-blind, placebo-controlled, parallel-group, multi-center trial. Nociceptor function was measured by determining the painfulness of 0.1% topical capsaicin applied to the pre-tibial area of each subject for 30 minutes during screening. Subjects were then randomized to receive 0.1% topical clonidine gel (n=89) or placebo gel (n=90) applied t.i.d. to their feet for 12 weeks. The difference in foot pain at week 12 in relation to baseline, rated on a 0-10 numerical pain rating scale (NPRS), was compared between groups. Baseline NPRS was imputed for missing data for subjects who terminated the study early. The subjects treated with clonidine showed a trend toward decreased foot pain compared to the placebo-treated group (the primary endpoint; p=0.07). In subjects who felt any level of pain to capsaicin, clonidine was superior to placebo (p<0.05). In subjects with a capsaicin pain rating ≥2 (0-10, NPRS), the mean decrease in foot pain was 2.6 for active compared to 1.4 for placebo (p=0.01). Topical clonidine gel significantly reduces the level of foot pain in PDN subjects with functional (and possibly sensitized) nociceptors in the affected skin as revealed by testing with topical capsaicin. Screening for cutaneous nociceptor function may help distinguish candidates for topical therapy for neuropathic pain. PMID:22683276
Tamura, Kazuo; Kawai, Yasukazu; Kiguchi, Toru; Okamoto, Masataka; Kaneko, Masahiko; Maemondo, Makoto; Gemba, Kenichi; Fujimaki, Katsumichi; Kirito, Keita; Goto, Tetsuya; Fujisaki, Tomoaki; Takeda, Kenji; Nakajima, Akihiro; Ueda, Takanori
2016-10-01
Control of serum uric acid (sUA) levels is very important during chemotherapy in patients with malignant tumors, as the risks of tumor lysis syndrome (TLS) and renal events are increased with increasing levels of sUA. We investigated the efficacy and safety of febuxostat, a potent non-purine xanthine oxidase inhibitor, compared with allopurinol for prevention of hyperuricemia in patients with malignant tumors, including solid tumors, receiving chemotherapy in Japan. An allopurinol-controlled multicenter, open-label, randomized, parallel-group comparative study was carried out. Patients with malignant tumors receiving chemotherapy, who had an intermediate risk of TLS or a high risk of TLS and were not scheduled to be treated with rasburicase, were enrolled and then randomized to febuxostat (60 mg/day) or allopurinol (300 or 200 mg/day). All patients started to take the study drug 24 h before chemotherapy. The primary objective was to confirm the non-inferiority of febuxostat to allopurinol based on the area under the curve (AUC) of sUA for a 6-day treatment period. Forty-nine and 51 patients took febuxostat and allopurinol, respectively. sUA decreased over time after initiation of study treatment. The least squares mean difference of the AUC of sUA between the treatment groups was -33.61 mg h/dL, and the 95 % confidence interval was -70.67 to 3.45, demonstrating the non-inferiority of febuxostat to allopurinol. No differences were noted in safety outcomes between the treatment groups. Febuxostat demonstrated an efficacy and safety similar to allopurinol in patients with malignant tumors receiving chemotherapy. http://www.clinicaltrials.jp ; Identifier: JapicCTI-132398.
Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume
NASA Technical Reports Server (NTRS)
Force, Dale A.
2013-01-01
GPS has been used for spacecraft navigation for many years.
- In support of this, the US has committed that future GPS satellites will continue to provide signals in the Space Service Volume.
- NASA is working with international agencies to obtain similar commitments from other providers.
- In support of this effort, I simulated multi-constellation navigation in the Space Service Volume. In this presentation, I extend the work to examine the navigational benefits and drawbacks of the new constellations.
- A major benefit is the reduced geometric dilution of precision (GDOP). I show that there is a substantial reduction in GDOP by using all of the GNSS constellations.
- The increased number of GNSS satellites broadcasting does produce mutual interference, raising the noise floor. A near/far signal problem can also occur, where a nearby satellite drowns out satellites that are far away. In these simulations, no major effect was observed.
- Typically, the use of multi-constellation GNSS navigation improves GDOP by a factor of two or more over GPS alone.
- In addition, at the higher altitudes, four-satellite solutions can be obtained much more often. This shows the value of having commitments to provide signals in the Space Service Volume.
- Besides a commitment to provide a minimum signal in the Space Service Volume, detailed signal gain information is useful for mission planning.
- Knowledge of group and phase delay over the pattern would also reduce the navigational uncertainty.
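The GDOP figure discussed above is a function of receiver-satellite geometry alone. As a hedged illustration (the satellite positions and all names below are invented for this sketch, not taken from the presentation), GDOP is sqrt(trace((HᵀH)⁻¹)) for a geometry matrix H whose rows are unit line-of-sight vectors augmented with a clock-bias column; adding well-spread satellites from extra constellations shrinks it:

```python
import numpy as np

def gdop(sat_positions, receiver):
    """Geometric dilution of precision from unit line-of-sight vectors.

    Rows of H are [ux, uy, uz, 1] (the 1 accounts for the receiver clock
    bias); GDOP = sqrt(trace((H^T H)^-1)). More visible satellites with
    better geometry give a lower GDOP.
    """
    los = sat_positions - receiver                       # line-of-sight vectors
    units = los / np.linalg.norm(los, axis=1, keepdims=True)
    H = np.hstack([units, np.ones((len(units), 1))])     # clock-bias column
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Four satellites, roughly spread around a receiver at the origin
sats = np.array([[20200e3, 0, 0],
                 [0, 20200e3, 0],
                 [0, 0, 20200e3],
                 [-14300e3, -14300e3, -14300e3]])
print(round(gdop(sats, np.zeros(3)), 2))
```

With only four satellites the solution is just determined; a multi-constellation sky with many more rows in H typically improves GDOP substantially, which is the effect the simulations quantify.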
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors
2009-09-01
…TFLOPS of PlayStation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes… [Remainder of the record is table-of-contents residue: "3. Information Management for Parallelization and Streaming"; "4. Results".]
Reconfigurable microfluidic hanging drop network for multi-tissue interaction and analysis.
Frey, Olivier; Misun, Patrick M; Fluri, David A; Hengstler, Jan G; Hierlemann, Andreas
2014-06-30
Integration of multiple three-dimensional microtissues into microfluidic networks enables new insights in how different organs or tissues of an organism interact. Here, we present a platform that extends the hanging-drop technology, used for multi-cellular spheroid formation, to multifunctional complex microfluidic networks. Engineered as completely open, 'hanging' microfluidic system at the bottom of a substrate, the platform features high flexibility in microtissue arrangements and interconnections, while fabrication is simple and operation robust. Multiple spheroids of different cell types are formed in parallel on the same platform; the different tissues are then connected in physiological order for multi-tissue experiments through reconfiguration of the fluidic network. Liquid flow is precisely controlled through the hanging drops, which enable nutrient supply, substance dosage and inter-organ metabolic communication. The possibility to perform parallelized microtissue formation on the same chip that is subsequently used for complex multi-tissue experiments renders the developed platform a promising technology for 'body-on-a-chip'-related research.
Getting mitochondria to center stage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schatz, Gottfried, E-mail: gottfried.schatz@unibas.ch
2013-05-10
The question of how eukaryotic cells assemble their mitochondria was long considered to be inaccessible to biochemical investigation. This attitude changed about fifty years ago when the powerful tools of yeast genetics, electron microscopy and molecular biology were brought to bear on this problem. The rising interest in mitochondrial biogenesis thus paralleled and assisted in the birth of modern biology. This brief recollection recounts the days when research on mitochondrial biogenesis was an exotic effort limited to a small group of outsiders.
The Spider Center Wide File System; From Concept to Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M; Dillow, David A; Oral, H Sarp
2009-01-01
The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 Petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper will detail the overall architecture of the Spider system, challenges in deploying and initial testing of a file system of this scale, and novel solutions to these challenges which offer key insights into file system design in the future.
Computational electromagnetics: the physics of smooth versus oscillatory fields.
Chew, W C
2004-03-15
This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.
Rosenbaum, Stacy; Hirwa, Jean Paul; Silk, Joan B.; Vigilant, Linda; Stoinski, Tara S.
2016-01-01
Sexually selected infanticide is an important source of infant mortality in many mammalian species. In species with long-term male-female associations, females may benefit from male protection against infanticidal outsiders. We tested whether mountain gorilla (Gorilla beringei beringei) mothers in single and multi-male groups monitored by the Dian Fossey Gorilla Fund’s Karisoke Research Center actively facilitated interactions between their infants and a potentially protective male. We also evaluated the criteria mothers in multi-male groups used to choose a preferred male social partner. In single male groups, where infanticide risk and paternity certainty are high, females with infants <1 year old spent more time near and affiliated more with males than females without young infants. In multi-male groups, where infanticide rates and paternity certainty are lower, mothers with new infants exhibited few behavioral changes toward males. The sole notable change was that females with young infants proportionally increased their time near males they previously spent little time near when compared to males they had previously preferred, perhaps to encourage paternity uncertainty and deter aggression. Rank was a much better predictor of females’ social partner choice than paternity. Older infants (2–3 years) in multi-male groups mirrored their mothers’ preferences for individual male social partners; 89% spent the most time in close proximity to the male their mother had spent the most time near when they were <1 year old. Observed discrepancies between female behavior in single and multi-male groups likely reflect different levels of postpartum intersexual conflict; in groups where paternity certainty and infanticide risk are both high, male-female interests align and females behave accordingly. This highlights the importance of considering individual and group-level variation when evaluating intersexual conflict across the reproductive cycle. PMID:26863300
Investigation of Implantable Multi-Channel Electrode Array in Rat Cerebral Cortex Used for Recording
NASA Astrophysics Data System (ADS)
Taniguchi, Noriyuki; Fukayama, Osamu; Suzuki, Takafumi; Mabuchi, Kunihiko
There have recently been many studies concerning the control of robot movements using neural signals recorded from the brain (usually called the Brain-Machine Interface (BMI)). We fabricated implantable multi-electrode arrays to obtain neural signals from the rat cerebral cortex. Because any multi-electrode array should have an electrode layout that minimizes invasiveness, the recording sites need to be customized. We designed three types of 22-channel multi-electrode arrays, i.e., 1) wide, 2) three-layered, and 3) separate. The first extensively covers the cerebral cortex. The second has a length of 2 mm, which can cover the area of the primary motor cortex. The third array has a separate structure, which corresponds to the positions of the forelimb and hindlimb areas of the primary motor cortex. These arrays were implanted into the cerebral cortex of a rat. We estimated the walking speed from neural signals using our fabricated three-layered array to investigate its feasibility for BMI research. The neural signal of the rat and its walking speed were recorded simultaneously. The results revealed that evaluation using either the anterior electrode group or the posterior group provided accurate estimates, whereas two electrode groups around the center yielded poor estimates, although it was still possible to record neural signals.
Parallelising a molecular dynamics algorithm on a multi-processor workstation
NASA Astrophysics Data System (ADS)
Müller-Plathe, Florian
1990-12-01
The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in part or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
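The master-slave split of the nonbonded force loop can be sketched as follows (a schematic analogue only: the original is shared-memory Fortran 77 on the DN10000, whereas Python's multiprocessing uses separate processes; the Lennard-Jones potential, pair list, and all names are assumptions for illustration):

```python
import multiprocessing as mp
import numpy as np

def pair_forces(args):
    """Slave task: Lennard-Jones forces for one slice of the pair list."""
    pairs, pos = args
    f = np.zeros_like(pos)
    for i, j in pairs:
        r = pos[i] - pos[j]
        r2 = r @ r
        inv6 = 1.0 / r2**3
        fij = 24.0 * (2.0 * inv6**2 - inv6) / r2 * r   # force from 4(r^-12 - r^-6)
        f[i] += fij
        f[j] -= fij                                    # Newton's third law
    return f

def forces(pos, pairs, nslaves=2):
    """Master: split the neighbour list, farm slices out, sum partial forces."""
    chunks = [pairs[k::nslaves] for k in range(nslaves)]
    with mp.Pool(nslaves) as pool:
        parts = pool.map(pair_forces, [(c, pos) for c in chunks])
    return sum(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.random((8, 3)) * 4.0
    pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
    f = forces(pos, pairs)
    print(np.allclose(f.sum(axis=0), 0.0))   # net force vanishes pairwise
```

The interleaved slicing (`pairs[k::nslaves]`) is a crude load-balancing choice; the paper's shared-memory version avoids the copy of `pos` that process-based pools incur.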
2,2′-Dimethoxy-4,4′-[rel-(2R,3S)-2,3-dimethylbutane-1,4-diyl]diphenol
Salinas-Salazar, Carmen L.; del Rayo Camacho-Corona, María; Bernès, Sylvain; Waksman de Torres, Noemi
2009-01-01
The title molecule, C20H26O4, commonly known as meso-dihydroguaiaretic acid, is a naturally occurring lignan extracted from Larrea tridentata and other plants. The molecule has a noncrystallographic inversion center situated at the midpoint of the central C—C bond, generating the meso stereoisomer. The central C—C—C—C alkyl chain displays an all-trans conformation, allowing an almost parallel arrangement of the benzene rings, which make a dihedral angle of 5.0 (3)°. Both hydroxy groups form weak O—H⋯O—H chains of hydrogen bonds along [100]. The resulting supramolecular structure is an undulating plane parallel to (010). PMID:21583141
Systems medicine and integrated care to combat chronic noncommunicable diseases
2011-01-01
We propose an innovative, integrated, cost-effective health system to combat major non-communicable diseases (NCDs), including cardiovascular, chronic respiratory, metabolic, rheumatologic and neurologic disorders and cancers, which together are the predominant health problem of the 21st century. This proposed holistic strategy involves comprehensive patient-centered integrated care and multi-scale, multi-modal and multi-level systems approaches to tackle NCDs as a common group of diseases. Rather than studying each disease individually, it will take into account their intertwined gene-environment, socio-economic interactions and co-morbidities that lead to individual-specific complex phenotypes. It will implement a road map for predictive, preventive, personalized and participatory (P4) medicine based on a robust and extensive knowledge management infrastructure that contains individual patient information. It will be supported by strategic partnerships involving all stakeholders, including general practitioners associated with patient-centered care. This systems medicine strategy, which will take a holistic approach to disease, is designed to allow the results to be used globally, taking into account the needs and specificities of local economies and health systems. PMID:21745417
Fine-grained parallel RNAalifold algorithm for RNA secondary structure prediction on FPGA
Xia, Fei; Dou, Yong; Zhou, Xingming; Yang, Xuejun; Xu, Jiaqing; Zhang, Yang
2009-01-01
Background In the field of RNA secondary structure prediction, the RNAalifold algorithm is one of the most popular methods using free energy minimization. However, general-purpose computers including parallel computers or multi-core computers exhibit parallel efficiency of no more than 50%. Field Programmable Gate-Array (FPGA) chips provide a new approach to accelerate RNAalifold by exploiting fine-grained custom design. Results RNAalifold shows complicated data dependences, in which the dependence distance is variable, and the dependence direction is also across two dimensions. We propose a systolic array structure including one master Processing Element (PE) and multiple slave PEs for fine grain hardware implementation on FPGA. We exploit data reuse schemes to reduce the need to load energy matrices from external memory. We also propose several methods to reduce energy table parameter size by 80%. Conclusion To our knowledge, our implementation with 16 PEs is the only FPGA accelerator implementing the complete RNAalifold algorithm. The experimental results show a factor of 12.2 speedup over the RNAalifold (ViennaPackage – 1.6.5) software for a group of aligned RNA sequences with 2981-residue running on a Personal Computer (PC) platform with Pentium 4 2.6 GHz CPU. PMID:19208138
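The cross-dimensional dependence pattern that the systolic PE array exploits can be illustrated with a much simpler relative of RNAalifold, Nussinov-style base-pair maximization (a sketch only; RNAalifold's real energy model and alignment handling are far richer). Every cell on one anti-diagonal of the dynamic-programming matrix is independent, which is exactly the parallelism a master PE can distribute to slave PEs:

```python
import numpy as np

def nussinov_pairs(seq):
    """Maximum number of nested base pairs, filled one anti-diagonal at a
    time; all cells within an anti-diagonal are mutually independent."""
    pair = {"AU", "UA", "GC", "CG", "GU", "UG"}
    n = len(seq)
    M = np.zeros((n, n), dtype=int)
    for span in range(1, n):            # one anti-diagonal per iteration
        for i in range(n - span):       # these cells could run on parallel PEs
            j = i + span
            best = max(M[i + 1, j], M[i, j - 1])
            if seq[i] + seq[j] in pair:
                best = max(best, M[i + 1, j - 1] + 1)
            # bifurcation term: the variable-distance dependence
            best = max(best, max((M[i, k] + M[k + 1, j]
                                  for k in range(i, j)), default=0))
            M[i, j] = best
    return int(M[0, n - 1])

print(nussinov_pairs("GGGAAAUCC"))
```

The bifurcation maximum over k is the variable-distance dependence the paper highlights; on FPGA it motivates the data-reuse schemes that keep energy matrices on-chip.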
A Generic Mesh Data Structure with Parallel Applications
ERIC Educational Resources Information Center
Cochran, William Kenneth, Jr.
2009-01-01
High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…
Hierarchial parallel computer architecture defined by computational multidisciplinary mechanics
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug; Johnson, Keith
1989-01-01
The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhr, L.
1987-01-01
This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.
Alaviani, Mehri; Khosravan, Shahla; Alami, Ali; Moshki, Mahdi
2015-01-01
Background Loneliness is one of the most significant problems during aging. This research was done to determine the effect of a multi-strategy program based on Pender’s Health Promotion Model to prevent loneliness of elderly women by improving social relationships. Methods In this quasi-experimental study, done in 2013 from January to November, 150 older women suffering medium loneliness referred to Gonabad urban health centers were enrolled. Data were gathered using Russell’s UCLA loneliness questionnaire and questionnaires based on Pender’s Health Promotion Model about loneliness. The results were analyzed by descriptive statistics and the chi-square, paired t, and independent t tests through SPSS, version 20. Results Loneliness decreased significantly in the intervention group compared to the control group (P<0.001). In addition, mean scores related to variables of the Health Promotion Model (perceived benefits and barriers, self-efficacy, interpersonal influences on loneliness) in both groups were significantly different before and after the study (P<0.05). Conclusion Constructs of Pender’s Health Promotion Model can be used as a framework for planning interventions in order to anticipate, improve, and modify behaviors related to loneliness in old women. PMID:26005693
Storing files in a parallel computing system based on user or application specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.
2016-03-29
Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
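The placement idea can be sketched in a few lines (the tier names, pattern-to-tier specification format, and routing rule below are all invented for this illustration, not taken from the patent):

```python
import fnmatch

# Assumed tier names for a hypothetical multi-tier storage system
TIERS = {"burst-buffer": [], "ssd": [], "disk": []}

def store(files, spec):
    """Route each file to a tier per an application-supplied specification.

    `spec` maps a filename pattern (e.g. '*.ckpt') to a tier name; files
    matching no pattern fall through to the capacity tier ('disk').
    """
    placement = {}
    for f in files:
        tier = "disk"                           # default: slowest tier
        for pattern, t in spec.items():
            if fnmatch.fnmatch(f, pattern):
                tier = t
                break
        TIERS[tier].append(f)                   # daemon-side bookkeeping
        placement[f] = tier
    return placement

spec = {"*.ckpt": "burst-buffer", "*.log": "ssd"}   # from the application
print(store(["a.ckpt", "run.log", "out.dat"], spec))
```

The point of the specification is that the application, not the file system, decides which files deserve the fast tier, e.g. steering checkpoints to a burst buffer while bulk output lands on disk.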
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing using single-core CPUs. However, considering parallel processing using multi-core processors, this scheme is inappropriate due to a large number of steps. On such architectures, the number of steps corresponds to the number of synchronization points that represent the exchange of data. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges calculations inside the transform, and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluated on multi-core CPUs, we consistently outperform the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
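For reference, the separable lifting scheme being improved upon can be shown in a few lines. This is the standard CDF 5/3 predict/update pair with circular boundary handling (an illustration of the baseline, not the authors' rearranged scheme); each predict and update step is one of the synchronization points the paper counts:

```python
import numpy as np

def cdf53_forward(x):
    """One level of the CDF 5/3 lifting scheme (floating-point variant)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # predict step: detail = odd sample minus average of neighbouring evens
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update step: approximation = even sample plus quarter of nearby details
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d

def cdf53_inverse(s, d):
    """Undo the lifting steps in reverse order; exact by construction."""
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

s, d = cdf53_forward(np.arange(16))
print(np.max(np.abs(d)))   # ramp input: details vanish except at the wrap-around
```

Because every predict depends on the preceding update across the whole signal, a multi-core run must synchronize after each step; fusing or rearranging these steps, as the paper proposes, cuts the number of such barriers.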
Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex
Lafer-Sousa, Rosa; Conway, Bevil R.
2014-01-01
Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314
Configuration affects parallel stent grafting results.
Tanious, Adam; Wooster, Mathew; Armstrong, Paul A; Zwiebel, Bruce; Grundy, Shane; Back, Martin R; Shames, Murray L
2018-05-01
A number of adjunctive "off-the-shelf" procedures have been described to treat complex aortic diseases. Our goal was to evaluate parallel stent graft configurations and to determine an optimal formula for these procedures. This is a retrospective review of all patients at a single medical center treated with parallel stent grafts from January 2010 to September 2015. Outcomes were evaluated on the basis of parallel graft orientation, type, and main body device. Primary end points included parallel stent graft compromise and overall endovascular aneurysm repair (EVAR) compromise. There were 78 patients treated with a total of 144 parallel stents for a variety of pathologic processes. There was a significant correlation between main body oversizing and snorkel compromise (P = .0195) and overall procedural complication (P = .0019) but not with endoleak rates. Patients were organized into the following oversizing groups for further analysis: 0% to 10%, 10% to 20%, and >20%. Those oversized into the 0% to 10% group had the highest rate of overall EVAR complication (73%; P = .0003). There were no significant correlations between any one particular configuration and overall procedural complication. There was also no significant correlation between total number of parallel stents employed and overall complication. Composite EVAR configuration had no significant correlation with individual snorkel compromise, endoleak, or overall EVAR or procedural complication. The configuration most prone to individual snorkel compromise and overall EVAR complication was a four-stent configuration with two stents in an antegrade position and two stents in a retrograde position (60% complication rate). The configuration most prone to endoleak was one or two stents in retrograde position (33% endoleak rate), followed by three stents in an all-antegrade position (25%). 
There was a significant correlation between individual stent configuration and stent compromise (P = .0385), with 31.25% of retrograde stents having any complication. Parallel stent grafting offers an off-the-shelf option to treat a variety of aortic diseases. There is an increased risk of parallel stent and overall EVAR compromise with <10% main body oversizing. Thirty-day mortality is increased when more than one parallel stent is placed. Antegrade configurations are preferred to any retrograde configuration, with optimal oversizing >20%. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zheng, Yan
2015-03-01
The Internet of things (IoT), which focuses on providing users with information exchange and intelligent control, has attracted considerable attention from researchers all over the world since the beginning of this century. An IoT deployment consists of a large number of sensor nodes and data processing units, and its most important characteristics are constrained energy, efficient communication, and high redundancy. As the number of sensor nodes grows, communication efficiency and the available communication bandwidth become bottlenecks. Much existing research assumes queries with only a small number of joins, which is not appropriate for the growing volume of multi-join queries across the whole Internet of things. To improve the communication efficiency between parallel units in a distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The algorithm takes both stored-information relations and network communication cost into account, and establishes an optimized information exchange rule. The experimental results show that the algorithm performs well and makes effective use of the resources of each node in the distributed sensor network. Therefore, the execution efficiency of multi-join queries across different nodes can be improved.
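The abstract does not give the optimization algorithm itself; as a rough illustration of cost-graph-driven join ordering, the sketch below greedily orders joins so as to minimize an assumed per-tuple communication cost between sensor nodes. The relation names, cardinalities, and cost model are all hypothetical, not taken from the paper:

```python
import itertools

def greedy_join_order(relations, comm_cost):
    """Greedily pick the next join that minimizes estimated
    communication cost between distributed sensor nodes.

    relations: dict name -> estimated cardinality
    comm_cost: dict (a, b) -> per-tuple network cost of shipping
               relation a's tuples to b's node (symmetric keys).
    Returns the join order as a list of relation names.
    """
    def cost(a, b):
        key = (a, b) if (a, b) in comm_cost else (b, a)
        # assume the smaller relation is shipped across the network
        return min(relations[a], relations[b]) * comm_cost[key]

    remaining = set(relations)
    # start with the cheapest pair of relations
    first = min(itertools.combinations(sorted(remaining), 2),
                key=lambda p: cost(*p))
    order = list(first)
    remaining -= set(first)
    while remaining:
        # extend with the relation cheapest to join to the result so far
        nxt = min(sorted(remaining),
                  key=lambda r: min(cost(r, o) for o in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

A real optimizer would also fold in selectivity estimates and intermediate-result sizes; the greedy heuristic here only illustrates how a cost graph can drive the join order.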
Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation
NASA Technical Reports Server (NTRS)
Leachman, Jonathan
2010-01-01
A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
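A minimal sketch of the master/worker design described above, using Python's `concurrent.futures` in place of the original native threads; the per-gate processing function is a stand-in, not the MFR signal chain:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def process_gate(block):
    # stand-in for per-block radar processing (the real modules run
    # pulse compression and moment estimation); here, a sum of squares
    return sum(x * x for x in block)

def parallel_process(blocks, workers=None):
    """Master module: choose a worker count from the available cores
    and keep each worker supplied with data blocks. A native
    implementation would use OS threads running on separate cores."""
    workers = workers or min(len(blocks), os.cpu_count() or 1)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_gate, blocks))
```

As in the article's design, adding cores only changes the computed worker count, not the processing code.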
Optimization of the coherence function estimation for multi-core central processing unit
NASA Astrophysics Data System (ADS)
Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.
2017-02-01
The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural, and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. The speedup of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the implemented functions. The developed software has undergone state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
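For concreteness, the magnitude-squared coherence estimate C_xy(f) = |P_xy(f)|^2 / (P_xx(f) P_yy(f)), averaged over signal segments, can be sketched as below. This is a generic Welch-style estimator, not the authors' optimized implementation, and the naive DFT stands in for the FFT their optimizations would target:

```python
import cmath

def dft(x):
    # naive O(n^2) DFT; an optimized version would use an FFT
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def coherence(x, y, seg_len):
    """Welch-style magnitude-squared coherence estimate
    C_xy(f) = |P_xy(f)|^2 / (P_xx(f) * P_yy(f)),
    with spectra accumulated over consecutive segments.
    The per-segment work is independent, which is what makes
    the computation amenable to multi-core parallelization."""
    segs = len(x) // seg_len
    pxx = [0.0] * seg_len
    pyy = [0.0] * seg_len
    pxy = [0j] * seg_len
    for s in range(segs):
        X = dft(x[s * seg_len:(s + 1) * seg_len])
        Y = dft(y[s * seg_len:(s + 1) * seg_len])
        for f in range(seg_len):
            pxx[f] += abs(X[f]) ** 2
            pyy[f] += abs(Y[f]) ** 2
            pxy[f] += X[f] * Y[f].conjugate()
    return [abs(pxy[f]) ** 2 / (pxx[f] * pyy[f]) if pxx[f] * pyy[f] else 0.0
            for f in range(seg_len)]
```

For identical signals the estimate is 1 at every bin with nonzero power; for the correlation-based leak-location use case, the coherence weights which frequency bands carry a reliable time-delay estimate.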
Exploring the Relationship between Conduct Disorder and Residential Treatment Outcomes
ERIC Educational Resources Information Center
Shabat, Julia Cathcart; Lyons, John S.; Martinovich, Zoran
2008-01-01
We examined the differential outcomes in residential treatment for youths with conduct disorder (CD)--with special attention paid to interactions with age and gender--in a sample of children and adolescents in 50 residential treatment centers and group homes across Illinois. Multi-disciplinary teams rated youths ages 6-20 (N = 457) on measures of…
Core Clinical Data Elements for Cancer Genomic Repositories: A Multi-stakeholder Consensus.
Conley, Robert B; Dickson, Dane; Zenklusen, Jean Claude; Al Naber, Jennifer; Messner, Donna A; Atasoy, Ajlan; Chaihorsky, Lena; Collyar, Deborah; Compton, Carolyn; Ferguson, Martin; Khozin, Sean; Klein, Roger D; Kotte, Sri; Kurzrock, Razelle; Lin, C Jimmy; Liu, Frank; Marino, Ingrid; McDonough, Robert; McNeal, Amy; Miller, Vincent; Schilsky, Richard L; Wang, Lisa I
2017-11-16
The Center for Medical Technology Policy and the Molecular Evidence Development Consortium gathered a diverse group of more than 50 stakeholders to develop consensus on a core set of data elements and values essential to understanding the clinical utility of molecularly targeted therapies in oncology. Copyright © 2017 Elsevier Inc. All rights reserved.
Vidas, Mercedes; Folnegović-Smalc, Vera; Catipović, Marija; Kisić, Marko
2011-09-01
The aim of this study was to investigate whether, for mothers with newborn children, the use of autogenic training together with breastfeeding advice affects the decision to breastfeed and the duration of breastfeeding, and increases maternal confidence and support. It was hypothesized that the above would result in a higher percentage of mothers who exclusively breastfed during the first six months of the child's life. The survey was conducted in the Association "For a healthy and happy childhood" Counseling center for mother and child, in Bjelovar in 2010. The Counseling center was attended by 100 nursing mothers with children aged up to two months, who were randomly assigned to the study or control group. Mothers in both groups were advised on successful breastfeeding. The study group practiced autogenic training until the children reached six months of age. In parallel, using psychotherapeutic interviews and specific questionnaires, we collected data on the somatic, psychological and social situation of the mothers, and identified mental changes in the mothers (anxiety, depression), which were treated. The results at the end of the study confirm the initially expected benefits of the application of autogenic training. Mothers in the study group were significantly more emotionally balanced, with higher self-esteem. Autogenic training with advice on successful breastfeeding, as conducted in this counseling center, contributed to a significantly higher rate of breastfeeding up to six months of life and improved the mental and physical health of mother and child and their special relationship.
Dreyfuss, Paul; Henning, Troy; Malladi, Niriksha; Goldstein, Barry; Bogduk, Nikolai
2009-01-01
To determine the physiologic effectiveness of multi-site, multi-depth sacral lateral branch injections. Double-blind, randomized, placebo-controlled study. Outpatient pain management center. Twenty asymptomatic volunteers. The dorsal innervation to the sacroiliac joint (SIJ) is from the L5 dorsal ramus and the S1-3 lateral branches. Multi-site, multi-depth lateral branch blocks were developed to compensate for the complex regional anatomy that limited the effectiveness of single-site, single-depth lateral branch injections. Bilateral multi-site, multi-depth lateral branch green dye injections and subsequent dissection on two cadavers revealed 91% accuracy with this technique. Session 1: 20 asymptomatic subjects had a 25-gauge spinal needle probe their interosseous (IO) and dorsal sacroiliac (DSI) ligaments. The inferior dorsal SIJ was entered and capsular distension with contrast medium was performed. Discomfort had to occur with each provocation maneuver, and a contained arthrogram was necessary to continue in the study. Session 2: 1 week later, computer-randomized, double-blind multi-site, multi-depth lateral branch blocks were performed. Ten subjects received active (bupivacaine 0.75%) and 10 subjects received sham (normal saline) multi-site, multi-depth lateral branch injections. Thirty minutes later, provocation testing was repeated with methodology identical to that of session 1. Outcome measures were the presence or absence of pain on ligamentous probing and SIJ capsular distension. Seventy percent of the active group had insensate IO and DSI ligaments and an insensate inferior dorsal SIJ, vs 0-10% of the sham group. Twenty percent of the active vs 10% of the sham group did not feel repeat capsular distension. Six of seven subjects (86%) retained the ability to feel repeat capsular distension despite an insensate dorsal SIJ complex. Multi-site, multi-depth lateral branch blocks are physiologically effective at a rate of 70%. 
Multi-site, multi-depth lateral branch blocks do not effectively block the intra-articular portion of the SIJ. There is physiological evidence that the intra-articular portion of the SIJ is innervated from both ventral and dorsal sources. Comparative multi-site, multi-depth lateral branch blocks should be considered a potentially valuable tool to diagnose extra-articular SIJ pain and to determine whether lateral branch radiofrequency neurotomy may assist patients with SIJ pain.
NASA Astrophysics Data System (ADS)
Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea
2017-11-01
Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. 
Such strategies provide a promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.
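The EMODPS idea of evaluating a candidate operating policy over an ensemble of synthetic traces parallelizes naturally, since ensemble members are independent. The toy sketch below illustrates that structure only; the reservoir model, objectives, and one-parameter policy are invented for illustration and are not the LSRB system or the Borg MOEA:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(policy, inflows, capacity=100.0, demand=10.0):
    """Toy reservoir simulation: 'policy' is the fraction of current
    storage released each step; returns (hydropower proxy, deficit)."""
    storage, power, deficit = capacity / 2, 0.0, 0.0
    for q in inflows:
        storage = min(capacity, storage + q)   # inflow, spill at capacity
        release = policy * storage
        storage -= release
        power += release                       # proxy for hydropower benefit
        deficit += max(0.0, demand - release)  # unmet demand objective
    return power, deficit

def evaluate_policy(policy, ensemble, workers=4):
    """Evaluate one policy over an ensemble of synthetic inflow traces
    in parallel and average each objective, mirroring how larger
    ensembles raise statistical fidelity at higher computational cost."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda tr: simulate(policy, tr), ensemble))
    n = len(results)
    return tuple(sum(r[i] for r in results) / n for i in range(2))
```

In the study's setting this evaluation sits inside the MOEA's search loop, so parallelizing it directly reduces wall-clock time per generation.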
Density-based parallel skin lesion border detection with webCL
2015-01-01
Background Dermoscopy is a highly effective and noninvasive imaging technique used in the diagnosis of melanoma and other pigmented skin lesions. Many aspects of the lesion under consideration are defined in relation to the lesion border. This makes border detection one of the most important steps in dermoscopic image analysis. In current practice, dermatologists often delineate borders through a hand-drawn representation based upon visual inspection. Due to the subjective nature of this technique, intra- and inter-observer variations are common. Because of this, the automated assessment of lesion borders in dermoscopic images has become an important area of study. Methods A fast density-based skin lesion border detection method has been implemented in parallel with a new parallel technology called WebCL. WebCL utilizes client-side computing capabilities to use available hardware resources such as multi-core CPUs and GPUs. The developed WebCL-parallel density-based skin lesion border detection method runs efficiently from web browsers. Results Previous research indicates that some of the highest accuracy rates can be achieved using density-based clustering techniques for skin lesion border detection. While these algorithms have unfavorable time complexities, this effect can be mitigated by parallel implementation. In this study, a density-based clustering technique for skin lesion border detection is parallelized and redesigned to run very efficiently on heterogeneous platforms (e.g., tablets, smartphones, multi-core CPUs, GPUs, and fully integrated Accelerated Processing Units) by transforming the technique into a series of independent concurrent operations. Heterogeneous computing is adopted to support accessibility, portability and multi-device use in clinical settings. For this, we used WebCL, an emerging technology that enables an HTML5 Web browser to execute code in parallel on heterogeneous platforms. We describe WebCL and our parallel algorithm design. 
In addition, we tested the parallel code on 100 dermoscopy images and measured the execution speedups with respect to the serial version. Results indicate that the parallel (WebCL) and serial versions of the density-based lesion border detection method generate the same accuracy rates for the 100 dermoscopy images: the mean border error is 6.94%, mean recall is 76.66%, and mean precision is 99.29%. Moreover, the WebCL version's speedup factor for lesion border detection on the 100 dermoscopy images averages around ~491.2. Conclusions When the large number of high-resolution dermoscopy images encountered in a typical clinical setting is considered, along with the critical importance of detecting and diagnosing melanoma before metastasis, the importance of fast processing of dermoscopy images becomes obvious. In this paper, we introduce WebCL and its use for biomedical image processing applications. WebCL is a JavaScript binding of OpenCL that takes advantage of GPU computing from a web browser. Therefore, the WebCL-parallel version of density-based skin lesion border detection introduced in this study can supplement expert dermatologists and aid them in the early diagnosis of skin lesions. While WebCL is currently an emerging technology, its full adoption into the HTML5 standard would allow this implementation to run on a very large set of hardware and software systems. WebCL takes full advantage of parallel computational resources, including multi-core CPUs and GPUs on the local machine, and allows compiled code to run directly from the Web browser. PMID:26423836
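The dominant cost in density-based border detection is the per-point neighborhood density query, and those queries are independent, which is what makes the method a good fit for data-parallel execution. A minimal CPU sketch of that core-point step (a generic DBSCAN-style count, not the authors' WebCL kernel):

```python
from concurrent.futures import ThreadPoolExecutor

def core_points(points, eps, min_pts, workers=4):
    """Parallel core-point detection for density-based (DBSCAN-style)
    clustering: count each point's eps-neighbors independently, one
    task per point, then keep points meeting the density threshold.
    In the WebCL version each count would be one work-item on the GPU."""
    def count_neighbors(i):
        xi, yi = points[i]
        # squared-distance test avoids a sqrt per comparison
        return sum((xi - x) ** 2 + (yi - y) ** 2 <= eps ** 2
                   for x, y in points)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        counts = list(pool.map(count_neighbors, range(len(points))))
    return [i for i, c in enumerate(counts) if c >= min_pts]
```

The lesion border is then traced from the boundary of the clustered core region; only the counting stage is shown here because it dominates the O(n^2) running time the abstract refers to.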
Density-based parallel skin lesion border detection with webCL.
Lemon, James; Kockara, Sinan; Halic, Tansel; Mete, Mutlu
2015-01-01
Dermoscopy is a highly effective and noninvasive imaging technique used in the diagnosis of melanoma and other pigmented skin lesions. Many aspects of the lesion under consideration are defined in relation to the lesion border. This makes border detection one of the most important steps in dermoscopic image analysis. In current practice, dermatologists often delineate borders through a hand-drawn representation based upon visual inspection. Due to the subjective nature of this technique, intra- and inter-observer variations are common. Because of this, the automated assessment of lesion borders in dermoscopic images has become an important area of study. A fast density-based skin lesion border detection method has been implemented in parallel with a new parallel technology called WebCL. WebCL utilizes client-side computing capabilities to use available hardware resources such as multi-core CPUs and GPUs. The developed WebCL-parallel density-based skin lesion border detection method runs efficiently from web browsers. Previous research indicates that some of the highest accuracy rates can be achieved using density-based clustering techniques for skin lesion border detection. While these algorithms have unfavorable time complexities, this effect can be mitigated by parallel implementation. In this study, a density-based clustering technique for skin lesion border detection is parallelized and redesigned to run very efficiently on heterogeneous platforms (e.g., tablets, smartphones, multi-core CPUs, GPUs, and fully integrated Accelerated Processing Units) by transforming the technique into a series of independent concurrent operations. Heterogeneous computing is adopted to support accessibility, portability and multi-device use in clinical settings. For this, we used WebCL, an emerging technology that enables an HTML5 Web browser to execute code in parallel on heterogeneous platforms. We describe WebCL and our parallel algorithm design. 
In addition, we tested the parallel code on 100 dermoscopy images and measured the execution speedups with respect to the serial version. Results indicate that the parallel (WebCL) and serial versions of the density-based lesion border detection method generate the same accuracy rates for the 100 dermoscopy images: the mean border error is 6.94%, mean recall is 76.66%, and mean precision is 99.29%. Moreover, the WebCL version's speedup factor for lesion border detection on the 100 dermoscopy images averages around ~491.2. When the large number of high-resolution dermoscopy images encountered in a typical clinical setting is considered, along with the critical importance of detecting and diagnosing melanoma before metastasis, the importance of fast processing of dermoscopy images becomes obvious. In this paper, we introduce WebCL and its use for biomedical image processing applications. WebCL is a JavaScript binding of OpenCL that takes advantage of GPU computing from a web browser. Therefore, the WebCL-parallel version of density-based skin lesion border detection introduced in this study can supplement expert dermatologists and aid them in the early diagnosis of skin lesions. While WebCL is currently an emerging technology, its full adoption into the HTML5 standard would allow this implementation to run on a very large set of hardware and software systems. WebCL takes full advantage of parallel computational resources, including multi-core CPUs and GPUs on the local machine, and allows compiled code to run directly from the Web browser.
Multi-image CAD employing features derived from ipsilateral mammographic views
NASA Astrophysics Data System (ADS)
Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.; Gur, David
1999-05-01
On mammograms, certain kinds of features related to masses (e.g., location, texture, degree of spiculation, and integrated density difference) tend to be relatively invariant, or at least predictable, with respect to breast compression. Thus, ipsilateral pairs of mammograms may contain information not available from analyzing single views separately. To demonstrate the feasibility of incorporating multi-view features into a CAD algorithm, `single-image' CAD was applied to each individual image in a set of 60 ipsilateral studies, after which all possible pairs of suspicious regions, consisting of one region from each view, were formed. For these 402 pairs we defined and evaluated `multi-view' features such as: (1) relative position of the centers of the regions; (2) ratio of the lengths of the region projections parallel to the nipple axis lines; (3) ratio of integrated contrast difference; (4) ratio of the sizes of the suspicious regions; and (5) a measure of the relative complexity of the region boundaries. Each pair was identified either as a `true positive/true positive' (T) pair (i.e., two regions which are projections of the same actual mass) or as a falsely associated pair (F). Distributions for each feature were calculated. A Bayesian network was trained and tested to classify pairs of suspicious regions based exclusively on the multi-view features described above. Distributions for all features were significantly different for T versus F pairs, as indicated by likelihood ratios. The performance of the Bayesian network, measured by ROC analysis, indicates a significant ability to distinguish between T pairs and F pairs (Az equals 0.82 +/- 0.03) using information attributable to the multi-view content. This study is the first demonstration that a significant amount of spatial information can be derived from ipsilateral pairs of mammograms.
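The multi-view features enumerated above reduce to simple geometric ratios and offsets between a candidate pair of regions. A sketch of that feature computation follows; the region representation (dict keys such as 'center' and 'proj_len') is an assumption for illustration, not the paper's data structure:

```python
import math

def pair_features(region_a, region_b):
    """Multi-view features for one candidate pair of suspicious
    regions, one from each ipsilateral view. Each region is assumed
    to carry: 'center' (x, y), 'proj_len' (length of its projection
    parallel to the nipple axis line), 'size', and 'contrast'
    (integrated contrast difference)."""
    def ratio(a, b):
        # symmetric ratio in (0, 1]; 1 means the quantities match,
        # which is expected for projections of the same mass
        return min(a, b) / max(a, b)

    dx = region_a['center'][0] - region_b['center'][0]
    dy = region_a['center'][1] - region_b['center'][1]
    return {
        'center_offset': math.hypot(dx, dy),
        'proj_len_ratio': ratio(region_a['proj_len'], region_b['proj_len']),
        'size_ratio': ratio(region_a['size'], region_b['size']),
        'contrast_ratio': ratio(region_a['contrast'], region_b['contrast']),
    }
```

Such feature vectors, one per pair, are what the Bayesian network classifies as T (same mass) or F (falsely associated).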
Maillefert, J F; Kloppenburg, M; Fernandes, L; Punzi, L; Günther, K-P; Martin Mola, E; Lohmander, L S; Pavelka, K; Lopez-Olivo, M A; Dougados, M; Hawker, G A
2009-10-01
To conduct a multi-language translation and cross-cultural adaptation of the Intermittent and Constant OsteoArthritis Pain (ICOAP) questionnaire for hip and knee osteoarthritis (OA). The questionnaires were translated and cross-culturally adapted in parallel, using a common protocol, into the following languages: Czech, Dutch, French (France), German, Italian, Norwegian, Spanish (Castilian), North and Central American Spanish, and Swedish. The process was conducted in five steps: (1)--independent translation into the target language by two or three persons; (2)--a consensus meeting to obtain a single preliminary translated version; (3)--backward translation by an independent bilingual native English speaker, blinded to the original English version; (4)--a final version produced by a multidisciplinary consensus committee; (5)--pre-testing of the final version with 10-20 target-language-native hip and knee OA patients. The process was followed and completed in all countries. Only slight differences were identified in the structure of the sentences between the original and the translated versions. A large majority of the patients felt that the questionnaire was easy to understand and complete; only a few minor criticisms were expressed. Moreover, a majority of patients found the concepts of constant pain and pain that comes and goes to be of great pertinence and were very happy with the distinction. The ICOAP questionnaire is now available for multi-center international studies.
Sen, Rupam; Mal, Dasarath; Lopes, Armandina M L; Brandão, Paula; Araújo, João P; Lin, Zhi
2013-10-01
Two new layered transition metal carboxylate frameworks, [Co3(L)2(H2O)6]·2H2O and [Ni3(L)2(H2O)6]·2H2O (L = tartronate, the anion of hydroxymalonic acid), have been synthesized and characterized by single-crystal X-ray analysis. Both compounds have similar 2D structures. In both compounds there are two types of metal centers, where one center is doubly bridged by the alkoxy oxygen atoms through μ2-O bridging to form a 1D infinite chain parallel to the crystallographic b-axis, with corners shared between the metal polyhedra. Magnetic susceptibility measurements revealed the existence of antiferromagnetic short-range correlations between Co(Ni) intra-chain metal centers (with exchange constants JCo = -22.6 and JNi = -35.4 K). At low temperatures, long-range order is observed in both compounds, at Néel temperatures of 11 and 16 K, revealing that exchange interactions other than the intra-chain ones play a role in these systems. Whereas one compound has an antiferromagnetic ground state, the other exhibits a ferromagnetic component, probably due to spin canting. Isothermal magnetization data unveiled a rich phase diagram with three metamagnetic phase transitions below 8 K in the latter compound.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
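The inexact Newton strategy mentioned above can be outlined generically: at each nonlinear iteration, solve the Jacobian system only approximately (e.g., with a preconditioned Krylov method at a loose tolerance), then update. The sketch below is schematic only; dense Python lists stand in for the simulator's distributed vectors, and the linear solver is abstracted as a callback:

```python
def inexact_newton(F, J_solve, x0, tol=1e-8, max_iter=50):
    """Inexact Newton iteration: at each step solve J(x) dx = -F(x)
    only approximately via the supplied linear solver J_solve
    (in the simulator's setting, a multi-stage-preconditioned Krylov
    method with AMG components), then update x until the residual
    norm drops below tol."""
    x = list(x0)
    for _ in range(max_iter):
        r = F(x)
        if max(abs(v) for v in r) < tol:   # infinity-norm convergence test
            break
        dx = J_solve(x, [-v for v in r])   # approximate Newton correction
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```

Loosening the inner linear tolerance trades a few extra Newton iterations for much cheaper linear solves, which is what makes the approach attractive at scale.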
Lenhard, Stephen C.; Yerby, Brittany; Forsgren, Mikael F.; Liachenko, Serguei; Johansson, Edvin; Pilling, Mark A.; Peterson, Richard A.; Yang, Xi; Williams, Dominic P.; Ungersma, Sharon E.; Morgan, Ryan E.; Brouwer, Kim L. R.; Jucker, Beat M.; Hockings, Paul D.
2018-01-01
Drug-induced liver injury (DILI) is a leading cause of acute liver failure and transplantation. DILI can be the result of impaired hepatobiliary transporters, with altered bile formation, flow, and subsequent cholestasis. We used gadoxetate dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), combined with pharmacokinetic modelling, to measure hepatobiliary transporter function in vivo in rats. The sensitivity and robustness of the method was tested by evaluating the effect of a clinical dose of the antibiotic rifampicin in four different preclinical imaging centers. The mean gadoxetate uptake rate constant for the vehicle groups at all centers was 39.3 +/- 3.4 s-1 (n = 23) and 11.7 +/- 1.3 s-1 (n = 20) for the rifampicin groups. The mean gadoxetate efflux rate constant for the vehicle groups was 1.53 +/- 0.08 s-1 (n = 23) and for the rifampicin treated groups was 0.94 +/- 0.08 s-1 (n = 20). Both the uptake and excretion transporters of gadoxetate were statistically significantly inhibited by the clinical dose of rifampicin at all centers and the size of this treatment group effect was consistent across the centers. Gadoxetate is a clinically approved MRI contrast agent, so this method is readily transferable to the clinic. Conclusion: Rate constants of gadoxetate uptake and excretion are sensitive and robust biomarkers to detect early changes in hepatobiliary transporter function in vivo in rats prior to established biomarkers of liver toxicity. PMID:29771932
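The pharmacokinetic modelling behind the reported uptake and efflux rate constants can be illustrated with a deliberately simplified one-compartment hepatocyte model; this is not the paper's actual model, and k_up, k_ef, and the explicit Euler scheme here are illustrative only:

```python
def simulate_liver_signal(k_up, k_ef, plasma, dt=1.0):
    """Toy forward model for hepatocyte gadoxetate concentration:
    dH/dt = k_up * Cp(t) - k_ef * H(t),
    where Cp is the sampled plasma concentration curve, k_up is the
    uptake rate constant and k_ef the biliary efflux rate constant.
    Integrated with explicit Euler; returns H at each sample time."""
    h, curve = 0.0, []
    for cp in plasma:
        h += dt * (k_up * cp - k_ef * h)
        curve.append(h)
    return curve
```

Fitting such a model to the measured DCE-MRI liver signal is what yields the rate constants; rifampicin's transporter inhibition appears as a drop in the fitted k_up and k_ef.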
Falaki, Ali; Huang, Xuemei; Lewis, Mechelle M.; Latash, Mark L.
2017-01-01
Background Postural instability is one of most disabling motor symptoms in Parkinson’s disease. Indices of multi-muscle synergies are new measurements of postural stability. Objectives We explored the effects of dopamine-replacement drugs on multi-muscle synergies stabilizing center of pressure coordinate and their adjustments prior to a self-triggered perturbation in patients with Parkinson’s disease. We hypothesized that both synergy indices and synergy adjustments would be improved on dopaminergic drugs. Methods Patients at Hoehn-Yahr stages II and III performed whole-body tasks both off- and on-drugs while standing. Muscle modes were identified as factors in the muscle activation space. Synergy indices stabilizing center of pressure in the anterior-posterior direction were quantified in the muscle mode space during a load-release task. Results Dopamine-replacement drugs led to more consistent organization of muscles in stable groups (muscle modes). On-drugs patients showed larger indices of synergies and anticipatory synergy adjustments. In contrast, no medication effects were seen on anticipatory postural adjustments or other performance indices. Conclusions Dopamine-replacement drugs lead to significant changes in characteristics of multi-muscle synergies in Parkinson’s disease. Studies of synergies may provide a biomarker sensitive to problems with postural stability and agility and to efficacy of dopamine-replacement therapy. PMID:28110044
Microwave switching power divider. [antenna feeds
NASA Technical Reports Server (NTRS)
Stockton, R. J.; Johnson, R. W. (Inventor)
1981-01-01
A pair of parallel, spaced-apart circular ground planes define a microwave cavity with multi-port microwave power distributing switching circuitry formed on opposite sides of a thin circular dielectric substrate disposed between the ground planes. The power distributing circuitry includes a conductive disk located at the center of the substrate and connected to a source of microwave energy. A high speed, low insertion loss switching diode and a dc blocking capacitor are connected in series between the outer end of a transmission line and an output port. A high impedance, microwave blocking dc bias choke is connected between each switching diode and a source of switching current. The switching source forward biases the diodes to couple microwave energy from the conductive disk to selected output ports, and to the associated antenna elements connected to those ports, forming a synthesized antenna pattern.
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
The single multimode fiber (MMF) digital scanning imaging system is a development trend of the modern endoscope. We concentrate on the calibration method for this imaging system. Calibration comprises two processes: forming scanning focused spots and calibrating the coupling factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the multimode fiber (MMF) output. Compared with other algorithms, APC has many merits, i.e., rapid speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the coupling factor. We set up the calibration experimental system to form the scanning focused spots and calculate the coupling factors for different object positions. The experimental results show that the coupling factor is higher at the center than at the edge.
Vasil'eva, S V; Strel'tsova, D A; Vlaskina, A V; Mikoian, V D; Vanin, A F
2012-01-01
Dinitrosyl iron complexes (DNICs) with thiol ligands--binuclear and mononuclear--inhibited aidB gene expression in E. coli cells. This process is due to nitrosylation of the active center of the iron-sulfur protein Fnr [4Fe-4S]2+ by low-molecular-weight DNICs. The next step is the transformation of these DNICs into DNICs with the thiol groups of the apo-form of the Fnr protein. These nitrosylated proteins are characterized by an EPR signal with g-perpendicular = 2.04 and g-parallel = 2.014. Addition of the sulfur-containing L-Cys or N-A-L-Cys, as well as Na2S, to the cells led to an increase in aidB gene expression, simultaneously with the appearance of an EPR signal with g-perpendicular = 2.04 and g-parallel = 2.02, characteristic of DNICs with persulfide (R-S-S-) ligands. We suppose that the recovery of aidB gene activity was due to the accumulation of inorganic sulfur in the cells and reconstruction of the active center of Fnr [4Fe-4S]2+. It appears that this process is a function of the L-cysteine desulfurase protein, which repaired the active center of the Fnr [4Fe-4S]2+ protein using sulfur from L-Cys or N-A-L-Cys after its deacetylation. On the other hand, inorganic sulfur ions, reacting with SH-groups, led to the transformation of DNICs with thiol ligands into persulfides. Na2S was the most potent activator of aidB gene expression in our experiments.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
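The NCCS tool is locally developed and not described in detail in the abstract; a minimal discrete-event sketch of the core idea, a FIFO batch queue dispatching parallel jobs over a fixed CPU pool, might look like this (all job parameters are illustrative):

```python
import heapq
from collections import deque

def simulate(jobs, total_cpus):
    """Event-driven FIFO batch-queue model.

    jobs: (arrival_time, cpus_needed, run_time) tuples; every job is
    assumed to fit on the machine (cpus_needed <= total_cpus).
    Returns the mean time jobs spend waiting in the queue.
    """
    jobs = sorted(jobs)
    pending = deque()          # jobs waiting for CPUs, FIFO order
    releases = []              # min-heap of (finish_time, cpus_freed)
    free = total_cpus
    waits = []
    i, now = 0, 0.0
    while i < len(jobs) or pending or releases:
        # jump to the next event: an arrival or a job completion
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        next_finish = releases[0][0] if releases else float("inf")
        now = min(next_arrival, next_finish)
        while releases and releases[0][0] <= now:
            _, cpus = heapq.heappop(releases)
            free += cpus
        while i < len(jobs) and jobs[i][0] <= now:
            pending.append(jobs[i])
            i += 1
        # strict FIFO dispatch (no backfill): start jobs while CPUs last
        while pending and pending[0][1] <= free:
            arrival, cpus, run = pending.popleft()
            free -= cpus
            waits.append(now - arrival)
            heapq.heappush(releases, (now + run, cpus))
    return sum(waits) / len(waits)
```

Alternative queue structures or allocation policies of the kind NCCS evaluates would change only the dispatch loop.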
Oral health status and alveolar bone loss in treated leprosy patients of central India.
Rawlani, S M; Rawlani, S; Degwekar, S; Bhowte, R R; Motwani, M
2011-01-01
A descriptive cross-sectional study was carried out in a group of 160 leprosy patients treated with multi-drug therapy. Patients aged 25 to 60 years were considered. Of the 160 patients, 50 were selected by simple random sampling for radiological assessment. Intra-oral periapical radiographs (6 per patient) were taken using the paralleling long-cone technique, and the radiographs were overlaid with grids to enable measurement of bone height. The grid was ruled in 1 mm markings and placed directly over the film. Clinical examination revealed that the prevalence of dental caries was 76.25% and that of periodontal disease was 78.75%. The mean DMFT score was 2.26 and the mean OHI-S score was 3.50; the gingival index score was 1.60 and the average loss of gingival attachment was 1.2 mm. Radiographic findings showed mean alveolar bone loss of 5.05 mm in the maxillary anterior region and 4.92 mm in the maxillary posterior region; in the mandibular anterior region it was 4.35 mm and in the mandibular posterior region 5.14 mm. The overall dental health status of the leprosy patients was poor and warranted greater attention to dental care. There was also a generalized increase in alveolar bone loss. This bone loss could be due to an advanced stage of the disease or late presentation to the rehabilitation center; these patients also had peripheral neuropathy leading to hand and foot deformities, in the form of claw hand or hand ulcers, making maintenance of oral hygiene difficult.
Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.
Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus
2009-02-01
While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
Parallelization and checkpointing of GPU applications through program transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solano-Quinde, Lizandro Damian
2012-01-01
GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for GPUs tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running on multi-GPU systems. Furthermore, multi-GPU systems help to solve the GPU memory limitation for applications with large memory footprints. Parallelizing single-GPU applications has been approached by libraries that distribute the workload at runtime; however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed.
The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. To achieve our goal, this work designs and implements a framework for enhancing a single-GPU OpenCL application through application transformation.
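The dissertation's transformation framework is not reproduced in the abstract; the core workload-distribution idea, a contiguous 1-D domain decomposition with one worker per device, can be sketched as follows (a CPU-only illustration: the names are assumptions, and threads stand in for per-GPU command queues):

```python
from concurrent.futures import ThreadPoolExecutor

def partition(n, num_devices):
    """Split n work items into near-equal contiguous chunks, one per
    device -- the usual 1-D domain decomposition."""
    base, extra = divmod(n, num_devices)
    chunks, start = [], 0
    for d in range(num_devices):
        size = base + (1 if d < extra else 0)
        chunks.append((start, start + size))
        start += size
    return chunks

def run_multi_device(data, kernel, num_devices):
    """Apply `kernel` to each element, one worker per chunk."""
    out = [None] * len(data)
    def worker(bounds):
        lo, hi = bounds
        for i in range(lo, hi):
            out[i] = kernel(data[i])
    with ThreadPoolExecutor(max_workers=num_devices) as pool:
        list(pool.map(worker, partition(len(data), num_devices)))
    return out
```

A checkpoint in this picture is simply a snapshot of `out` (and any device buffers) taken between kernel launches, which is where application-level checkpointing naturally fits.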
A privacy-preserving parallel and homomorphic encryption scheme
NASA Astrophysics Data System (ADS)
Min, Zhaoe; Yang, Geng; Shi, Jingqi
2017-04-01
In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm in which the plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm can reach a speedup ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.
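The abstract omits the scheme's details; a textbook Paillier sketch with block-parallel encryption conveys the idea (the tiny primes are purely illustrative, real deployments use moduli of 2048 bits or more, and thread workers stand in for the paper's MapReduce mappers):

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def keygen(p, q):
    """Textbook Paillier key generation with g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n     # the L function, L(x) = (x - 1) / n
    return (l * mu) % n

def encrypt_blocks(pub, blocks, workers=4):
    """Block-level parallel encryption of a list of plaintext blocks."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda m: encrypt(pub, m), blocks))
```

The additive homomorphism that the scheme relies on is that multiplying two ciphertexts modulo n^2 yields an encryption of the sum of the plaintexts.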
An OpenACC-Based Unified Programming Model for Multi-accelerator Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S
2015-01-01
This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.
Operation of high power converters in parallel
NASA Technical Reports Server (NTRS)
Decker, D. K.; Inouye, L. Y.
1993-01-01
High power converters that are used in space power subsystems are limited in power handling capability due to component and thermal limitations. For applications, such as Space Station Freedom, where multi-kilowatts of power must be delivered to user loads, parallel operation of converters becomes an attractive option when considering overall power subsystem topologies. TRW developed three different unequal power sharing approaches for parallel operation of converters. These approaches, known as droop, master-slave, and proportional adjustment, are discussed and test results are presented.
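Of the three approaches, droop is the simplest to quantify: each converter follows a load line V = V0 - R_droop * I, and the bus settles where the summed converter currents meet the load. A small numeric sketch (the values are illustrative, not from the paper):

```python
def droop_share(converters, load_current):
    """converters: (v_setpoint, r_droop) per unit, each following the
    load line V = v_setpoint - r_droop * I.  Solving sum(I_k) = load
    for the common bus voltage gives the steady-state current sharing."""
    g = sum(1.0 / r for _, r in converters)       # total droop conductance
    v_bus = (sum(v / r for v, r in converters) - load_current) / g
    return v_bus, [(v - v_bus) / r for v, r in converters]
```

With equal setpoints, the converters share current inversely to their droop resistances, which is how unequal power sharing is programmed in.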
Concurrent Mission and Systems Design at NASA Glenn Research Center: The Origins of the COMPASS Team
NASA Technical Reports Server (NTRS)
McGuire, Melissa L.; Oleson, Steven R.; Sarver-Verhey, Timothy R.
2012-01-01
Established at the NASA Glenn Research Center (GRC) in 2006 to meet the need for rapid mission analysis and multi-disciplinary systems design for in-space and human missions, the Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team is a multidisciplinary, concurrent engineering group whose primary purpose is to perform integrated systems analysis, but it is also capable of designing any system that involves one or more of the disciplines present in the team. The authors were involved in the development of the COMPASS team and its design process, and are continuously making refinements and enhancements. The team was unofficially started in the early 2000s as part of the distributed team known as Team JIMO (Jupiter Icy Moons Orbiter) in support of the multi-center collaborative JIMO spacecraft design during Project Prometheus. This paper documents the origins of a concurrent mission and systems design team at GRC and how it evolved into the COMPASS team, including defining the process, gathering the team and tools, building the facility, and performing studies.
2011-01-01
Background Colorectal cancer is the second most common tumor in developed countries, with a lifetime prevalence of 5%. About one third of these tumors are located in the rectum. Surgery in terms of low anterior resection with mesorectal excision is the central element in the treatment of rectal cancer, being the only option for definitive cure. Creating a protective diverting stoma prevents complications like anastomotic failure and has meanwhile become the standard procedure. Bowel obstruction is one of the main, and clinically and economically most relevant, complications following closure of loop ileostomy. The best surgical technique for closure of loop ileostomy has not yet been defined. Methods/Design A study protocol was developed on the basis of the only randomized controlled mono-center trial, to resolve the clinical equipoise concerning the optimal surgical technique for closure of loop ileostomy after low anterior resection due to rectal cancer. The HASTA trial is a multi-center pragmatic randomized controlled surgical trial with two parallel groups to compare hand-suture versus stapling for closure of loop ileostomy. It will include 334 randomized patients undergoing closure of loop ileostomy after low anterior resection with protective ileostomy due to rectal cancer in approximately 20 centers, consisting of German hospitals at all levels of health care. The primary endpoint is the rate of bowel obstruction within 30 days after ileostomy closure. In addition, a set of surgical and general variables including quality of life will be analyzed with a follow-up of 12 months. An investigators meeting with a practical session will help to minimize performance bias and enforce protocol adherence. Centers are monitored centrally as well as on-site before and during the recruitment phase to assure inclusion, treatment and follow-up according to the protocol.
Discussion Aim of the HASTA trial is to evaluate the efficacy of hand-suture versus stapling for closure of loop ileostomy in patients with rectal cancer. Trial registration German Clinical Trial Register Number: DRKS00000040 PMID:21303515
Angular description for 3D scattering centers
NASA Astrophysics Data System (ADS)
Bhalla, Rajan; Raynal, Ann Marie; Ling, Hao; Moore, John; Velten, Vincent J.
2006-05-01
The electromagnetic scattered field from an electrically large target can often be well modeled as if it were emanating from a discrete set of scattering centers (see Fig. 1). In the scattering center extraction tool we developed previously, based on the shooting and bouncing ray technique, no correspondence is maintained among the 3D scattering centers extracted at adjacent angles. In this paper we present a multi-dimensional clustering algorithm to track the angular and spatial behaviors of 3D scattering centers and group them into features. The extracted features for the Slicy and backhoe targets are presented. We also describe two metrics for measuring the angular persistence and spatial mobility of the 3D scattering centers that make up these features, in order to gather insights into target physics and feature stability. We find that the features that are most persistent are also the most mobile, and we discuss the implications for optimal SAR imaging.
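The clustering algorithm itself is not specified in the abstract; a greedy nearest-neighbour stand-in conveys the idea of linking centers across adjacent look angles and scoring each track's persistence and mobility (the association rule, thresholds, and names are all assumptions for illustration):

```python
import math

def track_centers(frames, max_dist):
    """frames: one list of (x, y, z) scattering-center positions per
    look angle.  Greedily link a center at angle k to the closest
    center at angle k+1 that lies within max_dist."""
    tracks = []       # each track: list of (angle_index, point)
    active = []       # indices of tracks extended at the previous angle
    for k, frame in enumerate(frames):
        new_active = []
        unmatched = list(frame)
        for ti in active:
            if not unmatched:
                break
            last = tracks[ti][-1][1]
            j = min(range(len(unmatched)),
                    key=lambda idx: math.dist(last, unmatched[idx]))
            if math.dist(last, unmatched[j]) <= max_dist:
                tracks[ti].append((k, unmatched.pop(j)))
                new_active.append(ti)
        for p in unmatched:            # unassociated points start new tracks
            tracks.append([(k, p)])
            new_active.append(len(tracks) - 1)
        active = new_active
    return tracks

def persistence(track):
    # angular extent: how many adjacent look angles the center survives
    return track[-1][0] - track[0][0] + 1

def mobility(track):
    # spatial path length traced by the center as the angle sweeps
    pts = [p for _, p in track]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
```

The paper's observation that persistent features are also mobile corresponds to tracks with both large angular extent and large path length.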
Robinson, Thomas N.; Matheson, Donna; Desai, Manisha; Wilson, Darrell M.; Weintraub, Dana L.; Haskell, William L.; McClain, Arianna; McClure, Samuel; Banda, Jorge; Sanders, Lee M.; Haydel, K. Farish; Killen, Joel D.
2013-01-01
Objective To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Design Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Participants Seven through eleven year old, overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Interventions Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Main Outcome Measure Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. Conclusions The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families. 
PMID:24028942
Yeh, Hsin-Chieh; Clark, Jeanne M; Emmons, Karen E; Moore, Reneé H; Bennett, Gary G; Warner, Erica T; Sarwer, David B; Jerome, Gerald J; Miller, Edgar R; Volger, Sheri; Louis, Thomas A; Wells, Barbara; Wadden, Thomas A; Colditz, Graham A; Appel, Lawrence J
2010-08-01
The National Heart, Lung, and Blood Institute (NHLBI) funded three institutions to conduct effectiveness trials of weight loss interventions in primary care settings. Unlike traditional multi-center clinical trials, each study was established as an independent trial with a distinct protocol. Still, efforts were made to coordinate and standardize several aspects of the trials. The three trials formed a collaborative group, the 'Practice-based Opportunities for Weight Reduction (POWER) Trials Collaborative Research Group.' We describe the common and distinct features of the three trials, the key characteristics of the collaborative group, and the lessons learned from this novel organizational approach. The Collaborative Research Group consists of three individual studies: 'Be Fit, Be Well' (Washington University in St. Louis/Harvard University), 'POWER Hopkins' (Johns Hopkins), and 'POWER-UP' (University of Pennsylvania). There are a total of 15 participating clinics with ~1100 participants. The common primary outcome is change in weight at 24 months of follow-up, but each protocol has trial-specific elements including different interventions and different secondary outcomes. A Resource Coordinating Unit at Johns Hopkins provides administrative support. The Collaborative Research Group established common components to facilitate potential cross-site comparisons. The main advantage of this approach is to develop and evaluate several interventions, when there is insufficient evidence to test one or two approaches, as would be done in a traditional multi-center trial. The challenges of the organizational design include the complex decision-making process, the extent of potential data pooling, time intensive efforts to standardize reports, and the additional responsibilities of the DSMB to monitor three distinct protocols.
Zou, Yi; Chakravarty, Swapnajit; Zhu, Liang; Chen, Ray T.
2014-01-01
We experimentally demonstrate an efficient and robust method for series connection of photonic crystal microcavities that are coupled to photonic crystal waveguides in the slow light transmission regime. We demonstrate that group index taper engineering provides excellent optical impedance matching between the input and output strip waveguides and the photonic crystal waveguide, a nearly flat transmission over the entire guided mode spectrum and clear multi-resonance peaks corresponding to individual microcavities that are connected in series. Series connected photonic crystal microcavities are further multiplexed in parallel using cascaded multimode interference power splitters to generate a high density silicon nanophotonic microarray comprising 64 photonic crystal microcavity sensors, all of which are interrogated simultaneously at the same instant of time. PMID:25316921
MiDAS ENCORE: Randomized Controlled Clinical Trial Report of 6-Month Results.
Staats, Peter S; Benyamin, Ramsin M
2016-02-01
Patients suffering from neurogenic claudication due to lumbar spinal stenosis (LSS) often experience moderate to severe pain and significant functional disability. Neurogenic claudication results from progressive degenerative changes in the spine, and most often affects the elderly. Both the MILD® procedure and epidural steroid injections (ESIs) offer interventional pain treatment options for LSS patients experiencing neurogenic claudication refractory to more conservative therapies. MILD provides an alternative to ESIs via minimally invasive lumbar decompression. Prospective, multi-center, randomized controlled clinical trial. Twenty-six US interventional pain management centers. To compare patient outcomes following treatment with either MILD (treatment group) or ESIs (active control group) in LSS patients with neurogenic claudication and verified ligamentum flavum hypertrophy. This prospective, multi-center, randomized controlled clinical trial includes 2 study arms with a 1-to-1 randomization ratio. A total of 302 patients were enrolled, with 149 randomized to MILD and 153 to the active control. Six-month follow-up has been completed and is presented in this report. In addition, one year follow-up will be conducted for patients in both study arms, and supplementary 2 year outcome data will be collected for patients in the MILD group only. Outcomes are assessed using the Oswestry Disability Index (ODI), numeric pain rating scale (NPRS) and Zurich Claudication Questionnaire (ZCQ). Primary efficacy is the proportion of ODI responders, tested for statistical superiority of the MILD group versus the active control group. ODI responders are defined as patients achieving the validated Minimal Important Change (MIC) of ≥ 10-point improvement in ODI from baseline to follow-up. Similarly, secondary efficacy includes proportion of NPRS and ZCQ responders using validated MIC thresholds.
Primary safety is the incidence of device or procedure-related adverse events in each group. At 6 months, all primary and secondary efficacy results provided statistically significant evidence that MILD is superior to the active control. For primary efficacy, the proportion of ODI responders in the MILD group (62.2%) was statistically significantly higher than for the epidural steroid group (35.7%) (P < 0.001). Further, all secondary efficacy parameters demonstrated statistical superiority of MILD versus the active control. The primary safety endpoint was achieved, demonstrating that there is no difference in safety between MILD and ESIs (P = 1.00). Limitations include lack of patient blinding due to considerable differences in treatment protocols, and a potentially higher non-responder rate for both groups versus standard-of-care due to study restrictions on adjunctive pain therapies. Six month follow-up data from this trial demonstrate that the MILD procedure is statistically superior to epidural steroids, a known active treatment for LSS patients with neurogenic claudication and verified central stenosis due to ligamentum flavum hypertrophy. The results of all primary and secondary efficacy outcome measures achieved statistically superior outcomes in the MILD group versus ESIs. Further, there were no statistically significant differences in the safety profile between study groups. This prospective, multi-center, randomized controlled clinical trial provides strong evidence of the effectiveness of MILD versus epidural steroids in this patient population. NCT02093520.
User-Centered Design through Learner-Centered Instruction
ERIC Educational Resources Information Center
Altay, Burçak
2014-01-01
This article initially demonstrates the parallels between the learner-centered approach in education and the user-centered approach in design disciplines. Afterward, a course on human factors that applies learner-centered methods to teach user-centered design is introduced. The focus is on three tasks to identify the application of theoretical and…
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
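As a rough software analogue of the image-manipulation step (source cache, coefficient table, target cache), the transformation routine amounts to a table-driven weighted sum; a minimal sketch of that idea, not the patented hardware:

```python
def apply_transform(source, coeff_table):
    """target[i] = sum_j coeff_table[i][j] * source[j]: the table-driven
    routine run between loading the source cache and filling the target
    cache; each row of the table selects and weights source samples."""
    return [sum(c * s for c, s in zip(row, source)) for row in coeff_table]
```

In tomographic reconstruction, the coefficient table would encode the interpolation and weighting of projection samples, so reprogramming the table changes the algorithm without changing the engine.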
System, methods and apparatus for program optimization for multi-threaded processor architectures
Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E
2015-01-06
Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores.
For a matrix multiplication benchmark, an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
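Rubus itself operates on Java bytecode and targets GPUs; purely to illustrate the before/after of the transformation it automates, here is a sequential loop and the data-parallel form such a compiler emits (a Python sketch with thread workers standing in for GPU cores; the pixel kernel is an illustrative example, not one of the paper's benchmarks):

```python
from concurrent.futures import ThreadPoolExecutor

def grayscale_seq(pixels):
    # the sequential loop a programmer writes
    return [int(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

def grayscale_par(pixels, workers=4):
    """The auto-parallelized form: the loop body becomes a kernel
    mapped over contiguous index chunks, one per worker."""
    def kernel(chunk):
        return [int(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in chunk]
    step = max(1, (len(pixels) + workers - 1) // workers)
    chunks = [pixels[i:i + step] for i in range(0, len(pixels), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [px for part in pool.map(kernel, chunks) for px in part]
```

The compiler's value is that the programmer only ever writes the first form; the second is generated, which is what makes the parallelism "seamless".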
Rubus: A compiler for seamless and extensible parallelism
Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages which were designed to work with machines having single core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, to parallelize legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer’s expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. 
For a matrix multiplication benchmark, an average execution speedup of 84 times was achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758
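The kind of transformation such a compiler automates can be illustrated by hand: the independent iterations of a loop nest are distributed across workers. Below is a minimal Python sketch of data-parallel matrix multiplication; it assumes nothing about Rubus's actual code generation, and the function names and worker count are illustrative only.

```python
from multiprocessing import Pool

def row_times_matrix(args):
    # Compute one row of the product C = A * B.
    row, b = args
    n_cols = len(b[0])
    return [sum(row[k] * b[k][j] for k in range(len(b))) for j in range(n_cols)]

def parallel_matmul(a, b, workers=4):
    # Each result row depends only on one row of A and all of B, so the
    # row computations are independent and can run in separate workers.
    # An auto-parallelizing compiler derives this decomposition from the
    # sequential loop nest without programmer intervention.
    with Pool(workers) as pool:
        return pool.map(row_times_matrix, [(row, b) for row in a])

if __name__ == "__main__":
    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    print(parallel_matmul(a, b))  # [[19, 22], [43, 50]]
```

The sequential version is the same loop nest run on one worker; the speedup reported for Rubus comes from performing this kind of distribution automatically on many more cores.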
Variable Swing Optimal Parallel Links - Minimal Power, Maximal Density for Parallel Links
2009-01-01
Kaminski, Rafal; Kulinski, Krzysztof; Kozar-Kaminska, Katarzyna; Wielgus, Monika; Langner, Maciej; Wasko, Marcin K; Kowalczewski, Jacek; Pomianowski, Stanislaw
2018-01-01
The present study aimed to investigate the effectiveness and safety of platelet-rich plasma (PRP) application in arthroscopic repair of complete vertical tear of meniscus located in the red-white zone. This single center, prospective, randomized, double-blind, placebo-controlled, parallel-arm study included 37 patients with complete vertical meniscus tears. Patients received an intrarepair site injection of either PRP or sterile 0.9% saline during an index arthroscopy. The primary endpoint was the rate of meniscus healing in the two groups. The secondary endpoints were changes in the International Knee Documentation Committee (IKDC) score, Knee Injury and Osteoarthritis Outcome Score (KOOS), Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and visual analog scale (VAS) in the two groups at 42 months. After 18 weeks, the meniscus healing rate was significantly higher in the PRP-treated group than in the control group (85% versus 47%, P = 0.048). Functional outcomes were significantly better 42 months after treatment than at baseline in both groups. The IKDC score, WOMAC, and KOOS were significantly better in the PRP-treated group than in the control group. No adverse events were reported during the study period. The findings of this study indicate that PRP augmentation in meniscus repair results in improvements in both meniscus healing and functional outcome.
Kulinski, Krzysztof; Kozar-Kaminska, Katarzyna; Wielgus, Monika; Langner, Maciej; Wasko, Marcin K.; Kowalczewski, Jacek; Pomianowski, Stanislaw
2018-01-01
Objective The present study aimed to investigate the effectiveness and safety of platelet-rich plasma (PRP) application in arthroscopic repair of complete vertical tear of meniscus located in the red-white zone. Methods This single center, prospective, randomized, double-blind, placebo-controlled, parallel-arm study included 37 patients with complete vertical meniscus tears. Patients received an intrarepair site injection of either PRP or sterile 0.9% saline during an index arthroscopy. The primary endpoint was the rate of meniscus healing in the two groups. The secondary endpoints were changes in the International Knee Documentation Committee (IKDC) score, Knee Injury and Osteoarthritis Outcome Score (KOOS), Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and visual analog scale (VAS) in the two groups at 42 months. Results After 18 weeks, the meniscus healing rate was significantly higher in the PRP-treated group than in the control group (85% versus 47%, P = 0.048). Functional outcomes were significantly better 42 months after treatment than at baseline in both groups. The IKDC score, WOMAC, and KOOS were significantly better in the PRP-treated group than in the control group. No adverse events were reported during the study period. Conclusions The findings of this study indicate that PRP augmentation in meniscus repair results in improvements in both meniscus healing and functional outcome. PMID:29713647
Influence of multi-walled carbon nanotubes on the cognitive abilities of Wistar rats
Sayapina, Nina V.; Sergievich, Alexander A.; Kuznetsov, Vladimir L.; Chaika, Vladimir V.; Lisitskaya, Irina G.; Khoroshikh, Pavel P.; Batalova, Tatyana A.; Tsarouhas, Kostas; Spandidos, Demetrios; Tsatsakis, Aristidis M.; Fenga, Concettina; Golokhvast, Kirill S.
2016-01-01
Studies of the neurobehavioral effects of carbon nanomaterials, particularly those of multi-walled carbon nanotubes (MWCNTs), have concentrated on cognitive effects, but data are scarce. The aim of this study was to assess the influence of MWCNTs on a number of higher nervous system functions of Wistar rats. For a period of 10 days, two experimental groups were fed with MWCNTs of different diameters (MWCNT-1 group, 8–10 nm; MWCNT-2 group, 18–20 nm) once a day at a dosage of 500 mg/kg. In the open-field test, reductions of integral indications of researching activity were observed for the two MWCNT-treated groups, with a parallel significant (P<0.01) increase in stress levels for these groups compared with the untreated control group. In the elevated plus-maze test, integral indices of researching activity in the MWCNT-1 and MWCNT-2 groups reduced by day 10 by 51 and 62%, respectively, while rat stress levels remained relatively unchanged. In the universal problem solving box test, reductions in motivation and energy indices of researching activity were observed in the two experimental groups. Searching activity in the MWCNT-1 group by day 3 was reduced by 50% (P<0.01) and in the MWCNT-2 group the relevant reduction reached 11.2%. By day 10, the reduction compared with controls, was 64% (P<0.01) and 58% (P<0.01) for the MWCNT-1 and MWCNT-2 groups, respectively. In conclusion, a series of specific tests demonstrated that MWCNT-treated rats experienced a significant reduction of some of their cognitive abilities, a disturbing and worrying finding, taking into consideration the continuing and accelerating use of carbon nanotubes in medicine and science. PMID:27588053
[The specialty clinical centers within the structure of the regional multi-specialty hospital].
Fadeev, M G
2008-01-01
The analysis of the functioning of the regional referral clinical center of hand surgery, the eye injury center, the pediatric burns center and the neurosurgical center, situated within large multi-field hospitals of the City of Ekaterinburg, is presented. Common conditions of their activity, such as the availability of experienced personnel and the support of medical academy chairs, are identified. The specialty referral clinical centers, organized before perestroika and the subsequent reforms, continue to function successfully, providing high-tech medical care to the patients of the megapolis and to the inhabitants of the Sverdlovskaya Oblast. The effectiveness and promise of the continued functioning of the specialty referral clinical centers embedded in the structure of the municipal multi-field hospitals under the conditions of health reforms is demonstrated.
Xia, Shuang; Li, Xueqin; Shi, Yanbin; Liu, Jinxin; Zhang, Mengjie; Gu, Tenghui; Pan, Shinong; Song, Liucun; Xu, Jinsheng; Sun, Yan; Zhao, Qingxia; Lu, Zhiyan; Lu, Puxuan; Li, Hongjun
2016-02-01
The objective of this paper is to correlate the MRI distribution of cryptococcal meningoencephalitis in HIV-1-infected patients with CD4 T cell count and immune reconstitution effect. A large retrospective cohort study of HIV patients from multiple HIV centers in China was conducted to demonstrate the MRI distribution of cryptococcal meningoencephalitis and its correlation with different immune statuses. The consecutive clinical and neuroimaging data of 55 HIV-1-infected patients with cryptococcal meningoencephalitis, collected at multiple HIV centers in China during the years 2011 to 2014, were retrospectively analyzed. The enrolled patients were divided into 2 groups based on the distribution of lesions. One group of patients had their lesions in the central brain (group 1, n = 34) and the other group of patients had their lesions in the superficial brain (group 2, n = 21). We explored their brain MRI characterization. In addition, we also compared CD4 T cell counts and immune reconstitution effects between the 2 groups based on the imaging findings. No statistical difference was found in terms of age and gender between the 2 groups. The medians of CD4 T cell counts were 11.67 cells/mm³ (3.00-52.00 cells/mm³) in group 1 and 42.00 cells/mm³ (10.00-252.00 cells/mm³) in group 2. A statistical difference in CD4 T cell count was found between the 2 groups (P = 0.023). Thirteen patients in group 1 (13/34) and 12 patients in group 2 (12/21) received highly active antiretroviral treatment (HAART). Patients of group 2 received HAART therapy more frequently than patients of group 1 (P = 0.021). Central and superficial brain lesions detected by MR imaging in HIV-1-infected patients with cryptococcal meningoencephalitis correlate with host immunity and HAART therapy.
ERIC Educational Resources Information Center
Valdez, Carmen R.; Mills, Monique T.; Bohlig, Amanda J.; Kaplan, David
2013-01-01
This person-centered study examines the extent to which parents' language dominance influences the effects of an after school, multi-family group intervention, FAST, on low-income children's emotional and behavioral outcomes via parents' relations with other parents and with school staff. Social capital resides in relationships of trust and shared…
"Seeing" the School Reform Elephant: Connecting Policy Makers, Parents, Practioners, and Students.
ERIC Educational Resources Information Center
Wagner, Tony; Sconyers, Nancy
This report is part of a multi-year project conducted by the Institute for Responsive Education (IRE) and Boston University components of the Center on Families, Communities, Schools and Children's Learning. The report draws on results of a series of focus groups and interviews conducted in 1994 and 1995 to explore how policymakers and parents,…
ERIC Educational Resources Information Center
Rucker, Douglas; Feldman, David
The comparative effectiveness of two student monitoring and reinforcement strategies was assessed among primary school students. The 50 participating students met in a multi-purpose instructional center during one of two sessions for academic periods of 30 minutes, three times a week. Students were assigned to one of six study groups in the…
IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.
NASA Astrophysics Data System (ADS)
Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier
2017-04-01
The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully-equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Based on IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and the atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening new gates towards multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) depicted in IPSL-CM5A. We present the tuning strategy used to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2.
Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was accommodated by a new design of the ocean tripolar grid.
A multi-center randomized trial of two different intravenous fluids during labor
DAPUZZO-ARGIRIOU, Lisa M.; SMULIAN, John C.; ROCHON, Meredith L.; GALDI, Luisa; KISSLING, Jessika M.; SCHNATZ, Peter F.; RIOS, Angel GONZALEZ; AIROLDI, James; CARRILLO, Mary Anne; MAINES, Jaimie; KUNSELMAN, Allen R.; REPKE, John; LEGRO, Richard S.
2017-01-01
Objective To determine if the intrapartum use of a 5% glucose-containing intravenous solution decreases the chance of a cesarean delivery for women presenting in active labor. Methods This was a multi-center, prospective, single (patient) blind, randomized study design implemented at 4 obstetric residency programs in Pennsylvania. Singleton, term, consenting women presenting in active spontaneous labor with a cervical dilation of <6cm were randomized to lactated Ringer's with or without 5% glucose (LR versus D5LR) as their maintenance intravenous fluid. The primary outcome was the cesarean birth rate. Secondary outcomes included labor characteristics, as well as maternal or neonatal complications. Results There were 309 women analyzed. Demographic variables and admitting cervical dilation were similar among study groups. There was no significant difference in the cesarean delivery rate for the D5LR group (23/153 or 15.0%) versus the LR arm (18/156 or 11.5%), [RR (95%CI) of 1.32 (0.75, 2.35), P=0.34]. There were no differences in augmentation rates or intrapartum complications. Conclusions The use of intravenous fluid containing 5% dextrose does not lower the chance of cesarean delivery for women admitted in active labor. PMID:25758624
Crosetto, D.B.
1996-12-31
The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.
Crosetto, Dario B.
1996-01-01
The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, of which only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
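The "embarrassingly parallel" structure of the Jacobian comes from the independence of its columns: each column of a forward-difference Jacobian needs only one extra function evaluation. A minimal Python sketch follows; it is not the MATLAB spmd code used in the study, and the function name, step size, and worker count are assumptions for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def jacobian_parallel(f, x, h=1e-6, workers=4):
    """Forward-difference Jacobian with columns computed in parallel.

    Column j is J[:, j] = (f(x + h*e_j) - f(x)) / h; the columns are
    mutually independent, which is what makes the computation
    embarrassingly parallel.
    """
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))

    def column(j):
        # Perturb one coordinate and difference against the base point.
        xp = x.copy()
        xp[j] += h
        return (np.asarray(f(xp)) - f0) / h

    with ThreadPoolExecutor(max_workers=workers) as ex:
        cols = list(ex.map(column, range(x.size)))
    return np.column_stack(cols)
```

For expensive residual functions the per-column work dominates and distributing columns across workers pays off; for cheap functions the dispatch overhead can dominate, which mirrors the paper's observation that linear speedup is hard to reach in practice.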
Boshuisen, Kim; Lamberink, Herm J; van Schooneveld, Monique Mj; Cross, J Helen; Arzimanoglou, Alexis; van der Tweel, Ingeborg; Geleijns, Karin; Uiterwaal, Cuno Spm; Braun, Kees Pj
2015-10-26
The goals of intended curative pediatric epilepsy surgery are to achieve seizure-freedom and antiepileptic drug (AED) freedom. Retrospective cohort studies have indicated that early postoperative AED withdrawal unmasks incomplete surgical success and AED dependency sooner, but not at the cost of long-term seizure outcome. Moreover, AED withdrawal seemed to improve cognitive outcome. A randomized trial is needed to confirm these findings. We hypothesized that early AED withdrawal in children is not only safe, but also beneficial with respect to cognitive functioning. This is a multi-center pragmatic randomized clinical trial to investigate whether early AED withdrawal improves cognitive function, in terms of attention, executive function and intelligence, quality of life and behavior, and to confirm safety in terms of eventual seizure freedom, seizure recurrences and "seizure and AED freedom." Patients will be randomly allocated in parallel groups (1:1) to either early or late AED withdrawal. Randomization will be concealed and stratified for preoperative IQ and medical center. In the early withdrawal arm reduction of AEDs will start 4 months after surgery, while in the late withdrawal arm reduction starts 12 months after surgery, with intended complete cessation of drugs after 12 and 20 months respectively. Cognitive outcome measurements will be performed preoperatively, and at 1 and 2 years following surgery, and consist of assessment of attention and executive functioning using the EpiTrack Junior test and intelligence expressed as IQ (Wechsler Intelligence Scales). Seizure outcomes will be assessed at 24 months after surgery, and at 20 months following start of AED reduction.
We aim to randomize 180 patients who underwent anticipated curative epilepsy surgery below 16 years of age, were able to perform the EpiTrack Junior test preoperatively, and have no predictors of poor postoperative seizure prognosis (multifocal magnetic resonance imaging (MRI) abnormalities, incomplete resection of the lesion, epileptic postoperative electroencephalogram (EEG) abnormalities, or more than three AEDs at the time of surgery). Growing experience with epilepsy surgery has changed the view towards postoperative medication policy. In a European collaboration, we designed a multi-center pragmatic randomized clinical trial comparing early with late AED withdrawal to investigate benefits and safety of early AED withdrawal. The TTS trial is supported by the Dutch Epilepsy Fund (NL 08-10) ISRCTN88423240/ 08/05/2013.
2013-01-01
Background Group-based social skills training (SST) has repeatedly been recommended as treatment of choice in high-functioning autism spectrum disorder (HFASD). To date, no sufficiently powered randomised controlled trial has been performed to establish efficacy and safety of SST in children and adolescents with HFASD. In this randomised, multi-centre, controlled trial with 220 children and adolescents with HFASD it is hypothesized, that add-on group-based SST using the 12 weeks manualised SOSTA–FRA program will result in improved social responsiveness (measured by the parent rated social responsiveness scale, SRS) compared to treatment as usual (TAU). It is further expected, that parent and self reported anxiety and depressive symptoms will decline and pro-social behaviour will increase in the treatment group. A neurophysiological study in the Frankfurt HFASD subgroup will be performed pre- and post treatment to assess changes in neural function induced by SST versus TAU. Methods/design The SOSTA – net trial is designed as a prospective, randomised, multi-centre, controlled trial with two parallel groups. The primary outcome is change in SRS score directly after the intervention and at 3 months follow-up. Several secondary outcome measures are also obtained. The target sample consists of 220 individuals with ASD, included at the six study centres. Discussion This study is currently one of the largest trials on SST in children and adolescents with HFASD worldwide. Compared to recent randomised controlled studies, our study shows several advantages with regard to in- and exclusion criteria, study methods, and the therapeutic approach chosen, which can be easily implemented in non-university-based clinical settings. Trial registration ISRCTN94863788 – SOSTA – net: Group-based social skills training in children and adolescents with high functioning autism spectrum disorder. PMID:23289935
The potential of multi-port optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1975-01-01
A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earl, Christopher; Might, Matthew; Bagusetty, Abhishek
This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
Three-Component Reaction Discovery Enabled by Mass Spectrometry of Self-Assembled Monolayers
Montavon, Timothy J.; Li, Jing; Cabrera-Pardo, Jaime R.; Mrksich, Milan; Kozmin, Sergey A.
2011-01-01
Multi-component reactions have been extensively employed in many areas of organic chemistry. Despite significant progress, the discovery of such enabling transformations remains challenging. Here, we present the development of a parallel, label-free reaction-discovery platform, which can be used for identification of new multi-component transformations. Our approach is based on the parallel mass spectrometric screening of interfacial chemical reactions on arrays of self-assembled monolayers. This strategy enabled the identification of a simple organic phosphine that can catalyze a previously unknown condensation of siloxy alkynes, aldehydes and amines to produce 3-hydroxy amides with high efficiency and diastereoselectivity. The reaction was further optimized using solution phase methods. PMID:22169871
Earl, Christopher; Might, Matthew; Bagusetty, Abhishek; ...
2016-01-26
This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo
2011-03-01
We demonstrate a real-time display of processed OCT images using multi-thread parallel computing on the quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data transfer rate between the cores of the CPU and the image data stored in RAM. A display rate of 29.9 frames/sec for processed OCT data (4096 FFT size x 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for the OCT image and for a Doppler OCT image with 4-times averaging are 23.8 msec and 91.4 msec, respectively.
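The per-A-line independence exploited here can be sketched as follows: each A-line is one vector, so contiguous chunks of A-scans can be transformed concurrently. This is a generic Python illustration, not the authors' implementation; the frame dimensions, chunking strategy, and thread count are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame, workers=4):
    """FFT-process one OCT frame (n_ascans x fft_size) in parallel.

    The frame is split row-wise into chunks, one per worker; each chunk
    of A-lines is transformed with a vectorized FFT and converted to
    log-magnitude, then the chunks are stacked back into an image.
    """
    chunks = np.array_split(frame, workers)

    def fft_chunk(c):
        # Vectorized FFT over every A-line in the chunk (axis=1), keeping
        # the positive-frequency half, then log magnitude for display.
        spectrum = np.fft.fft(c, axis=1)
        half = c.shape[1] // 2
        return 20 * np.log10(np.abs(spectrum[:, :half]) + 1e-12)

    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.vstack(list(ex.map(fft_chunk, chunks)))
```

Treating each A-line as one contiguous vector keeps the FFT and magnitude steps cache-friendly, which is the same motivation as the data vectorization described in the abstract.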
Zheng, Jin-Ping; Wen, Fu-Qiang; Bai, Chun-Xue; Wan, Huan-Ying; Kang, Jian; Chen, Ping; Yao, Wan-Zhen; Ma, Li-Jun; Xia, Qi-Kui; Gao, Yi; Zhong, Nan-Shan
2013-04-01
Chronic obstructive pulmonary disease (COPD) is characterized by persistent airflow limitation; from a pathophysiological point of view it involves many components, including mucus hypersecretion, oxidative stress and inflammation. N-acetylcysteine (NAC) is a mucolytic agent with antioxidant and anti-inflammatory properties. Long-term efficacy of NAC 600 mg/d in COPD is controversial; a dose-effect relationship has been demonstrated, but at present it is not known whether a higher dose provides clinical benefits. The PANTHEON Study is a prospective, ICS-stratified, randomized, double-blind, placebo-controlled, parallel-group, multi-center trial designed to assess the efficacy and safety of high-dose (1200 mg/daily) NAC treatment for one year in moderate-to-severe COPD patients. The primary endpoint is the annual exacerbation rate. Secondary endpoints include recurrent exacerbations hazard ratio, time to first exacerbation, as well as quality of life and pulmonary function. The hypothesis, design and methodology are described and baseline characteristics of recruited patients are presented. 1006 COPD patients (444 treated with maintenance ICS, 562 ICS naive, aged 66.27±8.76 yrs, average post-bronchodilator FEV1 48.95±11.80% of predicted) have been randomized at 34 hospitals in China. Final results of this study will provide objective data on the effects of high-dose (1200 mg/daily) long-term NAC treatment in the prevention of COPD exacerbations and other outcome variables.
Implementing Shared Memory Parallelism in MCBEND
NASA Astrophysics Data System (ADS)
Bird, Adam; Long, David; Dobson, Geoff
2017-09-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
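The essence of the shared-memory approach, one copy of the problem data scored by many concurrent workers instead of one copy per process, can be sketched as follows. This is an illustrative Python analogue only, not MCBEND code (MCBEND's implementation uses OpenMP); the tally structure, history counts, and "score" are invented for the example.

```python
import random
from multiprocessing import Array, Process

def tally(shared, seed, n_histories):
    # Each worker scores its histories into a private buffer, then folds
    # the result into the single shared tally under a lock. One copy of
    # the tally serves all workers, which is the memory saving that
    # motivates shared-memory parallelism in the abstract above.
    rng = random.Random(seed)
    local = [0] * len(shared)
    for _ in range(n_histories):
        local[rng.randrange(len(shared))] += 1  # stand-in for a real score
    with shared.get_lock():
        for i, v in enumerate(local):
            shared[i] += v

def run(workers=4, histories=1000, bins=8):
    shared = Array('i', bins)  # shared tally, initialized to zeros
    procs = [Process(target=tally, args=(shared, s, histories))
             for s in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(shared)
```

Accumulating locally and merging once per worker keeps lock contention low; the replicated-calculation approach the paper contrasts with would instead duplicate the whole tally (and model) in every instance.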
Parallel workflow tools to facilitate human brain MRI post-processing
Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang
2015-01-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
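The scheduling idea behind such workflow tools, steps for one subject running in order while independent subjects run concurrently, can be sketched in a few lines of Python. The step names below are placeholders for illustration, not calls into any real neuroimaging toolchain.

```python
from concurrent.futures import ProcessPoolExecutor

def skull_strip(subject_id):
    # Placeholder for a real preprocessing step (e.g. brain extraction).
    return subject_id + ":stripped"

def normalize(data):
    # Placeholder for a subsequent step that depends on the previous one.
    return data + ":normalized"

def run_pipeline(subject_id):
    # Steps within one subject are sequential because each consumes the
    # previous step's output.
    return normalize(skull_strip(subject_id))

def process_cohort(subjects, workers=4):
    # Subjects are independent, so whole pipelines run in parallel,
    # the same subject-level parallelism the workflow tools exploit.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(run_pipeline, subjects))

if __name__ == "__main__":
    print(process_cohort(["sub-01", "sub-02", "sub-03"]))
```

Real workflow engines add the pieces this sketch omits: dependency graphs between steps, caching of intermediate results, and restarts after failures.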
Carpet: Adaptive Mesh Refinement for the Cactus Framework
NASA Astrophysics Data System (ADS)
Schnetter, Erik; Hawley, Scott; Hawke, Ian
2016-11-01
Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.
76 FR 66309 - Pilot Program for Parallel Review of Medical Products; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS-3180-N2] Food and Drug Administration [Docket No. FDA-2010-N-0308] Pilot Program for Parallel Review of Medical... technologies to participate in a program of parallel FDA-CMS review. The document was published with an...
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
NASA Astrophysics Data System (ADS)
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve lowlatency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder will be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better color sub sampling patterns like YUV, 4:2:2, or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programing model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. 
This work describes a scalable parallel architecture for an H.264-compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times to cater to different performance needs; the DFM serves the data required by the different DFUs and also manages the neighboring data required for their future processing. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
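The macroblock-level parallelism described above is constrained by the deblocking filter's data dependencies: each macroblock needs its left and top neighbors filtered first. A minimal sketch of the resulting wavefront schedule (illustrative only, not the paper's DFU/DFM design):

```python
# Hypothetical sketch: wavefront scheduling of macroblocks for a
# deblocking filter. Each macroblock (MB) depends on its left and top
# neighbors, so all MBs on the same anti-diagonal are independent and
# can be filtered in parallel (one DFU instance per MB, say).

def wavefront_schedule(mb_cols, mb_rows):
    """Group macroblock coordinates into waves of independent MBs."""
    waves = []
    for d in range(mb_cols + mb_rows - 1):  # anti-diagonal index
        wave = [(x, d - x) for x in range(mb_cols) if 0 <= d - x < mb_rows]
        waves.append(wave)
    return waves

# A 4x3 frame: the widest wave has min(4, 3) = 3 independent MBs.
waves = wavefront_schedule(4, 3)
print(len(waves))                  # 6 waves in total
print(max(len(w) for w in waves))  # peak parallelism: 3
```

Peak parallelism is bounded by min(columns, rows) of macroblocks, which is one reason finer-grained (sub-block or pixel-row) parallelism matters for small frames.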
Sahoo, Satya S.; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S.; Zhang, Guo-Qiang
2011-01-01
Clinical studies often use data dictionaries with controlled sets of terms that facilitate data collection but limit interoperability and sharing beyond the local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper-level ontology that models a common set of terms at multiple levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry. PMID:22195180
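The abstract does not spell out the MiDas algorithm itself, but the general idea of deriving an upper-domain ontology can be sketched as collapsing deep terms onto ancestors within a fixed abstraction depth. The sketch below is a hedged illustration of that idea, not the published algorithm; all class names are invented:

```python
# Hedged sketch (NOT the published MiDas algorithm): one way to derive a
# minimal "upper domain" from a domain ontology is to keep only terms
# within a fixed depth of the root, mapping deeper terms onto their
# nearest kept ancestor. Term names below are illustrative.

def upper_domain(parents, root, max_depth):
    """parents: child -> parent map; returns {term: kept ancestor}."""
    def depth(t):
        d = 0
        while t != root:
            t = parents[t]
            d += 1
        return d
    mapping = {}
    for term in list(parents) + [root]:
        t = term
        while depth(t) > max_depth:      # climb until within the cut
            t = parents[t]
        mapping[term] = t
    return mapping

toy = {"apnea": "sleep_disorder", "sleep_disorder": "disorder",
       "disorder": "entity", "polysomnogram": "measurement",
       "measurement": "entity"}
m = upper_domain(toy, "entity", 1)
print(m["apnea"])  # collapsed to "disorder"
```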
Soutome, Sakiko; Yanamoto, Souichi; Funahara, Madoka; Hasegawa, Takumi; Komori, Takahide; Oho, Takahiko; Umeda, Masahiro
2016-08-01
Post-operative pneumonia is a frequent and possibly fatal complication of esophagectomy and is likely caused by aspiration of oropharyngeal fluid that contains pathogenic micro-organisms. We conducted a multi-center retrospective study to investigate the preventive effect of oral health care on post-operative pneumonia among patients with esophageal cancer who underwent esophagectomy. A total of 280 patients underwent esophagectomy at three university hospitals. These patients were divided retrospectively into those who received pre-operative oral care from dentists and dental hygienists (oral care group; n = 173) and those who did not receive such care (control group; n = 107). We evaluated the correlations between the occurrence of post-operative pneumonia and 18 predictive variables (patient factors, tumor factors, treatment factors, and pre-operative oral care) using the χ² test and logistic regression analysis. Differences in mean hospital stay and mortality rate between the groups were analyzed with Student's t-test. Age, post-operative dysphagia, and absence of pre-operative oral care were correlated significantly with post-operative pneumonia in the univariable analysis. Multivariable analysis revealed that diabetes mellitus, post-operative dysphagia, and the absence of pre-operative oral care were independent risk factors for post-operative pneumonia. The mean hospital stay and mortality rate did not differ between the oral care and control groups. Pre-operative oral care may be an effective and easy method to prevent post-operative pneumonia in patients who are undergoing esophagectomy.
FleCSPH - a parallel and distributed SPH implementation based on the FleCSI framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Loiseau, Julien
2017-06-20
FleCSPH is a multi-physics compact application that exercises FleCSI parallel data structures for tree-based particle methods. In particular, FleCSPH implements a smoothed-particle hydrodynamics (SPH) solver for the solution of Lagrangian problems in astrophysics and cosmology. FleCSPH includes support for gravitational forces using the fast multipole method (FMM).
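The SPH method that FleCSPH implements interpolates field quantities with a smoothing kernel; the density summation below is a minimal standalone sketch (1D, brute-force neighbor search), shown only to illustrate the per-particle loop that a tree-based code like FleCSPH parallelizes:

```python
# Minimal SPH density summation in 1D with a cubic-spline kernel.
# Brute-force O(N^2) neighbor search for clarity; FleCSPH uses a
# tree-based method instead. Not taken from the FleCSPH source.
import math

def w_cubic(r, h):
    """Standard 1D cubic-spline smoothing kernel with support 2h."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density(xs, masses, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    return [sum(m * w_cubic(abs(xi - xj), h)
                for xj, m in zip(xs, masses))
            for xi in xs]

xs = [0.1 * i for i in range(20)]     # uniform particle spacing
rho = density(xs, [1.0] * 20, h=0.1)
# Interior particles see near-uniform density; edges are deficient
# because they lack neighbors on one side.
print(rho[10] > rho[0])
```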
Webinar Presentation: Assessing Neurodevelopment in Parallel Animal and Human Studies
This presentation, Assessing Neurodevelopment in Parallel Animal and Human Studies, was given at the NIEHS/EPA Children's Centers 2015 Webinar Series: Interdisciplinary Approaches to Neurodevelopment held on Sept. 9, 2015.
A Programming Framework for Scientific Applications on CPU-GPU Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, John
2013-03-24
At a high level, my research interests center around designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been a move to commodity parallel architectures. This sea change is motivated by the industry’s inability to continue to profitably increase performance on a single processor and instead to move to multiple parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver superior performance on a broad range of problems than their CPU counterparts, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
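FISST-based filters (the PHD filter is a common instance) update per-hypothesis weights using detection probability, measurement likelihood, and clutter density; the independent per-hypothesis loop is exactly what a parallel implementation distributes. The sketch below is a hedged, generic PHD-style weight update with invented numbers, not the paper's implementation:

```python
# Hedged sketch of a PHD-style measurement update: each hypothesis
# weight splits into a missed-detection term and a detection term
# normalized against clutter intensity. The per-hypothesis work is
# embarrassingly parallel. All parameters here are illustrative.
import math

def gauss(z, x, sigma):
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def phd_update(weights, states, z, p_d=0.9, clutter=0.1, sigma=1.0):
    """Update hypothesis weights for a single measurement z."""
    # Missed-detection term keeps (1 - p_d) of each prior weight.
    missed = [(1 - p_d) * w for w in weights]
    # Detection term: likelihood-weighted, normalized against clutter.
    lik = [p_d * w * gauss(z, x, sigma) for w, x in zip(weights, states)]
    denom = clutter + sum(lik)
    detected = [l / denom for l in lik]
    return [m + d for m, d in zip(missed, detected)]

w = phd_update([0.5, 0.5], states=[0.0, 10.0], z=0.1)
print(w[0] > w[1])  # the hypothesis near the measurement gains weight
```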
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
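The core idea of hierarchical charge partitioning is that distant groups of charges can be replaced by a few effective charges, trading a small accuracy loss for a large reduction in work. The sketch below illustrates the single-level version of that idea with an invented threshold and test charges; it does not reproduce the paper's ALPB/HCP formulation:

```python
# Hedged sketch of the charge-partitioning idea: the potential at a
# point sums exact contributions from nearby charge groups but replaces
# each distant group by its total charge at the group centroid.
# Coulomb constant omitted; units and thresholds are illustrative.
import math

def potential(point, groups, threshold):
    """groups: list of [(q, (x, y, z)), ...] charge groups."""
    phi = 0.0
    for charges in groups:
        qs = [q for q, _ in charges]
        cx = tuple(sum(p[i] for _, p in charges) / len(charges)
                   for i in range(3))
        d_group = math.dist(point, cx)
        if d_group > threshold:          # far group: one effective charge
            phi += sum(qs) / d_group
        else:                            # near group: exact pairwise sum
            phi += sum(q / math.dist(point, p) for q, p in charges)
    return phi

near = [(1.0, (1.0, 0, 0)), (-0.5, (0, 1.0, 0))]
far = [(1.0, (100.0, 0, 0)), (1.0, (101.0, 0, 0))]
exact = potential((0, 0, 0), [near, far], threshold=1e9)   # always exact
approx = potential((0, 0, 0), [near, far], threshold=10.0)
print(abs(exact - approx) < 1e-3)  # far-group approximation error is tiny
```

The full HCP method applies this recursively over a hierarchy of groups (atoms, residues, chains), which is what makes it multi-scale.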
NASA Astrophysics Data System (ADS)
Haghshenasfard, Zahra; Cottam, M. G.
2018-01-01
Theoretical studies are reported for the quantum-statistical properties of microwave-driven multi-mode magnon systems as represented by ferromagnetic nanowires with a stripe geometry. Effects of both the exchange and the dipole-dipole interactions, as well as a Zeeman term for an external applied field, are included in the magnetic Hamiltonian. The model also contains the time-dependent nonlinear effects due to parallel pumping with an electromagnetic field. Using a coherent magnon state representation in terms of creation and annihilation operators, we investigate the effects of parallel pumping on the temporal evolution of various nonclassical properties of the system. A focus is on the interbranch mixing produced by the pumping field when there are three or more modes. In particular, the magnon occupation number and the multi-mode cross correlations between magnon modes are studied. Manipulation of the collapse and revival phenomena of the average magnon occupation number and the control of the cross correlation between the magnon modes are demonstrated through tuning of the parallel pumping field amplitude and appropriate choices for the coherent magnon states. The cross correlations are a direct consequence of the interbranch pumping effects and do not appear in the corresponding one- or two-mode magnon systems.
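For reference, the coherent magnon state representation mentioned above is the standard bosonic coherent state; the definitions below are textbook material, not taken from the paper:

```latex
% Coherent state of magnon mode k: eigenstate of the annihilation
% operator a_k, expanded in number states |n>.
\begin{align}
  a_k \lvert \alpha_k \rangle &= \alpha_k \lvert \alpha_k \rangle, \\
  \lvert \alpha_k \rangle &= e^{-\lvert\alpha_k\rvert^2/2}
      \sum_{n=0}^{\infty} \frac{\alpha_k^{\,n}}{\sqrt{n!}}\, \lvert n \rangle,
  \qquad
  \langle a_k^{\dagger} a_k \rangle = \lvert\alpha_k\rvert^2 .
\end{align}
```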
Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei
2014-01-01
Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974
Solution of multi-center molecular integrals of Slater-type orbitals
NASA Technical Reports Server (NTRS)
Tai, H.
1989-01-01
The troublesome multi-center molecular integrals of Slater-type orbitals (STO) in molecular physics calculations can be evaluated by using the Fourier transform and proper coupling of the two-center exchange integrals. A numerical integration procedure is then readily applied to the final expression, in which the integrand consists of well known special functions of arguments containing the geometrical arrangement of the nuclear centers and the exponents of the atomic orbitals. A practical procedure was devised for the calculation of general multi-center molecular integrals coupling arbitrary Slater-type orbitals. Symmetry relations and asymptotic conditions are discussed. Explicit expressions of three-center one-electron nuclear-attraction integrals and four-center two-electron repulsion integrals for STO of principal quantum number n=2 are listed. A few numerical results are given for the purpose of comparison.
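The Fourier-transform route rests on the STO definition and its momentum-space representation; the standard forms are given below for reference (these are textbook definitions, not expressions copied from the paper):

```latex
% Normalized Slater-type orbital and its Fourier transform.
\begin{align}
  \chi_{n\ell m}(\mathbf{r}) &=
     \frac{(2\zeta)^{\,n+1/2}}{\sqrt{(2n)!}}\;
     r^{\,n-1} e^{-\zeta r}\, Y_{\ell m}(\theta,\varphi), \\
  \tilde{\chi}(\mathbf{k}) &= \frac{1}{(2\pi)^{3/2}}
     \int \chi(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d^{3}r .
\end{align}
% A product of orbitals on different centers then becomes a convolution
% in k-space, with the center separations R entering only through phase
% factors e^{-i k . R}, which is what makes the coupling tractable.
```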
Jakobsen, L H; Wirth, R; Smoliner, C; Klebach, M; Hofman, Z; Kondrup, J
2017-04-01
During the first days of tube feeding (TF) gastrointestinal (GI) complications are common and administration of sufficient nutrition is a challenge. Not all standard nutritionally complete formulas contain dietary fiber, fish oil or carotenoids, key dietary nutrients for health and wellbeing. The aim of this study was to investigate the effects of a fiber, fish oil and carotenoid enriched TF formula on diarrhea, constipation and nutrient bioavailability. A multi-center randomized, double-blind, controlled, parallel trial compared the effects of a dietary fiber, fish oil and carotenoid-enriched TF formula (test) with an isocaloric non-enriched formula (control) in 51 patients requiring initiation of TF. Incidence of diarrhea and constipation (based on stool frequency and consistency) was recorded daily. Plasma status of EPA, DHA and carotenoids was measured after 7 days. The incidence of diarrhea was lower in patients receiving the test formula compared with the control group (19% vs. 48%, p = 0.034). EPA and DHA status (% of total plasma phospholipids) was higher after 7 days in test compared with control group (EPA: p = 0.002, DHA: p = 0.082). Plasma carotenoid levels were higher after 7 days in the test group compared with control group (lutein: p = 0.024, α-carotene: p = 0.005, lycopene: p = 0.020, β-carotene: p = 0.054). This study suggests that the nutrient-enriched TF formula tested might have a positive effect on GI tolerance with less diarrhea incidence and significantly improved EPA, DHA and carotenoid plasma levels during the initiation of TF in hospitalized patients who are at risk of diarrhea and low nutrient status. This trial was registered at trialregister.nl; registration number 2924.
Longitudinal MRI findings from the vitamin E and Donepezil treatment study for MCI
Jack, Clifford R.; Petersen, Ronald C.; Grundman, Michael; Jin, Shelia; Gamst, Anthony; Ward, Chadwick P.; Sencakova, Drahomira; Doody, Rachelle S.; Thal, Leon J.
2009-01-01
The vitamin E and donepezil trial for the treatment of amnestic mild cognitive impairment (MCI) was conducted at 69 centers in North America; 24 centers participated in an MRI sub-study. The objective of this study was to evaluate the effect of treatment on MRI atrophy rates, and to validate rate measures from serial MRI as indicators of disease progression in multi-center therapeutic trials for MCI. Annual percent change (APC) from baseline to follow-up was measured for hippocampus, entorhinal cortex, whole brain, and ventricle in the 131 subjects who remained in the treatment study and completed technically satisfactory baseline and follow-up scans. Although a non-significant trend toward slowing of hippocampal atrophy rates was seen in APOE ε4 carriers treated with donepezil, no treatment effect was confirmed for any MRI measure in either treatment group. For each of the four brain atrophy rate measures, APCs were greater in subjects who converted to AD than non-converters, and were greater in APOE ε4 carriers than non-carriers. MRI APCs and changes in cognitive test performance were uniformly correlated in the expected direction (all p < 0.001). Results of this study support the feasibility of using MRI as an outcome measure of disease progression in multi-center therapeutic trials for MCI. PMID:17452062
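One common way to express an annual percent change from two serial volume measurements is to annualize the ratio over the scan interval; whether this matches the study's exact definition is an assumption, so the snippet below is purely illustrative:

```python
# Hedged sketch: annualized percent change (APC) between two serial
# volume measurements, compounding over the scan interval. The study's
# exact APC definition may differ; numbers below are invented.
def annual_percent_change(v_baseline, v_followup, years):
    """Annualized percent change between two measurements."""
    return ((v_followup / v_baseline) ** (1.0 / years) - 1.0) * 100.0

# Example: a hippocampal volume shrinking from 3500 to 3290 mm^3
# over a 2-year scan interval:
apc = annual_percent_change(3500.0, 3290.0, 2.0)
print(round(apc, 2))  # about -3.05 (percent per year)
```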
Outcome evaluation of a community center-based program for mothers at high psychosocial risk.
Rodrigo, María José; Máiquez, María Luisa; Correa, Ana Delia; Martín, Juan Carlos; Rodríguez, Guacimara
2006-09-01
This study reported the outcome evaluation of the "Apoyo Personal y Familiar" (APF) program for poorly-educated mothers from multi-problem families, showing inadequate behavior with their children. APF is a community-based multi-site program delivered through weekly group meetings in municipal resource centers. A total of 340 mothers referred by the municipal social services of Tenerife, Spain were assessed; 185 mothers participated in the APF program that lasted 8 months, and 155 mothers were in the control group. Pre-post test comparisons for the intervention group and post-test comparisons with the control group on self-rating measures of maternal beliefs, personal agency and child-rearing practices were performed. Multivariate tests, t tests and effect sizes (ES) were calculated to determine the program effectiveness on the outcome measures. Mothers' support of nurturist and nativist beliefs and the reported use of Neglect-permissive and Coercive practices significantly decreased after program completion whereas the reported use of Inductive practices significantly increased. Increases in self-efficacy, internal control and role difficulty were also significant in relation to those of the control group. The program was especially effective for older mothers, with fewer children, living in a two-parent family, in urban areas and with either low or medium educational levels. The program was very effective in changing the mothers' perceived competences and modestly effective in changing their beliefs about child development and education and reported child-rearing practices. Changes in personal agency are very important for at-risk parents who feel helpless and with no control over their lives.
Vitamin E tocotrienol supplementation improves lipid profiles in chronic hemodialysis patients
Daud, Zulfitri A Mat; Tubie, Boniface; Sheyman, Marina; Osia, Robert; Adams, Judy; Tubie, Sharon; Khosla, Pramod
2013-01-01
Purpose Chronic hemodialysis patients experience accelerated atherosclerosis contributed to by dyslipidemia, inflammation, and an impaired antioxidant system. Vitamin E tocotrienols possess anti-inflammatory and antioxidant properties. However, the impact of dietary intervention with Vitamin E tocotrienols is unknown in this population. Patients and methods A randomized, double-blind, placebo-controlled, parallel trial was conducted in 81 patients undergoing chronic hemodialysis. Subjects were provided daily with capsules containing either vitamin E tocotrienol-rich fraction (TRF) (180 mg tocotrienols, 40 mg tocopherols) or placebo (0.48 mg tocotrienols, 0.88 mg tocopherols). Endpoints included measurements of inflammatory markers (C-reactive protein and interleukin 6), oxidative status (total antioxidant power and malondialdehyde), lipid profiles (plasma total cholesterol, triacylglycerols, and high-density lipoprotein cholesterol), as well as cholesteryl-ester transfer protein activity and apolipoprotein A1. Results TRF supplementation did not impact any nutritional, inflammatory, or oxidative status biomarkers over time when compared with the baseline within the group (one-way repeated measures analysis of variance) or when compared with the placebo group at a particular time point (independent t-test). However, the TRF supplemented group showed improvement in lipid profiles after 12 and 16 weeks of intervention when compared with placebo at the respective time points. Normalized plasma triacylglycerols (cf baseline) in the TRF group were reduced by 33 mg/dL (P=0.032) and 36 mg/dL (P=0.072) after 12 and 16 weeks of intervention but no significant improvement was seen in the placebo group. Similarly, normalized plasma high-density lipoprotein cholesterol was higher (P<0.05) in the TRF group as compared with placebo at both week 12 and week 16. 
The changes in the TRF group at week 12 and week 16 were associated with higher plasma apolipoprotein A1 concentration (P<0.02) and lower cholesteryl-ester transfer protein activity (P<0.001). Conclusion TRF supplementation improved lipid profiles in this study of maintenance hemodialysis patients. A multi-centered trial is warranted to confirm these observations. PMID:24348043
Izewska, Joanna; Wesolowska, Paulina; Azangwe, Godfrey; Followill, David S.; Thwaites, David I.; Arib, Mehenna; Stefanic, Amalia; Viegas, Claudio; Suming, Luo; Ekendahl, Daniela; Bulski, Wojciech; Georg, Dietmar
2016-01-01
The International Atomic Energy Agency (IAEA) has a long tradition of supporting development of methodologies for national networks providing quality audits in radiotherapy. A series of co-ordinated research projects (CRPs) has been conducted by the IAEA since 1995 assisting national external audit groups developing national audit programs. The CRP ‘Development of Quality Audits for Radiotherapy Dosimetry for Complex Treatment Techniques’ was conducted in 2009–2012 as an extension of previously developed audit programs. Material and methods. The CRP work described in this paper focused on developing and testing two steps of dosimetry audit: verification of heterogeneity corrections, and treatment planning system (TPS) modeling of small MLC fields, which are important for the initial stages of complex radiation treatments, such as IMRT. The project involved development of a new solid slab phantom with heterogeneities containing special measurement inserts for thermoluminescent dosimeters (TLD) and radiochromic films. The phantom and the audit methodology have been developed at the IAEA and tested in multi-center studies involving the CRP participants. Results. The results of multi-center testing of methodology for two steps of dosimetry audit show that the design of audit procedures is adequate and the methodology is feasible for meeting the audit objectives. A total of 97% of TLD results in heterogeneity situations obtained in the study were within 3%, and all results were within 5% agreement with the TPS predicted doses. In contrast, only 64% of small beam profiles were within 3 mm agreement between the TPS calculated and film measured doses. Film dosimetry results have highlighted some limitations in TPS modeling of small beam profiles in the direction of MLC leaf movements. Discussion. Through multi-center testing, any challenges or difficulties in the proposed audit methodology were identified, and the methodology improved. 
Using the experience of these studies, the participants could incorporate the auditing procedures in their national programs. PMID:26934916
IOPA: I/O-aware parallelism adaption for parallel programs
Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei
2017-01-01
With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
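The feedback idea behind an I/O-aware parallelism controller can be sketched as a hill climb: grow the I/O thread count while measured throughput keeps improving, and stop once adding threads no longer pays because the I/O subsystem is saturated. This is a hedged illustration of the general mechanism, not IOPA's actual algorithm or API; the throughput model is synthetic:

```python
# Hedged sketch of I/O-aware parallelism adaptation (not the IOPA
# implementation): hill-climb to the thread count with the best
# throughput. A real controller would measure bytes/s at runtime;
# fake_disk below is a synthetic stand-in.

def adapt_threads(throughput_of, max_threads):
    """Return the thread count at which throughput stops improving."""
    best_n, best_tp = 1, throughput_of(1)
    for n in range(2, max_threads + 1):
        tp = throughput_of(n)
        if tp <= best_tp:      # extra threads no longer help: stop
            break
        best_n, best_tp = n, tp
    return best_n

# Synthetic disk: throughput scales up to 4 threads, then contention
# on the I/O subsystem makes additional threads counterproductive.
def fake_disk(n):
    return n * 100.0 if n <= 4 else 400.0 - 50.0 * (n - 4)

print(adapt_threads(fake_disk, 16))  # settles at 4 threads
```

A fixed thread count would either under-use the device (too few) or thrash it (too many), which is exactly the trade-off the abstract describes.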
Han, Ji Won; Lee, Hyeonggon; Hong, Jong Woo; Kim, Kayoung; Kim, Taehyun; Byun, Hye Jin; Ko, Ji Won; Youn, Jong Chul; Ryu, Seung-Ho; Lee, Nam-Jin; Pae, Chi-Un; Kim, Ki Woong
2017-01-01
We developed and evaluated the effect of Multimodal Cognitive Enhancement Therapy (MCET) consisting of cognitive training, cognitive stimulations, reality orientation, physical therapy, reminiscence therapy, and music therapy in combination in older people with mild cognitive impairment (MCI) or mild dementia. This study was a multi-center, double-blind, randomized, placebo-controlled, two-period cross-over study (two 8-week treatment phases separated by a 4-week wash-out period). Sixty-four participants with MCI or dementia whose Clinical Dementia Rating was 0.5 or 1 were randomized to the MCET group or the mock-therapy (placebo) group. Outcomes were measured at baseline, week 9, and week 21. Fifty-five patients completed the study. Mini-Mental State Examination (effect size = 0.47, p = 0.013) and Alzheimer's Disease Assessment Scale-Cognitive Subscale (effect size = 0.35, p = 0.045) scores were significantly improved in the MCET group compared with the mock-therapy group. Revised Memory and Behavior Problems Checklist frequency (effect size = 0.38, p = 0.046) and self-rated Quality of Life - Alzheimer's Disease (effect size = 0.39, p = 0.047) scores were significantly improved in the MCET group compared with the mock-therapy group. MCET improved cognition, behavior, and quality of life in people with MCI or mild dementia more effectively than conventional cognitive enhancing activities did.
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. 
Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
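As a schematic of the coarse-grained component concurrency described above (this is not GFDL/FMS code; the two toy components and the one-step exchange lag are invented stand-ins), the radiation component can be submitted in parallel with the composite of all other atmospheric components, with results merged at the step boundary:

```python
from concurrent.futures import ThreadPoolExecutor

def radiation(state):
    # stand-in for the radiative transfer component
    return {"heating": 0.1 * state["T"]}

def dynamics_and_physics(state):
    # stand-in for the composite of every other atmospheric component
    return {"T": state["T"] + state["heating"]}

def step_ccc(state, pool):
    """One time step with coarse-grained component concurrency: radiation
    runs concurrently with the composite component, both reading the state
    from the previous exchange (a one-step lag, typical of such schemes)."""
    f_rad = pool.submit(radiation, state)
    f_dyn = pool.submit(dynamics_and_physics, state)
    new_state = dict(state)
    new_state.update(f_dyn.result())
    new_state.update(f_rad.result())
    return new_state

with ThreadPoolExecutor(max_workers=2) as pool:
    s = {"T": 280.0, "heating": 0.0}
    for _ in range(3):
        s = step_ccc(s, pool)
print(round(s["T"], 2))  # -> 336.0
```

The design choice worth noting is the lag: because radiation consumes the previous step's state, it need not wait for the dynamics of the current step, which is what buys the extra concurrency.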
Parallel detection experiment of fluorescence confocal microscopy using DMD.
Wang, Qingqing; Zheng, Jihong; Wang, Kangni; Gui, Kun; Guo, Hanming; Zhuang, Songlin
2016-05-01
Parallel detection of fluorescence confocal microscopy (PDFCM) based on a Digital Micromirror Device (DMD) is reported in this paper, in order to realize simultaneous multi-channel imaging and improve detection speed. The DMD is added to the PDFCM system in place of the single traditional pinhole of the confocal system, dividing the laser source into multiple excitation beams. The DMD-based PDFCM imaging system is experimentally set up. A multi-channel image of the fluorescence signal of a potato-cell sample is acquired by parallel lateral scanning to verify the feasibility of introducing the DMD into a fluorescence confocal microscope. In addition, to characterize the microscope, the depth response curve is also acquired. The experimental results show that, in contrast to conventional microscopy, the DMD-based PDFCM system has higher axial resolution and faster detection speed, which may bring benefits in biological and medical analysis. SCANNING 38:234-239, 2016. © 2015 Wiley Periodicals, Inc.
What is adaptive about adaptive decision making? A parallel constraint satisfaction account.
Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc
2014-12-01
There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks that account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanism accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.
Leung, Chung Ming; Wang, Ya; Chen, Wusi
2016-11-01
In this letter, an airfoil-based electromagnetic energy harvester with parallel array motion between a moving coil and trajectory-matched multi-pole magnets was investigated. The magnets were aligned in an alternately magnetized formation of 6 magnets to explore enhanced power density. In particular, the magnet array was positioned parallel to the trajectory of the tip coil within its tip deflection span. Finite element simulations of the magnetic flux density and induced voltages under open-circuit conditions were studied to find the maximum number of alternately magnetized magnets required for the proposed energy harvester. Experimental results showed that the energy harvester with a pair of 6 alternately magnetized linear magnet arrays was able to generate an induced voltage (V_o) of 20 V under open-circuit conditions, and 475 mW under a 30 Ω optimal resistive load, operating at a wind speed (U) of 7 m/s and a natural bending frequency of 3.54 Hz. Compared to the traditional electromagnetic energy harvester with a single magnet moving through a coil, the proposed energy harvester, containing multi-pole magnets and parallel array motion, enables the moving coil to accumulate a stronger magnetic flux in each period of the swinging motion. In comparison with an airfoil-based piezoelectric energy harvester of the same size, our proposed electromagnetic energy harvester generates 11 times more power output, making it more suitable for high-power-density energy harvesting applications in regions with low environmental frequency.
NASA Astrophysics Data System (ADS)
Shcherbakov, Alexandre S.; Chavez Dagostino, Miguel; Arellanes, Adan Omar; Tepichin Rodriguez, Eduardo
2017-08-01
We describe a potential prototype of a modern spectrometer based on acousto-optical techniques, with three parallel optical arms for the analysis of radio-wave signals specific to astronomical observations. Each optical arm has its own performance characteristics, providing parallel multi-band observations at different scales simultaneously. Such a multi-band instrument can carry out measurements within various scenarios, from planetary atmospheres to objects in the distant Universe. The arrangement under development has two novelties. First, each optical arm represents an individual spectrum analyzer with its own performance characteristics. This approach is conditioned by exploiting various materials for acousto-optical cells operating within different regimes, frequency ranges, and light wavelengths from independent light sources. Individually produced beam shapers provide both the needed incident light polarization and the apodization required to increase the dynamic range of the system as a whole. After parallel acousto-optical processing, the data flows from these optical arms are united by a joint CCD matrix at the stage of combined, extremely high bit-rate electronic data processing, which also determines the overall system performance. The second novelty consists in the use of various materials for designing wide-aperture acousto-optical cells exhibiting the best performance within each optical arm. Here, one can mention specifically selected cuts of tellurium dioxide, bastron, and lithium niobate, which together cover selected areas within the frequency range from 40 MHz to 2.0 GHz. The result is a unified, versatile instrument for comprehensive studies of astronomical objects simultaneously, with precise synchronization, in various frequency ranges.
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
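The abstract names MOOSE's Jacobian-free Newton-Krylov (JFNK) solver. The heart of JFNK is that Krylov methods need only Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian matrix is never formed. A minimal sketch of that matrix-free product (the residual F below is an invented two-equation example, not a MOOSE API):

```python
import numpy as np

def F(u):
    # small nonlinear residual, standing in for coupled multiphysics equations
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def jv_matrix_free(F, u, v, eps=1e-7):
    """Jacobian-vector product without forming J, the JFNK building block:
    J(u) v ~= (F(u + eps*v) - F(u)) / eps."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
J_exact = np.array([[2 * u[0], 1.0],
                    [1.0, 2 * u[1]]])       # analytic Jacobian of F, for comparison
print(np.allclose(jv_matrix_free(F, u, v), J_exact @ v, atol=1e-5))  # True
```

This is why JFNK suits tightly coupled systems: adding a new physics term changes only the residual evaluation, never any Jacobian assembly code.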
NASA Astrophysics Data System (ADS)
Scudder, J. D.
2017-12-01
En route to a new formulation of the heat law for the solar wind plasma, the role of the invariably neglected, but omnipresent, thermal force in the multi-fluid physics of the corona and solar wind expansion will be discussed. This force (a) controls the size of the collisional ion-electron energy exchange, favoring the thermal over the suprathermal electrons; (b) occurs whenever heat flux occurs; (c) remains after the electron and ion fluids come to a no-slip, zero-parallel-current equilibrium; (d) enhances the equilibrium parallel electric field; but (e) has a size that is theoretically independent of the electron collision frequency, allowing its importance to persist far up into the corona, where collisions are invariably ignored in first approximation. The constituent parts of the thermal force allow the derivation of a new generalized electron heat flow relation that will be presented. It depends on the separate field-aligned divergences of the electron and ion pressures and on the gradients of the ion gravitational potential and parallel flow energies, and is based upon a multi-component electron distribution function. The new terms in this heat law explicitly incorporate the astrophysical context of gradients, acceleration, and external forces that make demands on the parallel electric field and quasi-neutrality; essentially all of these effects are missing from traditional formulations.
Posse, Stefan
2011-01-01
The rapid development of fMRI was paralleled early on by the adaptation of MR spectroscopic imaging (MRSI) methods to quantify water relaxation changes during brain activation. This review describes the evolution of multi-echo acquisition from high-speed MRSI to multi-echo EPI and beyond. It highlights milestones in the development of multi-echo acquisition methods, such as the discovery of considerable gains in fMRI sensitivity when combining echo images, advances in quantification of the BOLD effect using analytical biophysical modeling and interleaved multi-region shimming. The review conveys the insight gained from combining fMRI and MRSI methods and concludes with recent trends in ultra-fast fMRI, which will significantly increase temporal resolution of multi-echo acquisition. PMID:22056458
Liu, Zhou; Shum, Ho Cheung
2013-01-01
In this work, we demonstrate a robust and reliable approach to fabricate multi-compartment particles for cell co-culture studies. By taking advantage of the laminar flow within our microfluidic nozzle, multiple parallel streams of liquids flow towards the nozzle without significant mixing. Afterwards, the multiple parallel streams merge into a single stream, which is sprayed into air, forming monodisperse droplets under an electric field with a high field strength. The resultant multi-compartment droplets are subsequently cross-linked in a calcium chloride solution to form calcium alginate micro-particles with multiple compartments. Each compartment of the particles can be used for encapsulating different types of cells or biological cell factors. These hydrogel particles with cross-linked alginate chains show similarity in the physical and mechanical environment as the extracellular matrix of biological cells. Thus, the multi-compartment particles provide a promising platform for cell studies and co-culture of different cells. In our study, cells are encapsulated in the multi-compartment particles and the viability of cells is quantified using a fluorescence microscope after the cells are stained for a live/dead assay. The high cell viability after encapsulation indicates the cytocompatibility and feasibility of our technique. Our multi-compartment particles have great potential as a platform for studying cell-cell interactions as well as interactions of cells with extracellular factors.
NAS Parallel Benchmark Results 11-96. 1.0
NASA Technical Reports Server (NTRS)
Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion: the complete details of the problem to be solved are given in a technical document, and, except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC AlphaServer 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D; these results were submitted by Applied Parallel Research (APR) and the Portland Group Inc. (PGI). We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks.
Laparoscopic repair of perforated peptic ulcer: patch versus simple closure.
Abd Ellatif, M E; Salama, A F; Elezaby, A F; El-Kaffas, H F; Hassan, A; Magdy, A; Abdallah, E; El-Morsy, G
2013-01-01
Laparoscopic correction of perforated peptic ulcer (PPU) has become an accepted method of management, and patch omentoplasty has remained for decades the main method of repair. The goal of the present study was to evaluate whether laparoscopic simple repair of PPU is as safe as patch omentoplasty. We conducted a retrospective chart review in December 2012: from June 2007 to December 2012, 179 consecutive patients with PPU treated by laparoscopic repair at our centers were enrolled in this multi-center retrospective study. Group I (patch group) included 108 patients who were treated with standard patch omentoplasty; Group II (non-patch group) included 71 patients who received simple repair without a patch. Operative time was significantly shorter in Group II (p = 0.01). No patient was converted to laparotomy. There was no difference in age, gender, ASA score, surgical risk (Boey's) score, or incidence of co-morbidities. Both groups were comparable in terms of hospital stay, time to resume oral intake, postoperative complications, and surgical outcomes. Laparoscopic simple repair of PPU is a safe procedure compared with traditional patch omentoplasty in the presence of certain selection criteria. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kempler, Steve; Leptoukh, Greg; Lynnes, Chris
2010-01-01
The purpose of this presentation is to describe multi-instrument tools and services that facilitate access and usability of NASA Earth science data at Goddard Space Flight Center (GSFC). NASA's Earth Observing System includes 14 satellites. Topics include EOSDIS facilities and system architecture, an overview of the GSFC Earth Science Data and Information Services Center (GES DISC) mission, Mirador data search, Giovanni, multi-instrument data exploration, Google Earth[TM], data merging, and applications.
NASA Desert RATS 2011 Education Pilot Project and Classroom Activities
NASA Technical Reports Server (NTRS)
Gruener, J. E.; McGlone, M.; Allen, J.; Tobola, K.; Graff, P.
2012-01-01
The National Aeronautics and Space Administration's (NASA's) Desert Research and Technology Studies (Desert RATS) is a multi-year series of tests of hardware and operations carried out annually in the high desert of Arizona, as an analog to future exploration activities beyond low Earth orbit [1]. For the past several years, these tests have occurred in the San Francisco Volcanic Field, north of Flagstaff. For the 2011 Desert RATS season, the Exploration Systems Mission Directorate (ESMD) at NASA headquarters provided support to develop an education pilot project that would include student activities to parallel the Desert RATS mission planning and exploration activities in the classroom, and educator training sessions. The development of the pilot project was a joint effort between the NASA Johnson Space Center (JSC) Astromaterials Research and Exploration Science (ARES) Directorate and the Aerospace Education Services Project (AESP), managed at Penn State University.
ERIC Educational Resources Information Center
Kubota, Ryuko
2016-01-01
In applied linguistics and language education, an increased focus has been placed on plurality and hybridity to challenge monolingualism, the native speaker norm, and the modernist view of language and language use as unitary and bounded. The multi/plural turn parallels postcolonial theory in that they both support hybridity and fluidity while…
Electro-Optic Computing Architectures. Volume I
1998-02-01
The objective of the Electro-Optic Computing Architecture (EOCA) program was to develop multi-function electro-optic interfaces and optical interconnect units to enhance the performance of parallel processor systems and form the building blocks for future electro-optic computing architectures. Specifically, three multi-function interface modules were targeted for development: an Electro-Optic Interface (EOI), an Optical Interconnection Unit (OIU), …
CXRO - Mi-Young Im, Staff Scientist
The Center for X-Ray Optics is a multi-disciplined research group within Lawrence Berkeley National Laboratory.
FTIR Analyses of Hypervelocity Impact Deposits: DebriSat Tests
2015-03-27
Aerospace Concept Design Center advised on selection of materials for various subsystems. The test chamber was lined with "soft catch" foam panels to trap … The preshot target was a multi-shock shield supplied by NASA, designed to catch the projectile. It consisted of seven bumper panels consisting of …
Surface contamination analysis technology team overview
NASA Astrophysics Data System (ADS)
Burns, H. Dewitt, Jr.
1996-11-01
The Surface Contamination Analysis Technology (SCAT) team originated as a working group of NASA civil service, Space Shuttle contractor, and university groups. Participating members of the SCAT team have included personnel from NASA Marshall Space Flight Center's Materials and Processes Laboratory and Langley Research Center's Instrument Development Group; contractors, including Thiokol Corporation's Inspection Technology Group, the AC Engineering support contractor, Aerojet, SAIC, and the Lockheed Martin/Oak Ridge Y-12 support contractor and Shuttle External Tank prime contractor; and the University of Alabama in Huntsville's Center for Robotics and Automation. The goal of the SCAT team as originally defined was to develop and integrate a multi-purpose inspection head for robotic application to in-process inspection of contamination-sensitive surfaces. One area of interest was the replacement of ozone-depleting solvents currently used for surface cleanliness verification. The team approach brought together the appropriate personnel to determine which surface inspection techniques were applicable to multi-program surface cleanliness inspection. Major substrates of interest were chosen to simulate Space Shuttle critical bonding surfaces or surfaces sensitive to contamination, such as fuel system component surfaces. Inspection techniques evaluated include optically stimulated electron emission (photoelectron emission), Fourier transform infrared spectroscopy, near-infrared fiber optic spectroscopy, and ultraviolet fluorescence. Current plans are to demonstrate an integrated system in MSFC's Productivity Enhancement Complex within five years from the initiation of this effort in 1992. Instrumentation specifications and designs developed under this effort include a portable diffuse reflectance FTIR system built by Surface Optics Corporation and a third-generation optically stimulated electron emission system built by LaRC.
This paper will discuss the evaluation of the various techniques on a number of substrate materials contaminated with hydrocarbons, silicones, and fluorocarbons. Discussion will also include standards development for instrument calibration and testing.
Implementing and analyzing the multi-threaded LP-inference
NASA Astrophysics Data System (ADS)
Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.
2018-03-01
Logical production equations provide new possibilities for backward inference optimization in intelligent production-type systems. The strategy of relevant backward inference aims to minimize the number of queries to an external information source (either a database or an interactive user). The idea of the method is based on computing the set of initial preimages and searching for the true preimage. Each stage can be organized independently and in parallel, and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced "pipeline" scheme of parallel computation, which increases the degree of parallelism. Some details of the LP-structures implementation are also provided.
Zerfu, Taddese Alemu; Ayele, Henok Taddese; Bogale, Tariku Nigatu
2018-06-01
To investigate the effect of innovative means of distributing LARC on contraceptive use, we implemented a three-arm, parallel-group, cluster-randomized community trial design. The intervention consisted of placing trained community-based reproductive health nurses (CORN) within health centers or health posts. The nurses provided counseling to encourage women to use LARC and distributed all contraceptive methods. A total of 282 villages were randomly selected and assigned to a control arm (n = 94) or 1 of 2 treatment arms (n = 94 each). The treatment groups differed by where the new service providers were deployed: health post or health center. We calculated difference-in-difference (DID) estimates to assess program impacts on LARC use. After nine months of intervention, the use of LARC methods increased significantly, by 72.3 percent, while the use of short-acting methods declined by 19.6 percent. The proportion of women using LARC methods increased by 45.9 percent and 45.7 percent in the health post- and health center-based intervention arms, respectively. Compared to the control group, the DID estimates indicate that the use of LARC methods increased by 11.3 and 12.3 percentage points in the health post- and health center-based intervention arms. Given the low use of LARC methods in similar settings, deployment of contextually trained nurses at the grassroots level could substantially increase utilization of these methods. © 2018 The Population Council, Inc.
NASA Astrophysics Data System (ADS)
Pan, Yanqiao; Huang, YongAn; Guo, Lei; Ding, Yajiang; Yin, Zhouping
2015-04-01
It is critical and challenging to achieve individual jetting control and high consistency in multi-nozzle electrohydrodynamic jet printing (E-jet printing). We propose a multi-level voltage method (MVM) to implement addressable E-jet printing using multiple parallel nozzles with high consistency. The fabricated multi-nozzle printhead for the MVM consists of three parts: a PMMA holder, stainless steel capillaries (27G, outer diameter 400 μm), and an FR-4 extractor layer. The key to the MVM is to control the maximum meniscus electric field at each nozzle. Individual jetting control can be implemented when the rings under the jetting nozzles are at 0 kV and the other rings are at 0.5 kV. The onset electric field for each nozzle is ~3.4 kV/mm by numerical simulation. Furthermore, a series of printing experiments, combined with finite element analyses, are performed to show the advantage of the MVM in printing consistency over the "one-voltage method" and the "improved E-jet method". The good dimensional consistency (274 μm, 276 μm, 280 μm) and position consistency of the droplet array on the hydrophobic Si substrate verify these enhancements. The results show that the MVM is an effective technique for implementing addressable E-jet printing with multiple parallel nozzles at high consistency.
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
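The patent abstract above describes parallel processing modules that each compute a contribution to a multi-dimensional image from a two-dimensional data set, with the partial results then accumulated. A toy sketch of that accumulation pattern for tomographic reconstruction (the phantom, the 0°/90°-only unfiltered backprojection, and all names are invented for illustration and are not taken from the patent):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject(proj, angle_is_vertical, shape):
    """Contribution of one 1-D projection to a 2-D image: smear the
    profile back across the image along its projection direction
    (unfiltered backprojection, 0 and 90 degree views only, for brevity)."""
    img = np.zeros(shape)
    if angle_is_vertical:
        img += proj[np.newaxis, :]   # column sums smeared down the columns
    else:
        img += proj[:, np.newaxis]   # row sums smeared across the rows
    return img

phantom = np.array([[0., 1.],
                    [2., 3.]])
projections = [(phantom.sum(axis=0), True),   # 0-degree view
               (phantom.sum(axis=1), False)]  # 90-degree view

# each "processing module" computes its contribution in parallel;
# the partial images are then accumulated into the final image
with ThreadPoolExecutor() as pool:
    parts = pool.map(lambda p: backproject(p[0], p[1], phantom.shape),
                     projections)
image = sum(parts)
print(image.tolist())  # -> [[3.0, 5.0], [7.0, 9.0]]
```

Because each projection's contribution is independent and the combination step is a sum, the work distributes naturally across modules, which is the point of the parallel engine the patent describes.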
De Ridder, D J M K; Everaert, K; Fernández, L García; Valero, J V Forner; Durán, A Borau; Abrisqueta, M L Jauregui; Ventura, M G; Sotillo, A Rodriguez
2005-12-01
To compare the performance of SpeediCath hydrophilic-coated catheters versus uncoated polyvinyl chloride (PVC) catheters in traumatic spinal cord injured patients presenting with functional neurogenic bladder-sphincter disorders. A 1-year, prospective, open, parallel, comparative, randomised, multi-centre study included 123 male patients, aged ≥16 years and injured within the last 6 months. Primary endpoints were the occurrence of symptomatic urinary tract infection (UTI) and hematuria. Secondary endpoints were the development of urethral strictures and convenience of use. The main hypothesis was that coated catheters cause fewer complications in terms of symptomatic UTIs and hematuria. 57 of 123 patients completed the 12-month study. Fewer patients using the SpeediCath hydrophilic-coated catheter (64%) experienced 1 or more UTIs compared to the uncoated PVC catheter group (82%) (p = 0.02); thus, twice as many patients in the SpeediCath group were free of UTI. There was no significant difference in the number of patients experiencing bleeding episodes (38/55 SpeediCath; 32/59 PVC) and no overall difference in the occurrence of hematuria, leukocyturia, and bacteriuria. The results indicate a beneficial effect regarding UTI when using hydrophilic-coated catheters.
The performance of silk scaffolds in a rat model of augmentation cystoplasty.
Seth, Abhishek; Chung, Yeun Goo; Gil, Eun Seok; Tu, Duong; Franck, Debra; Di Vizio, Dolores; Adam, Rosalyn M; Kaplan, David L; Estrada, Carlos R; Mauney, Joshua R
2013-07-01
The diverse processing plasticity of silk-based biomaterials offers a versatile platform for understanding the impact of structural and mechanical matrix properties on bladder regenerative processes. Three distinct groups of 3-D matrices were fabricated from aqueous solutions of Bombyx mori silk fibroin either by a gel spinning technique (GS1 and GS2 groups) or a solvent-casting/salt-leaching method in combination with silk film casting (FF group). SEM analyses revealed that GS1 matrices consisted of smooth, compact multi-laminates of parallel-oriented silk fibers while GS2 scaffolds were composed of porous (pore size range, 5-50 μm) lamellar-like sheets buttressed by a dense outer layer. Bi-layer FF scaffolds were comprised of porous foams (pore size, ~400 μm) fused on their external face with a homogenous, nonporous silk film. Silk groups and small intestinal submucosa (SIS) matrices were evaluated in a rat model of augmentation cystoplasty for 10 weeks of implantation and compared to cystotomy controls. Gross tissue evaluations revealed the presence of intra-luminal stones in all experimental groups. The incidence and size of urinary calculi was the highest in animals implanted with gel spun silk matrices and SIS with frequencies ≥57% and stone diameters of 3-4 mm. In contrast, rats augmented with FF scaffolds displayed substantially lower rates (20%) and stone size (2 mm), similar to the levels observed in controls (13%, 2 mm). Histological (hematoxylin and eosin, Masson's trichrome) and immunohistochemical (IHC) analyses showed comparable extents of smooth muscle regeneration and contractile protein (α-smooth muscle actin and SM22α) expression within defect sites supported by all matrix groups similar to controls. Parallel evaluations demonstrated the formation of a transitional, multi-layered urothelium with prominent uroplakin and p63 protein expression in all experimental groups. 
De novo innervation and vascularization processes were evident in all regenerated tissues, indicated by Fox3-positive neuronal cells and vessels lined with CD31-expressing endothelial cells. In comparison to the other biomaterial groups, cystometric analyses at 10 weeks post-op revealed that animals implanted with the FF matrix configuration displayed superior urodynamic characteristics, including compliance, functional capacity, and spontaneous non-voiding contractions consistent with control levels. Our data demonstrate that variations in scaffold processing techniques can influence the in vivo functional performance of silk matrices in bladder reconstructive procedures. Copyright © 2013 Elsevier Ltd. All rights reserved.
A hybrid algorithm for parallel molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Mangiardi, Chris M.; Meyer, R.
2017-10-01
This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
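The two levels of the hybrid scheme can be illustrated with a toy sketch (not the authors' code): particles are binned into spatial cells no smaller than the cutoff (the domain-decomposition level), and the per-cell short-range pair search is then farmed out to threads (the thread level). All sizes and parameters here are invented for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
box, cutoff = 10.0, 1.0
pos = rng.uniform(0.0, box, size=(300, 3))

# Level 1 (domain decomposition): bin particles into cells with edge
# length >= cutoff, so interacting pairs always lie in the same or in
# neighboring cells.
ncell = int(box // cutoff)
cells = {}
for i, c in enumerate(map(tuple, (pos / (box / ncell)).astype(int).clip(0, ncell - 1))):
    cells.setdefault(c, []).append(i)

def pairs_in_cell(c):
    """Count short-range pairs owned by cell c (j > i avoids double counting)."""
    cx, cy, cz = c
    n = 0
    for i in cells[c]:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        if j > i and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                            n += 1
    return n

# Level 2 (thread parallelism): distribute the per-cell work over threads.
with ThreadPoolExecutor() as ex:
    total_pairs = sum(ex.map(pairs_in_cell, cells))
```

A production code would replace the pair counting with force evaluation and SIMD-friendly inner loops, but the decomposition logic is the same.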
Parallel processing implementation for the coupled transport of photons and electrons using OpenMP
NASA Astrophysics Data System (ADS)
Doerner, Edgardo
2016-05-01
In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
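The dominant pattern in such an implementation is many independent particle histories, each worker running a private random-number stream, with tallies summed at the end. A toy slab-transmission sketch (not EGSnrc code; all numbers invented) shows the shape of the idea:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

MU, DEPTH = 0.5, 2.0       # toy attenuation coefficient (1/cm) and slab depth (cm)
N_WORKERS, N_PER = 4, 50_000

def histories(seed):
    """Run N_PER independent photon histories with a private RNG stream."""
    rng = np.random.default_rng(seed)
    free_path = rng.exponential(1.0 / MU, size=N_PER)
    return int((free_path > DEPTH).sum())    # photons that cross the slab

with ThreadPoolExecutor(N_WORKERS) as ex:
    transmitted = sum(ex.map(histories, range(N_WORKERS)))
transmission = transmitted / (N_WORKERS * N_PER)
# Analytic transmission is exp(-MU * DEPTH) = exp(-1), roughly 0.368
```

The per-worker seeding mirrors what a real parallel MC run must guarantee: statistically independent streams, so that the combined tally is unbiased regardless of the number of workers.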
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Hao-Qiang; an Mey, Dieter; Hatay, Ferhat F.
2003-01-01
Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
Astronaut candidate strength measurement using the Cybex 2 and the LIDO Multi-Joint 2 dynamometers
NASA Technical Reports Server (NTRS)
Carroll, Amy E.; Wilmington, Robert P.
1992-01-01
The Anthropometry and Biomechanics Laboratory in the Man-Systems Division at NASA's Johnson Space Center has as one of its responsibilities the anthropometry and strength measurement data collection of astronaut candidates. The anthropometry data is used to ensure that the astronaut candidates are within the height restrictions for space vehicle and space suit design requirements, for example. The strength data is used to help detect abnormalities or isolate injuries to muscle groups that could jeopardize the astronauts' safety. The Cybex II Dynamometer was used for strength measurements from 1985 through 1991. The Cybex II was one of the first instruments of its kind to measure strength and similarity of muscle groups by isolating the specific joint of interest. In November 1991, a LIDO Multi-Joint II Dynamometer was purchased to upgrade the strength measurement data collection capability of the Anthropometry and Biomechanics Laboratory. The LIDO Multi-Joint II Dynamometer design offers several advantages over the Cybex II Dynamometer, including a more sophisticated method of joint isolation and a more accurate and efficient computer-based data collection system.
2009-01-01
Objective: To evaluate the efficacy and safety of 1 mg and 4 mg doses of preservative-free intravitreal triamcinolone in comparison with focal/grid photocoagulation for the treatment of diabetic macular edema (DME). Design: Multi-center randomized clinical trial. Participants: 840 study eyes of 693 subjects with DME involving the fovea and visual acuity 20/40 to 20/320. Methods: Eyes were randomized to focal/grid photocoagulation (N=330), 1 mg intravitreal triamcinolone (N=256), or 4 mg intravitreal triamcinolone (N=254). Retreatment was given for persistent or new edema at 4-month intervals. The primary outcome was at 2 years. Main Outcome Measures: Visual acuity measured with the Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) method (primary), optical coherence tomography (OCT)-measured retinal thickness (secondary), and safety. Results: At 4 months, mean visual acuity was better in the 4 mg triamcinolone group than in either the laser group (P<0.001) or the 1 mg triamcinolone group (P=0.001). By 1 year, there were no significant differences among groups in mean visual acuity. At the 16-month visit and extending through the primary outcome visit at 2 years, mean visual acuity was better in the laser group than in the other two groups (at 2 years, P=0.02 comparing the laser and 1 mg groups, P=0.002 comparing the laser and 4 mg groups, and P=0.49 comparing the 1 mg and 4 mg groups). Treatment group differences in the visual acuity outcome could not be attributed solely to cataract formation. OCT results generally paralleled the visual acuity results. Intraocular pressure was increased from baseline by ≥10 mm Hg at any visit in 4%, 16%, and 33% of eyes in the three treatment groups, respectively, and cataract surgery was performed in 13%, 23%, and 51% of eyes in the three treatment groups, respectively.
Conclusions Over a 2-year period, focal/grid photocoagulation is more effective and has fewer side effects than 1 mg or 4 mg doses of preservative-free intravitreal triamcinolone for most patients with DME who have characteristics similar to the cohort in this clinical trial. The results of this study also support that focal/grid photocoagulation currently should be the benchmark against which other treatments are compared in clinical trials of DME. PMID:18662829
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
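The abstract does not spell out the three strategies, but one classic heuristic for partitioning unequal grid workloads is largest-processing-time-first greedy assignment, sketched here with invented per-grid point counts (illustrative only, not the paper's method):

```python
import heapq

def balance(grid_sizes, n_procs):
    """Assign grid workloads to processors, largest first, always placing
    the next grid on the currently least-loaded processor."""
    heap = [(0, p, []) for p in range(n_procs)]   # (load, proc id, grids)
    heapq.heapify(heap)
    for g in sorted(grid_sizes, reverse=True):
        load, p, grids = heapq.heappop(heap)
        heapq.heappush(heap, (load + g, p, grids + [g]))
    return sorted(heap)

# Hypothetical per-grid point counts (thousands of points) on 3 processors.
assignment = balance([90, 70, 40, 35, 30, 20, 15], n_procs=3)
loads = [load for load, _, _ in assignment]
```

The quality metric is the spread between the most- and least-loaded processors, since the slowest processor gates each time step.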
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
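As a toy illustration of the evolutionary approach (not the authors' model problems), a genome can assign each observation request to a satellite, with fitness penalizing oversubscribed satellites; the independent fitness evaluations are exactly what makes the method embarrassingly parallel. All parameters below are invented:

```python
import random

random.seed(1)
N_OBS, N_SATS, CAP = 30, 3, 10   # invented fleet: 30 requests, 3 satellites

def fitness(genome):
    """0 for a feasible schedule; negative per observation over capacity."""
    loads = [genome.count(s) for s in range(N_SATS)]
    return -sum(max(0, load - CAP) for load in loads)

def evolve(pop_size=40, gens=60):
    pop = [[random.randrange(N_SATS) for _ in range(N_OBS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)       # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(N_OBS)         # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_OBS)] = random.randrange(N_SATS)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In a cycle-scavenging setup, the inner fitness evaluations would be shipped to idle workstations, since each genome is scored independently.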
Fujita, Mitsue; Sato, Katsuaki; Nishioka, Hiroshi; Sakai, Fumihiko
2014-04-01
The objective of this article is to evaluate the efficacy and tolerability of two doses of oral sumatriptan vs placebo in the acute treatment of migraine in children and adolescents. Currently, there is no approved prescription medication in Japan for the treatment of migraine in children and adolescents. This was a multicenter, outpatient, single-attack, double-blind, randomized, placebo-controlled, parallel-group study. Eligible patients were children and adolescents aged 10 to 17 years diagnosed with migraine with or without aura (ICHD-II criteria 1.1 or 1.2) from 17 centers. They were randomized to receive sumatriptan 25 mg, 50 mg or placebo (1:1:2). The primary efficacy endpoint was headache relief by two grades on a five-grade scale at two hours post-dose. A total of 178 patients from 17 centers in Japan were enrolled and randomized to an investigational product in double-blind fashion. Of these, 144 patients self-treated a single migraine attack, and all provided a post-dose efficacy assessment and completed the study. The percentage of patients in the full analysis set (FAS) population who reported pain relief at two hours post-treatment for the primary endpoint was higher in the placebo group than in the pooled sumatriptan group (38.6% vs 31.1%, 95% CI: -23.02 to 8.04, P = 0.345). The percentage of patients in the FAS population who reported pain relief at four hours post-dose was higher in the pooled sumatriptan group (63.5%) than in the placebo group (51.4%) but failed to achieve statistical significance (P = 0.142). At four hours post-dose, percentages of patients who were pain free or had complete relief of photophobia or phonophobia were numerically higher in the sumatriptan pooled group compared to placebo. Both doses of oral sumatriptan were well tolerated. No adverse events (AEs) were serious or led to study withdrawal.
The most common AEs were somnolence in 6% (two patients) in the sumatriptan 25 mg treatment group and chest discomfort in 7% (three patients) in the sumatriptan 50 mg treatment group. There was no statistically significant improvement between the sumatriptan pooled group and the placebo group for pain relief at two hours. Oral sumatriptan was well tolerated.
National Centers for Environmental Prediction
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
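The diagnostics named above can be sketched for a synthetic 1-D signal: multi-order structure functions S_q(l) = <|b(x+l) - b(x)|^q> and the flatness of increments, which equals 3 for Gaussian (monofractal) statistics and exceeds 3 when intermittency is present. This is a minimal sketch on a Brownian-like test signal, not the MHD simulation data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Brownian-like synthetic signal: cumulative sum of Gaussian white noise.
b = np.cumsum(rng.standard_normal(2 ** 14))

def structure_function(sig, lag, q):
    """S_q(lag) = mean of |sig(x + lag) - sig(x)|**q."""
    return (np.abs(sig[lag:] - sig[:-lag]) ** q).mean()

lags = [2, 4, 8, 16, 32]
S2 = [structure_function(b, l, 2.0) for l in lags]
# For Brownian motion S_2(l) ~ l, so the fitted slope zeta(2) should be ~1.
zeta2 = np.polyfit(np.log(lags), np.log(S2), 1)[0]
# Flatness of increments: 3 for Gaussian statistics, > 3 with intermittency.
flatness = structure_function(b, 2, 4.0) / structure_function(b, 2, 2.0) ** 2
```

In the study above, the same quantities are computed conditionally on the angle between the separation vector and the local magnetic field; curvature of zeta(q) in q is the multifractal signature.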
NASA Astrophysics Data System (ADS)
Shi, Sheng-bing; Chen, Zhen-xing; Qin, Shao-gang; Song, Chun-yan; Jiang, Yun-hong
2014-09-01
With the development of science and technology, photoelectric equipment now integrates visible, infrared, laser and other systems, and its degree of integration, information content and complexity are higher than in the past. The parallelism and jitter of the optical axes are important performance characteristics of photoelectric equipment and directly affect aiming, ranging, orientation and other functions. Optical-axis jitter directly affects the hit precision of precision point-damage weapons, but facilities for testing this performance have been lacking. In this paper, a test system for measuring the parallelism and jitter of optical axes is designed. Accurate aiming is not necessary, and data processing is digital during parallelism testing. The system can directly test the parallelism of multiple axes, of the aiming axis and the laser emission axis, and of the laser emission axis and the laser receiving axis, and it is the first to realize measurement of the optical-axis jitter of optical sighting devices. It is a universal test system.
Efficient Array Design for Sonotherapy
Stephens, Douglas N.; Kruse, Dustin E.; Ergun, Arif S.; Barnes, Stephen; Ming Lu, X.; Ferrara, Katherine
2008-01-01
New linear multi-row, multi-frequency arrays have been designed, constructed and tested as fully operational ultrasound probes to produce confocal imaging and therapeutic acoustic intensities with a standard commercial ultrasound imaging system. The triple-array probes and imaging system produce high quality B-mode images with a center row imaging array at 5.3 MHz, and sufficient acoustic power with dual therapeutic arrays to produce mild hyperthermia at 1.54 MHz. The therapeutic array pair in the first probe design (termed G3) utilizes a high bandwidth and peak pressure, suitable for mechanical therapies. The second multi-array design (termed G4) has a redesigned therapeutic array pair which is optimized for high time-averaged power output suitable for mild hyperthermia applications. The “thermal therapy” design produces more than 4 Watts of acoustic power from the low frequency arrays with only a 10.5 °C internal rise in temperature after 100 seconds of continuous use with an unmodified conventional imaging system, or substantially longer operation at lower acoustic power. The low frequency arrays in both probe designs were examined and contrasted for real power transfer efficiency with a KLM model which includes all lossy contributions in the power delivery path from system transmitters to tissue load. Laboratory verification was successfully performed for the KLM derived estimates of transducer parallel model acoustic resistance and dissipation resistance, which are the critical design factors for acoustic power output and undesired internal heating respectively. PMID:18591737
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal severely distorted mesh cells, particle relocation on the moving mesh and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
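The continuous-slowing-down deposition step can be illustrated in one dimension: a straight-line Monte Carlo particle deposits dE = S * dl in each cell it crosses until its energy is exhausted. This sketch uses a toy constant stopping power and a fixed 1-D mesh, not the paper's cylindrical Lagrangian mesh or plasma physics:

```python
import numpy as np

S = 2.0                            # toy constant stopping power (MeV/cm)
edges = np.linspace(0.0, 5.0, 11)  # ten equal 0.5 cm cells
E0 = 7.0                           # initial particle energy (MeV)

def deposit(energy, cell_edges, stopping_power):
    """Deposit dE = S * dl cell by cell along a straight track until the
    particle's energy is exhausted (continuous slowing down)."""
    dep = np.zeros(len(cell_edges) - 1)
    for i in range(len(dep)):
        dl = cell_edges[i + 1] - cell_edges[i]
        dE = min(energy, stopping_power * dl)
        dep[i] = dE
        energy -= dE
        if energy <= 0.0:
            break
    return dep

dep = deposit(E0, edges, S)
```

In the real scheme, the chord length dl through each distorted mesh cell must be computed geometrically, and S depends on the local plasma temperature and density, but the energy bookkeeping per cell has this form.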
Developing an Energy Performance Modeling Startup Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, A.
2012-10-01
In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted - all perspectives will be sought related to remodeling firms ranging in size from small-scale, sole proprietor to national. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.
NASA Astrophysics Data System (ADS)
Song, Y.; Gui, Z.; Wu, H.; Wei, Y.
2017-09-01
Analysing the spatiotemporal distribution patterns and dynamics of different industries can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis process is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support the visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such a visual analytics requirement. The framework uses the standard deviational ellipse (SDE) and shifting routes of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset in Mainland China from year 1960 to 2015, which contains fine-grain location information (i.e., coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment result shows that the developed visual analytics method is helpful to understand the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with large data volume, such as crime and disease.
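The per-year SDE computation that the framework distributes with Spark reduces to a mean center plus an eigen-decomposition of the 2x2 coordinate covariance. A serial numpy sketch with synthetic points (the Spark parallelization and the exact SDE convention used in the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic "enterprise locations": an east-west elongated cluster.
pts = rng.normal(loc=[10.0, 5.0], scale=[3.0, 1.0], size=(5000, 2))

def sde(points):
    """Mean center plus semi-axis lengths and orientation (degrees) of the
    standard deviational ellipse, via the 2x2 coordinate covariance."""
    center = points.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(points.T))  # ascending eigenvalues
    major, minor = np.sqrt(eigval[1]), np.sqrt(eigval[0])
    angle = np.degrees(np.arctan2(eigvec[1, 1], eigvec[0, 1]))
    return center, major, minor, angle

center, major, minor, angle = sde(pts)
```

Because the covariance is a sum over points, the work parallelizes naturally: each partition contributes partial sums that a reduce step combines before the final 2x2 eigen-decomposition.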
NASA Astrophysics Data System (ADS)
Lu, Haibao; Yu, Kai; Huang, Wei Min; Leng, Jinsong
2016-12-01
We present an explicit model to study the mechanics and physics of the shape memory effect (SME) in polymers based on the Takayanagi principle. The molecular structural characteristics and elastic behavior of shape memory polymers (SMPs) with multiple phases are investigated in terms of the thermomechanical properties of the individual components, whose contributions are combined by using Takayanagi's series-parallel model and parallel-series model, respectively. After that, the Boltzmann superposition principle is employed to couple the multi-SME, elastic modulus parameter (E) and temperature parameter (T) in SMPs. Furthermore, the extended Takayanagi model is proposed to separate the plasticizing effect and physical swelling effect on the thermo-/chemo-responsive SME in polymers, and the model is then compared with the available experimental data reported in the literature. This study is expected to provide a powerful simulation tool for modeling and experimental substantiation of the mechanics and working mechanism of the SME in polymers.
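The two couplings combined in Takayanagi-type models are the parallel (Voigt) and series (Reuss) bounds on the effective modulus of a two-phase mixture. A minimal numeric sketch with invented moduli (not the paper's fitted parameters):

```python
def parallel_modulus(E1, E2, phi):
    """Voigt (parallel) coupling: equal strain, stiffnesses add by fraction."""
    return phi * E1 + (1.0 - phi) * E2

def series_modulus(E1, E2, phi):
    """Reuss (series) coupling: equal stress, compliances add by fraction."""
    return 1.0 / (phi / E1 + (1.0 - phi) / E2)

# Invented values: glassy vs rubbery phase moduli (MPa), 50/50 mixture.
E_glassy, E_rubbery, phi = 2000.0, 2.0, 0.5
E_par = parallel_modulus(E_glassy, E_rubbery, phi)
E_ser = series_modulus(E_glassy, E_rubbery, phi)
```

The series coupling is dominated by the soft phase while the parallel coupling is dominated by the hard phase; Takayanagi's model nests one inside the other to interpolate between these bounds, which is what lets it track the large modulus drop across the glass transition of an SMP.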
Eisenstein, Eric L; Diener, Lawrence W; Nahm, Meredith; Weinfurt, Kevin P
2011-12-01
New technologies may be required to integrate the National Institutes of Health's Patient Reported Outcome Management Information System (PROMIS) into multi-center clinical trials. To better understand this need, we identified likely PROMIS reporting formats, developed a multi-center clinical trial process model, and identified gaps between current capabilities and those necessary for PROMIS. These results were evaluated by key trial constituencies. Issues reported by principal investigators fell into two categories: acceptance by key regulators and the scientific community, and usability for researchers and clinicians. Issues reported by the coordinating center, participating sites, and study subjects were those faced when integrating new technologies into existing clinical trial systems. We then defined elements of a PROMIS Tool Kit required for integrating PROMIS into a multi-center clinical trial environment. The requirements identified in this study serve as a framework for future investigators in the design, development, implementation, and operation of PROMIS Tool Kit technologies.
NASA Astrophysics Data System (ADS)
Sanna, N.; Baccarelli, I.; Morelli, G.
2009-12-01
SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions aimed at significantly enhancing its capabilities to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated and the code has been ported to platforms based on accelerating coprocessors, such as the NVIDIA GPGPU, and the new parallel model adopted is able to efficiently run on a mixed many-core computing system. Program summary. Program title: SCELib3.0 Catalogue identifier: ADMG_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 018 862 No. of bytes in distributed program, including test data, etc.: 4 955 014 Distribution format: tar.gz Programming language: C Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes.
1 to 32 (CPU or GPU) used RAM: Up to 32 GB depending on the molecular system and runtime parameters Classification: 16.5 Catalogue identifier of previous version: ADMG_v2_0 Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51 External routines: CUDA libraries (SDK V2.x). Does the new version supersede the previous version?: Yes Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design. Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on linear combination of Gaussian-Type Orbital (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss Legendre/Chebyschev quadrature over the θ,φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities. Reasons for new version: The present release of SCELib allows the study of larger molecular systems with respect to the previous versions by means of theoretical and technological advances, with the first implementation of the code over a many-core computing system. 
Summary of revisions: The major features added with respect to SCELib Version 2.0 are molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (on Na-La, Hf-Bi elements) [1] can now be single-center-expanded; the addition required modifications of: (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, (v) the vst code; the classes of platforms supported now include two more architectures based on accelerated coprocessors (Nvidia GSeries GPGPU and ClearSpeed e720 (ClearSpeed version, experimental; initial preliminary porting of the sphint() function not for production runs - see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs ( sphint code), has been implemented into the new code; the I h symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure; the orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accord with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian C03 [2]), which is used to generate the input multi-centre molecular wavefunctions ( z-axis perpendicular to the symmetry plane); the abelian subgroup for the Cs point group has been changed from C 1 to Cs; atomic basis functions including g-type GTOs can now be single-center-expanded. Restrictions: Depending on the molecular system under study and on the operating conditions the program may or may not fit into available RAM memory. In this case a feature of the program is to memory map a disk file in order to efficiently access the memory data through a disk device. The parallel GP-GPU implementation limits the number of CPU threads to the number of GPU cores present. 
Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must be further taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable. References: [1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299. [2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
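The angular quadrature at the heart of the SCE procedure, Gauss-Legendre in cos θ combined with a uniform grid in φ, can be sketched by integrating normalized test functions over the sphere (illustrative only, not SCELib code):

```python
import numpy as np

N_THETA, N_PHI = 32, 64
x, w = np.polynomial.legendre.leggauss(N_THETA)  # Gauss-Legendre nodes in cos(theta)
phis = 2.0 * np.pi * np.arange(N_PHI) / N_PHI    # uniform grid in phi

def sphere_integral(f):
    """Integrate f(theta, phi) over the unit sphere using the angular grid.
    The Gauss-Legendre weights absorb the sin(theta) Jacobian because the
    nodes are placed in cos(theta)."""
    total = 0.0
    for xi, wi in zip(x, w):
        theta = np.arccos(xi)
        total += wi * (2.0 * np.pi / N_PHI) * sum(f(theta, p) for p in phis)
    return total

# A normalized constant density (|Y_00|^2) integrates to exactly 1.
val = sphere_integral(lambda th, ph: 1.0 / (4.0 * np.pi))
```

In the real code the same grid is used to project multi-center Gaussian orbitals onto single-center radial functions times spherical harmonics, one (l, m) channel at a time.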
Parallel and Serial Grouping of Image Elements in Visual Perception
ERIC Educational Resources Information Center
Houtkamp, Roos; Roelfsema, Pieter R.
2010-01-01
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…
National Centers for Environmental Prediction
Effects of visual information regarding allocentric processing in haptic parallelity matching.
Van Mier, Hanneke I
2013-10-01
Research has revealed that haptic perception of parallelity deviates from physical reality. Large and systematic deviations have been found in haptic parallelity matching most likely due to the influence of the hand-centered egocentric reference frame. Providing information that increases the influence of allocentric processing has been shown to improve performance on haptic matching. In this study allocentric processing was stimulated by providing informative vision in haptic matching tasks that were performed using hand- and arm-centered reference frames. Twenty blindfolded participants (ten men, ten women) explored the orientation of a reference bar with the non-dominant hand and subsequently matched (task HP) or mirrored (task HM) its orientation on a test bar with the dominant hand. Visual information was provided by means of informative vision with participants having full view of the test bar, while the reference bar was blocked from their view (task VHP). To decrease the egocentric bias of the hands, participants also performed a visual haptic parallelity drawing task (task VHPD) using an arm-centered reference frame, by drawing the orientation of the reference bar. In all tasks, the distance between and orientation of the bars were manipulated. A significant effect of task was found; performance improved from task HP, to VHP to VHPD, and HM. Significant effects of distance were found in the first three tasks, whereas orientation and gender effects were only significant in tasks HP and VHP. The results showed that stimulating allocentric processing by means of informative vision and reducing the egocentric bias by using an arm-centered reference frame led to most accurate performance on parallelity matching. © 2013 Elsevier B.V. All rights reserved.
Suzuki, Yasuo; Iida, Mitsuo; Ito, Hiroaki; Nishino, Haruo; Ohmori, Toshihide; Arai, Takehiro; Yokoyama, Tadashi; Okubo, Takanori; Hibi, Toshifumi
2017-05-01
The noninferiority of pH-dependent release mesalamine (Asacol) once daily (QD) to 3 times daily (TID) administration was investigated. This was a phase 3, multicenter, randomized, double-blind, parallel-group, active-control study, with dynamic and stochastic allocation using central registration. Patients with ulcerative colitis in remission (a bloody stool score of 0, and an ulcerative colitis disease activity index of ≤2), received the study drug (Asacol 2.4 g/d) for 48 weeks. The primary efficacy endpoint of the nonrecurrence rate was assessed on the full analysis set. The noninferiority margin was 10%. Six hundred and four subjects were eligible and were allocated; 603 subjects received the study drug. The full analysis set comprised 602 subjects (QD: 301, TID: 301). Nonrecurrence rates were 88.4% in the QD and 89.6% in the TID. The difference between nonrecurrence rates was -1.3% (95% confidence interval: -6.2, 3.7), confirming noninferiority. No differences in the safety profile were observed between the two treatment groups. On post hoc analysis by integrating the QD and the TID, nonrecurrence rate with a mucosal appearance score of 0 at determination of eligibility was significantly higher than the score of 1. The mean compliance rates were 97.7% in the QD and 98.1% in the TID. QD dosing with Asacol is as effective and safe as TID for maintenance of remission in patients with ulcerative colitis. Additionally, this study indicated that maintaining a good mucosal state is the key for longer maintenance of remission.
Suzuki, Yasuo; Iida, Mitsuo; Ito, Hiroaki; Nishino, Haruo; Ohmori, Toshihide; Arai, Takehiro; Yokoyama, Tadashi; Okubo, Takanori
2017-01-01
Background: The noninferiority of pH-dependent release mesalamine (Asacol) once daily (QD) to 3 times daily (TID) administration was investigated. Methods: This was a phase 3, multicenter, randomized, double-blind, parallel-group, active-control study, with dynamic and stochastic allocation using central registration. Patients with ulcerative colitis in remission (a bloody stool score of 0, and an ulcerative colitis disease activity index of ≤2), received the study drug (Asacol 2.4 g/d) for 48 weeks. The primary efficacy endpoint of the nonrecurrence rate was assessed on the full analysis set. The noninferiority margin was 10%. Results: Six hundred and four subjects were eligible and were allocated; 603 subjects received the study drug. The full analysis set comprised 602 subjects (QD: 301, TID: 301). Nonrecurrence rates were 88.4% in the QD and 89.6% in the TID. The difference between nonrecurrence rates was −1.3% (95% confidence interval: −6.2, 3.7), confirming noninferiority. No differences in the safety profile were observed between the two treatment groups. On post hoc analysis by integrating the QD and the TID, nonrecurrence rate with a mucosal appearance score of 0 at determination of eligibility was significantly higher than the score of 1. The mean compliance rates were 97.7% in the QD and 98.1% in the TID. Conclusions: QD dosing with Asacol is as effective and safe as TID for maintenance of remission in patients with ulcerative colitis. Additionally, this study indicated that maintaining a good mucosal state is the key for longer maintenance of remission. PMID:28368909
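The noninferiority logic in this trial can be checked with a short calculation. The exact statistical method is not stated in the abstract, so as an assumption a plain Wald interval for the difference of two proportions is used below; it approximately reproduces the reported confidence bounds.

```python
from math import sqrt

def noninferiority_wald(p_test, p_ref, n_test, n_ref, margin, z=1.96):
    """95% Wald CI for the difference in nonrecurrence rates (test - reference);
    noninferiority holds when the lower bound stays above -margin."""
    diff = p_test - p_ref
    se = sqrt(p_test * (1 - p_test) / n_test + p_ref * (1 - p_ref) / n_ref)
    lo, hi = diff - z * se, diff + z * se
    return diff, lo, hi, lo > -margin

# QD 88.4% (n=301) vs TID 89.6% (n=301), noninferiority margin 10%
diff, lo, hi, noninferior = noninferiority_wald(0.884, 0.896, 301, 301, 0.10)
# diff ≈ -1.2%, CI ≈ (-6.2%, 3.8%)
```

The small difference from the published upper bound (3.7%) presumably reflects the exact patient counts and the trial's pre-specified method.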
Cooper, Curtis; Thorne, Anona; Klein, Marina; Conway, Brian; Boivin, Guy; Haase, David; Shafran, Stephen; Zubyk, Wendy; Singer, Joel; Halperin, Scott; Walmsley, Sharon
2011-03-25
The risk of poor vaccine immunogenicity and more severe influenza disease in HIV necessitate strategies to improve vaccine efficacy. A randomized, multi-centered, controlled, vaccine trial with three parallel groups was conducted at 12 CIHR Canadian HIV Trials Network sites. Three dosing strategies were used in HIV infected adults (18 to 60 years): two standard doses over 28 days, two double doses over 28 days and a single standard dose of influenza vaccine, administered prior to the 2008 influenza season. A trivalent killed split non-adjuvanted influenza vaccine (Fluviral™) was used. Serum hemagglutinin inhibition (HAI) activity for the three influenza strains in the vaccine was measured to assess immunogenicity. 297 of 298 participants received at least one injection. Baseline CD4 (median 470 cells/µL) and HIV RNA (76% of patients with viral load <50 copies/mL) were similar between groups. 89% were on HAART. The overall immunogenicity of influenza vaccine across time points and the three influenza strains assessed was poor (Range HAI ≥ 40 = 31-58%). Double dose plus double dose booster slightly increased the proportion achieving HAI titre doubling from baseline for A/Brisbane and B/Florida at weeks 4, 8 and 20 compared to standard vaccine dose. Increased immunogenicity with increased antigen dose and booster dosing was most apparent in participants with unsuppressed HIV RNA at baseline. None of 8 serious adverse events were thought to be immunization-related. Even with increased antigen dose and booster dosing, non-adjuvanted influenza vaccine immunogenicity is poor in HIV infected individuals. Alternative influenza vaccines are required in this hyporesponsive population. ClinicalTrials.gov NCT00764998.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in parallel mode with various degrees of detail.
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-05-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
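The load-balancing idea described above — giving each GPU a share of the lattice proportional to its measured throughput — can be sketched in a few lines. This is a hypothetical helper, not the authors' CUDA code, which additionally handles halo exchange between neighboring slabs:

```python
def partition_lattice(nz, throughputs):
    """Split nz lattice planes among devices proportionally to measured
    throughput; returns a list of (start, stop) slabs covering [0, nz)."""
    total = sum(throughputs)
    counts, acc, assigned = [], 0.0, 0
    for t in throughputs[:-1]:
        acc += nz * t / total          # ideal fractional share so far
        c = round(acc) - assigned      # round while conserving the total
        counts.append(c)
        assigned += c
    counts.append(nz - assigned)       # last device absorbs the remainder
    slabs, start = [], 0
    for c in counts:
        slabs.append((start, start + c))
        start += c
    return slabs
```

A device benchmarked at twice the throughput receives roughly twice the planes, which is the essence of dynamic rebalancing on mixed-GPU workstations.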
Anazawa, Takashi; Yamazaki, Motohiro
2017-12-05
Although multi-point, multi-color fluorescence-detection systems are widely used in various sciences, they would find wider applications if they are miniaturized. Accordingly, an ultra-small, four-emission-point and four-color fluorescence-detection system was developed. Its size (space between emission points and a detection plane) is 15 × 10 × 12 mm, which is three-orders-of-magnitude smaller than that of a conventional system. Fluorescence from four emission points with an interval of 1 mm on the same plane was respectively collimated by four lenses and split into four color fluxes by four dichroic mirrors. Then, a total of sixteen parallel color fluxes were directly input into an image sensor and simultaneously detected. The emission-point plane and the detection plane (the image-sensor surface) were parallel and separated by a distance of only 12 mm. The developed system was applied to four-capillary array electrophoresis and successfully achieved Sanger DNA sequencing. Moreover, compared with a conventional system, the developed system had equivalent high fluorescence-detection sensitivity (lower detection limit of 17 pM dROX) and 1.6-orders-of-magnitude higher dynamic range (4.3 orders of magnitude).
The Effectiveness of Off Campus Multi-Institutional Teaching Centers as Perceived by Students
ERIC Educational Resources Information Center
Flores-Mejorado, Dina; Edmonson, Stacey; Fisher, Alice
2008-01-01
The purpose of this study was to examine and compare the perceptions of undergraduate and graduate students of a selected state university in Texas attending the Multi Institutional Teaching Center (MITC)/The University Center (TUC) or the main campus regarding the effectiveness of student services. As universities face limited resources and…
Massively parallel GPU-accelerated minimization of classical density functional theory
NASA Astrophysics Data System (ADS)
Stopper, Daniel; Roth, Roland
2017-08-01
In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a massively parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.
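The minimization referred to here is typically a damped Picard (fixed-point) iteration of the Euler-Lagrange equation, in which every grid point updates independently — exactly the data parallelism a GPU exploits. Below is a minimal CPU sketch for a toy local mean-field functional (an assumption for illustration, not the hard-sphere fundamental measure functional of the paper):

```python
import numpy as np

def minimize_grand_potential(V, mu=0.0, a=0.5, alpha=0.1, tol=1e-10, max_iter=20000):
    """Damped Picard iteration for the stationarity condition
        rho(x) = exp(mu - V(x) - a * rho(x))
    of a toy local mean-field grand-potential functional (beta = 1).
    All grid points update simultaneously in one vectorized step."""
    rho = np.full_like(V, 0.1)
    for _ in range(max_iter):
        rho_new = np.exp(mu - V - a * rho)
        if np.max(np.abs(rho_new - rho)) < tol:
            return rho_new
        rho = (1 - alpha) * rho + alpha * rho_new   # damped update
    return rho

x = np.linspace(-3.0, 3.0, 256)
rho = minimize_grand_potential(0.5 * x**2)   # harmonic external potential
```

For fundamental measure theory the right-hand side involves weighted densities computed by convolutions, but the per-point update structure — and hence the parallelism — is the same.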
Parallel evolution of image processing tools for multispectral imagery
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.
2000-11-01
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI, covering the recent Cerro Grande fire at Los Alamos, NM, USA.
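The run-time prediction model mentioned above is not specified in the abstract; a generic master-worker model — a fixed serial fraction, an evaluation phase that divides over processors, and a per-processor communication term, all hypothetical — illustrates the kind of prediction involved:

```python
def predicted_runtime(t_serial, t_eval, p, t_comm=0.0):
    """Hypothetical run-time model for a master-worker evolutionary algorithm:
    serial part + divisible evaluation work + communication growing with p."""
    return t_serial + t_eval / p + t_comm * p

def speedup(t_serial, t_eval, p, t_comm=0.0):
    """Predicted speedup on p processors relative to one processor."""
    return (predicted_runtime(t_serial, t_eval, 1, t_comm)
            / predicted_runtime(t_serial, t_eval, p, t_comm))
```

With a nonzero communication term the model predicts a speedup peak beyond which adding processors slows the run — the behavior such fitted models are meant to capture.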
Nuclide Depletion Capabilities in the Shift Monte Carlo Code
Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...
2017-12-21
A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.
Multi-joint postural behavior in patients with knee osteoarthritis.
Turcot, Katia; Sagawa, Yoshimasa; Hoffmeyer, Pierre; Suvà, Domizio; Armand, Stéphane
2015-12-01
Previous studies have demonstrated balance impairment in patients with knee osteoarthritis (OA). Although it is currently accepted that postural control depends on multi-joint coordination, no study has previously considered this postural strategy in patients suffering from knee OA. The objectives of this study were to investigate the multi-joint postural behavior in patients with knee OA and to evaluate the association with clinical outcomes. Eighty-seven patients with knee OA and twenty-five healthy elderly were recruited to the study. A motion analysis system and two force plates were used to investigate the joint kinematics (trunk and lower body segments), the lower body joint moments, the vertical ground reaction force ratio and the center of pressure (COP) during a quiet standing task. Pain, functional capacity and quality of life status were also recorded. Patients with symptomatic and severe knee OA adopt a more flexed posture at all joint levels in comparison with the control group. A significant difference in the mean ratio was found between groups, showing an asymmetric weight distribution in patients with knee OA. A significant decrease in the COP range in the anterior-posterior direction was also observed in the group of patients. Only small associations were observed between postural impairments and clinical outcomes. This study brings new insights regarding the postural behavior of patients with severe knee OA during a quiet standing task. The results confirm the multi-joint asymmetric posture adopted by this population. Copyright © 2014 Elsevier B.V. All rights reserved.
Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.
2014-01-01
Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer’s disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer’s Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer’s Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer’s disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060
Real-time multiplicity counter
Rowland, Mark S [Alamo, CA; Alvarez, Raymond A [Berkeley, CA
2010-07-13
A neutron multi-detector array feeds pulses in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel. The word is read at regular intervals, all bits simultaneously, to minimize latency. The electronics then pass the word to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup.
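The word-level readout described in this patent abstract is easy to model in software: each detector channel ORs into its own bit of a word, and one read captures all channels simultaneously. A sketch of the counting logic (not the patented electronics):

```python
def pack_word(channels):
    """OR one bit per firing detector channel into a single word; the whole
    word is then read in one operation, avoiding per-pulse processing."""
    word = 0
    for ch in channels:
        word |= 1 << ch
    return word

def multiplicity(word):
    """How many detectors fired during this readout interval."""
    return bin(word).count("1")

# channels 0, 5 and 31 fire within the same readout interval
w = pack_word([0, 5, 31])
```

Because the bits are loaded in parallel and read all at once, coincident pulses never queue behind each other, which is the pile-up problem the design removes.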
LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments
2015-11-20
…1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community… The map-reduce parallel programming model has become extremely popular in the big data community. Many big data… to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming…
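LLMapReduce's actual launcher interface is not shown in this snippet; for orientation, the map-reduce dataflow it parallelizes reduces to a few lines (serial sketch):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Serial model of the map-reduce dataflow: map each record to (key, value)
    pairs, shuffle (group by key), then reduce each group independently --
    the per-key independence is what a parallel runtime exploits."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# classic word count
lines = ["big data", "big compute"]
counts = map_reduce(lines,
                    mapper=lambda line: [(w, 1) for w in line.split()],
                    reducer=lambda key, values: sum(values))
```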
[CMACPAR: a modified parallel neuro-controller for control processes].
Ramos, E; Surós, R
1999-01-01
CMACPAR is a parallel neurocontroller oriented to real-time systems such as process control. Its main characteristics are a fast learning algorithm, a reduced number of calculations, a large generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the Cerebellar Model CMAC for the n-dimensional space projection, using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and to exploit the resources of future massively parallel supercomputers are also discussed.
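A two-dimensional domain decomposition of a grid-point model amounts to splitting latitude rows and longitude columns among a processor grid. A sketch with hypothetical helper names, spreading any remainder one extra row or column at a time:

```python
def decompose_2d(nlat, nlon, py, px):
    """Split an nlat x nlon grid into py x px rectangular subdomains.
    Each entry is (row_start, n_rows, col_start, n_cols)."""
    def split(n, p):
        base, rem = divmod(n, p)
        sizes = [base + 1 if i < rem else base for i in range(p)]
        starts = [sum(sizes[:i]) for i in range(p)]
        return list(zip(starts, sizes))
    return [(r0, nr, c0, nc)
            for r0, nr in split(nlat, py)
            for c0, nc in split(nlon, px)]

# e.g. a coarse lat-lon grid distributed over 4 x 8 = 32 processors
doms = decompose_2d(73, 96, 4, 8)
```

In the real model each subdomain would also carry halo rows/columns exchanged with its neighbors every time step; only the index arithmetic is shown here.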
Parallel computation of GA search for the artery shape determinants with CFD
NASA Astrophysics Data System (ADS)
Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.
2010-06-01
We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and a genetic algorithm (GA). The most difficult problem in doing so is how to reduce the turn-around time of the GA optimization with 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1, parallel CFD computation with an appropriate number of cores; level 2, parallel jobs generated by a "master" process, which quickly finds an available job queue and dispatches jobs to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days on one core, was reduced to less than two days on the RIKEN supercomputer system RICC with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and to minimize the sum of circumferences for four different shapes, and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb has the feature of minimum local mean WSS and minimum local radius. We confirmed that our method is effective for examining the determinants of artery shapes.
NASA Astrophysics Data System (ADS)
Shcherbakov, Alexandre S.; Chavez Dagostino, Miguel; Arellanes, Adan O.; Aguirre Lopez, Arturo
2016-09-01
We develop a multi-band spectrometer with a few spatially parallel optical arms for the combined processing of their data flows. Such multi-band capability has various applications in astrophysical scenarios at different scales: from objects in the distant universe to planetary atmospheres in the Solar system. Each optical arm has its own performance characteristics, providing parallel multi-band observations at different scales simultaneously. This capability is achieved by designing each optical arm individually, exploiting different materials for acousto-optical cells operating within various regimes, frequency ranges, and light wavelengths from independent light sources. Individual beam shapers provide both the needed incident light polarization and the required apodization to increase the dynamic range of the system. After parallel acousto-optical processing, the data flows are combined on a joint CCD matrix at the stage of combined electronic data processing. At present, the prototype combines three bands, i.e. it includes three spatial optical arms. The first, low-frequency arm operates at central frequencies of 60-80 MHz with a frequency bandwidth of 40 MHz. The second arm is oriented to middle frequencies of 350-500 MHz with a frequency bandwidth of 200-300 MHz. The third arm is intended for ultra-high-frequency radio-wave signals of about 1.0-1.5 GHz with a frequency bandwidth <300 MHz. Today, this spectrometer has the following preliminary performance. The first arm exhibits a frequency resolution of 20 kHz, while the second and third arms give a resolution of 150-200 kHz. The numbers of resolvable spots are 1500-2000, depending on the regime of operation. A fourth optical arm for the frequency range around 3.5 GHz is currently under construction.
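The quoted spot counts are consistent with the usual estimate for an acousto-optical spectrometer: resolvable spots ≈ analysis bandwidth / frequency resolution. A back-of-envelope check (not the authors' derivation):

```python
# resolvable spots ~ bandwidth / frequency resolution, per arm
spots_low  = 40e6  / 20e3    # first arm: 40 MHz bandwidth, 20 kHz resolution
spots_mid  = 300e6 / 150e3   # second arm: upper bandwidth / best resolution
spots_high = 300e6 / 200e3   # third arm: <300 MHz bandwidth, 200 kHz resolution
# → 2000, 2000, 1500: matching the stated 1500-2000 resolvable spots
```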
Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A
2017-01-01
Model-based image reconstruction (MBIR) techniques have the potential to generate high quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projectors for modern MBIR-based techniques is the branchless distance-driven (DD) projector and backprojector. While this method produces superior quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the overlap calculation of the algorithm by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique that accumulates image intensities in perpendicular 2D image slabs (from the 3D image) before projection and after backprojection, to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with penalized likelihood update. The time performance of our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average 24 times speedup using a single TITAN X GPU and 74 times speedup using 3 TITAN X GPUs in parallel for combined projection and backprojection.
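The pre-accumulation idea can be illustrated with prefix sums: after one cumulative pass, the intensity integral over any contiguous run of voxels becomes a single subtraction. This is a CPU toy with hypothetical helper names; the paper's kernels perform the analogous accumulation over 2D slabs of the 3D volume on the GPU:

```python
import numpy as np

def preaccumulate(slab):
    """Prefix-sum a 2D slab along its rows; a leading zero column makes the
    sum over any voxel run [lo, hi) a single subtraction."""
    acc = np.zeros((slab.shape[0], slab.shape[1] + 1))
    acc[:, 1:] = np.cumsum(slab, axis=1)
    return acc

def interval_sum(acc, row, lo, hi):
    """Sum of slab[row, lo:hi] in O(1) from the pre-accumulated slab."""
    return acc[row, hi] - acc[row, lo]

img = np.arange(12, dtype=float).reshape(3, 4)
acc = preaccumulate(img)
```

Replacing a per-overlap loop with one subtraction is what lets the DD overlap calculation stay cheap inside each GPU thread.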
Simultaneous Multi-Slice fMRI using Spiral Trajectories
Zahneisen, Benjamin; Poser, Benedikt A.; Ernst, Thomas; Stenger, V. Andrew
2014-01-01
Parallel imaging methods using multi-coil receiver arrays have been shown to be effective for increasing MRI acquisition speed. However parallel imaging methods for fMRI with 2D sequences show only limited improvements in temporal resolution because of the long echo times needed for BOLD contrast. Recently, Simultaneous Multi-Slice (SMS) imaging techniques have been shown to increase fMRI temporal resolution by factors of four and higher. In SMS fMRI multiple slices can be acquired simultaneously using Echo Planar Imaging (EPI) and the overlapping slices are un-aliased using a parallel imaging reconstruction with multiple receivers. The slice separation can be further improved using the “blipped-CAIPI” EPI sequence that provides a more efficient sampling of the SMS 3D k-space. In this paper a blipped-spiral SMS sequence for ultra-fast fMRI is presented. The blipped-spiral sequence combines the sampling efficiency of spiral trajectories with the SMS encoding concept used in blipped-CAIPI EPI. We show that blipped spiral acquisition can achieve almost whole brain coverage at 3 mm isotropic resolution in 168 ms. It is also demonstrated that the high temporal resolution allows for dynamic BOLD lag time measurement using visual/motor and retinotopic mapping paradigms. The local BOLD lag time within the visual cortex following the retinotopic mapping stimulation of expanding flickering rings is directly measured and easily translated into an eccentricity map of the cortex. PMID:24518259
NASA Astrophysics Data System (ADS)
Singh, Santosh Kumar; Ghatak Choudhuri, Sumit
2018-05-01
Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing of modules therefore requires careful design, using a proper control strategy. The potential of human-intuitive Fuzzy Logic (FL) control with an imprecise system model is well known and can thus be utilized in parallel-connected UPS systems. A conventional FL controller is computationally intensive, especially with a higher number of input variables. This paper proposes the application of Hierarchical Fuzzy Logic control to a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulation results in the MATLAB environment and experimental verification using a Texas Instruments TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.
2003-01-05
KENNEDY SPACE CENTER, FLA. -- In the Multi-Purpose Processing Facility, a technician cleans NASA's Solar Radiation and Climate Experiment (SORCE) before its mating to the Pegasus XL Expendable Launch Vehicle. Built by Orbital Sciences Space Systems Group, SORCE will study and measure solar irradiance as a source of energy in the Earth's atmosphere. The launch of SORCE is scheduled for Jan. 25 at 3:14 p.m. from Cape Canaveral Air Force Station, Fla.
Liu, Bo; Zhang, Lijia; Xin, Xiangjun
2018-03-19
This paper proposes and demonstrates an enhanced secure 4-D modulation optical generalized filter bank multi-carrier (GFBMC) system based on joint constellation and Stokes vector scrambling. The constellation and Stokes vectors are scrambled using different scrambling parameters. A multi-scroll Chua's circuit map is adopted as the chaotic model. A large secure key space can be obtained due to the multi-scroll attractors and the independent operability of subcarriers. A 40.32 Gb/s encrypted optical GFBMC signal with 128 parallel subcarriers is successfully demonstrated in the experiment. The results show good resistance against an illegal receiver and indicate a promising direction for future optical multi-carrier systems.
Walden, Anita; Nahm, Meredith; Barnett, M Edwina; Conde, Jose G; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E; Eisenstein, Eric L
2011-01-01
New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. The primary outcome was total data management costs. Secondary outcomes included: data management costs for sites, local data centers, and central coordinating centers. Both decentralized models were more costly than the centralized model for each clinical research study: the decentralized with local software model was the most expensive. Decreasing the number of local data centers and case book pages reduced cost differentials between models. Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs.
Walden, Anita; Nahm, Meredith; Barnett, M. Edwina; Conde, Jose G.; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E.; Eisenstein, Eric L.
2012-01-01
Background New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. Methods We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. Main Outcome Measures The primary outcome was total data management costs. Secondary outcomes included: data management costs for sites, local data centers, and central coordinating centers. Results Both decentralized models were more costly than the centralized model for each clinical research study: the decentralized with local software model was the most expensive. Decreasing the number of local data centers and case book pages reduced cost differentials between models. Conclusion Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs. PMID:21335692
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
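The asynchronous idea can be sketched as follows: instead of a barrier at the end of each swarm iteration, each particle is updated and resubmitted as soon as its own evaluation returns, so fast analyses never wait for slow ones. This is a toy sketch using Python threads and a trivial objective, not the paper's implementation; all names and coefficient values are illustrative.

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def sphere(x):
    # Toy objective; a real analysis may take minutes and vary per design point.
    return sum(xi * xi for xi in x)

def async_pso(f, dim=4, n_particles=8, budget=400, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [float("inf")] * n_particles
    gbest, gbest_f = pos[0][:], float("inf")
    evals = 0
    with ThreadPoolExecutor(max_workers=4) as ex:
        pending = {ex.submit(f, pos[i][:]): i for i in range(n_particles)}
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                i = pending.pop(fut)
                fx = fut.result()
                evals += 1
                if fx < pbest_f[i]:
                    pbest_f[i], pbest[i] = fx, pos[i][:]
                if fx < gbest_f:
                    gbest_f, gbest = fx, pos[i][:]
                if evals + len(pending) < budget:
                    # Asynchronous step: update THIS particle immediately using
                    # the current global best, without waiting for the others.
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * rng.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    pending[ex.submit(f, pos[i][:])] = i
    return gbest, gbest_f
```

The trade-off is the one the paper studies: updates use a slightly stale global best, but no worker ever idles at an iteration barrier.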
Non-Cartesian Parallel Imaging Reconstruction
Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole
2014-01-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several promising potential clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
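The CG SENSE formulation can be illustrated on a deliberately simplified 1-D Cartesian toy problem: build the multi-coil encoding operator E and solve the normal equations E^H E x = E^H y by conjugate gradients. A real non-Cartesian solver replaces the FFTs below with gridding/NUFFT operators; coil sensitivities and sampling here are idealized assumptions for the sketch.

```python
import numpy as np

def make_encoding(sens, mask):
    # Forward model E: image -> undersampled multi-coil k-space, and its
    # adjoint E^H. 'sens' is (ncoils, N) coil sensitivities; 'mask' selects
    # the sampled k-space points. For non-Cartesian trajectories the FFT
    # would be replaced by a non-uniform FFT (gridding).
    def E(x):
        return np.stack([mask * np.fft.fft(s * x) for s in sens])
    def EH(y):
        return sum(np.conj(s) * np.fft.ifft(mask * yc) for s, yc in zip(sens, y))
    return E, EH

def cg_sense(y, sens, mask, n_iter=30):
    # Solve E^H E x = E^H y with plain conjugate gradients, which needs only
    # applications of E and E^H -- the essence of CG SENSE.
    E, EH = make_encoding(sens, mask)
    b = EH(y)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = EH(E(p))
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

With two coils of distinct sensitivities and 2x undersampling, the aliased replicas are disentangled; the chosen sensitivities below make the toy system well conditioned so CG converges in a few iterations.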
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data-parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel Sandy Bridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single-thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
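The Roofline model used to assess the tuned kernels reduces to a single formula: attainable performance is bounded by either peak compute or by memory bandwidth times the kernel's arithmetic intensity (FLOPs per byte moved). The sketch below uses hypothetical machine and kernel numbers for illustration, not the Sandy Bridge/Haswell/Knights Corner figures measured in the paper.

```python
def roofline(peak_gflops, peak_bw_gbs, arithmetic_intensity):
    # Roofline model: performance (GFLOP/s) is capped by the lower of the
    # compute roof and the memory-bandwidth slope at this intensity.
    return min(peak_gflops, peak_bw_gbs * arithmetic_intensity)

# Hypothetical device: 500 GFLOP/s peak, 60 GB/s memory bandwidth.
peak, bw = 500.0, 60.0
flux_perf   = roofline(peak, bw, 0.5)   # low-intensity kernel: memory-bound
update_perf = roofline(peak, bw, 12.0)  # high-intensity kernel: compute-bound
```

A kernel sitting on the bandwidth slope (like `flux_perf` here) gains nothing from more SIMD lanes until its memory traffic is reduced, which is why data-layout transformations matter alongside vectorisation.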
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, S; Rotman, D; Schwegler, E
The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL.
We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data-intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.
Suni, Jaana H; Rinne, Marjo; Tokola, Kari; Mänttäri, Ari; Vasankari, Tommi
2017-01-01
Neck and low back pain (LBP) are common in office workers. Exercise trials to reduce neck pain and LBP conducted in the sport sector are lacking. We investigated the effectiveness of the standardised Fustra20Neck&Back exercise program for reducing pain and increasing fitness in office workers with recurrent non-specific neck pain and/or LBP. Volunteers were recruited through newspaper and Facebook advertisements. The design is a multi-centre randomised, two-arm, parallel-group trial across 34 fitness clubs in Finland. Eligibility was determined by structured telephone interview. Instructors were specially educated professionals. Neuromuscular exercise was individually guided twice weekly for 10 weeks. A Webropol survey and objective measurements of fitness, physical activity, and sedentary behaviour were conducted at baseline and at 3 and 12 months. Mean differences between study groups (Exercise vs Control) were analysed using a general linear mixed model according to the intention-to-treat principle. At least moderate-intensity pain (≥40 mm) in both the neck and back was detected in 44% of participants at baseline. Exercise compliance was excellent: 92% participated 15-20 times out of a possible 20. Intensity and frequency of neck pain, and strain in the neck/shoulders, decreased significantly in the Exercise group compared with the Control group. No differences in LBP and strain were detected. Neck/shoulder and trunk flexibility improved, as did quality of life in terms of pain and physical functioning. The Fustra20Neck&Back exercise program was effective for reducing neck/shoulder pain and strain, but not LBP. Evidence-based exercise programs in sports clubs have the potential to prevent persistent, disabling musculoskeletal problems.
Zhou, Q; Zuo, M H; Li, Q W; Tian, Y T; Xie, Y B; Wang, Y B; Yang, G Y; Ye, Y J; Guo, P; Liu, J P; Liu, Z L; An, C; Zhou, T; Tian, Z; Liu, C B; Hu, Y; Chi, X Y; Shen, Y; Xia, Y; Hu, K W
2017-12-23
Objective: To investigate the safety and efficacy of the Weitan Waifu patch for postsurgical gastroparesis syndrome (PGS) of gastrointestinal cancer. Methods: A multi-center, double-blind, randomized controlled trial was conducted with a superiority design. Patients with PGS of gastrointestinal cancer diagnosed in 4 AAA hospitals, whose abdominal symptoms manifested as cold syndrome by Chinese local syndrome differentiation, were recruited. These patients were randomly divided into two groups in a 1∶1 proportion. On top of the basic treatments, including nutrition support, gastrointestinal decompression and medication promoting gastric motility, the placebo or the Weitan Waifu patch was applied at two acupuncture points (Zhongwan and Shenque) in the control group or the treatment group, respectively. The intervention course was 14 days or until the effectiveness standard was reached. Results: From July 15, 2013 to June 3, 2015, 128 participants were recruited and 120 eligible cases were included in the full analysis set (FAS), with 60 cases in each group. 88 cases were included in the per-protocol set (PPS), including 45 cases in the treatment group and 43 cases in the control group. In the FAS, the clinical effective rate in the treatment group was 68.3%, significantly superior to the 41.7% of the control group (P=0.003). The median time to effective therapy in the treatment group was 8 days, significantly shorter than the 10 days in the control group (P=0.017). In the FAS, 3 adverse events occurred in the treatment group, including mild to moderate desquamation, pruritus and nausea. The incidence rate of adverse events was 5.0% (3/60) and these symptoms spontaneously remitted after drug withdrawal. No severe adverse events were observed in the control group. There was no significant difference between the two groups (P=0.244).
Conclusion: The Weitan Waifu patch is a safe and effective therapeutic method for patients with PGS (cold syndrome) of gastrointestinal cancer. Trial registration: International Standard Randomized Controlled Trial Number Register, ISRCTN18291857.
Augmenting The HST Pure Parallel Observations
NASA Astrophysics Data System (ADS)
Patterson, Alan; Soutchkova, G.; Workman, W.
2012-05-01
Pure Parallel (PP) programs, designated GO/PAR, are a subgroup of General Observer (GO) programs. PP observations execute simultaneously with the prime GO observations to which they are "attached". The PP observations can be performed with ACS/WFC, WFC3/UVIS or WFC3/IR and can be attached only to GO visits in which the instrument is either COS or STIS. The current HST Parallel Observation Processing System (POPS) was introduced after Servicing Mission 4. It increased HST productivity by 10% in terms of the utilization of HST prime orbits and was highly appreciated by HST observers, allowing them to design efficient, multi-orbit survey projects for collecting large amounts of data on identifiable targets. The results of the WFC3 Infrared Spectroscopic Parallel Survey (WISP), the Hubble Infrared Pure Parallel Imaging Extragalactic Survey (HIPPIES), and the Brightest-of-Reionizing Galaxies Pure Parallel Survey (BoRG) exemplify this benefit. In Cycle 19, however, the full advantage of GO/PARs was put at risk. Whereas each of the previous cycles provided over one million seconds of exposure time for PP, in Cycle 19 that number fell to 680,000 seconds. This dramatic decline occurred because of fundamental changes in the construction of COS prime observations. To preserve the science output of PP, the PP Working Group was tasked with finding a way to recover the lost time and maximize the total time available for PP observing. The solution was to expand the definition of a PP opportunity to allow PP exposures to span one or more primary exposure readouts. Starting in HST Cycle 20, PP opportunities will no longer be limited to GO visits with a single uninterrupted exposure in an orbit. The resulting enhancements in HST Cycle 20 to the PP opportunity identification and matching process are expected to restore the PP time to previously achieved, and possibly even greater, levels.
A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Lim, Chieng-Fai
1991-01-01
The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools, such as the SYLON synthesis system (X90), (CM89), (LM90), have been developed based on this method. A parallel implementation of SYLON-XTRANS (XM89) on an eight-processor Encore Multimax shared-memory multiprocessor is presented. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.
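One of the transformations named above, gate input reduction, can be sketched on a toy NOR network: drop a gate input whenever doing so leaves the network's output function unchanged on every input vector. This sketch verifies redundancy by exhaustive simulation, which is feasible only for tiny networks; the actual Transduction Method computes sets of permissible functions instead, and the gate model and function names here are illustrative assumptions.

```python
from itertools import product

def simulate(gates, order, assignment):
    # Evaluate a combinational network of NOR gates. 'gates' maps each gate
    # name to its fan-in list (primary inputs or other gate names); 'order'
    # is a topological order of the gate names; a gate with empty fan-in
    # evaluates to constant 1 (NOR of nothing).
    vals = dict(assignment)
    for g in order:
        vals[g] = int(not any(vals[s] for s in gates[g]))
    return vals

def gate_input_reduction(gates, order, inputs, output):
    # Tentatively remove each connection; keep the removal only if the
    # network still realizes the original output function.
    spec = [simulate(gates, order, dict(zip(inputs, v)))[output]
            for v in product([0, 1], repeat=len(inputs))]
    for g in list(gates):
        for s in list(gates[g]):
            trial = {k: v[:] for k, v in gates.items()}
            trial[g].remove(s)
            got = [simulate(trial, order, dict(zip(inputs, v)))[output]
                   for v in product([0, 1], repeat=len(inputs))]
            if got == spec:
                gates = trial  # the connection was redundant
    return gates
```

In PTRANS such checks run concurrently over partitions of the circuit; here, a gate computing a AND b is provably redundant as an input to NOR(a, b, c), and the reduction finds and removes that connection while preserving the output function.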