Fracture simulation of restored teeth using a continuum damage mechanics failure model.
Li, Haiyan; Li, Jianying; Zou, Zhenmin; Fok, Alex Siu-Lun
2011-07-01
The aim of this paper is to validate the use of a finite-element (FE) based continuum damage mechanics (CDM) failure model to simulate the debonding and fracture of restored teeth. Fracture testing of plastic model teeth, with or without a standard Class-II MOD (mesial-occlusal-distal) restoration, was carried out to investigate their fracture behavior. In parallel, 2D FE models of the teeth were constructed and analyzed using the commercial FE software ABAQUS. A CDM failure model, implemented into ABAQUS via the user element subroutine (UEL), was used to simulate the debonding and/or final fracture of the model teeth under a compressive load. The material parameters needed for the CDM model to simulate fracture were obtained through separate mechanical tests. The predicted results were then compared with the experimental data of the fracture tests to validate the failure model. The failure processes of the intact and restored model teeth were successfully reproduced by the simulation. However, the fracture parameters obtained from testing small specimens needed to be adjusted to account for the size effect. The results indicate that the CDM model is a viable model for the prediction of debonding and fracture in dental restorations. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
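As an illustration of the kind of constitutive behavior a CDM failure model captures, here is a minimal Python sketch of a scalar (isotropic) damage law with exponential softening driven by the largest equivalent strain reached so far; the specific law and all parameter values are assumptions for illustration, not the model or the calibrated parameters used in the paper.

    import numpy as np

    # Illustrative material parameters (not values identified in the paper)
    E = 2.5e3      # undamaged Young's modulus, MPa
    eps0 = 0.004   # damage-initiation strain
    eps_f = 0.02   # softening strain scale

    def damage(kappa):
        """Scalar damage d in [0, 1): zero below eps0, exponential softening above."""
        if kappa <= eps0:
            return 0.0
        return 1.0 - (eps0 / kappa) * np.exp(-(kappa - eps0) / eps_f)

    def stress_history(strains):
        """Uniaxial stress for a strain history; damage is driven by the maximum
        strain reached so far, so unloading/reloading stays on the damaged branch."""
        kappa, out = 0.0, []
        for eps in strains:
            kappa = max(kappa, abs(eps))
            out.append((1.0 - damage(kappa)) * E * eps)
        return out

    strains = np.linspace(0.0, 0.03, 7)
    for e, s in zip(strains, stress_history(strains)):
        print(f"strain = {e:.3f}   stress = {s:7.2f} MPa")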
Stem revenue losses with effective CDM management.
Alwell, Michael
2003-09-01
Effective CDM management not only minimizes revenue losses due to denied claims, but also helps eliminate administrative costs associated with correcting coding errors. Accountability for CDM management should be assigned to a single individual, who ideally reports to the CFO or high-level finance director. If your organization is prone to making billing errors due to CDM deficiencies, you should consider purchasing CDM software to help you manage your CDM.
Installé, Arnaud Jf; Van den Bosch, Thierry; De Moor, Bart; Timmerman, Dirk
2014-10-20
Using machine-learning techniques, clinical diagnostic model research extracts diagnostic models from patient data. Traditionally, patient data are often collected using electronic Case Report Form (eCRF) systems, while mathematical software is used for analyzing these data using machine-learning techniques. Due to the lack of integration between eCRF systems and mathematical software, extracting diagnostic models is a complex, error-prone process. Moreover, due to the complexity of this process, it is usually only performed once, after a predetermined number of data points have been collected, without insight into the predictive performance of the resulting models. The objective of the Clinical Data Miner (CDM) software framework is to offer an eCRF system with integrated data preprocessing and machine-learning libraries, improving the efficiency of the clinical diagnostic model research workflow, and to enable optimization of patient inclusion numbers through study performance monitoring. The CDM software framework was developed using a test-driven development (TDD) approach, to ensure high software quality. Architecturally, CDM's design is split over a number of modules, to ensure future extensibility. The TDD approach has enabled us to deliver high software quality. CDM's eCRF Web interface is in active use by the studies of the International Endometrial Tumor Analysis consortium, with over 4000 enrolled patients, and more studies planned. Additionally, a derived user interface has been used in six separate interrater agreement studies. CDM's integrated data preprocessing and machine-learning libraries simplify some otherwise manual and error-prone steps in the clinical diagnostic model research workflow. Furthermore, CDM's libraries provide study coordinators with a method to monitor a study's predictive performance as patient inclusions increase. To our knowledge, CDM is the only eCRF system integrating data preprocessing and machine-learning libraries. This integration improves the efficiency of the clinical diagnostic model research workflow. Moreover, by simplifying the generation of learning curves, CDM enables study coordinators to assess more accurately when data collection can be terminated, resulting in better models or lower patient recruitment costs.
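The study-monitoring idea described above can be sketched with a learning curve that is recomputed as inclusions grow; the classifier, feature matrix, and metric below are placeholders (scikit-learn on simulated data), not CDM's own libraries.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    rng = np.random.default_rng(42)
    X = rng.normal(size=(400, 10))                    # stand-in for eCRF variables
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=400) > 0).astype(int)

    sizes, train_scores, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="roc_auc")

    # if the validation AUC has plateaued, further inclusions add little
    for n, score in zip(sizes, val_scores.mean(axis=1)):
        print(f"{n:4d} patients  cross-validated AUC = {score:.3f}")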
GACD: Integrated Software for Genetic Analysis in Clonal F1 and Double Cross Populations.
Zhang, Luyan; Meng, Lei; Wu, Wencheng; Wang, Jiankang
2015-01-01
Clonal species are common among plants. Clonal F1 progenies are derived from the hybridization between 2 heterozygous clones. In self- and cross-pollinated species, double crosses can be made from 4 inbred lines. A clonal F1 population can be viewed as a double cross population when the linkage phase is determined. The software package GACD (Genetic Analysis of Clonal F1 and Double cross) is freely available public software, capable of building high-density linkage maps and mapping quantitative trait loci (QTL) in clonal F1 and double cross populations. Three functionalities are integrated in GACD version 1.0: binning of redundant markers (BIN); linkage map construction (CDM); and QTL mapping (CDQ). Output of BIN can be directly used as input of CDM. After adding the phenotypic data, the output of CDM can be used as input of CDQ. Thus, GACD acts as a pipeline for genetic analysis. GACD and example datasets are freely available from www.isbreeding.net. © The American Genetic Association. 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Computer Assisted Chronic Disease Management: Does It Work? A Pilot Study Using Mixed Methods
Jones, Kay M.; Biezen, Ruby; Piterman, Leon
2013-01-01
Background. Key factors for effective chronic disease management (CDM) include the availability of practical and effective computer tools and continuing professional development/education. This study tested the effectiveness of a computer assisted chronic disease management tool, a broadband-based service known as cdmNet, in increasing the development of care plans for patients with chronic disease in general practice. Methodology. The mixed methods comprised the breakthrough series methodology (workshops and plan-do-study-act cycles) and semistructured interviews. Results. Throughout the intervention period a pattern emerged suggesting GPs' use of cdmNet initially increased and then plateaued, while practice nurses' and practice managers' roles expanded as they became more involved in using cdmNet. Seven main messages emerged from the GP interviews. Discussion. The overall use of cdmNet by participating GPs varied from “no change” to “significant change and developing many of the GPMPs (general practice management plans) using cdmNet.” The variation may be due to several factors, not least allowing GPs adequate time to familiarise themselves with the software and recognising the benefit of the team approach. Conclusion. The breakthrough series methodology facilitated upskilling GPs in the management of patients diagnosed with a chronic disease and in learning how to use the broadband-based service cdmNet. PMID:24959576
DICE/ColDICE: 6D collisionless phase space hydrodynamics using a Lagrangian tessellation
NASA Astrophysics Data System (ADS)
Sousbie, Thierry
2018-01-01
DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space using massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.
MOSAIC--A Modular Approach to Data Management in Epidemiological Studies.
Bialke, M; Bahls, T; Havemann, C; Piegsa, J; Weitmann, K; Wegner, T; Hoffmann, W
2015-01-01
In the context of an increasing number of multi-centric studies providing data from different sites and sources, the necessity for central data management (CDM) becomes undeniable. This is exacerbated by the multiplicity of data types, formats and interfaces involved. In relation to methodological medical research, the definition of central data management needs to be broadened beyond the simple storage and archiving of research data. This paper highlights typical requirements of CDM for cohort studies and registries and illustrates how orientation for CDM can be provided by addressing selected data management challenges. Therefore, in the first part of this paper a short review summarises technical, organisational and legal challenges for CDM in cohort studies and registries. A deduced set of typical requirements of CDM in epidemiological research follows. In the second part the MOSAIC project (a modular systematic approach to implement CDM) is introduced. The modular nature of MOSAIC helps to manage both technical and organisational challenges efficiently by providing practical tools. A short presentation of a first set of tools, aimed at selected CDM requirements in cohort studies and registries, comprises a template for comprehensive documentation of data protection measures and an interactive reference portal for gaining insights and sharing experiences, supplemented by modular software tools for the generation and management of generic pseudonyms, for participant management and for sophisticated consent management. Altogether, work within MOSAIC addresses existing challenges in epidemiological research in the context of CDM and facilitates the standardized collection of data with pre-programmed modules and provided document templates. The necessary effort for in-house programming is reduced, which accelerates the start of data collection.
MultiNest: Efficient and Robust Bayesian Inference
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.; Bridges, M.
2011-09-01
We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson (2008), which itself significantly outperformed existing MCMC techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla ΛCDM model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software is fully parallelized using MPI and includes an interface to CosmoMC. It will also be released as part of the SuperBayeS package, for the analysis of supersymmetric theories of particle physics, at this http URL.
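To make the nested-sampling idea concrete, below is a toy loop in Python (plain rejection sampling from a uniform prior, a single Gaussian mode) that estimates the evidence Z for a 2D problem; it illustrates the principle only and is in no way the MultiNest algorithm, which uses multimodal ellipsoidal decomposition of the live points.

    import numpy as np

    rng = np.random.default_rng(1)
    ndim, nlive, niter = 2, 400, 2000

    def loglike(theta):
        # unit-normalized 2D Gaussian likelihood centred at the origin
        return -0.5 * np.sum(theta**2) - ndim * 0.5 * np.log(2 * np.pi)

    live = rng.uniform(-5, 5, size=(nlive, ndim))     # uniform prior on [-5, 5]^2
    live_logl = np.array([loglike(t) for t in live])
    logZ = -np.inf
    log_width = np.log(1.0 - np.exp(-1.0 / nlive))    # ln(X_0 - X_1)

    for _ in range(niter):
        worst = np.argmin(live_logl)
        logZ = np.logaddexp(logZ, log_width + live_logl[worst])
        # replace the worst live point with a prior draw at higher likelihood
        while True:
            cand = rng.uniform(-5, 5, size=ndim)
            if loglike(cand) > live_logl[worst]:
                break
        live[worst], live_logl[worst] = cand, loglike(cand)
        log_width -= 1.0 / nlive                      # prior volume shrinks each step

    # add the contribution of the remaining live points
    logZ = np.logaddexp(logZ, -niter / nlive - np.log(nlive)
                        + np.logaddexp.reduce(live_logl))
    print("ln Z estimate:", logZ, " (analytic: ln(1/100) = -4.61)")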
Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao
Previous studies of ocular chromatic aberrations have concentrated on chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. The insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by an interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF and the entrance pupil of the eye. In the retinal image obtained after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine the ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.
Reporting Subscores Using R: A Software Review
ERIC Educational Resources Information Center
Dai, Shenghai; Svetina, Dubravka; Wang, Xiaolin
2017-01-01
There is an increasing interest in reporting test subscores for diagnostic purposes. In this article, we review nine popular R packages (subscore, mirt, TAM, sirt, CDM, NPCD, lavaan, sem, and OpenMx) that are capable of implementing subscore-reporting methods within one or more frameworks including classical test theory, multidimensional item…
Cognitive Diagnostic Modeling Using R
ERIC Educational Resources Information Center
Ravand, Hamdollah
2015-01-01
Cognitive diagnostic models (CDMs) have been around for more than a decade, but their application is far from widespread, mainly for two reasons: (1) CDMs are novel compared to traditional IRT models, so many researchers lack familiarity with them and their properties, and (2) software programs for fitting CDMs have been expensive and not…
Faron, Matthew L.; Coon, Christopher; Liebregts, Theo; van Bree, Anita; Jansz, Arjan R.; Soucy, Genevieve; Korver, John
2016-01-01
Vancomycin-resistant enterococci (VRE) are an important cause of health care-acquired infections (HAIs). Studies have shown that active surveillance of high-risk patients for VRE colonization can aid in reducing HAIs; however, these screens generate a significant cost to the laboratory and health care system. Digital imaging capable of differentiating negative and “nonnegative” chromogenic agar can reduce the labor cost of these screens and potentially improve patient care. In this study, we evaluated the performance of the WASPLab Chromogenic Detection Module (CDM) (Copan, Brescia, Italy) software to analyze VRE chromogenic agar and compared the results to technologist plate reading. Specimens collected at 3 laboratories were cultured using the WASPLab CDM and plated to each site's standard-of-care chromogenic media, which included Colorex VRE (BioMed Diagnostics, White City, OR) or Oxoid VRE (Oxoid, Basingstoke, United Kingdom). Digital images were scored using the CDM software after 24 or 40 h of growth, and all manual reading was performed using digital images on a high-definition (HD) monitor. In total, 104,730 specimens were enrolled and automation agreed with manual analysis for 90.1% of all specimens tested, with sensitivity and specificity of 100% and 89.5%, respectively. Automation results were discordant for 10,348 specimens, and all discordant images were reviewed by a laboratory supervisor or director. After a second review, 499 specimens were identified as representing missed positive cultures falsely called negative by the technologist, 1,616 were identified as containing borderline color results (negative result but with no package insert color visible), and 8,234 specimens were identified as containing colorimetric pigmentation due to residual matrix from the specimen or yeast (Candida). Overall, the CDM was accurate at identifying negative VRE plates, which comprised 84% (87,973) of the specimens in this study. PMID:27413193
Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos
NASA Technical Reports Server (NTRS)
Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.
1994-01-01
Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe, with the spectrum normalized to the anisotropies detected by the Cosmic Background Explorer (COBE), are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion σ_v, used as a principal argument against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting σ_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.
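A minimal sketch of how a relative pairwise velocity dispersion can be estimated from a halo catalogue is given below (hypothetical positions and velocities, no periodic boundaries or redshift-space effects); it illustrates the statistic, not the authors' pipeline.

    import numpy as np

    def pairwise_velocity_dispersion(pos, vel, r_min, r_max):
        """1D dispersion of relative velocities projected on the pair separation,
        for pairs with separation in [r_min, r_max)."""
        v_pair = []
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                dr = pos[j] - pos[i]
                r = np.linalg.norm(dr)
                if r_min <= r < r_max:
                    v_pair.append(np.dot(vel[j] - vel[i], dr / r))
        v_pair = np.asarray(v_pair)
        return np.sqrt(np.mean((v_pair - v_pair.mean()) ** 2))

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 100.0, size=(500, 3))   # Mpc/h, toy catalogue
    vel = rng.normal(0.0, 300.0, size=(500, 3))    # km/s
    print(pairwise_velocity_dispersion(pos, vel, 0.5, 1.5), "km/s")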
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2011-01-01
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
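For readers unfamiliar with the DINA model, the sketch below evaluates its item response function in Python; the Q-matrix, mastery profiles, and guess/slip values are invented for illustration, and the code is not the estimation approach discussed in the article.

    import numpy as np

    def dina_prob(alpha, q, guess, slip):
        """P(correct) for each examinee x item under DINA.
        alpha: (N, K) 0/1 mastery profiles; q: (J, K) Q-matrix;
        guess, slip: length-J guessing and slipping parameters."""
        # eta_ij = 1 iff examinee i masters every attribute required by item j
        eta = np.all(alpha[:, None, :] >= q[None, :, :], axis=2)
        return np.where(eta, 1.0 - slip, guess)

    alpha = np.array([[1, 1, 0], [1, 0, 1], [0, 0, 0]])   # 3 examinees, 3 attributes
    q     = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 1]])   # 3 items
    guess = np.array([0.20, 0.10, 0.25])
    slip  = np.array([0.10, 0.15, 0.10])
    print(dina_prob(alpha, q, guess, slip))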
2015-08-01
Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software, by N Scott Weingarten (Weapons and Materials Research Directorate, ARL) and James P Larentzos (Engility). Approved for…
Parallel Logic Programming and Parallel Systems Software and Hardware
1989-07-29
…Conference, Dallas, TX, January 1985. [Rous75] Roussel, P., "PROLOG: Manuel de Reference et d'Utilisation", Groupe d'Intelligence Artificielle, Universite d… Tools were provided for software development using artificial intelligence techniques. AI software for massively parallel architectures was started. 1. Introduction. We describe research conducted…
Clinical Predictive Modeling Development and Deployment through FHIR Web Services.
Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng
2015-01-01
Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
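The deployment side of such an architecture can be sketched as a small scoring web service; the endpoint name, payload fields, and hard-coded logistic model below are assumptions for illustration and do not reproduce the paper's FHIR resources or OMOP-CDM feature extraction.

    import numpy as np
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # stand-in for a model trained offline on an OMOP-CDM-derived feature matrix
    coef = np.array([0.8, 0.05, -0.4])        # [has_sepsis, age/100, on_statin]
    intercept = -1.2

    @app.route("/score", methods=["POST"])
    def score():
        p = request.get_json(force=True)       # e.g. {"has_sepsis": 1, "age": 67, "on_statin": 0}
        x = np.array([p["has_sepsis"], p["age"] / 100.0, p["on_statin"]])
        prob = 1.0 / (1.0 + np.exp(-(coef @ x + intercept)))
        return jsonify({"risk_score": float(prob)})

    if __name__ == "__main__":
        app.run(port=5000)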
Nurse-led management of chronic disease in a residential care setting.
Neylon, Julie
2015-11-01
Introduction of the advanced nurse practitioner (ANP) role has enabled nurses to develop their clinical knowledge and skills, expanding service provision and improving access to healthcare services. It can also help with the challenges of providing care to an ageing population in primary care. This article reports on the evaluation of an ANP-led clinic in two residential care homes that provides annual reviews for chronic disease management (CDM). A mixed-methods approach was used to evaluate the service, using clinical data obtained from the electronic patient record system and software, together with patient satisfaction questionnaires. The number of patients receiving CDM reviews in the homes increased as a result of the clinic. Completed satisfaction questionnaires further demonstrated patients' satisfaction and their willingness to engage with the service. The service highlights the ANP's effectiveness in managing residential care home patients with chronic diseases and improving their access to healthcare services.
Structure formation in f(T) gravity and a solution for H0 tension
NASA Astrophysics Data System (ADS)
Nunes, Rafael C.
2018-05-01
We investigate the evolution of scalar perturbations in f(T) teleparallel gravity and its effects on the cosmic microwave background (CMB) anisotropy. f(T) gravity generalizes teleparallel gravity, which is formulated on the Weitzenböck spacetime, characterized by a vanishing curvature tensor (absolute parallelism) and a non-vanishing torsion tensor. For the first time, we derive observational constraints on the modified teleparallel gravity using the CMB temperature power spectrum from Planck's estimation, in addition to data from baryonic acoustic oscillations (BAO) and local Hubble constant measurements. We find that a small deviation of the f(T) gravity model from the ΛCDM cosmology is slightly favored. Moreover, the f(T) gravity model does not exhibit the Hubble constant tension that prevails in the ΛCDM cosmology. It is clear that f(T) gravity is also consistent with the CMB observations, and undoubtedly it can serve as a viable candidate amongst other modified gravity theories.
Tuti, Timothy; Bitok, Michael; Paton, Chris; Makone, Boniface; Malla, Lucas; Muinga, Naomi; Gathara, David; English, Mike
2016-01-01
Objective: To share approaches and innovations adopted to deliver a relatively inexpensive clinical data management (CDM) framework within a low-income setting that aims to deliver quality pediatric data useful for supporting research, strengthening the information culture and informing improvement efforts in local clinical practice. Materials and methods: The authors implemented a CDM framework to support a Clinical Information Network (CIN) using Research Electronic Data Capture (REDCap), a noncommercial software solution designed for rapid development and deployment of electronic data capture tools. It was used for collection of standardized data from case records of multiple hospitals' pediatric wards. R, an open-source statistical language, was used for data quality enhancement, analysis, and report generation for the hospitals. Results: In the first year of CIN, the authors have developed innovative solutions to support the implementation of a secure, rapid pediatric data collection system spanning 14 hospital sites with stringent data quality checks. Data have been collated on over 37 000 admission episodes, with considerable improvement in clinical documentation of admissions observed. Using meta-programming techniques in R, coupled with branching logic, randomization, data lookup, and Application Programming Interface (API) features offered by REDCap, CDM tasks were configured and automated to ensure quality data was delivered for clinical improvement and research use. Conclusion: A low-cost clinically focused but geographically dispersed quality CDM (Clinical Data Management) in a long-term, multi-site, and real world context can be achieved and sustained and challenges can be overcome through thoughtful design and implementation of open-source tools for handling data and supporting research. PMID:26063746
Tuti, Timothy; Bitok, Michael; Paton, Chris; Makone, Boniface; Malla, Lucas; Muinga, Naomi; Gathara, David; English, Mike
2016-01-01
To share approaches and innovations adopted to deliver a relatively inexpensive clinical data management (CDM) framework within a low-income setting that aims to deliver quality pediatric data useful for supporting research, strengthening the information culture and informing improvement efforts in local clinical practice. The authors implemented a CDM framework to support a Clinical Information Network (CIN) using Research Electronic Data Capture (REDCap), a noncommercial software solution designed for rapid development and deployment of electronic data capture tools. It was used for collection of standardized data from case records of multiple hospitals' pediatric wards. R, an open-source statistical language, was used for data quality enhancement, analysis, and report generation for the hospitals. In the first year of CIN, the authors have developed innovative solutions to support the implementation of a secure, rapid pediatric data collection system spanning 14 hospital sites with stringent data quality checks. Data have been collated on over 37 000 admission episodes, with considerable improvement in clinical documentation of admissions observed. Using meta-programming techniques in R, coupled with branching logic, randomization, data lookup, and Application Programming Interface (API) features offered by REDCap, CDM tasks were configured and automated to ensure quality data was delivered for clinical improvement and research use. A low-cost clinically focused but geographically dispersed quality CDM (Clinical Data Management) in a long-term, multi-site, and real world context can be achieved and sustained and challenges can be overcome through thoughtful design and implementation of open-source tools for handling data and supporting research. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
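A data pull of the kind described above can be sketched against the REDCap export API; the URL, token, and field name are placeholders, and any real project would add error handling and de-identification.

    import requests

    REDCAP_URL = "https://redcap.example.org/api/"   # placeholder
    API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"         # placeholder

    payload = {
        "token": API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
    }
    records = requests.post(REDCAP_URL, data=payload, timeout=60).json()

    # simple completeness check on a hypothetical field across admission episodes
    missing = [r for r in records if not r.get("admission_weight")]
    print(f"{len(missing)} of {len(records)} records lack an admission weight")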
High-energy physics software parallelization using database techniques
NASA Astrophysics Data System (ADS)
Argante, E.; van der Stok, P. D. V.; Willers, I.
1997-02-01
A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with that of native PVM and MPI.
Planes of satellite galaxies and the cosmic web
NASA Astrophysics Data System (ADS)
Libeskind, Noam I.; Hoffman, Yehuda; Tully, R. Brent; Courtois, Helene M.; Pomarède, Daniel; Gottlöber, Stefan; Steinmetz, Matthias
2015-09-01
Recent observational studies have demonstrated that the majority of satellite galaxies tend to orbit their hosts on highly flattened, vast, possibly corotating planes. Two nearly parallel planes of satellites have been confirmed around the M31 galaxy and around the Centaurus A galaxy, while the Milky Way also sports a plane of satellites. It has been argued that such an alignment of satellites on vast planes is unexpected in the standard Λ cold dark matter (ΛCDM) model of cosmology if not even in contradiction to its generic predictions. Guided by ΛCDM numerical simulations, which suggest that satellites are channelled towards hosts along the axis of the slowest collapse as dictated by the ambient velocity shear tensor, we re-examine the planes of local satellites systems within the framework of the local shear tensor derived from the Cosmicflows-2 data set. The analysis reveals that the Local Group and Centaurus A reside in a filament stretched by the Virgo cluster and compressed by the expansion of the Local Void. Four out of five thin planes of satellite galaxies are indeed closely aligned with the axis of compression induced by the Local Void. Being the less massive system, the moderate misalignment of the Milky Way's satellite plane can likely be ascribed to its greater susceptibility to tidal torques, as suggested by numerical simulations. The alignment of satellite systems in the local Universe with the ambient shear field is thus in general agreement with predictions of the ΛCDM model.
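For reference, the velocity shear tensor invoked above is commonly written as follows (conventions for sign and normalization differ between papers, so this is a generic form rather than the authors' exact definition):

    \Sigma_{\alpha\beta} = -\frac{1}{2 H_0}
        \left( \frac{\partial v_\alpha}{\partial r_\beta}
             + \frac{\partial v_\beta}{\partial r_\alpha} \right),
    \qquad \alpha, \beta \in \{x, y, z\}

Its eigenvalues are ordered \lambda_1 \ge \lambda_2 \ge \lambda_3 with eigenvectors e_1, e_2, e_3; in this convention e_1 marks the axis of fastest collapse (compression) and e_3 the axis of slowest collapse, the direction along which satellite accretion is argued to be channelled.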
Expert System Development Methodology (ESDM)
NASA Technical Reports Server (NTRS)
Sary, Charisse; Gilstrap, Lewey; Hull, Larry G.
1990-01-01
The Expert System Development Methodology (ESDM) provides an approach to developing expert system software. Because of the uncertainty associated with this process, an element of risk is involved. ESDM is designed to address the issue of risk and to acquire the information needed for this purpose in an evolutionary manner. ESDM presents a life cycle in which a prototype evolves through five stages of development. Each stage consists of five steps, leading to a prototype for that stage. Development may proceed to a conventional development methodology (CDM) at any time if enough has been learned about the problem to write requirements. ESDM produces requirements so that a product may be built with a CDM. ESDM is considered preliminary because it has not yet been applied to actual projects. It has been retrospectively evaluated by comparing the methods used in two ongoing expert system development projects that did not explicitly choose to use this methodology but which provided useful insights into actual expert system development practices and problems.
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from the application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
A design methodology for portable software on parallel computers
NASA Technical Reports Server (NTRS)
Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.
1993-01-01
This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.
Carbon credit of renewable energy projects in Malaysia
NASA Astrophysics Data System (ADS)
Lim, X.; Lam, W. H.; Shamsuddin, A. H.
2013-06-01
The introduction of the Clean Development Mechanism (CDM) to Malaysia improves the environment of the country. Besides supporting sustainable development, the carbon credits earned through the CDM enhance the financial state of the nation. Both the CDM and renewable energy contribute to society by striving to reduce carbon emissions. Most CDM projects are related to renewable energy, which accounts for 69% of all CDM projects. This paper presents an overview of energy and the status of renewable energies in the country, and then relates renewable energy to the CDM.
A real-time MPEG software decoder using a portable message-passing library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan
1995-12-31
We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
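The frame-parallel flavor of such a decoder can be sketched with mpi4py; the master/worker split and the stub decode_frame below are illustrative assumptions, not the algorithm or libraries (MPL, p4) used in the paper.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def decode_frame(encoded):
        # placeholder for the actual MPEG frame/slice decoding work
        return f"decoded({encoded})"

    if rank == 0:
        frames = [f"frame_{i}" for i in range(32)]       # stand-in bitstream units
        chunks = [frames[i::size] for i in range(size)]  # round-robin distribution
    else:
        chunks = None

    my_frames = comm.scatter(chunks, root=0)             # one chunk per rank
    decoded = [decode_frame(f) for f in my_frames]
    all_decoded = comm.gather(decoded, root=0)

    if rank == 0:
        print(sum(len(c) for c in all_decoded), "frames decoded in parallel")

Run with, e.g., mpiexec -n 4 python decoder_sketch.py.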
2006-12-01
Abbreviations (fragment): CDM, Camp Dresser & McKee Inc.; CSU, Colorado State University; DCA, dichloroethane; DO, dissolved oxygen; DoD, Department of Defense; EA, EA… Investigators (fragment): … Ph.D. (PI), Camp Dresser & McKee Inc. (CDM); John Eisenbeis, Ph.D., CDM; Kristy Warren, CDM; Dan Adams, CDM; Michael Allen, Bangor Naval Submarine Base… Methods (fragment): …polyvinyl alcohol (PVA) using cyanuric chloride, and the resulting product was cross-linked with glutaraldehyde in the presence of HCl to form a hydrogel that was…
Rieck, Allison; Pettigrew, Simone
2013-01-01
Community pharmacists (CPs) have been changing their role to focus on patient-centred services to improve the quality of chronic disease management (CDM) in primary care. However, CPs have not been readily included in collaborative CDM with other primary care professionals such as physicians. There is little understanding of the CP role change and whether it affects the utilisation of CPs in primary care collaborative CDM. To explore physician and CP perceptions of the CP's role in Australian primary care and how these perceptions may influence the quality of physician/CP CDM programmes. Data were collected from physicians and CPs using semi-structured interviews. A qualitative methodology utilising thematic analysis was employed during data analysis. Qualitative methodology trustworthiness techniques, negative case analysis and member checking were utilised to substantiate the resultant themes. A total of 22 physicians and 22 CPs were interviewed. Strong themes emerged regarding the participant perceptions of the CP's CDM role in primary care. The majority of interviewed physicians perceived that CPs did not have the appropriate CDM knowledge to complement physician knowledge to provide improved CDM compared with what they could provide on their own. Most of the interviewed CPs expressed a willingness and capability to undertake CDM; however, they were struggling to provide sustainable CDM in the business setting within which they function in the primary care environment. Role theory was selected as it provided the optimum explanation of the resultant themes. First, physician lack of confidence in the appropriateness of CP CDM knowledge causes physicians to be confused about the role CPs would undertake in a collaborative CDM that would benefit the physicians and their patients. Thus, by increasing physician awareness of CP CDM knowledge, physicians may see CPs as suitable CDM collaborators. Second, CPs are experiencing role conflict and stress in trying to change their role. Strengthening the service business model may reduce these CP role issues and allow CPs to reach their full potential in CDM and improve the quality of collaborative CDM in Australian primary care.
Characteristics of Hospitalized Children With a Diagnosis of Malnutrition: United States, 2010.
Abdelhadi, Ruba A; Bouma, Sandra; Bairdain, Sigrid; Wolff, Jodi; Legro, Amanda; Plogsted, Steve; Guenter, Peggi; Resnick, Helaine; Slaughter-Acey, Jaime C; Corkins, Mark R
2016-07-01
Malnutrition is common in hospitalized patients in the United States. In 2010, 80,710 of 6,280,710 hospitalized children <17 years old had a coded diagnosis of malnutrition (CDM). This report summarizes nationally representative, person-level characteristics of hospitalized children with a CDM. Data are from the 2010 Healthcare Cost and Utilization Project, which contains patient-level data on hospital inpatient stays. When weighted appropriately, estimates from the project represent all U.S. hospitalizations. The data set contains up to 25 ICD-9-CM diagnostic codes for each patient. Children with a CDM listed during hospitalization were identified. In 2010, 1.3% of hospitalized patients <17 years had a CDM. Since the data include only those with a CDM, malnutrition's true prevalence may be underrepresented. Length of stay among children with a CDM was almost 2.5 times longer than those without a CDM. Hospital costs for children with a CDM were >3 times higher than those without a CDM. Hospitalized children with a CDM were less likely to have routine discharge and almost 3.5 times more likely to require postdischarge home care. Children with a CDM were more likely to have multiple comorbidities. Hospitalized children with a CDM are associated with more comorbidities, longer hospital stay, and higher healthcare costs than those without this diagnosis. These undernourished children may utilize more healthcare resources in the hospital and community. Clinicians and policymakers should factor this into healthcare resource utilization planning. Recognizing and accurately coding malnutrition in hospitalized children may reveal the true prevalence of malnutrition. © 2016 American Society for Parenteral and Enteral Nutrition.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
Edelstein, Burton L; Ng, Man Wai
2015-01-01
An Institute of Medicine report places chronic disease management (CDM) as an intervention on a treatment spectrum between prevention and acute care. CDM commonly focuses on conditions in which patient self-care efforts are significant. Framing early childhood caries (ECC) as such a chronic condition invites dentistry to reconsider its approach to caries management and shift gears from a strictly surgical approach to one that also incorporates a medical approach. This paper's purpose was to explore the definition of and concepts inherent in CDM. An explanatory model is introduced to describe the multiple factors that influence ECC-CDM strategies. Reviewed literature suggests that early evidence from ECC-CDM interventions, along with results of pediatric asthma and diabetes CDM, supports CDM of ECC as a valid approach that is independent of both prevention and repair. Early results of ECC-CDM endeavors have demonstrated a reduction in rates of new cavitation, dental pain, and referral to the operating room compared to baseline rates. ECC-CDM strategies hold strong promise to curtail caries activity while complementing dental repair when needed, thereby reducing disease progression and cavity recurrence. Institutionalizing ECC-CDM will both require and benefit from evolving health care delivery and financing systems that reward positive health outcomes.
Who's minding the charge description master?
Schaum, Kathleen D
2011-11-01
Just as it takes a team to manage chronic wounds, it takes a team to maintain the CDM. The technical staff from the wound care department should be represented on this team and should share the appropriate HCPCS codes and CPT codes, product descriptions, and costs for all procedures, services, supplies, drugs, and biologics used in their department. The billing department should ensure that the appropriate revenue codes for each payer are listed for each item on the CDM. Based on costs supplied by the wound care department, the finance department should consistently assign hospital charges to each line item on the CDM. The information technology department is responsible for making the specific changes to the CDM in the computer system. Most hospitals have a CDM coordinator. The technical staff from the wound care department should work closely with the CDM coordinator and should obtain from him/her the policies and procedures for maintaining the wound care department CDM. Most CDM coordinators will also provide a CDM Change Request Form. Use that form each year when the hospital is performing its annual CDM maintenance and throughout the year to add procedures, services, supplies, drugs, or biologics to your wound care offerings and/or when the cost for these offerings change.
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
Efficient data management in a large-scale epidemiology research project.
Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang
2012-09-01
This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data as well as "electronic Case Report Forms" (eCRFs) was developed, in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms are improving the quality of data. Data privacy is ensured by a multi-layered role/right system for access control and de-identification of identifying data. A well defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data amounting to approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Puschner, Bernd; Steffen, Sabine; Slade, Mike; Kaliniecka, Helena; Maj, Mario; Fiorillo, Andrea; Munk-Jørgensen, Povl; Larsen, Jens Ivar; Egerházi, Anikó; Nemes, Zoltan; Rössler, Wulf; Kawohl, Wolfram; Becker, Thomas
2010-11-10
A considerable amount of research has been conducted on clinical decision making (CDM) in short-term physical conditions. However, there is a lack of knowledge on CDM and its outcome in long-term illnesses, especially in care for people with severe mental illness. The study entitled "Clinical decision making and outcome in routine care for people with severe mental illness" (CEDAR) is being carried out in six European countries (Denmark, Germany, Hungary, Italy, Switzerland and UK). First, CEDAR establishes a methodology to assess CDM in people with severe mental illness. Specific instruments are developed (and psychometric properties established) to measure CDM style, key elements of CDM in routine care, as well as CDM involvement and satisfaction from patient and therapist perspectives. Second, these instruments are being put to use in a multi-national prospective observational study (bimonthly assessments during a one-year observation period; N = 560). This study investigates the immediate, short- and long-term effects of CDM on crucial dimensions of clinical outcome (symptom level, quality of life, needs) by taking into account significant variables moderating the relationship between CDM and outcome. The results of this study will make it possible to delineate quality indicators of CDM, as well as to specify prime areas for further improvement. Ingredients of best practice in CDM in the routine care for people with severe mental illness will be extracted and recommendations formulated. With its explicit focus on the patient role in CDM, CEDAR will also contribute to strengthening the service user perspective. This project will substantially add to improving the practice of CDM in mental health care across Europe. ISRCTN75841675.
NASA Astrophysics Data System (ADS)
Herrera, I.; Herrera, G. S.
2015-12-01
Most geophysical systems are macroscopic physical systems. The behavior prediction of such systems is carried out by means of computational models whose basic models are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized version of such PDEs it is necessary to apply highly parallelized super-computers. For them, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques is due to the kind of discretizations used in them. Recently, I. Herrera and co-workers using 'non-overlapping discretizations' have produced the DVS-Software which overcomes this limitation [2]. The DVS-software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90%, or so [3]). It is therefore very suitable for effectively applying the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key Words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). REFERENCES: [1] Herrera Ismael and George F. Pinder, "Mathematical Modelling in Science and Engineering: An axiomatic approach", John Wiley, 243p., 2012. [2] Herrera, I., de la Cruz L.M. and Rosas-Medina A., "Non Overlapping Discretization Methods for Partial Differential Equations", NUMER METH PART D E, 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I., & Contreras Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity", Geofísica Internacional, 2015 (In press).
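To convey the domain-decomposition idea in miniature, here is a block-Jacobi iteration for a 1D Poisson problem split into two subdomains; this is purely illustrative of DDM in general, not the DVS algorithm of the references, and uses dense NumPy solves rather than a parallel implementation.

    import numpy as np

    n = 40                                                   # interior grid points
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1D Laplacian (Dirichlet)
    b = np.ones(n)

    blocks = [slice(0, n // 2), slice(n // 2, n)]            # two "subdomains"
    x = np.zeros(n)

    for _ in range(200):
        x_new = x.copy()
        for s in blocks:
            # each subdomain solves its local system using current neighbour values
            rhs = b[s] - A[s, :] @ x + A[s, s] @ x[s]
            x_new[s] = np.linalg.solve(A[s, s], rhs)
        x = x_new

    print("residual norm:", np.linalg.norm(b - A @ x))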
A hydrodynamic approach to cosmology: The mixed dark matter cosmological scenario
NASA Technical Reports Server (NTRS)
Cen, Renyue; Ostriker, Jeremiah P.
1994-01-01
We compute the evolution of spatially flat, mixed cold and hot dark matter models containing both baryonic matter and two kinds of dark matter. Hydrodynamics is treated with a highly developed Eulerian hydrodynamic code (see Cen 1992). A standard particle-mesh (PM) code is also used in parallel to calculate the motion of the dark matter components. We adopt the following parameters: h ≡ H_0/(100 km/s/Mpc) = 0.5, Ω_hot = 0.3, and Ω_B = 0.06, with the amplitude of the perturbation spectrum fixed by the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) measurements (Smoot et al. 1992), giving σ_8 = 0.67. Four different boxes are simulated with box sizes of L = (64, 16, 4, 1) h^-1 Mpc, respectively, the two small boxes providing good resolution but little valid information due to the absence of large-scale power. We use 128^3 ≈ 10^6.3 baryonic cells, 128^3 cold dark matter particles, and 2 × 128^3 hot dark matter particles. In addition to the dark matter we separately follow six baryonic species (H, H+, He, He+, He++, e-) with allowance for both (nonequilibrium) collisional and radiative ionization in every cell. The background radiation field is also followed in detail, with allowance made for both continuum and line processes, so that nonequilibrium heating and cooling can be followed in detail. The mean final Sunyaev-Zel'dovich y parameter is estimated to be ȳ = (5.4 ± 2.7) × 10^-7, below currently attainable observations, with an rms fluctuation of approximately δy = (0.6 ± 3.0) × 10^-7 on arcminute scales. The rate of galaxy formation peaks at an even later epoch (z ≈ 0.3) than in the standard (Ω = 1, σ_8 = 0.67) cold dark matter (CDM) model (z ≈ 0.5) and, at a redshift of z = 4, is nearly a factor of 100 lower than for the CDM model with the same value of σ_8. With regard to the mass function, the smallest objects are stabilized against collapse by thermal energy: the mass-weighted mass spectrum has a broad peak in the vicinity of M_B = 10^9.5 solar masses, with a reasonable fit to the Schechter luminosity function if the ratio of baryon mass to blue light is approximately 4. In addition, one very large PM simulation was made in a box of size 320 h^-1 Mpc containing 3 × 200^3 ≈ 10^7.4 particles. Utilizing this simulation we find that the model yields a cluster mass function which is about a factor of 4 higher than observed, and a cluster-cluster correlation length marginally lower than observed, but both are closer to the observations than in the COBE-normalized CDM model. The one-dimensional pairwise velocity dispersion is 605 ± 8 km/s at 1 h^-1 Mpc separation, lower than that of the CDM model normalized to COBE, but still significantly higher than observations (Davis & Peebles 1983). A plausible velocity bias b_v = 0.8 ± 0.1 on this scale will reduce but not remove the discrepancy. The velocity autocorrelation function has a coherence length of 40 h^-1 Mpc, which is somewhat lower than the observed counterpart. In all these respects the model would be improved by decreasing the cold fraction of the dark matter, Ω_CDM/(Ω_CDM + Ω_HDM). But formation of galaxies and clusters of galaxies occurs much later in this model than in COBE-normalized CDM, perhaps too late. To improve on these constraints, a ratio Ω_CDM/(Ω_CDM + Ω_HDM) larger than the value of 0.67 adopted here is required. It does not seem possible to find a value for this ratio that would satisfy all tests. Overall, the model is similar both on large and intermediate scales to the standard CDM model normalized to the same value of σ_8, but the problem with regard to late formation of galaxies is more severe in this model than in that CDM model. Adding hot dark matter significantly improves the ability of the COBE-normalized CDM scenario to fit existing observations, but the model is in fact not as good as the CDM model with the same σ_8 and is still probably unsatisfactory with regard to several critical tests.
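For reference, the Compton y parameter quoted above is the line-of-sight integral of the thermal electron pressure; a standard form (not taken verbatim from the paper) is

    y = \int \sigma_T \, n_e \, \frac{k_B T_e}{m_e c^2} \, dl

where n_e and T_e are the electron density and temperature, \sigma_T is the Thomson cross-section, and the integral runs along the line of sight.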
78 FR 32250 - CDM Smith and Dynamac Corp; Transfer of Data
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-OPP-2013-0036; FRL-9387-5] CDM Smith and Dynamac Corp... the submitter, will be transferred to CDM Smith and its subcontractor, Dynamac Corp, in accordance with 40 CFR 2.307(h)(3) and 2.308(i)(2). CDM Smith and its subcontractor, Dynamac Corp, have been...
Ohashi, Ryoko; Nagao, Michinobu; Nakamura, Izumi; Okamoto, Takahiro; Sakai, Shuji
2018-04-12
The aim of this study was to determine if the diagnostic performance of breast lesion examinations could be improved using both digital breast tomosynthesis (DBT) and conventional digital mammography (CDM). Our institutional review board approved the protocol, and patients were provided the opportunity to opt out of the study. A total of 628 patients aged 22-91 years with abnormal screening results or clinical symptoms were consecutively enrolled between June 2015 and March 2016. All patients underwent DBT and CDM, and 1164 breasts were retrospectively analyzed by three radiologists who interpreted the results based on the Breast Imaging Reporting and Data System. Categories 4 and 5 were considered positive, and pathological results were the gold standard. The diagnostic performance of CDM and CDM plus DBT was compared using the mean areas under the receiver operating characteristic (ROC) curves. A total of 100 breast cancer cases were identified. The areas under the ROC curves were 0.9160 (95% confidence interval 0.8779-0.9541) for CDM alone and 0.9376 (95% confidence interval 0.9019-0.9733) for CDM plus DBT. The cut-off values for both CDM alone and CDM plus DBT measurements were 4, with sensitivities of 61.0% (61/100) and 83.0% (83/100), respectively, and specificities of 99.1% (1054/1064) and 98.9% (1052/1064), respectively. CDM yielded 39 false-negative diagnoses, while CDM plus DBT identified breast cancer in 22 of those cases (56.4%). The combination of DBT and CDM for the diagnosis of breast cancer in women with abnormal examination findings or clinical symptoms proved effective and should be used to improve the diagnostic performance of breast cancer examinations.
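The comparison above rests on areas under ROC curves plus sensitivity and specificity at a BI-RADS cut-off of 4. A minimal Python sketch of those calculations on hypothetical single-reader data (not the study's data), assuming scikit-learn is available:

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: BI-RADS category per breast (1-5) and pathology truth (1 = cancer).
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)
birads = np.clip(truth * 2 + rng.integers(1, 4, size=200), 1, 5)

auc = roc_auc_score(truth, birads)       # AUC using the category as an ordinal score

positive = birads >= 4                   # categories 4 and 5 read as positive
tp = np.sum(positive & (truth == 1))
fn = np.sum(~positive & (truth == 1))
tn = np.sum(~positive & (truth == 0))
fp = np.sum(positive & (truth == 0))

print(f"AUC = {auc:.3f}")
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")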
Supporting Open Access to European Academic Courses: The ASK-CDM-ECTS Tool
ERIC Educational Resources Information Center
Sampson, Demetrios G.; Zervas, Panagiotis
2013-01-01
Purpose: This paper aims to present and evaluate a web-based tool, namely ASK-CDM-ECTS, which facilitates authoring and publishing on the web descriptions of (open) academic courses in machine-readable format using an application profile of the Course Description Metadata (CDM) specification, namely CDM-ECTS. Design/methodology/approach: The paper…
Software Design for Real-Time Systems on Parallel Computers: Formal Specifications.
1996-04-01
This research investigated the important issues related to the analysis and design of real-time systems targeted to parallel architectures. In...particular, the software specification models for real-time systems on parallel architectures were evaluated. A survey of current formal methods for...uniprocessor real-time systems specifications was conducted to determine their extensibility in specifying real-time systems on parallel architectures. In
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
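As a concrete illustration of the FLOPS-style metric the dissertation questions, the sketch below times a dense matrix multiply and converts it to GFLOPS. As the abstract argues, a realistic benchmark of parallel CEM codes would also have to measure memory, disk, and network behaviour; the function name and sizes here are our own choices, not part of HOBBIES.

import time
import numpy as np

def matmul_gflops(n=2048, repeats=3):
    """Estimate sustained GFLOPS from a dense matrix multiply (about 2*n^3 flops per product)."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n**3 / best / 1e9

if __name__ == "__main__":
    print(f"~{matmul_gflops():.1f} GFLOPS (dense matmul, single node)")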
Use of chronic disease management programs for diabetes: in Alberta's primary care networks.
Campbell, David J T; Sargious, Peter; Lewanczuk, Richard; McBrien, Kerry; Tonelli, Marcello; Hemmelgarn, Brenda; Manns, Braden
2013-02-01
To determine the types of chronic disease management (CDM) programs offered for patients with diabetes in Alberta's primary care networks (PCNs). A survey was administered to PCNs to determine the types of CDM programs offered for patients with diabetes; CDM programs were organized into categories by their resource intensity and effectiveness. Results of the survey were reported using frequencies and percentages. Alberta has recently created PCNs-groups of family physicians who receive additional funds to enable them to support activities that fall outside the typical physician-based fee-for-service model, but which address specified objectives including CDM. It is currently unknown what additional programs are being provided through the PCN supplemental funding. A survey was administered to the individual responsible for CDM in each PCN. This included executive directors, chronic disease managers, and CDM nurses. We determined the CDM strategies used in each PCN to care for patients with diabetes, whether they were available to all patients, and whether the services were provided exclusively by the PCN or in conjunction with other agencies. There was considerable variation across PCNs with respect to the CDM programs offered for people with diabetes. Nearly all PCNs used multidisciplinary teams (which could include nurses, dietitians, and pharmacists) and patient education. Fewer than half of the PCNs permitted personnel other than the primary physician to write or alter prescriptions for medications. Alberta's PCNs have successfully established many different types of CDM programs. Multidisciplinary care teams, which are among the most effective CDM strategies, are currently being used by most of Alberta's PCNs.
Use of chronic disease management programs for diabetes
Campbell, David J.T.; Sargious, Peter; Lewanczuk, Richard; McBrien, Kerry; Tonelli, Marcello; Hemmelgarn, Brenda; Manns, Braden
2013-01-01
Objective To determine the types of chronic disease management (CDM) programs offered for patients with diabetes in Alberta's primary care networks (PCNs). Design A survey was administered to PCNs to determine the types of CDM programs offered for patients with diabetes; CDM programs were organized into categories by their resource intensity and effectiveness. Results of the survey were reported using frequencies and percentages. Setting Alberta has recently created PCNs—groups of family physicians who receive additional funds to enable them to support activities that fall outside the typical physician-based fee-for-service model, but which address specified objectives including CDM. It is currently unknown what additional programs are being provided through the PCN supplemental funding. Participants A survey was administered to the individual responsible for CDM in each PCN. This included executive directors, chronic disease managers, and CDM nurses. Main outcome measures We determined the CDM strategies used in each PCN to care for patients with diabetes, whether they were available to all patients, and whether the services were provided exclusively by the PCN or in conjunction with other agencies. Results There was considerable variation across PCNs with respect to the CDM programs offered for people with diabetes. Nearly all PCNs used multidisciplinary teams (which could include nurses, dietitians, and pharmacists) and patient education. Fewer than half of the PCNs permitted personnel other than the primary physician to write or alter prescriptions for medications. Conclusion Alberta's PCNs have successfully established many different types of CDM programs. Multidisciplinary care teams, which are among the most effective CDM strategies, are currently being used by most of Alberta's PCNs. PMID:23418263
Jansen, Sarah; Ball, Lauren; Lowe, Catherine
2015-04-01
This study explored private practice dietitians' perceptions of the impact of the Australian Chronic Disease Management (CDM) program on the conduct of their private practice, and the care provided to patients. Twenty-five accredited practising dietitians working in primary care participated in an individual semistructured telephone interview. Interview questions focussed on dietitians' perceptions of the proportion of patients receiving care through the CDM program, fee structures, adhering to reporting requirements and auditing. Transcript data were thematically analysed using a process of open coding. Half of the dietitians (12/25) reported that most of their patients (>75%) received care through the CDM program. Many dietitians (19/25) reported providing identical care to patients using the CDM program and private patients, but most (17/25) described spending substantially longer on administrative tasks for CDM patients. Dietitians experienced pressure from doctors and patients to keep their fees low or to bulk-bill patients using the CDM program. One-third of interviewed dietitians (8/25) expressed concern about the potential to be audited by Medicare. Recommendations to improve the CDM program included increasing the consultation length and subsequent rebate available for dietetic consultations, and increasing the number of consultations to align with dietetic best-practice guidelines. The CDM program creates challenges for dietitians working in primary care, including how to sustain the quality of patient-centred care and yet maintain equitable business practices. To ensure the CDM program appropriately assists patients to receive optimal care, further review of the CDM program within the scope of dietetics is required.
ERIC Educational Resources Information Center
Efendioglu, Akin
2012-01-01
The main purpose of this study is to design a "Courseware Development Model" (CDM) and investigate its effects on pre-service teachers' academic achievements in the field of geography and attitudes toward computer-based education (ATCBE). The CDM consisted of three components: content (C), learning theory, namely, meaningful learning (ML), and…
The Learning Process and Technological Change in Wind Power: Evidence from China's CDM Wind Projects
ERIC Educational Resources Information Center
Tang, Tian; Popp, David
2016-01-01
The Clean Development Mechanism (CDM) is a project-based carbon trade mechanism that subsidizes the users of climate-friendly technologies and encourages technology transfer. The CDM has provided financial support for a large share of Chinese wind projects since 2002. Using pooled cross-sectional data of 486 registered CDM wind projects in China…
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
Data management in clinical research: An overview
Krishnankutty, Binny; Bellary, Shantala; Kumar, Naveen B.R.; Moodahadu, Latha S.
2012-01-01
Clinical Data Management (CDM) is a critical phase in clinical research, which leads to generation of high-quality, reliable, and statistically sound data from clinical trials. This helps to produce a drastic reduction in time from drug development to marketing. Team members of CDM are actively involved in all stages of clinical trial right from inception to completion. They should have adequate process knowledge that helps maintain the quality standards of CDM processes. Various procedures in CDM including Case Report Form (CRF) designing, CRF annotation, database designing, data-entry, data validation, discrepancy management, medical coding, data extraction, and database locking are assessed for quality at regular intervals during a trial. In the present scenario, there is an increased demand to improve the CDM standards to meet the regulatory requirements and stay ahead of the competition by means of faster commercialization of product. With the implementation of regulatory compliant data management tools, CDM team can meet these demands. Additionally, it is becoming mandatory for companies to submit the data electronically. CDM professionals should meet appropriate expectations and set standards for data quality and also have a drive to adapt to the rapidly changing technology. This article highlights the processes involved and provides the reader an overview of the tools and standards adopted as well as the roles and responsibilities in CDM. PMID:22529469
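Several of the CDM steps listed above, data validation and discrepancy management in particular, amount to running edit checks against CRF records. A minimal Python sketch of how such checks raise discrepancies; the field names and rules are invented for illustration.

from dataclasses import dataclass

@dataclass
class Discrepancy:
    subject_id: str
    field: str
    value: object
    rule: str

# Hypothetical edit checks of the kind applied during data validation.
CHECKS = [
    ("age", lambda v: v is not None and 0 <= v <= 120, "age outside 0-120"),
    ("systolic_bp", lambda v: v is None or 60 <= v <= 260, "implausible systolic BP"),
    ("visit_date", lambda v: v is not None, "missing visit date"),
]

def validate(record):
    """Return the list of discrepancies raised by one CRF record (a dict)."""
    issues = []
    for field, ok, rule in CHECKS:
        if not ok(record.get(field)):
            issues.append(Discrepancy(record.get("subject_id", "?"), field, record.get(field), rule))
    return issues

print(validate({"subject_id": "S001", "age": 134, "systolic_bp": 120, "visit_date": None}))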
Influence of the sex of the transmitting grandparent in congenital myotonic dystrophy.
López de Munain, A; Cobo, A M; Poza, J J; Navarrete, D; Martorell, L; Palau, F; Emparanza, J I; Baiget, M
1995-09-01
To analyse the influence of the sex of the transmitting grandparents on the occurrence of the congenital form of myotonic dystrophy (CDM), we have studied complete three generation pedigrees of 49 CDM cases, analysing: (1) the sex distribution in the grandparents' generation, and (2) the intergenerational amplification of the CTG repeat, measured in its absolute and relative values, between grandparents and the mothers of CDM patients and between the latter and their CDM children. The mean relative intergenerational increase in the 32 grandparent-mother pairs was significantly greater than in the 56 mother-CDM pairs (Mann-Whitney U test, p < 0.001). The mean expansion of the grandfathers (103 CTG repeats) was also significantly different from that seen in the grandmothers' group (154 CTG repeats) (Mann-Whitney U test, p < 0.01). This excess of non-manifesting males in the CDM grandparents' generation, with a smaller CTG length than the grandmothers, could suggest that the premutation has to be transmitted by a male to reach the degree of instability responsible for subsequent intergenerational CTG expansions without size constraints characteristic of the CDM range.
Exploring the clean development mechanism: Malaysian case study.
Pedersen, Anne
2008-02-01
During 2006 the CDM market in Malaysia became established, and by December 2007 a total of 20 Malaysian projects had registered with the CDM Executive Board. The Kyoto Protocol defines Annex 1 countries as countries that are obliged to reduce their greenhouse gas (GHG) emissions, and the clean development mechanism (CDM) allows Annex 1 countries to develop projects, which contribute to emission reduction, in non-Annex 1 (developing) countries. Currently, two projects have been corrected due to requests for review and there is one project for which review is requested. Two projects have been rejected by the Executive Board. The broad knowledge of CDM in Malaysia and the number of successful projects are partly due to the well-functioning CDM institutional framework in Malaysia. As an illustration, this article focuses on a Malaysian-Danish project and describes the implementation of CDM in Malaysia with reference to this specific project. The project was registered with the CDM Executive Board in May 2007 and is a methane avoidance project in which methane is captured from a landfill and used to generate electricity.
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
Using CLIPS in the domain of knowledge-based massively parallel programming
NASA Technical Reports Server (NTRS)
Dvorak, Jiri J.
1994-01-01
The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency in respect of parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering are discussed.
An Investigation of Factors Influencing Nurses' Clinical Decision-Making Skills.
Wu, Min; Yang, Jinqiu; Liu, Lingying; Ye, Benlan
2016-08-01
This study aims to investigate the influencing factors on nurses' clinical decision-making (CDM) skills. A cross-sectional nonexperimental research design was conducted in the medical, surgical, and emergency departments of two university hospitals, between May and June 2014. We used a quantile regression method to identify the influencing factors across different quantiles of the CDM skills distribution and compared the results with the corresponding ordinary least squares (OLS) estimates. Our findings revealed that nurses were best at the skills of managing oneself. Educational level, experience, and the total structural empowerment had significant positive impacts on nurses' CDM skills, while the nurse-patient relationship, patient care and interaction, formal empowerment, and information empowerment were negatively correlated with nurses' CDM skills. These variables explained no more than 30% of the variance in nurses' CDM skills and mainly explained the lower quantiles of nurses' CDM skills distribution. © The Author(s) 2016.
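The quantile-regression approach used in the study can be sketched in a few lines: fit the same linear model at several quantiles of the CDM-skill distribution and compare the coefficients with OLS. The data frame and predictor names below are invented, and the sketch assumes pandas and statsmodels are available.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: a CDM skill score plus two of the predictors named in the study.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "education_years": rng.integers(14, 20, n),
    "experience_years": rng.integers(1, 30, n),
})
df["cdm_score"] = (2.0 * df["education_years"] + 0.5 * df["experience_years"]
                   + rng.normal(0, 5, n))

# Fit the same linear model at several quantiles and compare with OLS.
ols = smf.ols("cdm_score ~ education_years + experience_years", df).fit()
for q in (0.25, 0.50, 0.75):
    qr = smf.quantreg("cdm_score ~ education_years + experience_years", df).fit(q=q)
    print(q, qr.params["education_years"].round(3))
print("OLS", ols.params["education_years"].round(3))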
Zweifler, John
2007-01-01
Bold steps are necessary to improve quality of care for patients with chronic diseases and increase satisfaction of both primary care physicians and patients. Office-based chronic disease management (CDM) workers can achieve these objectives by offering self-management support, maintaining disease registries, and monitoring compliance from the point of care. CDM workers can provide the missing link by connecting patients, primary care physicians, and CDM services sponsored by health plans or in the community. CDM workers should be supported financially by Medicare, Medicaid, and commercial health plans through reimbursements to physicians for units of service, analogous to California’s Comprehensive Perinatal Services Program. Care provided by CDM workers should be standardized, and training requirements should be sufficiently flexible to ensure wide dissemination. CDM workers can potentially improve quality while reducing costs for preventable hospitalizations and emergency department visits, but evaluation at multiple levels is recommended. PMID:17893388
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Parallel software for lattice N = 4 supersymmetric Yang-Mills theory
NASA Astrophysics Data System (ADS)
Schaich, David; DeGrand, Thomas
2015-05-01
We present new parallel software, SUSY LATTICE, for lattice studies of four-dimensional N = 4 supersymmetric Yang-Mills theory with gauge group SU(N). The lattice action is constructed to exactly preserve a single supersymmetry charge at non-zero lattice spacing, up to additional potential terms included to stabilize numerical simulations. The software evolved from the MILC code for lattice QCD, and retains a similar large-scale framework despite the different target theory. Many routines are adapted from an existing serial code (Catterall and Joseph, 2012), which SUSY LATTICE supersedes. This paper provides an overview of the new parallel software, summarizing the lattice system, describing the applications that are currently provided and explaining their basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance of the code, and highlight some notable aspects of the documentation for those interested in contributing to its future development.
NASA Astrophysics Data System (ADS)
Lovell, Mark R.; Zavala, Jesús; Vogelsberger, Mark; Shen, Xuejian; Cyr-Racine, Francis-Yan; Pfrommer, Christoph; Sigurdson, Kris; Boylan-Kolchin, Michael; Pillepich, Annalisa
2018-07-01
We contrast predictions for the high-redshift galaxy population and reionization history between cold dark matter (CDM) and an alternative self-interacting dark matter model based on the recently developed ETHOS framework that alleviates the small-scale CDM challenges within the Local Group. We perform the highest resolution hydrodynamical cosmological simulations (a 36 Mpc^3 volume with gas cell mass of ~10^5 M_⊙ and minimum gas softening of ~180 pc) within ETHOS to date - plus a CDM counterpart - to quantify the abundance of galaxies at high redshift and their impact on reionization. We find that ETHOS predicts galaxies with higher ultraviolet (UV) luminosities than their CDM counterparts and a faster build-up of the faint end of the UV luminosity function. These effects, however, make the optical depth to reionization less sensitive to the power spectrum cut-off: the ETHOS model differs from the CDM τ value by only 10 per cent and is consistent with Planck limits if the effective escape fraction of UV photons is 0.1-0.5. We conclude that current observations of high-redshift luminosity functions cannot differentiate between ETHOS and CDM models, but deep James Webb Space Telescope surveys of strongly lensed, inherently faint galaxies have the potential to test non-CDM models that offer attractive solutions to CDM's Local Group problems.
Software Issues at the User Interface
1991-05-01
We review software issues that are critical to the successful integration of parallel computers into mainstream scientific computing. Clearly a compiler is the most important software tool available to a... The development of an optimizing compiler of this quality, addressing communication instructions as well as computational instructions, is a major
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
Babaie, Javad; Ardalan, Ali; Vatandoost, Hasan; Goya, Mohammad Mehdi; Akbarisari, Ali
2016-02-01
Communicable disease management (CDM) is an important component of disaster public health response operations. However, there is a lack of any performance assessment (PA) framework and related indicators for the PA. This study aimed to develop a PA framework and indicators for CDM in disasters. In this study, a series of methods were used. First, a systematic literature review (SLR) was performed in order to extract the existing PA frameworks and indicators. Then, using a qualitative approach, interviews with purposively selected experts were conducted and used in developing the PA framework and indicators. Finally, the analytical hierarchy process (AHP) was used for weighting of the developed indicators. The input, process, products, and outcomes (IPPO) framework was found to be an appropriate framework for CDM PA. Seven main functions were identified for CDM during disasters. Forty PA indicators were developed for the four categories. There is a lack of any existing PA framework in CDM in disasters. Thus, in this study, a PA framework (IPPO framework) was developed for the PA of CDM in disasters through a series of methods. It can be an appropriate framework and its indicators could measure the performance of CDM in disasters.
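The AHP weighting step can be made concrete: build a pairwise-comparison matrix for the indicator categories and take the normalized principal eigenvector as the weights. The matrix values below are invented, not the study's expert judgments.

import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the principal eigenvector."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical reciprocal matrix comparing the four IPPO categories
# (input, process, products, outcomes) on Saaty's 1-9 scale.
A = [[1,   3,   2,   1/2],
     [1/3, 1,   1/2, 1/4],
     [1/2, 2,   1,   1/3],
     [2,   4,   3,   1  ]]

print(ahp_weights(A).round(3))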
Parallel computation and the basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1993-05-01
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
Implementing model-based system engineering for the whole lifecycle of a spacecraft
NASA Astrophysics Data System (ADS)
Fischer, P. M.; Lüdtke, D.; Lange, C.; Roshani, F.-C.; Dannemann, F.; Gerndt, A.
2017-09-01
Design information of a spacecraft is collected over all phases in the lifecycle of a project. A lot of this information is exchanged between different engineering tasks and business processes. In some lifecycle phases, model-based system engineering (MBSE) has introduced system models and databases that help to organize such information and to keep it consistent for everyone. Nevertheless, none of the existing databases approached the whole lifecycle yet. Virtual Satellite is the MBSE database developed at DLR. It has been used for quite some time in Phase A studies and is currently extended for implementing it in the whole lifecycle of spacecraft projects. Since it is unforeseeable which future use cases such a database needs to support in all these different projects, the underlying data model has to provide tailoring and extension mechanisms to its conceptual data model (CDM). This paper explains the mechanisms as they are implemented in Virtual Satellite, which enables extending the CDM along the project without corrupting already stored information. As an upcoming major use case, Virtual Satellite will be implemented as MBSE tool in the S2TEP project. This project provides a new satellite bus for internal research and several different payload missions in the future. This paper explains how Virtual Satellite will be used to manage configuration control problems associated with such a multi-mission platform. It discusses how the S2TEP project starts using the software for collecting the first design information from concurrent engineering studies, then making use of the extension mechanisms of the CDM to introduce further information artefacts such as functional electrical architecture, thus linking more and more processes into an integrated MBSE approach.
FitzHenry, F; Resnic, F S; Robbins, S L; Denton, J; Nookala, L; Meeker, D; Ohno-Machado, L; Matheny, M E
2015-01-01
Adoption of a common data model across health systems is a key infrastructure requirement to allow large scale distributed comparative effectiveness analyses. There are a growing number of common data models (CDM), such as Mini-Sentinel, and the Observational Medical Outcomes Partnership (OMOP) CDMs. In this case study, we describe the challenges and opportunities of a study specific use of the OMOP CDM by two health systems and describe three comparative effectiveness use cases developed from the CDM. The project transformed two health system databases (using crosswalks provided) into the OMOP CDM. Cohorts were developed from the transformed CDMs for three comparative effectiveness use case examples. Administrative/billing, demographic, order history, medication, and laboratory were included in the CDM transformation and cohort development rules. Record counts per person month are presented for the eligible cohorts, highlighting differences between the civilian and federal datasets, e.g. the federal data set had more outpatient visits per person month (6.44 vs. 2.05 per person month). The count of medications per person month reflected the fact that one system's medications were extracted from orders while the other system had pharmacy fills and medication administration records. The federal system also had a higher prevalence of the conditions in all three use cases. Both systems required manual coding of some types of data to convert to the CDM. The data transformation to the CDM was time consuming and resources required were substantial, beyond requirements for collecting native source data. The need to manually code subsets of data limited the conversion. However, once the native data was converted to the CDM, both systems were then able to use the same queries to identify cohorts. Thus, the CDM minimized the effort to develop cohorts and analyze the results across the sites.
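The payoff described above (one query, many sites) is easiest to see with a toy example. The sketch below builds a drastically simplified OMOP-style schema in SQLite and runs a cohort query against it; the table and column names follow OMOP conventions, but the concept id and data are invented for illustration.

import sqlite3

# A toy, simplified subset of OMOP-style tables; real OMOP CDM tables carry many more columns.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, year_of_birth INTEGER);
CREATE TABLE condition_occurrence (
    person_id INTEGER, condition_concept_id INTEGER, condition_start_date TEXT);
INSERT INTO person VALUES (1, 1950), (2, 1980), (3, 1972);
INSERT INTO condition_occurrence VALUES
    (1, 316866, '2013-02-01'),   -- 316866: concept id used here purely for illustration
    (3, 316866, '2014-06-15');
""")

# Once both systems expose the same schema, the same cohort query runs unchanged at each site.
cohort = con.execute("""
    SELECT DISTINCT p.person_id
    FROM person p
    JOIN condition_occurrence co ON co.person_id = p.person_id
    WHERE co.condition_concept_id = 316866
      AND co.condition_start_date >= '2013-01-01'
""").fetchall()
print(cohort)   # persons 1 and 3 qualify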
Rasekaba, T M; Williams, E; Hsu-Hage, B
2009-01-01
Chronic obstructive pulmonary disease (COPD) imposes a costly burden on healthcare. Pulmonary rehabilitation (PR) is the best practice to better manage COPD to improve patient outcomes and reduce acute hospital care utilization. To evaluate the impact of a once-weekly, eight-week multidisciplinary PR program as an integral part of the COPD chronic disease management (CDM) Program at Kyabram District Health Services. The study compared two cohorts of COPD patients: CDM-PR Cohort (4-8 weeks) and Opt-out Cohort (0-3 weeks) between February 2006 and March 2007. The CDM-PR Program involved multidisciplinary patient education and group exercise training. Nonparametric statistical tests were used to compare acute hospital care utilization 12 months before and after the introduction of CDM-PR. The number of patients involved in the CDM-PR Cohort was 29 (n = 29), and that in the Opt-out Cohort was 24 (n = 24). The CDM-PR Cohort showed significant reductions in cumulative acute hospital care utilization indicators (95% emergency department presentations, 95% inpatient admissions, 99% length of stay; effect sizes = 0.62-0.66, P < 0.001) 12 months after the introduction of the CDM Program; in contrast, changes in the cumulative indicators were statistically insignificant for the Opt-out Cohort (emergency department presentations decreased by 5%, inpatient admissions decreased by 12%, length of stay increased by 30%; effect size = 0.14-0.40, P > 0.05). Total costs associated with the hospital care utilization decreased from $130,000 to $7,500 for the CDM-PR Cohort and increased from $77,700 to $101,200 for the Opt-out Cohort. Participation in the CDM-PR for COPD patients can significantly reduce acute hospital care utilization and associated costs in a small rural health service.
Parallel software tools at Langley Research Center
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Tennille, Geoffrey M.; Lakeotes, Christopher D.; Randall, Donald P.; Arthur, Jarvis J.; Hammond, Dana P.; Mall, Gerald H.
1993-01-01
This document gives a brief overview of parallel software tools available on the Intel iPSC/860 parallel computer at Langley Research Center. It is intended to provide a source of information that is somewhat more concise than vendor-supplied material on the purpose and use of various tools. Each of the chapters on tools is organized in a similar manner covering an overview of the functionality, access information, how to effectively use the tool, observations about the tool and how it compares to similar software, known problems or shortfalls with the software, and reference documentation. It is primarily intended for users of the iPSC/860 at Langley Research Center and is appropriate for both the experienced and novice user.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
Parallel Ray Tracing Using the Message Passing Interface
2007-09-01
Ray-tracing software is available for lens design and for general optical systems modeling. It tends to be designed to run on a single processor and can be very... Index terms: National Aeronautics and Space Administration (NASA), optical ray tracing, parallel computing, parallel processing, prime numbers, ray tracing
Evaluation of Job Queuing/Scheduling Software: Phase I Report
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process, and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
Simulations of the Formation and Evolution of X-ray Clusters
NASA Astrophysics Data System (ADS)
Bryan, G. L.; Klypin, A.; Norman, M. L.
1994-05-01
We describe results from a set of Omega = 1 Cold plus Hot Dark Matter (CHDM) and Cold Dark Matter (CDM) simulations. We examine the formation and evolution of X-ray clusters in a cosmological setting with sufficient numbers to perform statistical analysis. We find that CDM, normalized to COBE, seems to produce too many large clusters, both in terms of the luminosity (dn/dL) and temperature (dn/dT) functions. The CHDM simulation produces fewer clusters and the temperature distribution (our numerically most secure result) matches observations where they overlap. The computed cluster luminosity function drops below observations, but we are almost surely underestimating the X-ray luminosity. Because of the lower fluctuations in CHDM, there are only a small number of bright clusters in our simulation volume; however, we can use the simulated clusters to fix the relation between temperature and velocity dispersion, allowing us to use collisionless N-body codes to probe larger length scales with correspondingly brighter clusters. The hydrodynamic simulations have been performed with a hybrid particle-mesh scheme for the dark matter and a high resolution grid-based piecewise parabolic method for the adiabatic gas dynamics. This combination has been implemented for massively parallel computers, allowing us to achieve grids as large as 512^3.
The Measurement and Correlates of Career Decision Making.
ERIC Educational Resources Information Center
Harren, Vincent A.; Kass, Richard A.
This paper presents a theoretical framework for understanding career decision making (CDM); introduces an instrument, Assessment of Career Decision Making (ACDM) to measure CDM with college students; and presents correlational data on sex role and cognitive style factors hypothesized to influence CDM. The ACDM, designed to measure the Tiedeman and…
Canine degenerative myelopathy: a model of human amyotrophic lateral sclerosis.
Nardone, Raffaele; Höller, Yvonne; Taylor, Alexandra C; Lochner, Piergiorgio; Tezzon, Frediano; Golaszewski, Stefan; Brigo, Francesco; Trinka, Eugen
2016-02-01
Canine degenerative myelopathy (CDM) represents a unique naturally occurring animal model for human amyotrophic lateral sclerosis (ALS) because of similar clinical signs, neuropathologic findings, and involvement of the superoxide dismutase 1 (SOD1) mutation. A definitive diagnosis can only be made postmortem through microscopic detection of axonal degeneration, demyelination and astroglial proliferation, which is more severe in the dorsal columns of the thoracic spinal cord and in the dorsal portion of the lateral funiculus. Interestingly, the muscle acetylcholine receptor complexes are intact in CDM prior to functional impairment, thus suggesting that muscle atrophy in CDM does not result from physical denervation. Moreover, since sensory involvement seems to play an important role in CDM progression, a more careful investigation of the sensory pathology in ALS is also warranted. The importance of SOD1 expression remains unclear, while oxidative stress and denatured ubiquitinated proteins appear to play a crucial role in the pathogenesis of CDM. In this updated narrative review we performed a systematic search of the published studies on CDM that may shed light on the pathophysiological mechanisms of human ALS. A better understanding of the factors that determine the disease progression in CDM may be beneficial for the development of effective treatments for ALS. Copyright © 2015 Elsevier GmbH. All rights reserved.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in a cluster computing environment and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
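pcircle itself distributes this work over MPI ranks with work stealing; as a shared-memory stand-in, the sketch below checksums a file's chunks in parallel with multiprocessing and then combines the per-chunk digests. This is our own illustration of the idea, not pcircle's algorithm.

import hashlib
import os
from multiprocessing import Pool

CHUNK = 1 << 20  # 1 MiB per work item

def chunk_md5(args):
    """Checksum one chunk of a file; each worker opens the file and reads by offset."""
    path, offset = args
    with open(path, "rb") as f:
        f.seek(offset)
        return offset, hashlib.md5(f.read(CHUNK)).hexdigest()

def parallel_checksum(path, size):
    offsets = range(0, size, CHUNK)
    with Pool() as pool:
        per_chunk = pool.map(chunk_md5, [(path, o) for o in offsets])
    # Combine per-chunk digests (ordered by offset) into one file-level signature.
    combined = hashlib.md5()
    for _, digest in sorted(per_chunk):
        combined.update(digest.encode())
    return combined.hexdigest()

if __name__ == "__main__":
    target = __file__
    print(parallel_checksum(target, os.path.getsize(target)))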
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving computationally intensive problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed-memory architecture speeds up the processing and improves the efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
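The underlying Fourier-series inversion can be written compactly. The serial numpy sketch below uses a Dubner-Abate/Durbin-type expansion, our simplified stand-in rather than TOMS Algorithm 796 itself, and checks it against a transform with a known inverse; the cited package parallelizes sums of this kind across MPI ranks.

import numpy as np

def laplace_invert(F, t, T=None, a=None, N=5000):
    """Fourier-series (Dubner-Abate/Durbin type) inversion of a Laplace transform F(s) at time t.
    Accuracy improves with larger N; production codes add series acceleration."""
    T = T or 2.0 * t                        # the expansion is valid for 0 < t < 2T
    a = a if a is not None else 9.2 / T     # shift controlling the aliasing error (~exp(-2aT))
    k = np.arange(1, N + 1)
    Fk = F(a + 1j * k * np.pi / T)
    s = 0.5 * F(a).real + np.sum(Fk.real * np.cos(k * np.pi * t / T)
                                 - Fk.imag * np.sin(k * np.pi * t / T))
    return np.exp(a * t) / T * s

# Check against a transform with a known inverse: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, laplace_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))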
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
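At its core such a simulator stores the 2^n complex amplitudes and applies unitary gates to selected qubits; the cited code distributes those amplitudes over thousands of processors, while the sketch below is a tiny serial stand-in in numpy (our own illustration, not the paper's implementation).

import numpy as np

def apply_gate(state, gate, targets):
    """Apply a k-qubit gate (2^k x 2^k matrix) to the target qubits of a state vector
    stored as an n-dimensional (2, 2, ..., 2) array."""
    k = len(targets)
    gate = gate.reshape((2,) * (2 * k))
    # Contract the gate's input indices with the target axes, then move the new axes back.
    state = np.tensordot(gate, state, axes=(range(k, 2 * k), targets))
    return np.moveaxis(state, range(k), targets)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

n = 3
state = np.zeros((2,) * n)
state[(0,) * n] = 1.0                      # |000>
state = apply_gate(state, H, [0])          # Hadamard on qubit 0
state = apply_gate(state, CNOT, [0, 1])    # entangle qubits 0 and 1
print(np.round(state.reshape(-1), 3))      # amplitudes of |000> ... |111>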
System software for the finite element machine
NASA Technical Reports Server (NTRS)
Crockett, T. W.; Knott, J. D.
1985-01-01
The Finite Element Machine is an experimental parallel computer developed at Langley Research Center to investigate the application of concurrent processing to structural engineering analysis. This report describes system-level software which has been developed to facilitate use of the machine by applications researchers. The overall software design is outlined, and several important parallel processing issues are discussed in detail, including processor management, communication, synchronization, and input/output. Based on experience using the system, the hardware architecture and software design are critiqued, and areas for further work are suggested.
Parallel design patterns for a low-power, software-defined compressed video encoder
NASA Astrophysics Data System (ADS)
Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar
2011-06-01
Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real-time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
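The dataflow idea, encoder stages as tasks connected by message queues with no explicit global synchronization, can be mimicked in a few lines of Python. The stage names below stand in for real encoder work and are not the H.264 implementation described above.

import queue
import threading

STOP = object()  # sentinel that drains the pipeline

def stage(fn, inbox, outbox):
    """One node of the task graph: consume a message, transform it, forward it."""
    while True:
        item = inbox.get()
        if item is STOP:
            outbox.put(STOP)
            return
        outbox.put(fn(item))

# Stand-ins for encoder tasks (real ones would operate on macroblock data).
motion_estimate = lambda f: f + "+ME"
transform_quant = lambda f: f + "+T/Q"
entropy_code    = lambda f: f + "+CABAC"

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
for fn, src, dst in [(motion_estimate, q0, q1),
                     (transform_quant, q1, q2),
                     (entropy_code,    q2, q3)]:
    threading.Thread(target=stage, args=(fn, src, dst), daemon=True).start()

for i in range(3):                 # feed three "frames" into the graph
    q0.put(f"frame{i}")
q0.put(STOP)

while (out := q3.get()) is not STOP:
    print(out)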
EFFECT OF QUALITY CHRONIC DISEASE MANAGEMENT FOR ALCOHOL AND DRUG DEPENDENCE ON ADDICTION OUTCOMES
Kim, Theresa W.; Saitz, Richard; Cheng, Debbie M.; Winter, Michael R; Witas, Julie; Samet, Jeffrey H.
2012-01-01
We examined the effect of the quality of primary care-based chronic disease management (CDM) for alcohol and/or other drug (AOD) dependence on addiction outcomes. We assessed quality using 1) a visit frequency based measure and 2) a self-reported assessment measuring alignment with the chronic care model. The visit frequency based measure had no significant association with addiction outcomes. The self-reported measure of care - when care was at a CDM clinic - was associated with lower drug addiction severity. The self-reported assessment of care from any healthcare source (CDM clinic or elsewhere) was associated with lower alcohol addiction severity and abstinence. These findings suggest that high quality CDM for AOD dependence may improve addiction outcomes. Quality measures based upon alignment with the chronic care model may better capture features of effective CDM care than a visit frequency measure. PMID:22840687
NASA Astrophysics Data System (ADS)
Das, D.; Gopikrishna, P.; Singh, A.; Dey, A.; Iyer, P. K.
2016-04-01
Polymer light emitting diodes (PLEDs) with a device configuration of ITO/PEDOT:PSS/PFONPN01 [Poly [2,7-(9,9’-dioctylfluorene)-co-N-phenyl-1,8-naphthalimide (99:01)]/LiF/Al have been fabricated by varying the emissive layer (EML) thickness (40/65/80/130 nm), and the influence of EML thickness on the electrical characteristics of the PLEDs has been studied. A PLED can be modelled as a simple combination of resistors and capacitors. The impedance spectroscopy analysis showed that the devices with different EML thickness had different values of parallel resistance (RP) and parallel capacitance (CP). The impedance of the devices is found to increase with increasing EML thickness, resulting in an increase in the driving voltage. The device with an emissive layer thickness of 80 nm, spin-coated from a solution of concentration 15 mg/mL, is found to give the best device performance with a maximum brightness value of 5226 cd/m^2.
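The equivalent circuit mentioned above, a parallel resistance R_P and capacitance C_P per layer, gives Z(ω) = R/(1 + jωRC). A short numpy sketch with invented element values (not the fitted device parameters):

import numpy as np

def parallel_rc_impedance(freq_hz, R, C):
    """Complex impedance of a resistor R [ohm] in parallel with a capacitor C [F]."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return R / (1 + 1j * omega * R * C)

# Hypothetical element values for two emissive-layer thicknesses (illustrative only).
for label, R, C in [("thin EML", 2e3, 3e-9), ("thick EML", 8e3, 1e-9)]:
    f = np.array([1e2, 1e4, 1e6])
    print(label, np.abs(parallel_rc_impedance(f, R, C)).round(1))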
The H II galaxy Hubble diagram strongly favours Rh = ct over ΛCDM
NASA Astrophysics Data System (ADS)
Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio
2016-12-01
We continue to build support for the proposal to use H II galaxies (HIIGx) and giant extragalactic H II regions (GEHR) as standard candles to construct the Hubble diagram at redshifts beyond the current reach of Type Ia supernovae. Using a sample of 25 high-redshift HIIGx, 107 local HIIGx, and 24 GEHR, we confirm that the correlation between the emission-line luminosity and ionized-gas velocity dispersion is a viable luminosity indicator, and use it to test and compare the standard model ΛCDM and the Rh = ct universe by optimizing the parameters in each cosmology using a maximization of the likelihood function. For the flat ΛCDM model, the best fit is obtained with Ω_m = 0.40_{-0.09}^{+0.09}. However, statistical tools, such as the Akaike (AIC), Kullback (KIC) and Bayes (BIC) Information Criteria, favour Rh = ct over the standard model with a likelihood of ≈94.8-98.8 per cent versus only ≈1.2-5.2 per cent. For wCDM (the version of ΛCDM with a dark-energy equation of state w_de ≡ p_de/ρ_de rather than w_de = w_Λ = -1), a statistically acceptable fit is realized with Ω_m = 0.22_{-0.14}^{+0.16} and w_de = -0.51_{-0.25}^{+0.15} which, however, are not fully consistent with their concordance values. In this case, wCDM has two more free parameters than Rh = ct, and is penalized more heavily by these criteria. We find that Rh = ct is strongly favoured over wCDM with a likelihood of ≈92.9-99.6 per cent versus only 0.4-7.1 per cent. The current HIIGx sample is already large enough for the BIC to rule out ΛCDM/wCDM in favour of Rh = ct at a confidence level approaching 3σ.
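The information-criterion comparison works as follows: compute AIC or BIC from each model's maximum log-likelihood and parameter count, then convert the criterion differences into relative likelihoods. The sketch below uses invented log-likelihood values and parameter counts, not the paper's fits.

import numpy as np

def aic(lnL, k):          # Akaike Information Criterion
    return 2 * k - 2 * lnL

def bic(lnL, k, n):       # Bayes Information Criterion
    return k * np.log(n) - 2 * lnL

def rel_likelihood(ic):
    """Normalized information-criterion weights: exp(-deltaIC/2) / sum."""
    ic = np.asarray(ic, dtype=float)
    w = np.exp(-(ic - ic.min()) / 2)
    return w / w.sum()

# Hypothetical fit results for two models of the same n data points (illustrative values only).
n = 156                                   # e.g. 25 + 107 + 24 sources
models = {"Rh=ct": (-380.0, 1), "wCDM": (-378.5, 3)}
ics = [bic(lnL, k, n) for lnL, k in models.values()]
print(dict(zip(models, rel_likelihood(ics).round(3))))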
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Dynamic CDM strategies in an EHR environment.
Bieker, Michael; Bailey, Spencer
2012-02-01
A dynamic charge description master (CDM) integrates information from clinical ancillary systems into the charge-capture process, so an organization can reduce its reliance on the patient accounting system as the sole source of billing information. By leveraging the information from electronic ancillary systems, providers can eliminate the need for paper charge-capture forms and see increased accuracy and efficiency in the maintenance of billing information. Before embarking on a dynamic CDM strategy, organizations should first determine their goals for implementing an EHR system, include revenue cycle leaders on the EHR implementation team, and carefully weigh the pros and cons of CDM design decisions.
Impact of a clinical decision making module on the attitudes and perceptions of surgical trainees.
Bhatt, Nikita R; Doherty, Eva M; Mansour, Ehab; Traynor, Oscar; Ridgway, Paul F
2016-09-01
Decision making, a cognitive non-technical skill, is a key element for clinical practice in surgery. Specific teaching about methods in clinical decision making (CDM) is a very recent addition to surgical training curricula in the UK and Ireland. Baseline trainee opinion on decision-making modules is unknown. The Royal College of Surgeons in Ireland's postgraduate training boot camp inaugural CDM module was investigated to elucidate the impact on the attitudes of CDM-naïve trainees. Three standardized two-hour workshops for three trainee groups were delivered. The trainees were assessed by an anonymous questionnaire before and after the module. Change in attitude of the trainees was determined by comparing Likert scale ratings using the Wilcoxon signed-rank test. Fifty-seven newly appointed basic surgical trainees attended these workshops. A statistically significant rise in the proportion of candidates recognizing the importance of being taught CDM skills (P = 0.002) revealed the positive impact of the module, as did the increased understanding of different aspects of CDM such as shared decision making (P = 0.035) and different styles of decision making (P = 0.013). These observed positive changes in trainee understanding and attitude toward CDM teaching support the adoption of standardized modules into the curricula. More study is needed to define whether these modules will have measurable sustained enhancements of CDM skills. © 2016 Royal Australasian College of Surgeons.
TOUGH2_MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large simulation problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we will review the development of TOUGH2_MP, and discuss the basic features, modules, and their applications.
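Massively parallel simulators of this kind partition the grid across processors and exchange ghost (halo) values between neighbouring subdomains at each step. The mpi4py sketch below shows that exchange pattern on a 1-D slab decomposition; it is a generic illustration, not TOUGH2_MP's actual scheme (run with, e.g., mpirun -n 4 python halo.py; the filename is ours).

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one slab of a 1-D grid plus a ghost cell at each end.
n_local = 100
u = np.zeros(n_local + 2)
u[1:-1] = rank                      # dummy interior data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost cells with both neighbours; Sendrecv pairs the messages to avoid deadlock.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

print(f"rank {rank}: left ghost = {u[0]}, right ghost = {u[-1]}")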
COLA with scale-dependent growth: applications to screened modified gravity models
NASA Astrophysics Data System (ADS)
Winther, Hans A.; Koyama, Kazuya; Manera, Marc; Wright, Bill S.; Zhao, Gong-Bo
2017-08-01
We present a general parallelized and easy-to-use code to perform numerical simulations of structure formation using the COLA (COmoving Lagrangian Acceleration) method for cosmological models that exhibit scale-dependent growth at the level of first and second order Lagrangian perturbation theory. For modified gravity theories we also include screening using a fast approximate method that covers all the main examples of screening mechanisms in the literature. We test the code by comparing it to full simulations of two popular modified gravity models, namely f(R) gravity and nDGP, and find good agreement in the modified gravity boost-factors relative to ΛCDM even when using a fairly small number of COLA time steps.
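The boost factor mentioned above is conventionally the ratio of the modified-gravity matter power spectrum to its ΛCDM counterpart, B(k) = P_MG(k)/P_ΛCDM(k). A toy numpy sketch of that ratio, using placeholder spectra rather than COLA output:

```python
# Sketch of the quantity being compared above: the boost factor
# B(k) = P_MG(k) / P_LCDM(k). The arrays are placeholders, not COLA output.
import numpy as np

k      = np.logspace(-2, 1, 50)                   # h/Mpc, illustrative grid
p_lcdm = 1.0e4 * k / (1.0 + (k / 0.1)**3)         # toy LCDM spectrum
p_mg   = p_lcdm * (1.0 + 0.1 * k / (1.0 + k))     # toy enhanced spectrum

boost = p_mg / p_lcdm
print(boost.max())                                # maximum fractional enhancement
```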
Self-Organized Service Negotiation for Collaborative Decision Making
Zhang, Bo; Huang, Zhenhua; Zheng, Ziming
2014-01-01
This paper proposes a self-organized service negotiation method for CDM that operates in an intelligent and automated manner. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation for the CDM organization, and negotiation-based selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228
Schorr, Ethlynn S; Sidou, Farzi; Kerrouche, Nabil
2012-09-01
To assess the benefit of adjunctive use of an SPF 30 moisturizing lotion in reducing the local side effects associated with a topical tretinoin cream. This was a randomized, investigator/evaluator-blinded, split-face comparison in subjects with healthy skin. Subjects applied tretinoin cream 0.05% once daily to the whole face and Cetaphil® Dermacontrol Moisturizer (CDM) once daily to one side of the face based on randomization. Tolerability, preference, and skin hydration were evaluated each week, and a cosmetic acceptability questionnaire regarding CDM was completed at the end of the study. The majority (about 83% to 86%) of subjects experienced skin irritation on both sides of their face, though it was predominantly mild on the CDM + tretinoin treated side. Tolerability preferences favored the CDM + tretinoin side. Adjunctive use of CDM with a topical tretinoin cream improves tolerance of the treatment.
Effect of quality chronic disease management for alcohol and drug dependence on addiction outcomes.
Kim, Theresa W; Saitz, Richard; Cheng, Debbie M; Winter, Michael R; Witas, Julie; Samet, Jeffrey H
2012-12-01
We examined the effect of the quality of primary care-based chronic disease management (CDM) for alcohol and/or other drug (AOD) dependence on addiction outcomes. We assessed quality using (1) a visit-frequency-based measure and (2) a self-reported assessment measuring alignment with the chronic care model. The visit-frequency-based measure had no significant association with addiction outcomes. The self-reported measure of care, when care was received at a CDM clinic, was associated with lower drug addiction severity. The self-reported assessment of care from any healthcare source (CDM clinic or elsewhere) was associated with lower alcohol addiction severity and with abstinence. These findings suggest that high-quality CDM for AOD dependence may improve addiction outcomes. Quality measures based on alignment with the chronic care model may better capture features of effective CDM care than a visit-frequency measure. Copyright © 2012 Elsevier Inc. All rights reserved.
Effective crisis decision-making.
Kaschner, Holger
2017-01-01
When an organisation's reputation is at stake, crisis decision-making (CDM) is challenging and prone to failure. Most CDM schemes are strong at certain aspects of the overall CDM process, but almost none are strong at all of them. This paper defines criteria for good CDM schemes, analyses common approaches and introduces an alternative, stakeholder-driven scheme. Focusing on the most important stakeholders and directing any actions to preserve the relationships with them is crucial. When doing so, the interdependencies between the stakeholders must be identified and considered. Without knowledge of these sometimes less than obvious links, well-meaning actions can cause adverse effects, so a cross-check of the impacts of potential options is recommended before making the final decision. The paper also gives recommendations on how to implement these steps at any organisation in order to enhance the quality of CDM and thus protect the organisation's reputation.
Parallel computation and the Basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1992-12-16
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
Linear perturbation theory for tidal streams and the small-scale CDM power spectrum
NASA Astrophysics Data System (ADS)
Bovy, Jo; Erkal, Denis; Sanders, Jason L.
2017-04-01
Tidal streams in the Milky Way are sensitive probes of the population of low-mass dark matter subhaloes predicted in cold dark matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a stream segment by undoing the effect of all relevant impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 M⊙, accounting for the stream's internal dispersion and overlapping impacts. We study the statistical properties of density and track fluctuations with large suites of simulations of the effect of subhalo fly-bys. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher mass subhaloes producing power only on large scales, while lower mass subhaloes cause structure on smaller scales. We also find significant density and track bispectra that are observationally accessible. We further demonstrate that different projections of the track all reflect the same pattern of perturbations, facilitating their observational measurement. We apply this formalism to data for the Pal 5 stream and make a first rigorous determination of 10^{+11}_{-6} dark matter subhaloes with masses between 10^{6.5} and 10^9 M⊙ within 20 kpc from the Galactic centre [corresponding to 1.4^{+1.6}_{-0.9} times the number predicted by CDM-only simulations or to fsub(r < 20 kpc) ≈ 0.2 per cent] assuming that the Pal 5 stream is 5 Gyr old. Improved data will allow measurements of the subhalo mass function down to 10^5 M⊙, thus definitively testing whether dark matter is clumpy on the smallest scales relevant for galaxy formation.
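The one-dimensional density power spectrum referred to above can be formed, in the simplest reading, by Fourier transforming the relative density fluctuations along the stream coordinate. A hedged numpy sketch with synthetic data (not the paper's pipeline):

```python
# Hedged sketch with synthetic data: 1D power spectrum of density fluctuations
# delta = rho/<rho> - 1 sampled along a stream coordinate.
import numpy as np

n = 512
angle = np.linspace(0.0, 1.0, n, endpoint=False)      # coordinate along the stream
rng = np.random.default_rng(0)
density = 1.0 + 0.05 * rng.standard_normal(n)          # toy perturbed density

delta = density / density.mean() - 1.0
power = np.abs(np.fft.rfft(delta))**2 / n              # 1D power spectrum
freq  = np.fft.rfftfreq(n, d=angle[1] - angle[0])
print(freq[1], power[1])                               # largest-scale mode
```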
Where the world stands still: turnaround as a strong test of ΛCDM cosmology
NASA Astrophysics Data System (ADS)
Pavlidou, V.; Tomaras, T. N.
2014-09-01
Our intuitive understanding of cosmic structure formation works best in scales small enough so that isolated, bound, relaxed gravitating systems are no longer adjusting their radius; and large enough so that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc^2)^{1/3}, with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
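A quick numerical illustration of the quoted bound R_max = (3GM/Λc^2)^{1/3}, using an assumed Planck-era value of Λ (not a number taken from the paper) and a cluster-scale mass:

```python
# Numerical illustration of R_max = (3 G M / (Lambda c^2))^(1/3).
# The cosmological constant below is an assumed value (~1.1e-52 m^-2),
# not a figure from the paper.
G      = 6.674e-11        # m^3 kg^-1 s^-2
c      = 2.998e8          # m/s
LAMBDA = 1.1e-52          # m^-2 (assumed)
M_SUN  = 1.989e30         # kg
MPC    = 3.086e22         # m

M = 1.0e15 * M_SUN        # a rich-cluster-scale mass
r_max = (3.0 * G * M / (LAMBDA * c**2)) ** (1.0 / 3.0)
print(r_max / MPC, "Mpc")  # roughly 11 Mpc for this mass
```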
Engineered cartilage using primary chondrocytes cultured in a porous cartilage-derived matrix
Cheng, Nai-Chen; Estes, Bradley T; Young, Tai-Horng; Guilak, Farshid
2011-01-01
Aim: To investigate the cell growth, matrix accumulation and mechanical properties of neocartilage formed by human or porcine articular chondrocytes on a porous, porcine cartilage-derived matrix (CDM) for use in cartilage tissue engineering. Materials & methods: We examined the physical properties, cell infiltration and matrix accumulation in different formulations of CDM and selected a CDM made of homogenized cartilage slurry as an appropriate scaffold for long-term culture of human and porcine articular chondrocytes. Results: The CDM scaffold supported growth and proliferation of both human and porcine chondrocytes. Histology and immunohistochemistry showed abundant cartilage-specific macromolecule deposition at day 28. Human chondrocytes migrated throughout the CDM, showing a relatively homogeneous distribution of new tissue accumulation, whereas porcine chondrocytes tended to form a proteoglycan-rich layer primarily on the surfaces of the scaffold. Human chondrocyte-seeded scaffolds had a significantly lower aggregate modulus and hydraulic permeability at day 28. Conclusions: These data show that a scaffold derived from native porcine articular cartilage can support neocartilage formation in the absence of exogenous growth factors. The overall characteristics and properties of the constructs depend on factors such as the concentration of CDM used, the porosity of the scaffold, and the species of chondrocytes. PMID:21175289
Consistency of the Planck CMB data and ΛCDM cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman; Hazra, Dhiraj Kumar, E-mail: shafieloo@kasi.re.kr, E-mail: dhiraj.kumar.hazra@apc.univ-paris7.fr
We test the consistency between the Planck temperature and polarization power spectra and the concordance model of Λ Cold Dark Matter cosmology (ΛCDM) within the framework of Crossing statistics. We find that the Planck TT best-fit ΛCDM power spectrum is completely consistent with the EE power spectrum data, while the EE best-fit ΛCDM power spectrum is not consistent with the TT data. However, this does not point to any systematic or model-data discrepancy, since in the Planck EE data the uncertainties are much larger than in the TT data. We also investigate the possibility of any deviation from the ΛCDM model by analyzing the Planck 2015 data. Results from the TT, TE and EE data analysis indicate that no deviation is required beyond the flexibility of the concordance ΛCDM model. Our analysis thus rules out any strong evidence for physics beyond the concordance model in the Planck spectra. We also report a mild amplitude difference when comparing temperature and polarization data, where the temperature data seem to have slightly lower amplitude than expected (consistently at all multipoles), assuming that both temperature and polarization data are realizations of the same underlying cosmology.
Editorial Comments, 1974-1986: The Case For and Against the Use of Computer-Assisted Decision Making
Weaver, Robert R.
1987-01-01
Journal editorials are an important medium for communicating information about medical innovations. Evaluative statements contained in editorials pertain to the innovation's technical merits, as well as its probable economic, social and political, and ethical consequences. This information will either promote or impede the subsequent diffusion of innovations. This paper analyzes the evaluative information contained in thirty editorials that pertain to the topic of computer-assisted decision making (CDM). Most editorials agree that CDM technology is effective and economical in performing routine clinical tasks; controversy surrounds the use of more sophisticated CDM systems for complex problem solving. A few editorials argue that the innovation should play an integral role in transforming the established health care system. Most, however, maintain that it can or should be accommodated within the existing health care framework. Finally, while few editorials discuss the ethical ramifications of CDM technology, those that do suggest that it will contribute to more humane health care. The editorial analysis suggests that CDM technology aimed at routine clinical tasks will experience rapid diffusion. In contrast, the diffusion of more sophisticated CDM systems will, in the foreseeable future, likely be sporadic at best.
Hypercluster Parallel Processor
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela
1992-01-01
Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.
Second Evaluation of Job Queuing/Scheduling Software. Phase 1
NASA Technical Reports Server (NTRS)
Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)
1997-01-01
The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes this second evaluation and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking; however, definite progress has been made by the vendors to correct the deficiencies. This report is supplemented by a WWW interface to the data collected, to aid other sites in extracting the evaluation information on specific requirements of interest.
Ittenbach, Richard F; Baker, Cynthia L; Corsmo, Jeremy J
2014-05-01
Standard operating procedures (SOPs) were once considered the province of the pharmaceutical industry but are now viewed as a key component of quality assurance programs. To address variability and increase the rigor of clinical data management (CDM) operations, the Cincinnati Children's Hospital Medical Center (CCHMC) decided to create CDM SOPs. In response to this challenge, and as part of a broader institutional initiative, the CCHMC leadership established an executive steering committee to oversee the development and implementation of CDM SOPs. This resulted in the creation of a quality assurance review process with three review panels: an SOP development team (16 clinical data managers and technical staff members), a faculty review panel (8 senior faculty and administrators), and an expert advisory panel (3 national CDM experts). This innovative, tiered review process helped ensure that the new SOPs would be created and implemented in accord with good CDM practices and standards. Twelve fully vetted, institutionally endorsed SOPs and one CDM template resulted from the intensive, iterative 10-month process (December 2011 to early October 2012). Phased implementation, which incorporated the CDM SOPs into the existing audit process for certain types of clinical research studies, was on schedule at the time of this writing. Once CCHMC researchers have had the opportunity to use the SOPs over time and across a broad range of research settings and conditions, the SOPs will be revisited and revalidated.
The nursing contribution to chronic disease management: a discussion paper.
Forbes, Angus; While, Alison
2009-01-01
This paper explores the nature of the nursing contribution to chronic disease management (CDM) and identifies a number of key nursing activities within CDM both at the individual patient and care system levels. The activities were identified following a detailed review of the literature (160 reports and studies of nursing practice) relating to three tracer disorders: diabetes, chronic obstructive pulmonary disease and multiple sclerosis. The paper examines these activities collectively to generate models expressing some of the core functions of nursing within CDM. The paper illustrates some of the changing characteristics of nursing roles within CDM. More fundamentally, the paper questions the position of nursing in relation to the technologies that define CDM systems and proposes four levels of contribution: the nurse as technology; the nurse as technologist; the nurse as system engineer; and the nurse as architect. These different levels reflect distinctions in the nature of the nursing gaze and power relations within the health care workforce. The paper also highlights how nurses are failing to develop the evidence for their practice in CDM. The paper concludes that there is a need for some clear principles to guide clinical practice and encourage innovation in CDM. It is argued that the principles should not be rule-bound but define a distinctive nursing gaze that will position the nursing profession within the health care system and in relation to other professions. The gaze should incorporate the needs of the individual patient and the care system that they inhabit.
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide online tutorials, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
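The "poor man's parallelization" described above, running whole legacy programs as independent processes, can be sketched with a small process pool; the command and file names below are placeholders, not part of BioNode:

```python
# Hedged sketch of "poor man's parallelization": launch a legacy command-line tool
# as independent processes, one per input file.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_job(path):
    # each job is a whole external program executed in its own process
    return subprocess.run(["echo", "analysing", path],   # stand-in for e.g. a BLAST command
                          capture_output=True, text=True).stdout

if __name__ == "__main__":
    inputs = ["sample1.fasta", "sample2.fasta", "sample3.fasta"]  # hypothetical inputs
    with ProcessPoolExecutor(max_workers=3) as pool:
        for out in pool.map(run_job, inputs):
            print(out.strip())
```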
AZTEC. Parallel Iterative method Software for Solving Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.; Shadid, J.; Tuminaro, R.
1995-07-01
AZTEC is an iterative-solver library that greatly simplifies the parallelization process when solving the linear system of equations Ax = b, where A is a user-supplied n × n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows for easy creation of distributed sparse unstructured matrices for parallel solution.
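For orientation only: the task AZTEC distributes across processors is the iterative solution of a sparse system Ax = b. The sketch below uses SciPy's conjugate-gradient solver on a single process and is not the AZTEC API:

```python
# Conceptual sketch only (SciPy, not the AZTEC library): iterative solution
# of a sparse linear system A x = b with a Krylov method.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian
b = np.ones(n)

x, info = cg(A, b)                     # conjugate-gradient solver
print("converged" if info == 0 else "not converged",
      np.linalg.norm(A @ x - b))       # residual norm
```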
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
ERIC Educational Resources Information Center
White, Krista Alaine
2011-01-01
Clinical decision making (CDM) is a cornerstone skill for nurses. Self-confidence and anxiety are two affective influences that impact the learning and adeptness of CDM. Currently, no instruments exist that measure perceived self-confidence and anxiety level of undergraduate nursing students related to CDM. The purpose of this research was to…
Tanaka, Yoshiki; Yokoyama, Sho; Horai, Rie; Kojima, Takashi; Hiroyuki, Sato; Kato, Yukihito; Takagi, Mari; Nakamura, Hideki; Tanaka, Kiyoshi; Ichikawa, Kazuo; Tanabe, Shoko
2018-03-01
To evaluate the color visual acuity (CVA) of young healthy subjects using colored Landolt rings and the effect of background luminance level on the CVA. We measured the CVA of 20 young healthy subjects (age: 23.8 ± 3.8 years) for different colors using a computer and a liquid crystal display, with 15 Landolt ring colors (30 cd/m²), first against a background luminance of 30 cd/m² and then 100 cd/m². We then used different background luminance levels (15-50 cd/m²) with four Landolt ring colors (red, green-yellow, green, and blue-green) to evaluate the effect of the background luminance level on CVA. The CVA differed significantly among the colors at a background luminance of 30 cd/m² (p < 0.0001). Green-yellow and blue-purple had poor CVA (high LogMAR values; 0.808 ± 0.107 and 0.633 ± 0.150, respectively) at a background luminance of 30 cd/m² (the same luminance as the Landolt rings). There were no significant differences in the CVAs among the colors at a background luminance of 100 cd/m² (p = 0.5999). There was no significant difference in the CVA between a background luminance of 30 cd/m² and other luminance levels ranging from 28 to 32 cd/m² for red, green-yellow, green, and blue-green. The results reveal that the background luminance of Landolt rings affects the CVA. Distinctive CVAs for each color are measured by equalizing the luminance between the Landolt ring and the background. We consider that the poor CVAs of these colors reflect the visual function of the S-cone, because green-yellow and blue-purple are included in the confusion locus of the tritan axis on the chromaticity diagram. We believe that CVA assessment may be useful for individuals who have known or suspected ocular dysfunction or color vision deficiencies.
Developing software to use parallel processing effectively. Final report, June-December 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Center, J.
1988-10-01
This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing, since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.
Software Development as Music Education Research
ERIC Educational Resources Information Center
Brown, Andrew R.
2007-01-01
This paper discusses how software development can be used as a method for music education research. It explains how software development can externalize ideas, stimulate action and reflection, and provide evidence to support the educative value of new software-based experiences. Parallels between the interactive software development process and…
You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong
2017-01-01
It is increasingly necessary to generate medical evidence applicable to Asian people as compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate generating high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, with an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterization and assess data quality. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through its incorporation into the OHDSI research network.
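A common data model makes simple analytic queries portable across converted databases. The sketch below illustrates the idea against an in-memory stand-in for an OMOP-CDM person table; the table and column names follow the public OMOP specification, and the gender concept identifiers are the commonly used codes, both stated here as assumptions rather than details from the paper:

```python
# Hedged sketch of a standardized query over an OMOP-CDM-style person table.
# An in-memory SQLite database stands in for a real converted CDM database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_id INTEGER, gender_concept_id INTEGER, year_of_birth INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [(1, 8507, 1970), (2, 8532, 1985), (3, 8532, 1990)])  # toy rows; 8507/8532 are the usual male/female concept IDs

# the kind of query a common data model makes portable across sites
for concept_id, n in conn.execute(
        "SELECT gender_concept_id, COUNT(*) FROM person GROUP BY gender_concept_id"):
    print(concept_id, n)
```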
Edelen, Bonnie Gilbert; Bell, Alexandra Alice
2011-08-01
The purpose of this study was to address the need for effective educational interventions to promote students' clinical decision making (CDM) within clinical practice environments. Researchers used a quasi-experimental, non-equivalent groups, posttest-only design to assess differences in CDM ability between intervention group students who participated in analogy-guided learning activities and control group students who participated in traditional activities. For the intervention, analogy-guided learning activities were incorporated into weekly group discussions, reflective journal writing, and questioning with clinical faculty. The researcher-designed Assessment of Clinical Decision Making Rubric was used to assess indicators of CDM ability in all students' reflective journal entries. Results indicated that the intervention group demonstrated significantly higher levels of CDM ability in their journals compared with the control group (ES(sm) = 0.52). Recommendations provide nurse educators with strategies to maximize students' development of CDM ability, better preparing students for the demands they face when they enter the profession. Copyright 2011, SLACK Incorporated.
Shi, Xiu-Juan; Chen, Gao-Jian; Wang, Yan-Wei; Yuan, Lin; Zhang, Qiang; Haddleton, David M; Chen, Hong
2013-11-19
Surface-initiated SET-LRP was used to synthesize polymer brushes containing N-isopropylacrylamide and adamantyl acrylate using Cu(I)Cl/Me6-TREN as the precursor catalyst and isopropanol/H2O as the solvent. Different reaction conditions were explored to investigate the influence of various parameters (reaction time, catalyst concentration, monomer concentration) on the polymerization. Copolymers with variable 1-adamantan-1-ylmethyl acrylate (Ada) content and comparable thickness were synthesized on silicon surfaces. Furthermore, the hydrophilic and bioactive molecule β-cyclodextrin-(mannose)7 (CDm) was synthesized and complexed with adamantane via host-guest interaction. The effects of adamantane alone, and of CDm together with adamantane, on the wettability and thermoresponsive properties of the surface were investigated in detail. Experimental and molecular structure analysis showed that Ada at a certain content, together with CDm, has the greatest impact on surface wettability. When the Ada content was high (20%), copolymer-CDm surfaces showed almost no CDm complexed with Ada as a result of steric hindrance.
The least-action method, cold dark matter, and omega
NASA Technical Reports Server (NTRS)
Dunn, A. M.; Laflamme, R.
1995-01-01
Peebles has suggested an interesting technique, called the least-action method, to trace the positions of galaxies back in time. This method, applied to the Local Group galaxies, seems to indicate that we live in an Ω ≈ 0.1 universe. We have studied a cold dark matter (CDM) N-body simulation with Ω = 0.2 and H = 50 km/s/Mpc and compared trajectories traced back by the least-action method with those given by the centers of mass of the CDM halos. We show that the agreement between these sets of trajectories is at best qualitative. We also show that the line-of-sight peculiar velocities of halos are underestimated. This discrepancy is due to orphans, i.e., CDM particles which do not end up in halos. We vary the value of Ω in the least-action method until the line-of-sight velocities agree with the CDM ones. The best value of this Ω underestimates that of the CDM simulation by a factor of 4-5.
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task in both the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, the Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and the UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and the Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
Growth Media Affect Assessment of Antimicrobial Activity of Plant-Derived Polyphenols.
Xu, Xin; Ou, Zhen M; Wu, Christine D
2018-01-01
This study aimed to investigate the effects of different microbial growth media on the laboratory assessment of antimicrobial activity of natural polyphenolic compounds. The inhibition of the tea polyphenol EGCG on growth of selected oral microorganisms was evaluated in complex media and a protein-free chemically defined medium (CDM). Other antimicrobial agents (polyphenolic grape seed extract, plant alkaloid berberine, methyl salicylate, and chlorhexidine gluconate) were also tested in the study. The presence of proteins and their effects on the antimicrobial activity of EGCG were investigated by the addition of BSA to the CDM. The MICs of EGCG against test oral microorganisms were 4 to 64 times higher in complex media than in CDM. The polyphenolic grape seed extract exhibited similar discrepancies. However, the MICs of the nonpolyphenolic compounds (berberine, methyl salicylate, and chlorhexidine) were not significantly different between the two growth media. The MIC of EGCG against S. mutans UA159 in CDM with added BSA was 16 times higher than that in CDM alone. Therefore, nonproteinaceous CDM should be used to avoid interference of proteins with the active ingredients when testing the antimicrobial activity of plant-derived polyphenolic compounds against microorganisms. This will also minimize the discrepancies noted in results obtained by different investigators.
Janke, Leandro; Lima, André O S; Millet, Maurice; Radetski, Claudemir M
2013-01-01
In Brazil, solid waste disposal sites have operated without consideration of environmental criteria, and these areas are characterized by methane (CH4) emissions during the anaerobic degradation of organic matter. The United Nations has made efforts to control this situation through the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol, under which projects that seek to reduce emissions of greenhouse gases (GHG) can be financially rewarded through Certified Emission Reductions (CERs) if they respect the requirements established by the Clean Development Mechanism (CDM), such as the use of methodologies approved by the CDM Executive Board (CDM-EB). Thus, a methodology was developed according to the CDM standards for the aeration, excavation and composting of closed Municipal Solid Waste (MSW) landfills, which was submitted to the CDM-EB for assessment and, after its approval, applied to a real case study in Maringá City (Brazil), with a view to avoiding negative environmental impacts due to the production of methane and leachates even after closure. This paper describes the establishment of this CDM-EB-approved methodology for determining baseline emissions, project emissions and the resultant emission reductions when appropriate aeration, excavation and composting practices are applied at closed MSW landfills. A further result obtained through applying the methodology to the landfill case study was that an ex-ante emission reduction of 74,013 tCO2 equivalent could be achieved if the proposed CDM project activity were implemented.
Efficacy of a chronic disease management model for patients with chronic liver failure.
Wigg, Alan J; McCormick, Rosemary; Wundke, Rachel; Woodman, Richard J
2013-07-01
Despite the economic impacts of chronic liver failure (CLF) and the success of chronic disease management (CDM) programs in routine clinical practice, there have been no randomized controlled trials of CDM for CLF. We investigated the efficacy of CDM programs for CLF patients in a prospective, controlled trial. Sixty consecutive patients with cirrhosis and complications from CLF were assigned randomly to groups given intervention (n = 40) or usual care (n = 20), from 2009 to 2010. The 12-month intervention comprised 4 CDM components: delivery system redesign, self-management support, decision support, and clinical information systems. The primary outcome was the number of days spent in a hospital bed for liver-related reasons. Secondary outcomes were rates of other hospital use measures, rate of attendance at planned outpatient care, disease severity, quality of life, and quality of care. The intervention did not reduce the number of days patients spent in hospital beds for liver-related reasons, compared with usual care (17.8 vs 11.0 bed days/person/y, respectively; incidence rate ratio, 1.6; 95% confidence interval, 0.5-4.8; P = .39), or affect other measures of hospitalization. Patients given the intervention had a 30% higher rate of attendance at outpatient care (incidence rate ratio, 1.3; 95% confidence interval, 1.1-1.5; P = .004) and significant increases in quality of care, based on adherence to hepatoma screening, osteoporosis and vaccination guidelines, and referral to transplant centers (P < .05 for all). In a pilot study to determine the efficacy of CDM for patients with CLF, patients receiving CDM had significant increases in attendance at outpatient centers and quality of care, compared with patients who did not receive CDM. However, CDM did not appear to reduce hospital admission rates or disease severity or improve patient quality of life. Larger trials with longer follow-up periods are required to confirm these findings and assess cost effectiveness. Copyright © 2013 AGA Institute. Published by Elsevier Inc. All rights reserved.
The role of emotion in clinical decision making: an integrative literature review.
Kozlowski, Desirée; Hutchinson, Marie; Hurley, John; Rowley, Joanne; Sutherland, Joanna
2017-12-15
Traditionally, clinical decision making has been perceived as a purely rational and cognitive process. Recently, a number of authors have linked emotional intelligence (EI) to clinical decision making (CDM) and calls have been made for an increased focus on EI skills for clinicians. The objective of this integrative literature review was to identify and synthesise the empirical evidence for a role of emotion in CDM. A systematic search of the bibliographic databases PubMed, PsychINFO, and CINAHL (EBSCO) was conducted to identify empirical studies of clinician populations. Search terms were focused to identify studies reporting clinician emotion OR clinician emotional intelligence OR emotional competence AND clinical decision making OR clinical reasoning. Twenty-three papers were retained for synthesis. These represented empirical work from qualitative, quantitative, and mixed-methods approaches and comprised work with a focus on experienced emotion and on skills associated with emotional intelligence. The studies examined nurses (10), physicians (7), occupational therapists (1), physiotherapists (1), mixed clinician samples (3), and unspecified infectious disease experts (1). We identified two main themes in the context of clinical decision making: the subjective experience of emotion; and the application of emotion and cognition in CDM. Sub-themes under the subjective experience of emotion were: emotional response to contextual pressures; emotional responses to others; and intentional exclusion of emotion from CDM. Under the application of emotion and cognition in CDM, sub-themes were: compassionate emotional labour - responsiveness to patient emotion within CDM; interdisciplinary tension regarding the significance and meaning of emotion in CDM; and emotion and moral judgement. Clinicians' experienced emotions can and do affect clinical decision making, although acknowledgement of that is far from universal. Importantly, this occurs in the absence of a clear theoretical framework, and educational preparation may not reflect the importance of emotional competence to effective CDM.
Cosmological test with the QSO Hubble diagram
NASA Astrophysics Data System (ADS)
López-Corredoira, M.; Melia, F.; Lusso, E.; Risaliti, G.
2016-03-01
A Hubble diagram (HD) has recently been constructed in the redshift range 0 ≲ z ≲ 6.5 using a nonlinear relation between the ultraviolet (UV) and X-ray luminosities of quasi-stellar objects (QSOs). The Type Ia supernova (SN) HD has already provided a high-precision test of cosmological models, but the fact that the QSO distribution extends well beyond the supernova range (z ≲ 1.8) in principle provides us with an important complementary diagnostic whose significantly greater leverage in z can impose tighter constraints on the distance versus redshift relationship. In this paper, we therefore perform an independent test of nine different cosmological models, among which six are expanding, while three are static. Many of these are disfavored by other kinds of observations (including the aforementioned Type Ia SNe). We wish to examine whether the QSO HD confirms or rejects these earlier conclusions. We find that four of these models (Einstein-de Sitter, the Milne universe, the static universe with simple tired light and the static universe with plasma tired light) are excluded at the > 99% C.L. The quasi-steady state model is excluded at > 95% C.L. The remaining four models (ΛCDM/wCDM, the Rh = ct universe, the Friedmann open universe and a static universe with a linear Hubble law) all pass the test. However, only ΛCDM/wCDM and Rh = ct also pass the Alcock-Paczyński (AP) test. The optimized parameters in ΛCDM/wCDM are Ωm = 0.20^{+0.24}_{-0.20} and wde = -1.2^{+1.6}_{-∞} (the dark-energy equation of state). Combined with the AP test, these values become Ωm = 0.38^{+0.20}_{-0.19} and wde = -0.28^{+0.52}_{-0.40}. But whereas this optimization of parameters in ΛCDM/wCDM creates some tension with their concordance values, the Rh = ct universe has the advantage of fitting the QSO and AP data without any free parameters.
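The distance-redshift relation being constrained above can be written, for a flat wCDM model, as d_L(z) = (1+z)(c/H0)∫_0^z dz'/E(z') with E(z) = [Ωm(1+z)^3 + (1-Ωm)(1+z)^{3(1+w)}]^{1/2}. A short numerical sketch, taking the combined best-fit values quoted above and an assumed H0 = 70 km/s/Mpc:

```python
# Sketch of the flat wCDM luminosity distance. H0 is an assumed value;
# Om and w are the combined best-fit numbers quoted in the abstract.
import numpy as np
from scipy.integrate import quad

c_km_s, H0 = 299792.458, 70.0          # speed of light (km/s), H0 (km/s/Mpc, assumed)
Om, w = 0.38, -0.28

def E(z):
    return np.sqrt(Om * (1 + z)**3 + (1 - Om) * (1 + z)**(3 * (1 + w)))

def d_L(z):                            # luminosity distance in Mpc (flat universe)
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (c_km_s / H0) * integral

for z in (0.5, 2.0, 6.0):
    print(z, round(d_L(z), 1), "Mpc")
```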
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K
2011-04-15
The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.
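The Gillespie stochastic simulation algorithm that STOCHSIMGPU parallelizes can be summarized by a serial direct-method loop: draw an exponential waiting time from the total propensity, then pick which reaction fires in proportion to its rate. A minimal Python sketch on a toy birth-death system (not the tool's code):

```python
# Minimal serial sketch of the Gillespie direct method (SSA) on a toy
# birth-death system; STOCHSIMGPU runs many such realizations in parallel on GPUs.
import numpy as np

rng = np.random.default_rng(1)
x, t, t_end = 10, 0.0, 50.0            # molecule count, time, time horizon
k_birth, k_death = 1.0, 0.1

while t < t_end:
    rates = np.array([k_birth, k_death * x])   # propensities of the two reactions
    total = rates.sum()
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)          # waiting time to the next reaction
    if rng.random() < rates[0] / total:        # choose which reaction fires
        x += 1
    else:
        x -= 1

print("final copy number:", x)
```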
Component Technology for High-Performance Scientific Simulation Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epperly, T; Kohn, S; Kumfert, G
2000-11-09
We are developing scientific software component technology to manage the complexity of modern, parallel simulation software and increase the interoperability and re-use of scientific software packages. In this paper, we describe a language interoperability tool named Babel that enables the creation and distribution of language-independent software libraries using interface definition language (IDL) techniques. We have created a scientific IDL that focuses on the unique interface description needs of scientific codes, such as complex numbers, dense multidimensional arrays, complicated data types, and parallelism. Preliminary results indicate that in addition to language interoperability, this approach provides useful tools for thinking about the design of modern object-oriented scientific software libraries. Finally, we also describe a web-based component repository called Alexandria that facilitates the distribution, documentation, and re-use of scientific components and libraries.
Introducing parallelism to histogramming functions for GEM systems
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Pozniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article is an assessment of the potential parallelization of histogramming algorithms in a GEM detector system. Histogramming and preprocessing algorithms in MATLAB were analyzed with regard to adding parallelism. A preliminary implementation of parallel strip histogramming resulted in a speedup. An analysis of the algorithms' parallelizability is presented, together with an overview of potential hardware and software support for implementing the parallel algorithm.
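The parallel strip-histogramming idea discussed above amounts to partitioning the event samples, histogramming each chunk independently, and summing the partial histograms, since histogram counts add exactly. A hedged Python sketch of that scheme (not the GEM/MATLAB implementation):

```python
# Hedged sketch of parallel histogramming: partition the samples, histogram each
# chunk in its own process, then sum the partial histograms.
import numpy as np
from multiprocessing import Pool

EDGES = np.linspace(0.0, 1.0, 65)                     # 64 illustrative strip bins

def partial_hist(chunk):
    counts, _ = np.histogram(chunk, bins=EDGES)
    return counts

if __name__ == "__main__":
    events = np.random.default_rng(2).random(1_000_000)  # toy event positions
    chunks = np.array_split(events, 8)
    with Pool(processes=8) as pool:
        total = sum(pool.map(partial_hist, chunks))       # histograms add exactly
    print(total.sum())                                    # equals the number of events
```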
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
JSD: Parallel Job Accounting on the IBM SP2
NASA Technical Reports Server (NTRS)
Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)
1995-01-01
The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
A tilted cold dark matter cosmological scenario
NASA Technical Reports Server (NTRS)
Cen, Renyue; Gnedin, Nickolay Y.; Kofman, Lev A.; Ostriker, Jeremiah P.
1992-01-01
A new cosmological scenario based on CDM but with a power spectrum index of about 0.7-0.8 is suggested. This model is predicted by various inflationary models with no fine tuning. This tilted CDM model, if normalized to COBE, alleviates many problems of the standard CDM model related to both small-scale and large-scale power. A physical bias of galaxies over dark matter of about two is required to fit spatial observations.
Substructured multibody molecular dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary Stephen; Stevens, Mark Jackson; Plimpton, Steven James
2006-11-01
We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.
Cosmic string wakes and large-scale structure
NASA Technical Reports Server (NTRS)
Charlton, Jane C.
1988-01-01
The formation of structure from infinite cosmic string wakes is modeled for a universe dominated by cold dark matter (CDM). Cross-sectional slices through the wake distribution tend to outline empty regions with diameters which are not inconsistent with the range of sizes of the voids in the CfA slice of the universe. The topology of the wake distribution is found to be spongy rather than cell-like. Correlations between CDM wakes do not extend much beyond a horizon length, so it is unlikely that CDM wakes are responsible for the correlations between clusters of galaxies. An estimate of the fraction of matter to accrete onto CDM wakes indicates that wakes could be more important in galaxy formation than previously anticipated.
Merlin - Massively parallel heterogeneous computing
NASA Technical Reports Server (NTRS)
Wittie, Larry; Maples, Creve
1989-01-01
Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
NASA Technical Reports Server (NTRS)
Psiaki, Mark L. (Inventor); Kintner, Jr., Paul M. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor)
2007-01-01
A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
NASA Technical Reports Server (NTRS)
Psiaki, Mark L. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor); Kintner, Jr., Paul M. (Inventor)
2006-01-01
A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
The Software Correlator of the Chinese VLBI Network
NASA Technical Reports Server (NTRS)
Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli
2010-01-01
The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in the CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-level parallelism and runs on SMP (Symmetric Multiprocessor) servers. Both correlators are designed for structural flexibility and scalability.
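The abstract describes the parallel decomposition only at the level of "mixed Pthreads and MPI". Purely as a structural illustration, and in Python via mpi4py rather than the Pthreads/MPI implementation described above, one plausible decomposition distributes baselines over MPI ranks and frequency channels over threads; all names and sizes below are assumptions.

```python
# Structural sketch of a hybrid MPI + thread decomposition. Everything here
# (the baseline list, the correlate() stub, the channel count) is hypothetical
# illustration, not CVN code.
from mpi4py import MPI
from concurrent.futures import ThreadPoolExecutor

def correlate(baseline, channel):
    """Placeholder for the cross-correlation of one baseline in one channel."""
    return (baseline, channel)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

baselines = [(i, j) for i in range(10) for j in range(i + 1, 10)]  # 10-station network
my_baselines = baselines[rank::size]            # distribute baselines across MPI ranks

with ThreadPoolExecutor() as pool:              # threads work on frequency channels
    results = list(pool.map(lambda args: correlate(*args),
                            [(b, ch) for b in my_baselines for ch in range(16)]))

all_results = comm.gather(results, root=0)      # root rank assembles the output
```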
Constraints on Non-flat Cosmologies with Massive Neutrinos after Planck 2015
NASA Astrophysics Data System (ADS)
Chen, Yun; Ratra, Bharat; Biesiada, Marek; Li, Song; Zhu, Zong-Hong
2016-10-01
We investigate two dark energy cosmological models (i.e., the ΛCDM and ϕCDM models) with massive neutrinos assuming two different neutrino mass hierarchies in both the spatially flat and non-flat scenarios, where in the ϕCDM model the scalar field possesses an inverse power-law potential, V(ϕ) ∝ ϕ^(−α) (α > 0). Cosmic microwave background data from Planck 2015, baryon acoustic oscillation data from 6dFGS, SDSS-MGS, BOSS-LOWZ and BOSS CMASS-DR11, the joint light-curve analysis compilation of SNe Ia apparent magnitude observations, and the Hubble Space Telescope H_0 prior, are jointly employed to constrain the model parameters. We first determine constraints assuming three species of degenerate massive neutrinos. In the spatially flat (non-flat) ΛCDM model, the sum of neutrino masses is bounded as Σm_ν < 0.165 (0.299) eV at 95% confidence level (CL). Correspondingly, in the flat (non-flat) ϕCDM model, we find Σm_ν < 0.164 (0.301) eV at 95% CL. The inclusion of spatial curvature as a free parameter results in a significant broadening of confidence regions for Σm_ν and other parameters. In the scenario where the total neutrino mass is dominated by the heaviest neutrino mass eigenstate, we obtain similar conclusions to those obtained in the degenerate neutrino mass scenario. In addition, the results show that the bounds on Σm_ν based on the two different neutrino mass hierarchies have insignificant differences in the spatially flat case for both the ΛCDM and ϕCDM models; however, the corresponding differences are larger in the non-flat case.
Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles
2016-01-01
Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, that is, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalences between the present (undertaken in 2014) and the previous survey (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
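For readers unfamiliar with the UQM, the standard unrelated-question estimator takes the form below; the design constants actually used in this survey (p, π_U, and the subsample sizes) are not stated in the abstract and are therefore left symbolic.

```latex
% Standard unrelated-question estimator (general form; the design constants used
% in the study above are not given here and would need to be taken from the paper).
% With probability p a respondent answers the sensitive question, otherwise an
% unrelated question with known "yes" probability \pi_U; \lambda is the observed
% proportion of "yes" answers among n respondents.
\hat{\pi}_{\mathrm{sensitive}} \;=\; \frac{\lambda - (1-p)\,\pi_U}{p},
\qquad
\operatorname{Var}\bigl(\hat{\pi}_{\mathrm{sensitive}}\bigr) \;=\; \frac{\lambda(1-\lambda)}{n\,p^{2}} .
```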
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yun; Ratra, Bharat; Biesiada, Marek
We investigate two dark energy cosmological models (i.e., the ΛCDM and ϕCDM models) with massive neutrinos assuming two different neutrino mass hierarchies in both the spatially flat and non-flat scenarios, where in the ϕCDM model the scalar field possesses an inverse power-law potential, V(ϕ) ∝ ϕ^(−α) (α > 0). Cosmic microwave background data from Planck 2015, baryon acoustic oscillation data from 6dFGS, SDSS-MGS, BOSS-LOWZ and BOSS CMASS-DR11, the joint light-curve analysis compilation of SNe Ia apparent magnitude observations, and the Hubble Space Telescope H_0 prior, are jointly employed to constrain the model parameters. We first determine constraints assuming three species of degenerate massive neutrinos. In the spatially flat (non-flat) ΛCDM model, the sum of neutrino masses is bounded as Σm_ν < 0.165 (0.299) eV at 95% confidence level (CL). Correspondingly, in the flat (non-flat) ϕCDM model, we find Σm_ν < 0.164 (0.301) eV at 95% CL. The inclusion of spatial curvature as a free parameter results in a significant broadening of confidence regions for Σm_ν and other parameters. In the scenario where the total neutrino mass is dominated by the heaviest neutrino mass eigenstate, we obtain similar conclusions to those obtained in the degenerate neutrino mass scenario. In addition, the results show that the bounds on Σm_ν based on two different neutrino mass hierarchies have insignificant differences in the spatially flat case for both the ΛCDM and ϕCDM models; however, the corresponding differences are larger in the non-flat case.
Constraints on deviations from ΛCDM within Horndeski gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellini, Emilio; Cuesta, Antonio J.; Jimenez, Raul
2016-02-01
Recent anomalies found in cosmological datasets, such as the low multipoles of the Cosmic Microwave Background or the low redshift amplitude and growth of clustering measured by, e.g., the abundance of galaxy clusters and redshift space distortions in galaxy surveys, have motivated explorations of models beyond standard ΛCDM. Of particular interest are models where general relativity (GR) is modified on large cosmological scales. Here we consider deviations from ΛCDM+GR within the context of Horndeski gravity, which is the most general theory of gravity with second derivatives in the equations of motion. We adopt a parametrization in which the four additional Horndeski functions of time α_i(t) are proportional to the cosmological density of dark energy Ω_DE(t). Constraints on this extended parameter space using a suite of state-of-the-art cosmological observations are presented for the first time. Although the theory is able to accommodate the low multipoles of the Cosmic Microwave Background and the low amplitude of fluctuations from redshift space distortions, we find no significant tension with ΛCDM+GR when performing a global fit to recent cosmological data; there is thus no evidence against ΛCDM+GR from an analysis of the Bayesian evidence ratios of the modified gravity models with respect to ΛCDM, despite the extra parameters introduced. The posterior distributions of these extra parameters that we derive return strong constraints on any possible deviations from ΛCDM+GR in the context of Horndeski gravity. We illustrate how our results can be applied to more general frameworks of modified gravity models.
Airway Delivery of Soluble Factors from Plastic-Adherent Bone Marrow Cells Prevents Murine Asthma
Ionescu, Lavinia I.; Alphonse, Rajesh S.; Arizmendi, Narcy; Morgan, Beverly; Abel, Melanie; Eaton, Farah; Duszyk, Marek; Vliagoftis, Harissios; Aprahamian, Tamar R.; Walsh, Kenneth
2012-01-01
Asthma affects an estimated 300 million people worldwide and accounts for 1 of 250 deaths and 15 million disability-adjusted life years lost annually. Plastic-adherent bone marrow–derived cell (BMC) administration holds therapeutic promise in regenerative medicine. However, given the low cell engraftment in target organs, including the lung, cell replacement cannot solely account for the reported therapeutic benefits. This suggests that BMCs may act by secreting soluble factors. BMCs also possess antiinflammatory and immunomodulatory properties and may therefore be beneficial for asthma. Our objective was to investigate the therapeutic potential of BMC-secreted factors in murine asthma. In a model of acute and chronic asthma, intranasal instillation of BMC conditioned medium (CdM) prevented airway hyperresponsiveness (AHR) and inflammation. In the chronic asthma model, CdM prevented airway smooth muscle thickening and peribronchial inflammation while restoring blunted salbutamol-induced bronchodilation. CdM reduced lung levels of the TH2 inflammatory cytokines IL-4 and IL-13 and increased levels of IL-10. CdM up-regulated an IL-10–induced and IL-10–secreting subset of T regulatory lymphocytes and promoted IL-10 expression by lung macrophages. Adiponectin (APN), an antiinflammatory adipokine found in CdM, prevented AHR, airway smooth muscle thickening, and peribronchial inflammation, whereas the effect of CdM in which APN was neutralized or from APN knock-out mice was attenuated compared with wild-type CdM. Our study provides evidence that BMC-derived soluble factors prevent murine asthma and suggests APN as one of the protective factors. Further identification of BMC-derived factors may hold promise for novel approaches in the treatment of asthma. PMID:21903873
Airway delivery of soluble factors from plastic-adherent bone marrow cells prevents murine asthma.
Ionescu, Lavinia I; Alphonse, Rajesh S; Arizmendi, Narcy; Morgan, Beverly; Abel, Melanie; Eaton, Farah; Duszyk, Marek; Vliagoftis, Harissios; Aprahamian, Tamar R; Walsh, Kenneth; Thébaud, Bernard
2012-02-01
Asthma affects an estimated 300 million people worldwide and accounts for 1 of 250 deaths and 15 million disability-adjusted life years lost annually. Plastic-adherent bone marrow-derived cell (BMC) administration holds therapeutic promise in regenerative medicine. However, given the low cell engraftment in target organs, including the lung, cell replacement cannot solely account for the reported therapeutic benefits. This suggests that BMCs may act by secreting soluble factors. BMCs also possess antiinflammatory and immunomodulatory properties and may therefore be beneficial for asthma. Our objective was to investigate the therapeutic potential of BMC-secreted factors in murine asthma. In a model of acute and chronic asthma, intranasal instillation of BMC conditioned medium (CdM) prevented airway hyperresponsiveness (AHR) and inflammation. In the chronic asthma model, CdM prevented airway smooth muscle thickening and peribronchial inflammation while restoring blunted salbutamol-induced bronchodilation. CdM reduced lung levels of the T(H)2 inflammatory cytokines IL-4 and IL-13 and increased levels of IL-10. CdM up-regulated an IL-10-induced and IL-10-secreting subset of T regulatory lymphocytes and promoted IL-10 expression by lung macrophages. Adiponectin (APN), an antiinflammatory adipokine found in CdM, prevented AHR, airway smooth muscle thickening, and peribronchial inflammation, whereas the effect of CdM in which APN was neutralized or from APN knock-out mice was attenuated compared with wild-type CdM. Our study provides evidence that BMC-derived soluble factors prevent murine asthma and suggests APN as one of the protective factors. Further identification of BMC-derived factors may hold promise for novel approaches in the treatment of asthma.
Experimental search for hidden photon CDM in the eV mass range with a dish antenna
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, J.; Horie, T.; Inoue, Y.
2015-09-15
A search for hidden photon cold dark matter (HP CDM) using a new technique with a dish antenna is reported. From the result of the measurement, we found no evidence for the existence of HP CDM and set an upper limit on the photon-HP mixing parameter χ of ∼6×10^(−12) for the hidden photon mass m_γ = 3.1 ± 1.2 eV.
MATTER IN THE BEAM: WEAK LENSING, SUBSTRUCTURES, AND THE TEMPERATURE OF DARK MATTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahdi, Hareth S.; Elahi, Pascal J.; Lewis, Geraint F.
2016-08-01
Warm dark matter (WDM) models offer an attractive alternative to the current cold dark matter (CDM) cosmological model. We present a novel method to differentiate between WDM and CDM cosmologies, namely, using weak lensing; this provides a unique probe as it is sensitive to all of the “matter in the beam,” not just dark matter haloes and the galaxies that reside in them, but also the diffuse material between haloes. We compare the weak lensing maps of CDM clusters to those in a WDM model corresponding to a thermally produced 0.5 keV dark matter particle. Our analysis clearly shows that the weak lensing magnification, convergence, and shear distributions can be used to distinguish between CDM and WDM models. WDM models increase the probability of weak magnifications, with the differences being significant to ≳5σ, while leaving no significant imprint on the shear distribution. WDM clusters analyzed in this work are more homogeneous than CDM ones, and the fractional decrease in the amount of material in haloes is proportional to the average increase in the magnification. This difference arises from matter that would be bound in compact haloes in CDM being smoothly distributed over much larger volumes at lower densities in WDM. Moreover, the signature does not solely lie in the probability distribution function but in the full spatial distribution of the convergence field.
Crystalline lens thickness determines the perceived chromatic difference in magnification.
Chen, Yun; Schaeffel, Frank
2014-03-01
Since the origin of the high interindividual variability of the chromatic difference in retinal image magnification (CDM) in the human eye is not well understood, optical parameters that might determine its magnitude were studied in 21 healthy subjects with ages ranging from 21 to 58 years. Two psychophysical procedures were used to quantify CDM. They produced highly correlated results. First, a red and a blue square, presented on a black screen, had to be matched in size by the subjects with their right eyes. Second, a filled red and blue square, flickering on top of each other at 2 Hz, had to be adjusted in perceived brightness and then in size to minimize the impression of flicker. CDM varied widely among subjects from 0.0% to 3.6%. Biometric ocular parameters were measured with low coherence interferometry and crystalline lens tilt and decentration with a custom-built Purkinjemeter. Correlations were studied between CDM and corneal power, anterior chamber depth, lens thickness, lens tilt and lens decentration, and vitreous chamber depths. Lens thickness was found significantly correlated with CDM and accounted for 64% of its variance. Vertical lens tilt and decentration were also significantly correlated. It was also found that CDM increased by 3.5% per year, and part of this change can be attributed to the age-related increase in lens thickness.
Wei, Jianwei; Lee, Zhongping; Ondrusek, Michael; Mannino, Antonio; Tzortziou, Maria; Armstrong, Roy
2016-03-01
The spectral slope of the absorption coefficient of colored dissolved and detrital material (CDM), S_cdm (units: nm^(−1)), is an important optical parameter for characterizing the absorption spectral shape of CDM. Although highly variable in natural waters, in most remote sensing algorithms, this slope is either kept as a constant or empirically modeled with multiband ocean color in the visible domain. In this study, we explore the potential of semianalytically retrieving S_cdm with added ocean color information in the ultraviolet (UV) range between 360 and 400 nm. Unique features of hyperspectral remote sensing reflectance in the UV-visible wavelengths (360-500 nm) have been observed in various waters across a range of coastal and open ocean environments. Our data and analyses indicate that ocean color in the UV domain is particularly sensitive to the variation of the CDM spectral slope. Here, we used a synthesized data set to show that adding UV wavelengths to the ocean color measurements will improve the retrieval of S_cdm from remote sensing reflectance considerably, while the spectral band settings of past and current satellite ocean color sensors cannot fully account for the spectral variation of remote sensing reflectance. Results of this effort support the concept to include UV wavelengths in the next generation of satellite ocean color sensors.
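The slope S_cdm enters through the exponential absorption model commonly used for dissolved and detrital material; the reference wavelength λ_0 (often taken as 440 nm) is an assumption of this sketch rather than a value quoted above.

```latex
% Commonly used exponential model for CDM absorption (the reference wavelength
% \lambda_0 is an assumption of this sketch, not a value quoted in the abstract):
a_{\mathrm{cdm}}(\lambda) \;=\; a_{\mathrm{cdm}}(\lambda_0)\,
  \exp\!\bigl[-S_{\mathrm{cdm}}\,(\lambda-\lambda_0)\bigr],
\qquad S_{\mathrm{cdm}}\ \ [\mathrm{nm^{-1}}].
```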
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
Consistency of the nonflat ΛCDM model with the new result from BOSS
NASA Astrophysics Data System (ADS)
Kumar, Suresh
2015-11-01
Using 137,562 quasars in the redshift range 2.1 ≤ z ≤ 3.5 from the Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate H(z = 2.34) = 222 ± 7 km/s/Mpc of the Universe, and reported that this value is in tension with the predictions of the flat ΛCDM model at around the 2.5σ level. In this paper, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a nonflat ΛCDM model with positive curvature. We also perform an observational consistency check by considering the constraints on the nonflat ΛCDM model from Planck, WP and BAO data. We find that the nonflat ΛCDM model constrained with Planck+WP data fits the line-of-sight measurement H(z = 2.34) = 222 ± 7 km/s/Mpc better, but only at the expense of still having a poor fit to the BAO transverse measurements.
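A simple numerical check (with illustrative parameter values, not the paper's fits) shows why positive spatial curvature pulls H(z = 2.34) down toward the BOSS value.

```python
# Back-of-the-envelope check with illustrative parameter values only: positive
# spatial curvature (Omega_k < 0) lowers H(z = 2.34) relative to the flat case,
# moving it toward the BOSS value of 222 +/- 7 km/s/Mpc quoted above.
import math

def hubble(z, H0, Om, OL):
    Ok = 1.0 - Om - OL  # curvature density parameter
    return H0 * math.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + OL)

z = 2.34
print(hubble(z, 68.0, 0.30, 0.70))  # flat LCDM           -> ~234 km/s/Mpc
print(hubble(z, 68.0, 0.30, 0.75))  # closed (Omega_k<0)  -> ~229 km/s/Mpc
```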
2010-04-01
[Fragmentary extract from a briefing, "Impacts of Technological Changes in the Cyber Environment on Software/Systems Engineering," on decoupled parallel development; only citations of Barry Boehm and of Pressman, R.S., Software Engineering: A Practitioner's Approach, are recoverable from the slide footers.]
F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Saini, Subhash (Technical Monitor)
1998-01-01
Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We study the cosmological predictions of two recently proposed non-local modifications of General Relativity. Both models have the same number of parameters as ΛCDM, with a mass parameter m replacing the cosmological constant. We implement the cosmological perturbations of the non-local models into a modification of the CLASS Boltzmann code, and we make a full comparison to CMB, BAO and supernova data. We find that the non-local models fit these datasets very well, at the same level as ΛCDM. Among the vast literature on modified gravity models, this is, to our knowledge, the only example which fits data as well as ΛCDM without requiring any additional parameter. For both non-local models, parameter estimation using Planck+JLA+BAO data gives a value of H_0 slightly higher than in ΛCDM.
ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base is over 6 million lines of code organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and from project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
Experimental search for hidden photon CDM in the eV mass range with a dish antenna
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, J.; Horie, T.; Minowa, M.
2015-09-01
A search for hidden photon cold dark matter (HP CDM) using a new technique with a dish antenna is reported. From the result of the measurement, we found no evidence for the existence of HP CDM and set an upper limit on the photon-HP mixing parameter χ of ∼6×10^(−12) for the hidden photon mass m_γ = 3.1 ± 1.2 eV.
1990-09-30
[Fragmentary extract from an IISS system requirements document (SRD620340000, 30 September 1990) on the System Administrator role: keeping an audit trail for unauthorized entries, managing CDM resources, measuring CDM performance via a running log of CDM accesses by user type, auditing IISS hardware performance, assisting the IISS service and application specifiers in implementing standards recommendations, and performing audits of IISS; the remainder, including Figure B-12 (System Administrator Role), is not recoverable.]
ERIC Educational Resources Information Center
McLean, James E.; Kaufman, Alan S.
1995-01-01
The six Holland-based Interest Scale scores yielded by the Harrington-O'Shea Career Decision-Making System (CDM) (T. Harrington and A. O'Shea, 1982) were related to sex, race, and performance on the Kaufman Adolescent and Adult Intelligence Test for 254 adolescents and young adults. CDM scores did not relate to most of the variables studied, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, A.
1986-03-10
Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is how to develop tools that will make it easier for programmers to write applications that take advantage of vectorizing in the classical supercomputer and the parallelism that is emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge that every programmer must face because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing up a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
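A minimal sketch of the kind of additive split described above is given below; the precise operator form and the characteristic length ℓ are assumptions chosen to be consistent with the abstract, not equations quoted from the paper.

```latex
% One plausible realization of the additive decomposition described above
% (the exact operator form and the length scale \ell are assumptions):
\mathbf{F} \;=\; \mathbf{F}^{\mathrm{b}} \;+\; \mathbf{F}^{\mathrm{c}},
\qquad
\mathbf{F}^{\mathrm{c}} \;=\; \frac{1}{\ell}\,\boldsymbol{\delta}\otimes\mathbf{N},
% where F^b is the reduced deformation gradient of the intact bulk material,
% \delta the cohesive displacement jump across the embedded interface,
% N the interface normal in the reference configuration, and \ell a
% characteristic element length.
```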
Galbraith, Lauren; Jacobs, Casey; Hemmelgarn, Brenda R; Donald, Maoliosa; Manns, Braden J; Jun, Min
2018-01-01
Primary care providers manage the majority of patients with chronic kidney disease (CKD), although the most effective chronic disease management (CDM) strategies for these patients are unknown. We assessed the efficacy of CDM interventions used by primary care providers managing patients with CKD. The Medline, Embase and Cochrane Central databases were systematically searched (inception to November 2014) for randomized controlled trials (RCTs) assessing education-based and computer-assisted CDM interventions targeting primary care providers managing patients with CKD in the community. The efficacy of CDM interventions was assessed using quality indicators [use of angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB), proteinuria measurement and achievement of blood pressure (BP) targets] and clinical outcomes (change in BP and glomerular filtration rate). Two independent reviewers evaluated studies for inclusion, quality and extracted data. Random effects models were used to estimate pooled odds ratios (ORs) and weighted mean differences for outcomes of interest. Five studies (188 clinics; 494 physicians; 42 852 patients with CKD) were included. Two studies compared computer-assisted intervention strategies with usual care, two studies compared education-based intervention strategies with computer-assisted intervention strategies and one study compared both these intervention strategies with usual care. Compared with usual care, computer-assisted CDM interventions did not increase the likelihood of ACEI/ARB use among patients with CKD {pooled OR 1.00 [95% confidence interval (CI) 0.83-1.21]; I² = 0.0%}. Similarly, education-related CDM interventions did not increase the likelihood of ACEI/ARB use compared with computer-assisted CDM interventions [pooled OR 1.12 (95% CI 0.77-1.64); I² = 0.0%]. Inconsistencies in reporting methods limited further pooling of data. To date, there have been very few randomized trials testing CDM interventions targeting primary care providers with the goal of improving care of people with CKD. Those conducted to date have shown minimal impact, suggesting that other strategies, or multifaceted interventions, may be required to enhance care for patients with CKD in the community. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
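As background on the pooling step itself, the sketch below implements a generic inverse-variance random-effects (DerSimonian-Laird) combination of odds ratios; the three (OR, CI) pairs are invented for illustration and are not data from the included trials.

```python
# Illustrative inverse-variance pooling of log odds ratios under a DerSimonian-Laird
# random-effects model, the same general approach as the review's pooled ORs.
# The three (OR, lower, upper) tuples below are made-up numbers, not trial data.
import math

studies = [(0.95, 0.70, 1.29), (1.10, 0.80, 1.51), (0.98, 0.75, 1.28)]

logs = [math.log(or_) for or_, lo, hi in studies]
ses  = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w    = [1 / se**2 for se in ses]                      # fixed-effect weights

# DerSimonian-Laird between-study variance tau^2
ybar = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
Q    = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, logs))
c    = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(studies) - 1)) / c)

w_re = [1 / (se**2 + tau2) for se in ses]             # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(math.exp(pooled),
      math.exp(pooled - 1.96 * se_pooled),
      math.exp(pooled + 1.96 * se_pooled))            # pooled OR and 95% CI
```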
GPAW - massively parallel electronic structure calculations with Python-based software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enkovaara, J.; Romero, N.; Shende, S.
2011-01-01
Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages, such as Python, can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of the Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
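A minimal example of the Python-driven workflow is sketched below, assuming ASE and GPAW are installed; the molecule, calculation mode, and file names are illustrative choices, not prescriptions from the paper. For parallel runs the same kind of script is typically launched under MPI, with the heavy numerics executed by GPAW's compiled C kernels.

```python
# Minimal GPAW calculation driven from Python (ASE + GPAW must be installed).
# The molecule, vacuum size, and output file name are illustrative choices only;
# the computationally intensive parts run in GPAW's compiled C core.
from ase import Atoms
from gpaw import GPAW

h2 = Atoms('H2', positions=[(0, 0, 0), (0, 0, 0.74)])
h2.center(vacuum=3.0)                    # put the molecule in a box with 3 A of vacuum

h2.calc = GPAW(mode='fd', txt='h2.out')  # finite-difference mode; log goes to h2.out
energy = h2.get_potential_energy()       # triggers the self-consistent calculation
print(energy)
```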
Parallel Software Model Checking
2015-01-08
checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR. Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability...
Rapid Prediction of Unsteady Three-Dimensional Viscous Flows in Turbopump Geometries
NASA Technical Reports Server (NTRS)
Dorney, Daniel J.
1998-01-01
A program is underway to improve the efficiency of a three-dimensional Navier-Stokes code and generalize it for nozzle and turbopump geometries. Code modifications have included the implementation of parallel processing software, incorporation of new physical models and generalization of the multiblock capability. The final report contains details of code modifications, numerical results for several nozzle and turbopump geometries, and the implementation of the parallelization software.
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik
2013-01-01
Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is an open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358
Comparison between the Logotropic and ΛCDM models at the cosmological scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavanis, Pierre-Henri; Kumar, Suresh, E-mail: chavanis@irsamc.ups-tlse.fr, E-mail: suresh.kumar@pilani.bits-pilani.ac.in
We perform a detailed comparison between the Logotropic model [P.H. Chavanis, Eur. Phys. J. Plus, 130 (2015)] and the ΛCDM model. These two models behave similarly at large (cosmological) scales up to the present. Differences will appear only in the far future, in about 25 Gyrs, when the Logotropic Universe becomes phantom while the ΛCDM Universe enters the de Sitter era. However, the Logotropic model differs from the ΛCDM model at small (galactic) scales, where the latter encounters serious problems. Having a nonvanishing pressure, the Logotropic model can solve the cusp problem and the missing satellite problem of the ΛCDM model. In addition, it leads to dark matter halos with a constant surface density Σ_0 = ρ_0 r_h, and can explain its observed value Σ_0 = 141 M_⊙/pc^2 without adjustable parameter. This makes the Logotropic model rather unique among all the models attempting to unify dark matter and dark energy. In this paper, we compare the Logotropic and ΛCDM models at the cosmological scale, where they are very close to each other, in order to determine quantitatively how much they differ. This comparison is facilitated by the fact that these models depend on only two parameters, the Hubble constant H_0 and the present fraction of dark matter Ω_m0. Using the latest observational data from Planck 2015+Lensing+BAO+JLA+HST, we find that the best fit values of H_0 and Ω_m0 are H_0 = 68.30 km s^(−1) Mpc^(−1) and Ω_m0 = 0.3014 for the Logotropic model, and H_0 = 68.02 km s^(−1) Mpc^(−1) and Ω_m0 = 0.3049 for the ΛCDM model. The difference between the two models is at the percent level. As a result, the Logotropic model competes with the ΛCDM model at large scales and solves its problems at small scales. It may therefore represent a viable alternative to the ΛCDM model. Our study provides an explicit example of a theoretically motivated model that is almost indistinguishable from the ΛCDM model at the present time while having a completely different (phantom) evolution in the future. We analytically derive the statefinders of the Logotropic model for all values of the logotropic constant B. We show that the parameter s_0 is directly related to this constant since s_0 = −B/(B+1) independently of any other parameter like H_0 or Ω_m0. For the predicted value of B = 3.53×10^(−3), we obtain (q_0, r_0, s_0) = (−0.5516, 1.011, −0.003518) instead of (q_0, r_0, s_0) = (−0.5427, 1, 0) for the ΛCDM model, corresponding to B = 0.
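The quoted statefinder value follows directly from the relation given in the abstract; as a worked check:

```latex
% Worked check of the statefinder value quoted above, using the abstract's own relation:
s_0 \;=\; -\frac{B}{B+1}
      \;=\; -\frac{3.53\times10^{-3}}{1 + 3.53\times10^{-3}}
      \;\approx\; -3.518\times10^{-3},
% which reproduces the quoted s_0 = -0.003518 and gives s_0 = 0 when B = 0
% (the \Lambda CDM limit mentioned at the end of the abstract).
```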
Can a void mimic the Λ in ΛCDM?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundell, Peter; Vilja, Iiro; Mörtsell, Edvard, E-mail: pgsund@utu.fi, E-mail: edvard@fysik.su.se, E-mail: iiro.vilja@utu.fi
2015-08-01
We investigate Lemaître-Tolman-Bondi (LTB) models, whose early-time evolution and bang time are homogeneous and whose distance-redshift relation and local Hubble parameter are inherited from the ΛCDM model. We show that the obtained LTB models and the ΛCDM model predict different relative local expansion rates and that the Hubble functions of the models diverge increasingly with redshift. The LTB models show tension between low-redshift baryon acoustic oscillation and supernova observations, and including Lyman-α forest or cosmic microwave background observations only accentuates the better fit of the ΛCDM model compared to the LTB model. The result indicates that additional degrees of freedom are needed to explain the observations, for example by renouncing spherical symmetry, homogeneous bang time, negligible effects of pressure, or the early-time homogeneity assumption.
Bayesian correction of H(z) data uncertainties
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Gregório, T. M.; Andrade-Oliveira, F.; Valentim, R.; Matos, C. A. O.
2018-07-01
We compile 41 H(z) data points from the literature and use them to constrain OΛCDM and flat ΛCDM parameters. We show that the available H(z) data suffer from overestimated uncertainties and propose a Bayesian method to reduce them. As a result of this method, using H(z) only, we find, in the context of OΛCDM, H0 = 69.5 ± 2.5 km s^(−1) Mpc^(−1), Ωm = 0.242 ± 0.036, and Ω_Λ = 0.68 ± 0.14. In the context of the flat ΛCDM model, we have found H0 = 70.4 ± 1.2 km s^(−1) Mpc^(−1) and Ωm = 0.256 ± 0.014. This corresponds to an uncertainty reduction of up to ≈ 30 per cent when compared to the uncorrected analysis in both cases.
Decellularized cartilage-derived matrix as substrate for endochondral bone regeneration.
Gawlitta, Debby; Benders, Kim E M; Visser, Jetze; van der Sar, Anja S; Kempen, Diederik H R; Theyse, Lars F H; Malda, Jos; Dhert, Wouter J A
2015-02-01
Following an endochondral approach to bone regeneration, multipotent stromal cells (MSCs) can be cultured on a scaffold to create a cartilaginous callus that is subsequently remodeled into bone. An attractive scaffold material for cartilage regeneration that has recently regained attention is decellularized cartilage-derived matrix (CDM). Since this material has shown potential for cartilage regeneration, we hypothesized that CDM could be a potent material for endochondral bone regeneration. In addition, since decellularized matrices are known to harbor bioactive cues for tissue formation, we evaluated the need for seeded MSCs in CDM scaffolds. In this study, ectopic bone formation in rats was evaluated for CDM scaffolds seeded with human MSCs and compared with unseeded controls. The MSC-seeded samples were preconditioned in chondrogenic medium for 37 days. After 8 weeks of subcutaneous implantation, the extent of mineralization was significantly higher in the MSC-seeded constructs versus unseeded controls. The mineralized areas corresponded to bone formation with bone marrow cavities. In addition, rat-specific bone formation was confirmed by collagen type I immunohistochemistry. Finally, fluorochrome incorporation at 3 and 6 weeks revealed that the bone formation had an inwardly directed progression. Taken together, our results show that decellularized CDM is a promising biomaterial for endochondral bone regeneration when combined with MSCs at ectopic locations. Modification of current decellularization protocols may lead to enhanced functionality of CDM scaffolds, potentially offering the prospect of generation of cell-free off-the-shelf bone regenerative substitutes.
Beom, Jaewon; Kim, Sang Jun
2011-01-01
Objective To investigate the therapeutic effects of repetitive electrical stimulation of the suprahyoid muscles in brain-injured patients with dysphagia. Method Twenty-eight brain-injured patients who showed reduced laryngeal elevation and supraglottic penetration or subglottic aspiration during a videofluoroscopic swallowing study (VFSS) were selected. The patients received either conventional dysphagia management (CDM) or CDM with repetitive electrical stimulation of the suprahyoid muscles (ESSM) for 4 weeks. The videofluoroscopic dysphagia scale (VDS) using the VFSS and American Speech-Language-Hearing Association National Outcome Measurement System (ASHA NOMS) swallowing scale (ASHA level) was used to determine swallowing function before and after treatment. Results VDS scores decreased from 29.8 to 17.9 in the ESSM group, and from 29.2 to 16.6 in the CDM group. However, there was no significant difference between the groups (p=0.796). Six patients (85.7%) in the ESSM group and 14 patients (66.7%) in the CDM group showed improvement according to the ASHA level with no significant difference between the ESSM and CDM groups (p=0.633). Conclusion Although repetitive neuromuscular electrical stimulation of the suprahyoid muscles did not further improve the swallowing function of dysphagia patients with reduced laryngeal elevation, more patients in the ESSM group showed improvement in the ASHA level than those in the CDM group. Further studies with concurrent controls and a larger sample group are required to fully establish the effects of repetitive neuromuscular electrical stimulation of the suprahyoid muscles in dysphagia patients. PMID:22506140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barquero, Andrea A.; Michelini, Flavia M.; Alche, Laura E.
2006-06-09
We have reported the isolation of the tetranortriterpenoid 1-cinnamoyl-3,11-dihydroxymeliacarpin (CDM) from partially purified leaf extracts of Melia azedarach L. (MA) that reduced both vesicular stomatitis virus (VSV) and Herpes simplex virus type 1 (HSV-1) multiplication. CDM blocks VSV entry and the intracellular transport of VSV-G protein, confining it to the Golgi apparatus, by pre- or post-treatment, respectively. Here, we report that HSV-1 glycoproteins were also confined to the Golgi apparatus independently of the nature of the host cell. Considering that MA could be acting as an immunomodulator preventing the development of herpetic stromal keratitis in mice, we also examined an eventual effect of CDM on the NF-κB signaling pathway. CDM is able to impede NF-κB activation in HSV-1-infected conjunctival cells and leads to the accumulation of the p65 NF-κB subunit in the cytoplasm of uninfected treated Vero cells. In conclusion, CDM is a pleiotropic agent that not only inhibits the multiplication of DNA and RNA viruses by the same mechanism of action but also modulates the NF-κB signaling pathway.
A gamma-ray constraint on the nature of dark matter
NASA Technical Reports Server (NTRS)
Silk, Joseph; Bloemen, Hans
1987-01-01
If even a small component of the Galactic spheroid consists of the weakly interacting Majorana fermions that are cold-dark-matter candidate particles for the Galactic halo, there should be a substantial flux of annihilation gamma rays from a source of about 1-deg extent at the Galactic center. COS B observations already constrain the halo cold-dark-matter (CDM) content entrained in the inner spheroid to be less than about 10 percent. A somewhat weaker constraint applies to the CDM believed to be present in the Galactic disk, but still only about 15 percent can be in such particles. Monochromatic line photons of energy 3-10 GeV are also predicted, and future experiments may be capable of improving these limits. Since both theoretical models of galaxy formation in a CDM-dominated universe and mass models for the rotation curve in the inner Galaxy suggest that a substantial fraction of the spheroid component should be nonluminous and incorporate entrained halo CDM, the hypothesis that the halo CDM consists predominantly of weakly interacting fermions such as photinos, heavy Majorana-mass neutrinos, or higgsinos may already be subject to observational test.
Interactive mixture of inhomogeneous dark fluids driven by dark energy: a dynamical system analysis
NASA Astrophysics Data System (ADS)
Izquierdo, Germán; Blanquet-Jaramillo, Roberto C.; Sussman, Roberto A.
2018-03-01
We examine the evolution of an inhomogeneous mixture of non-relativistic pressureless cold dark matter (CDM), coupled to dark energy (DE) characterised by the equation-of-state parameter w < −1/3, with the interaction term proportional to the DE density. This coupled mixture is the source of a spherically symmetric Lemaître-Tolman-Bondi (LTB) metric admitting an asymptotic Friedmann-Lemaître-Robertson-Walker (FLRW) background. Einstein's equations reduce to a 5-dimensional autonomous dynamical system involving quasi-local variables related to suitable averages of covariant scalars and their fluctuations. The phase space evolution around the critical points (past/future attractors and five saddles) is examined in detail. For all parameter values and both directions of energy flow (CDM to DE and DE to CDM) the phase space trajectories are compatible with a physically plausible early cosmic times behaviour near the past attractor. This result compares favourably with mixtures with interaction driven by the CDM density, whose past evolution is unphysical for DE to CDM energy flow. Numerical examples are provided describing the evolution of an initial profile that can be associated with idealised structure formation scenarios.
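At the background (FLRW) level, an interaction proportional to the DE density is commonly written as below; the proportionality constant δ and the explicit factor of H are assumptions of this sketch, and the paper's quasi-local LTB formulation is more general than these equations.

```latex
% A common background-level parametrization of a coupling proportional to the DE
% density (the constant \delta and the factor of H are assumptions of this sketch,
% not the paper's exact interaction term):
\dot{\rho}_{\mathrm{cdm}} + 3H\rho_{\mathrm{cdm}} = Q, \qquad
\dot{\rho}_{\mathrm{de}} + 3H(1+w)\rho_{\mathrm{de}} = -Q, \qquad
Q = 3\,\delta\, H\,\rho_{\mathrm{de}},
% with Q > 0 describing energy flow from DE to CDM and Q < 0 the reverse.
```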
Bulgeless dwarf galaxies and dark matter cores from supernova-driven outflows.
Governato, F; Brook, C; Mayer, L; Brooks, A; Rhee, G; Wadsley, J; Jonsson, P; Willman, B; Stinson, G; Quinn, T; Madau, P
2010-01-14
For almost two decades the properties of 'dwarf' galaxies have challenged the cold dark matter (CDM) model of galaxy formation. Most observed dwarf galaxies consist of a rotating stellar disk embedded in a massive dark-matter halo with a near-constant-density core. Models based on the dominance of CDM, however, invariably form galaxies with dense spheroidal stellar bulges and steep central dark-matter profiles, because low-angular-momentum baryons and dark matter sink to the centres of galaxies through accretion and repeated mergers. Processes that decrease the central density of CDM halos have been identified, but have not yet reconciled theory with observations of present-day dwarfs. This failure is potentially catastrophic for the CDM model, possibly requiring a different dark-matter particle candidate. Here we report hydrodynamical simulations (in a framework assuming the presence of CDM and a cosmological constant) in which the inhomogeneous interstellar medium is resolved. Strong outflows from supernovae remove low-angular-momentum gas, which inhibits the formation of bulges and decreases the dark-matter density to less than half of what it would otherwise be within the central kiloparsec. The analogues of dwarf galaxies-bulgeless and with shallow central dark-matter profiles-arise naturally in these simulations.
High-performance computing — an overview
NASA Astrophysics Data System (ADS)
Marksteiner, Peter
1996-08-01
An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
Park, Byoungchoo; Park, Chan Hyuk; Kim, Mina; Han, Mi-Young
2009-06-08
We present the results of a study of highly linearly polarized light emissions from an organic light-emitting device (OLED) that consisted of a flexible Giant Birefringent Optical (GBO) multilayer polymer reflecting-polarizer substrate. Luminous electroluminescent (EL) emissions over 4,500 cd/m^2 were produced from the polarized OLED with high peak efficiencies in excess of 6 cd/A and 2 lm/W at relatively low operating voltages. The direction of polarization for the emitted EL light corresponded to the passing (ordinary) axis of the GBO reflecting polarizer. Furthermore, the estimated polarization ratio between the brightness of the two linearly polarized EL emissions parallel and perpendicular to the passing axis could be as high as 25 when measured over the whole emitted luminance range.
Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure
NASA Astrophysics Data System (ADS)
Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen
2017-02-01
Material irradiation effects are among the most important issues in the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge about the evolution process leads to a limited understanding of these issues. With the help of high-performance computing, we can gain a deeper understanding of materials at the micro level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and the code requires more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics simulation packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.
O'Connor, B P
2000-08-01
Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
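The sketch below is a miniature Python version of Horn's parallel analysis, the same idea as the SPSS/SAS programs described above but not a translation of them: retain components whose observed eigenvalues exceed those obtained from random data of the same shape.

```python
# Horn's parallel analysis in miniature -- the same idea as the SPSS/SAS programs
# described above, not a port of them. Retain components whose observed eigenvalues
# exceed the chosen percentile of eigenvalues from random data of the same shape.
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))               # random data, same n and p
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Example with made-up data: 300 observations of 10 variables.
ncomp, obs, thr = parallel_analysis(np.random.default_rng(1).standard_normal((300, 10)))
print(ncomp)
```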
Is there a concordance value for H0?
NASA Astrophysics Data System (ADS)
Luković, Vladimir V.; D'Agostino, Rocco; Vittorio, Nicola
2016-11-01
Context. We test the theoretical predictions of several cosmological models against different observables to compare the indirect estimates of the current expansion rate of the Universe determined from model fitting with the direct measurements based on Cepheids data published recently. Aims: We perform a statistical analysis of type Ia supernova (SN Ia), Hubble parameter, and baryon acoustic oscillation data. A joint analysis of these datasets allows us to better constrain cosmological parameters, but also to break the degeneracy that appears in the distance modulus definition between H0 and the absolute B-band magnitude of SN Ia, M0. Methods: From the theoretical side, we considered spatially flat and curvature-free ΛCDM, wCDM, and inhomogeneous Lemaître-Tolman-Bondi (LTB) models. To analyse SN Ia we took into account the distributions of SN Ia intrinsic parameters. Results: For the ΛCDM model we find that Ωm = 0.35 ± 0.02 and H0 = (67.8 ± 1.0) km s^(−1) Mpc^(−1), while the corrected SN absolute magnitude has a normal distribution N(−19.13, 0.11). The wCDM model provides the same value for Ωm, while H0 = (66.5 ± 1.8) km s^(−1) Mpc^(−1) and w = −0.93 ± 0.07. When an inhomogeneous LTB model is considered, the combined fit provides H0 = (64.2 ± 1.9) km s^(−1) Mpc^(−1). Conclusions: Both the Akaike information criterion and the Bayes factor analysis cannot clearly distinguish between the ΛCDM and wCDM cosmologies, while they clearly disfavour the LTB model. For ΛCDM, our joint analysis of the SN Ia, Hubble parameter, and baryon acoustic oscillation datasets provides H0 values that are consistent with cosmic microwave background (CMB)-only Planck measurements, but they differ by 2.5σ from the value based on Cepheids data.
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of codes, ranging from benchmarks to real-world applications, is then presented to demonstrate its potential for quickly parallelizing serial programs and the good performance achievable on a large number of processors. The second part of the document gives a reference to the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with the toolkit.
CLUMP-3D: Testing ΛCDM with Galaxy Cluster Shapes
NASA Astrophysics Data System (ADS)
Sereno, Mauro; Umetsu, Keiichi; Ettori, Stefano; Sayers, Jack; Chiu, I.-Non; Meneghetti, Massimo; Vega-Ferrero, Jesús; Zitrin, Adi
2018-06-01
The ΛCDM model of structure formation makes strong predictions on the concentration and shape of dark matter (DM) halos, which are determined by mass accretion processes. Comparison between predicted shapes and observations provides a geometric test of the ΛCDM model. Accurate and precise measurements need a full three-dimensional (3D) analysis of the cluster mass distribution. We accomplish this with a multi-probe 3D analysis of the X-ray regular Cluster Lensing and Supernova survey with Hubble (CLASH) clusters combining strong and weak lensing, X-ray photometry and spectroscopy, and the Sunyaev-Zel'dovich effect (SZe). The cluster shapes and concentrations are consistent with ΛCDM predictions. The CLASH clusters are randomly oriented, as expected given the sample selection criteria. Shapes agree with numerical results for DM-only halos, which hints at baryonic physics being less effective in making halos rounder.
Is ΛCDM an effective CCDM cosmology?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, J.A.S.; Santos, R.C.; Cunha, J.V., E-mail: limajas@astro.iag.usp.br, E-mail: cliviars@gmail.com, E-mail: jvcunha@ufpa.br
We show that a cosmology driven by gravitationally induced particle production of all non-relativistic species existing in the present Universe mimics exactly the observed flat accelerating ΛCDM cosmology with just one dynamical free parameter. This kind of scenario includes the creation cold dark matter (CCDM) model [1] as a particular case and also provides a natural reduction of the dark sector since the vacuum component is not needed to accelerate the Universe. The new cosmic scenario is equivalent to ΛCDM both at the background and perturbative levels and the associated creation process is also in agreement with the universality of the gravitational interaction and equivalence principle. Implicitly, it also suggests that the present day astronomical observations cannot be considered the ultimate proof of cosmic vacuum effects in the evolved Universe because ΛCDM may be only an effective cosmology.
Python based high-level synthesis compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard
2014-11-01
This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many of the benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the Python-based compiler.
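To make the idea concrete, the sketch below shows the kind of algorithmic Python description such a flow could start from: static loop bounds and integer arithmetic, so the inner loop can be unrolled into parallel hardware and the outer loop pipelined. This is an illustrative input style only; the function and structure are assumptions, not the actual compiler's interface.

```python
# Hypothetical input style for a Python-to-VHDL HLS flow (illustrative only):
# static loop bounds and integer arithmetic, so the k-loop can be fully
# unrolled into parallel multipliers and the i-loop pipelined.

def fir4(samples, coeffs):
    """4-tap FIR filter over a block of input samples."""
    out = [0] * (len(samples) - 3)
    for i in range(len(out)):          # outer loop -> pipelined in hardware
        acc = 0
        for k in range(4):             # fixed bound -> fully unrolled
            acc += coeffs[k] * samples[i + k]
        out[i] = acc
    return out

print(fir4([1, 2, 3, 4, 5, 6], [1, 0, -1, 2]))
```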
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
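The abstract does not include the GPU propagator itself; the following NumPy sketch shows the same data-parallel pattern on the CPU, propagating many debris orbits in one vectorised call by solving Kepler's equation with a few Newton iterations (a simplified two-body model with assumed parameter names, not the Institute's actual code):

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def propagate(a, e, M0, t, iters=8):
    """Propagate many objects at once (vectorised analogue of a GPU kernel).

    a, e, M0 are arrays of semi-major axis [m], eccentricity and mean
    anomaly [rad]; returns in-plane orbital coordinates after t seconds."""
    M = M0 + np.sqrt(MU / a**3) * t           # mean anomaly at epoch + t
    E = M.copy()                              # Newton iterations for Kepler's equation
    for _ in range(iters):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    x = a * (np.cos(E) - e)
    y = a * np.sqrt(1.0 - e**2) * np.sin(E)
    return x, y

# Example: 100,000 debris objects propagated in one vectorised call; each
# object is independent, which is exactly what makes a GPU port attractive.
n = 100_000
rng = np.random.default_rng(1)
x, y = propagate(rng.uniform(6.9e6, 4.2e7, n), rng.uniform(0, 0.3, n),
                 rng.uniform(0, 2 * np.pi, n), t=86400.0)
```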
Testing Viable f(T) Models with Current Observations
NASA Astrophysics Data System (ADS)
Xu, Bing; Yu, Hongwei; Wu, Puxun
2018-03-01
We perform observational tests of f(T) gravity with BAO data (including the BOSS DR12 galaxy sample, the DR12 Lyα-forest measurement, the new eBOSS DR14 quasar sample, the 6dFGS, and the SDSS), the CMB distance priors from Planck 2015, the SNIa data from the joint light-curve analysis, the latest H(z) data, and the local value of the Hubble constant. Six different f(T) models are investigated, and ΛCDM is also considered. All models are compared using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Our results show that ΛCDM remains the model most favored by current observations. However, within ΛCDM there are also the Hubble constant tension between the Planck measurements and local-Universe observations and the tension between the CMB data and the H(z) data. Of the f(T) models considered in this paper, the half that can reduce to ΛCDM have χ²_min values smaller than that of ΛCDM and can relieve the tensions present in ΛCDM; however, they are penalized slightly by the BIC because of one extra parameter. Two of the six f(T) models, in which the equation of state of the effective dark energy can cross the phantom divide line (a crossing shown in this paper to be favored by current observations), are penalized by the information criteria. In addition, we find that the logarithmic f(T) model is excluded by cosmological observations.
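For reference, the model-comparison statistics used here take their standard forms (generic definitions, with k the number of free parameters and N the number of data points):

```latex
\mathrm{AIC} = \chi^2_{\min} + 2k, \qquad \mathrm{BIC} = \chi^2_{\min} + k\ln N,
% For N larger than about e^2, an extra parameter costs more under the BIC
% than under the AIC, which is why a model that merely matches the ΛCDM fit
% is penalized by the BIC for its additional parameter.
```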
Research Progress on Dark Matter Model Based on Weakly Interacting Massive Particles
NASA Astrophysics Data System (ADS)
He, Yu; Lin, Wen-bin
2017-04-01
The cosmological model of cold dark matter (CDM) with dark energy and a scale-invariant adiabatic primordial power spectrum has come to be regarded as the standard cosmological model, i.e. the ΛCDM model. Weakly interacting massive particles (WIMPs) are a prominent candidate for the CDM. Many extensions of the standard model of particle physics provide WIMPs naturally. Standard calculations of the dark matter relic abundance show that WIMPs agree well with the astronomical observation of Ω_DM h² ≈ 0.11. WIMPs have a relatively large mass and a relatively low velocity, so they readily aggregate into clusters, and the results of numerical simulations based on WIMPs agree well with observations of cosmic large-scale structure. On the experimental side, most present accelerator and non-accelerator direct/indirect detection efforts are designed for WIMPs. Thus, the WIMP-based CDM model has attracted wide attention. However, the ΛCDM model has serious difficulty explaining small-scale structures below one Mpc. Different dark matter models have been proposed to alleviate the small-scale problem, but so far there is no evidence strong enough to exclude the CDM model. We review the research progress on the WIMP-based dark matter model, including the WIMP miracle, numerical simulations, the small-scale problem, and direct/indirect detection, analyze the criteria for discriminating between "cold", "hot", and "warm" dark matter, and present future prospects for this field.
Child-directed marketing inside and on the exterior of fast food restaurants.
Ohri-Vachaspati, Punam; Isgor, Zeynep; Rimkus, Leah; Powell, Lisa M; Barker, Dianne C; Chaloupka, Frank J
2015-01-01
Children who eat fast food have poor diet and health outcomes. Fast food is heavily marketed to youth, and exposure to such marketing is associated with higher fast food consumption. To examine the extent of child-directed marketing (CDM) inside and on the exterior of fast food restaurants. Data were collected from 6,716 fast food restaurants located in a nationally representative sample of public middle- and high-school enrollment areas in 2010, 2011, and 2012. CDM was defined as the presence of one or more of seven components inside or on the exterior of the restaurant. Analyses were conducted in 2014. More than 20% of fast food restaurants used CDM inside or on their exterior. In multivariate analyses, fast food restaurants that were part of a chain, offered kids' meals, were located in middle- (compared to high)-income neighborhoods, and in rural (compared to urban) areas had significantly higher odds of using any CDM; chain restaurants and those located in majority black neighborhoods (compared to white) had significantly higher odds of having an indoor display of kids' meal toys. Compared to 2010, there was a significant decline in use of CDM in 2011, but the prevalence increased close to the 2010 level in 2012. CDM inside and on the exterior of fast food restaurants is prevalent in chain restaurants; majority black communities, rural areas, and middle-income communities are disproportionately exposed. The fast food industry should limit children's exposure to marketing that promotes unhealthy food choices. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Wang, Zhaoxin; Shi, Jianwei; Wu, Zhigui; Xie, Huiling; Yu, Yifan; Li, Ping; Liu, Rui; Jing, Limei
2017-07-01
Since the 1980s, China has been criticized for its mode of chronic disease management (CDM), which passively provides treatment in secondary and tertiary hospitals but lacks active prevention in community health centers (CHCs). Since there are few systematic evaluations of the CHCs' methods for CDM, this study aimed to analyze their abilities. At the macro level, we searched the literature in China's largest and most authoritative databases and the official websites of health departments; this literature was used to analyze the government's efforts to improve CHCs' abilities to perform CDM. At the micro level, we examined the CHCs' longitudinal data after the New Health Reform in 2009, including financial investment, facilities, professional capacities, and the CDM activities conducted. A policy analysis showed an increasing tendency in government efforts to develop CDM, with a peak in 2009. In evaluating the reform at CHCs, we found an obvious increase in fiscal and public health subsidies, large-scale equipment, general practitioners, and public health physicians. The vulnerable population benefiting in this area also rose significantly. However, rural centers were inferior to urban ones in their CDM abilities, and the referral system is still not effective in China. This study showed that CHCs are increasingly valued in managing chronic diseases, especially after the New Health Reform in 2009. However, we still need to improve collaborative management of chronic diseases in the community and strengthen the abilities of CHCs, especially in rural areas. Copyright © 2017 John Wiley & Sons, Ltd.
Pacaci, Anil; Gonul, Suat; Sinaci, A Anil; Yuksel, Mustafa; Laleci Erturkmen, Gokce B
2018-01-01
Background: Utilization of the available observational healthcare datasets is key to complement and strengthen the postmarketing safety studies. Use of common data models (CDM) is the predominant approach in order to enable large scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract-Transform-Load (ETL) procedures, which require knowledge both on the semantics and technical characteristics of the source datasets and target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach in order to separate semantic and technical steps of transformation processes, which do not have a strict separation in traditional ETL approaches. Such an approach would discretize the operations to extract data from source electronic health record systems, alignment of the source, and target models on the semantic level and the operations to populate target common data repositories. Approach: In order to separate the activities that are required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to get the data as instances of ontological model of the target CDM, and (3) population of repositories, which comply with the specifications of the CDM, by processing the RDF instances from step 2. The proposed approach has been implemented on real healthcare settings where Observational Medical Outcomes Partnership (OMOP) CDM has been chosen as the common data model and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed to an OMOP CDM based database from the source database. Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond the traditional ETL approaches by being more declarative and rigorous. Declarative because the use of RDF based mapping rules makes each mapping more transparent and understandable to humans while retaining logic-based computability. Rigorous because the mappings would be based on computer readable semantics which are amenable to validation through logic-based inference methods.
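As a toy illustration of step 1 of the described pipeline (expressing a source record as RDF so that semantic conversion rules can later map it onto the target CDM), the following sketch uses the rdflib library; the namespaces, predicates, and record layout are invented for the example and are not the study's actual ontologies:

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

# Hypothetical namespaces; the study's real ontologies and URIs are not
# given in the abstract.
SRC = Namespace("http://example.org/source-ehr/")

def record_to_rdf(patient_id, icd10_code, onset_date):
    """Express one source EHR fact as RDF, ready for semantic conversion
    rules to map it onto the target common data model."""
    g = Graph()
    condition = URIRef(SRC[f"condition/{patient_id}/{icd10_code}"])
    g.add((condition, RDF.type, SRC.ConditionRecord))
    g.add((condition, SRC.patient, URIRef(SRC[f"patient/{patient_id}"])))
    g.add((condition, SRC.code, Literal(icd10_code)))
    g.add((condition, SRC.onset, Literal(onset_date, datatype=XSD.date)))
    return g

print(record_to_rdf("p001", "E11.9", "2016-03-02").serialize(format="turtle"))
```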
Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland
2003-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
Impact of new computing systems on computational mechanics and flight-vehicle structures technology
NASA Technical Reports Server (NTRS)
Noor, A. K.; Storaasli, O. O.; Fulton, R. E.
1984-01-01
Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.
SDA 7: A modular and parallel implementation of the simulation of diffusional association software
Martinez, Michael; Romanowska, Julia; Kokh, Daria B.; Ozboyaci, Musa; Yu, Xiaofeng; Öztürk, Mehmet Ali; Richter, Stefan
2015-01-01
The simulation of diffusional association (SDA) Brownian dynamics software package has been widely used in the study of biomacromolecular association. Initially developed to calculate bimolecular protein–protein association rate constants, it has since been extended to study electron transfer rates, to predict the structures of biomacromolecular complexes, to investigate the adsorption of proteins to inorganic surfaces, and to simulate the dynamics of large systems containing many biomacromolecular solutes, allowing the study of concentration‐dependent effects. These extensions have led to a number of divergent versions of the software. In this article, we report the development of the latest version of the software (SDA 7). This release was developed to consolidate the existing codes into a single framework, while improving the parallelization of the code to better exploit modern multicore shared memory computer architectures. It is built using a modular object‐oriented programming scheme, to allow for easy maintenance and extension of the software, and includes new features, such as adding flexible solute representations. We discuss a number of application examples, which describe some of the methods available in the release, and provide benchmarking data to demonstrate the parallel performance. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26123630
Parallel algorithm of VLBI software correlator under multiprocessor environment
NASA Astrophysics Data System (ADS)
Zheng, Weimin; Zhang, Dong
2007-11-01
The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline-length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data-intensive and computation-intensive. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiple Processor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators feature a flexible structure, scalability, and the ability to correlate data from 10 stations.
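A minimal NumPy sketch of the FX-correlation core that such software implements is shown below (single baseline, no fringe rotation or delay/fractional-sample corrections, which a real VLBI correlator must add; the function and parameter names are assumptions):

```python
import numpy as np

def fx_correlate(stream_a, stream_b, nchan=1024):
    """Single-baseline FX correlator core: channelise each station's stream
    with an FFT (F), then cross-multiply and accumulate (X) to obtain the
    time-averaged complex visibility spectrum."""
    n = (len(stream_a) // nchan) * nchan
    a = np.fft.rfft(stream_a[:n].reshape(-1, nchan), axis=1)
    b = np.fft.rfft(stream_b[:n].reshape(-1, nchan), axis=1)
    return (a * np.conj(b)).mean(axis=0)

rng = np.random.default_rng(0)
sig = rng.standard_normal(1 << 20)
vis = fx_correlate(sig, np.roll(sig, 3) + 0.5 * rng.standard_normal(1 << 20))
# The independent FFT segments make the workload easy to split across
# threads, MPI ranks or cluster nodes, as in the paper's two designs.
```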
Customer Decision Making in Web Services with an Integrated P6 Model
NASA Astrophysics Data System (ADS)
Sun, Zhaohao; Sun, Junqing; Meredith, Grant
Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization and promised experience. This model integrates the existing 6 P elements of the marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer thinks during the DM process. The proposed approach will facilitate the research and development of web services and decision support systems.
Large-scale structure in superfluid Chaplygin gas cosmology
NASA Astrophysics Data System (ADS)
Yang, Rongjia
2014-03-01
We investigate the growth of large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth statistics, such as σ8 and the skewness S3, are discussed. We find that the growth factor of SCG reduces to the Einstein-de Sitter case at early times, while it differs from the cosmological constant model (ΛCDM) case in the large-a limit. We also find that there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and that the variations of σ8 and S3 between SCG and ΛCDM cannot be discriminated.
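For reference, the two growth statistics quoted above have their standard definitions (generic definitions, not specific to this paper):

```latex
\sigma_8^2 = \langle \delta_R^2 \rangle\big|_{R\,=\,8\,h^{-1}\mathrm{Mpc}},
\qquad
S_3 = \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}},
% i.e. the variance of the linear density contrast smoothed in spheres of
% radius 8 h^{-1} Mpc, and the normalised skewness of the density field.
```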
Correlated perturbations from inflation and the cosmic microwave background.
Amendola, Luca; Gordon, Christopher; Wands, David; Sasaki, Misao
2002-05-27
We compare the latest cosmic microwave background data with theoretical predictions including correlated adiabatic and cold dark matter (CDM) isocurvature perturbations with a simple power-law dependence. We find that there is a degeneracy between the amplitude of correlated isocurvature perturbations and the spectral tilt. A negative (red) tilt is found to be compatible with a larger isocurvature contribution. Estimates of the baryon and CDM densities are found to be almost independent of the isocurvature amplitude. The main result is that current microwave background data do not exclude a dominant contribution from CDM isocurvature fluctuations on large scales.
Control-display mapping in brain-computer interfaces.
Thurlings, Marieke E; van Erp, Jan B F; Brouwer, Anne-Marie; Blankertz, Benjamin; Werkhoven, Peter
2012-01-01
Event-related potential (ERP) based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP and potential BCI performance. Ten participants attended to a target (determined via CDM), in a stream of sequentially vibrating tactors. We show that congruent CDM yields best task performance, enhanced the P300 and results in increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human-computer interaction.
The mass discrepancy acceleration relation in a ΛCDM context
NASA Astrophysics Data System (ADS)
Di Cintio, Arianna; Lelli, Federico
2016-02-01
The mass discrepancy acceleration relation (MDAR) describes the coupling between baryons and dark matter (DM) in galaxies: the ratio of total-to-baryonic mass at a given radius anticorrelates with the acceleration due to baryons. The MDAR has been seen as a challenge to the Λ cold dark matter (ΛCDM) galaxy formation model, while it can be explained by Modified Newtonian Dynamics. In this Letter, we show that the MDAR arises in a ΛCDM cosmology once observed galaxy scaling relations are taken into account. We build semi-empirical models based on ΛCDM haloes, with and without the inclusion of baryonic effects, coupled to empirically motivated structural relations. Our models can reproduce the MDAR: specifically, a mass-dependent density profile for DM haloes can fully account for the observed MDAR shape, while a universal profile shows a discrepancy with the MDAR of dwarf galaxies with M⋆ < 10^9.5 M⊙, a further indication of the existence of DM cores. Additionally, we reproduce the slope and normalization of the baryonic Tully-Fisher relation (BTFR) with 0.17 dex scatter. These results imply that in ΛCDM (I) the MDAR is driven by structural scaling relations of galaxies and DM density profile shapes, and (II) the baryonic fractions determined by the BTFR are consistent with those inferred from abundance-matching studies.
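For context, the radial acceleration form of the MDAR is commonly summarized by the fitting function of McGaugh et al. (quoted here as background in assumed standard notation, not as this Letter's result):

```latex
g_{\rm obs}(g_{\rm bar}) = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_{\dagger}}}},
\qquad g_{\dagger} \simeq 1.2\times10^{-10}\ \mathrm{m\,s^{-2}},
% so g_obs -> g_bar at high accelerations and g_obs -> sqrt(g_bar g_dagger)
% in the low-acceleration regime typical of dwarf galaxies.
```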
A multiarchitecture parallel-processing development environment
NASA Technical Reports Server (NTRS)
Townsend, Scott; Blech, Richard; Cole, Gary
1993-01-01
A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.
Object detectability at increased ambient lighting conditions.
Pollard, Benjamin J; Chawla, Amarpreet S; Delong, David M; Hashimoto, Noriyuki; Samei, Ehsan
2008-06-01
Under typical dark conditions encountered in diagnostic reading rooms, a reader's pupils will contract and dilate as the visual focus intermittently shifts between the high luminance display and the darker background wall, resulting in increased visual fatigue and the degradation of diagnostic performance. A controlled increase of ambient lighting may, however, reduce the severity of these pupillary adjustments by minimizing the difference between the luminance level to which the eyes adapt while viewing an image (L(adp)) and the luminance level of diffusely reflected light from the area surrounding the display (L(s)). Although ambient lighting in reading rooms has conventionally been kept at a minimum to maintain the perceived contrast of film images, proper Digital Imaging and Communications in Medicine (DICOM) calibration of modern medical-grade liquid crystal displays can compensate for minor lighting increases with very little loss of image contrast. This paper describes two psychophysical studies developed to evaluate and refine optimum reading room ambient lighting conditions through the use of observational tasks intended to simulate real clinical practices. The first study utilized the biologic contrast response of the human visual system to determine a range of representative L(adp) values for typical medical images. Readers identified low contrast horizontal objects in circular foregrounds of uniform luminance (5, 12, 20, and 30 cd/m2) embedded within digitized mammograms. The second study examined the effect of increased ambient lighting on the detection of subtle objects embedded in circular foregrounds of uniform luminance (5, 12, and 35 cd/m2) centered within a constant background of 12 cd/m2 luminance. The images were displayed under a dark room condition (1 lux) and an increased ambient lighting level (50 lux) such that the luminance level of the diffusely reflected light from the background wall was approximately equal to the image L(adp) value of 12 cd/m2. Results from the first study demonstrated that observer true positive and false positive detection rates and true positive detection times were considerably better while viewing foregrounds at 12 and 20 cd/m2 than at the other foreground luminance levels. Results from the second study revealed that under increased room illuminance, the average true positive detection rate improved a statistically significant amount from 39.3% to 55.6% at 5 cd/m2 foreground luminance. Additionally, the true positive rate increased from 46.4% to 56.6% at 35 cd/m2 foreground luminance, and decreased slightly from 90.2% to 87.5% at 12 cd/m2 foreground luminance. False positive rates at all foreground luminance levels remained approximately constant with increased ambient lighting. Furthermore, under increased room illuminance, true positive detection times declined at every foreground luminance level, with the most considerable decrease (approximately 500 ms) at the 5 cd/m2 foreground luminance. The first study suggests that L(adp) of typical mammograms lies between 12 and 20 cd/m2, leading to an optimum reading room illuminance of approximately 50-80 lux. Findings from the second study provide psychophysical evidence that ambient lighting may be increased to a level within this range, potentially improving radiologist comfort, without deleterious effects on diagnostic performance.
2012-09-30
platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4...software (Detection cLassification for MAchine learning - High Performance Computing). The software package was designed to utilize parallel and...Sedna [7] and is designed using a parallel architecture, allowing existing algorithms to be distributed to the various processing nodes with minimal changes
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
Dental Hygienist-Led Chronic Disease Management System to Control Early Childhood Caries.
Ng, Man Wai; Fida, Zameera
2016-06-01
Management of the complex chronic disease of early childhood caries requires a system of coordinated health care interventions which can be led by a dental hygienist and where patient self-care efforts are paramount. Even after receiving costly surgical treatment under general anesthesia in the operating room, many children develop new and recurrent caries after only 6-12 months, a sequela that can be prevented. This article describes the chronic disease management (CDM) of dental caries, a science-based approach that can prevent and control caries. In this article, we (1) introduce the concept of CDM of dental caries, (2) provide evidence that CDM improves oral health outcomes, and (3) propose a dental hygienist-led team-based oral health care approach to CDM. Although we will be describing the CDM approach for early childhood caries, CDM of caries is applicable in children, adolescents, and adults. Early childhood caries disease control requires meaningful engagement of patients and parents by the oral health care team to assist them with making behavioral changes in the unique context of their families and communities. The traditional dentist/hygienist/assistant model needs to evolve to a collaborative partnership between care providers and patients/families. This partnership will be focused on systematic risk assessment and behaviorally based management of the disease itself, with sensitivity toward the familial environment. Early pilot study results demonstrate reductions in the rates of new caries, dental pain, and referral to the operating room compared with baseline rates. Dental hygienists are the appropriate team members to lead this approach because of their expertise in behavior change and prevention. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
SIDM on FIRE: hydrodynamical self-interacting dark matter simulations of low-mass dwarf galaxies
NASA Astrophysics Data System (ADS)
Robles, Victor H.; Bullock, James S.; Elbert, Oliver D.; Fitts, Alex; González-Samaniego, Alejandro; Boylan-Kolchin, Michael; Hopkins, Philip F.; Faucher-Giguère, Claude-André; Kereš, Dušan; Hayward, Christopher C.
2017-12-01
We compare a suite of four simulated dwarf galaxies formed in 10^10 M⊙ haloes of collisionless cold dark matter (CDM) with galaxies simulated in the same haloes with an identical galaxy formation model but a non-zero cross-section for DM self-interactions. These cosmological zoom-in simulations are part of the Feedback In Realistic Environments (FIRE) project and utilize the FIRE-2 model for hydrodynamics and galaxy formation physics. We find the stellar masses of the galaxies formed in self-interacting dark matter (SIDM) with σ/m = 1 cm² g⁻¹ are very similar to those in CDM (spanning M⋆ ≈ 10^5.7-7.0 M⊙) and all runs lie on a similar stellar mass-size relation. The logarithmic DM density slope (α = d log ρ/d log r) in the central 250-500 pc remains steeper than α = -0.8 for the CDM-Hydro simulations with stellar mass M⋆ ∼ 10^6.6 M⊙ and core-like in the most massive galaxy. In contrast, every SIDM hydrodynamic simulation yields a flatter profile, with α > -0.4. Moreover, the central density profiles predicted in SIDM runs without baryons are similar to the SIDM runs that include FIRE-2 baryonic physics. Thus, SIDM appears to be much more robust to the inclusion of (potentially uncertain) baryonic physics than CDM on this mass scale, suggesting that SIDM will be easier to falsify than CDM using low-mass galaxies. Our FIRE simulations predict that galaxies less massive than M⋆ ≲ 3 × 10^6 M⊙ provide potentially ideal targets for discriminating models, with SIDM producing substantial cores in such tiny galaxies and CDM producing cusps.
Potential contribution of the forestry sector in Bangladesh to carbon sequestration.
Yong Shin, Man; Miah, Danesh M; Lee, Kyeong Hak
2007-01-01
The Kyoto Protocol provides for the involvement of developing countries in an atmospheric greenhouse gas reduction regime under its Clean Development Mechanism (CDM). Carbon credits are gained from reforestation and afforestation activities in developing countries. Bangladesh, a densely populated tropical country in South Asia, has a huge area of degraded forestland which can be reforested by CDM projects. To realize the potential of the forestry sector in developing countries for full-scale emission mitigation, the carbon sequestration potential of different species in different types of plantations should be integrated with the carbon trading system under the CDM of the Kyoto Protocol. This paper discusses the prospects and problems of carbon trading in Bangladesh, in relation to the CDM, in the context of global warming and the potential associated consequences. The paper analyzes the effects of reforestation projects on carbon sequestration in Bangladesh, in general, and in the hilly Chittagong region, in particular, and concludes by demonstrating the carbon trading opportunities. Results showed that tree tissue in the forests of Bangladesh stored 92 tons of carbon per hectare (tC/ha), on average. The results also revealed a gross stock of 190 tC/ha in the plantations of 13 tree species, ranging in age from 6 to 23 years. The paper confirms the huge atmospheric CO2 offset by the forests if the degraded forestlands are reforested by CDM projects, indicating the potential of Bangladesh to participate in carbon trading for both its economic and environmental benefit. Within the forestry sector itself, some constraints are identified; nevertheless, the results of the study can expedite policy decisions regarding Bangladesh's participation in carbon trading through the CDM.
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Stable, Thermally Conductive Fillers for Bolted Joints; Connecting to Thermocouples with Fewer Lead Wires; Zipper Connectors for Flexible Electronic Circuits; Safety Interlock for Angularly Misdirected Power Tool; Modular, Parallel Pulse-Shaping Filter Architectures; High-Fidelity Piezoelectric Audio Device; Photovoltaic Power Station with Ultracapacitors for Storage; Time Analyzer for Time Synchronization and Monitor of the Deep Space Network; Program for Computing Albedo; Integrated Software for Analyzing Designs of Launch Vehicles; Abstract-Reasoning Software for Coordinating Multiple Agents; Software Searches for Better Spacecraft-Navigation Models; Software for Partly Automated Recognition of Targets; Antistatic Polycarbonate/Copper Oxide Composite; Better VPS Fabrication of Crucibles and Furnace Cartridges; Burn-Resistant, Strong Metal-Matrix Composites; Self-Deployable Spring-Strip Booms; Explosion Welding for Hermetic Containerization; Improved Process for Fabricating Carbon Nanotube Probes; Automated Serial Sectioning for 3D Reconstruction; and Parallel Subconvolution Filtering Architectures.
Unobtrusive Software and System Health Management with R2U2 on a Parallel MIMD Coprocessor
NASA Technical Reports Server (NTRS)
Schumann, Johann; Moosbrugger, Patrick
2017-01-01
Dynamic monitoring of software and system health of a complex cyber-physical system requires observers that continuously monitor variables of the embedded software in order to detect anomalies and reason about root causes. There exists a variety of techniques for code instrumentation, but instrumentation might change runtime behavior and could require costly software re-certification. In this paper, we present R2U2E, a novel realization of our real-time, Realizable, Responsive, and Unobtrusive Unit (R2U2). The R2U2E observers are executed in parallel on a dedicated 16-core EPIPHANY co-processor, thereby avoiding additional computational overhead to the system under observation. A DMA-based shared memory access architecture allows R2U2E to operate without any code instrumentation or program interference.
Relativistic numerical cosmology with silent universes
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2018-01-01
Relativistic numerical cosmology is most often based either on the exact solutions of the Einstein equations, or perturbation theory, or the weak-field limit, or the BSSN formalism. The silent universe provides an alternative approach to investigate the relativistic evolution of cosmological systems. The silent universe is based on the solution of the Einstein equations in 1 + 3 comoving coordinates with additional constraints imposed. These constraints include: the gravitational field is sourced by dust and a cosmological constant only, both the rotation and the magnetic part of the Weyl tensor vanish, and the shear is diagonalisable. This paper describes the code simsilun (free software distributed under the terms of the GNU General Public License), which implements the equations of the silent universe. The paper also discusses applications of the silent universe and uses the Millennium simulation to set up the initial conditions for the code simsilun. The simulation obtained this way consists of 16 777 216 worldlines, which are evolved from z = 80 to z = 0. Initially, the mean evolution (averaged over the whole domain) follows the evolution of the background ΛCDM model. However, once the evolution of cosmic structures becomes nonlinear, the spatial curvature evolves from ΩK = 0 to ΩK ≈ 0.1 at the present day. The emergence of the spatial curvature is associated with ΩM and ΩΛ being smaller by approximately 0.05 compared to the ΛCDM model.
The computer-aided parallel external fixator for complex lower limb deformity correction.
Wei, Mengting; Chen, Jianwen; Guo, Yue; Sun, Hao
2017-12-01
Since parameters of the parallel external fixator are difficult to measure and calculate in real applications, this study developed computer software that can help the doctor measure parameters using digital technology and generate an electronic prescription for deformity correction. Following Paley's deformity measurement method, we provided digital measurement techniques. In addition, we proposed a deformity correction algorithm to calculate the elongations of the six struts and developed electronic prescription software. At the same time, a three-dimensional simulation of the parallel external fixator and the deformed fragment was made using virtual reality modeling language technology. From 2013 to 2015, fifteen patients with complex lower limb deformity were treated with parallel external fixators and the self-developed computer software. All of the cases had unilateral limb deformity. The deformities were caused by old osteomyelitis in nine cases and traumatic sequelae in six cases. A doctor measured the related angulation, displacement and rotation on postoperative radiographs using the digital measurement techniques. Measurement data were input into the electronic prescription software to calculate the daily adjustment elongations of the struts. Daily strut adjustments were conducted according to the calculated data. The frame was removed when the expected results were achieved. Patients lived independently during the adjustment. The mean follow-up was 15 months (range 10-22 months). The duration of frame fixation from the time of application to the time of removal averaged 8.4 months (range 2.5-13.1 months). All patients were satisfied with the corrected limb alignment. No cases of wound infections or complications occurred. Using the computer-aided parallel external fixator for the correction of lower limb deformities can achieve satisfactory outcomes. The correction process can be simplified and is precise and digitized, which will greatly improve treatment in clinical application.
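The strut-elongation calculation described here is, in essence, the inverse kinematics of a hexapod (Stewart-platform) frame. The sketch below shows one standard formulation; the geometry, angle convention, and function names are assumptions for illustration, not the authors' software:

```python
import numpy as np

def strut_lengths(base_pts, ring_pts, translation, rx, ry, rz):
    """Inverse kinematics of a six-strut (hexapod) fixator:
    strut i length = | R @ ring_pt_i + t - base_pt_i |.
    base_pts/ring_pts: (6, 3) attachment coordinates in their own ring frames;
    translation, rx, ry, rz: desired correction of the moving ring (mm, rad)."""
    cx, sx, cy, sy, cz, sz = (np.cos(rx), np.sin(rx), np.cos(ry),
                              np.sin(ry), np.cos(rz), np.sin(rz))
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    moved = ring_pts @ (Rz @ Ry @ Rx).T + np.asarray(translation)
    return np.linalg.norm(moved - base_pts, axis=1)

# Example with made-up geometry: attachment points 80 mm and 70 mm from the
# axis, 120 mm axial offset, 2 mm lateral shift and small rotations.
ang = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
base = np.column_stack([80.0 * np.cos(ang), 80.0 * np.sin(ang), np.zeros(6)])
ring = np.column_stack([70.0 * np.cos(ang), 70.0 * np.sin(ang), np.zeros(6)])
print(strut_lengths(base, ring, [2.0, 0.0, 120.0], 0.02, 0.0, 0.05))
```

Dividing the difference between target and current strut lengths by the number of correction days gives the kind of daily adjustments listed on an electronic prescription.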
Bhadra, Biswa Nath; Jhung, Sung Hwa
2017-10-15
A series of metal-azolate frameworks (MAFs) - MAF-4, -5, and -6 - were synthesized and pyrolyzed to prepare porous carbons derived from MAFs (CDM-4, -5, and -6, respectively). Both the obtained carbons and the parent MAFs were characterized and applied for the adsorption of organic contaminants of emerging concern (CECs, including pharmaceuticals and personal care products) such as salicylic acid, clofibric acid, diclofenac sodium, bisphenol-A, and oxybenzone (OXB) from water. CDM-6 was found to be the most remarkable adsorbent among those tested (including activated carbon) for all the adsorbates. OXB was taken as a representative adsorbate for detailed adsorption studies as well as for understanding the adsorption mechanism. H-bonding (H-acceptor: CDM; H-donor: CECs) was suggested as the principal mechanism for the adsorption of the tested adsorbates. Finally, CDMs, especially CDM-6, were suggested as highly efficient and easily recyclable adsorbents for water purification. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We present a comprehensive and updated comparison with cosmological observations of two non-local modifications of gravity previously introduced by our group, the so called RR and RT models. We implement the background evolution and the cosmological perturbations of the models in a modified Boltzmann code, using CLASS. We then test the non-local models against the Planck 2015 TT, TE, EE and Cosmic Microwave Background (CMB) lensing data, isotropic and anisotropic Baryonic Acoustic Oscillations (BAO) data, JLA supernovae, H_0 measurements and growth rate data, and we perform Bayesian parameter estimation. We then compare the RR, RT and ΛCDM models, using the Savage-Dickey method. We find that the RT model and ΛCDM perform equally well, while the performance of the RR model with respect to ΛCDM depends on whether or not we include a prior on H_0 based on local measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simard, G.; et al.
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg² of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find σ8 Ωm^0.25 = 0.598 ± 0.024 from the lensing data alone with relatively weak priors placed on the other ΛCDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the ΛCDM model. We find Ωk = -0.012 (+0.021, -0.023) or M_
Bookey-Bassett, Sue; Markle-Reid, Maureen; McKey, Colleen; Akhtar-Danesh, Noori
2016-01-01
It is acknowledged internationally that chronic disease management (CDM) for community-living older adults (CLOA) is an increasingly complex process. CDM for older adults, who are often living with multiple chronic conditions, requires coordination of various health and social services. Coordination is enabled through interprofessional collaboration (IPC) among individual providers, community organizations, and health sectors. Measuring IPC is complicated given there are multiple conceptualisations and measures of IPC. A literature review of several healthcare, psychological, and social science electronic databases was conducted to locate instruments that measure IPC at the team level and have published evidence of their reliability and validity. Five instruments met the criteria and were critically reviewed to determine their strengths and limitations as they relate to CDM for CLOA. A comparison of the characteristics, psychometric properties, and overall concordance of each instrument with salient attributes of IPC found the Collaborative Practice Assessment Tool to be the most appropriate instrument for measuring IPC for CDM in CLOA.
Improving nurses' knowledge of continuous ST-segment monitoring.
Chronister, Connie
2014-01-01
Continuous ST-segment monitoring can result in detection of myocardial ischemia, but in clinical practice, continuous ST-segment monitoring is conducted incorrectly and underused by many registered nurses (RNs). Many RNs are unable to correctly institute ST-segment monitoring guidelines because of a lack of education. To evaluate whether an educational intervention, provided to 32 RNs, increases knowledge and correct clinical decision making (CDM) for the use of continuous ST-segment monitoring. At a single institution, an ST-segment monitoring class was provided to RNs in 2 cardiovascular units. Knowledge and correct CDM instruments were used for a baseline pretest and subsequent posttest after ST-segment monitoring education. Statistical significance between pretest and posttest scores for knowledge and correct CDM practice was noted with dependent t tests (P = .0001). Many RNs responsible for electrocardiographic monitoring are not aware of evidence-based ST-segment monitoring practice guidelines and cannot properly place precordial leads needed for ST-segment monitoring. Knowledge and correct CDM with ST-segment monitoring can be improved with focused education.
NASA Astrophysics Data System (ADS)
Nesbet, Robert K.
2018-05-01
Velocities in stable circular orbits about galaxies, a measure of centripetal gravitation, exceed the expected Kepler/Newton velocity as orbital radius increases. Standard Λ cold dark matter (ΛCDM) attributes this anomaly to galactic dark matter. McGaugh et al. have recently shown for 153 disc galaxies that observed radial acceleration is an apparently universal function of classical acceleration computed for observed galactic baryonic mass density. This is consistent with the empirical modified Newtonian dynamics (MOND) model, not requiring dark matter. It is shown here that suitably constrained ΛCDM and conformal gravity (CG) also produce such a universal correlation function. ΛCDM requires a very specific dark matter distribution, while the implied CG non-classical acceleration must be independent of galactic mass. All three constrained radial acceleration functions agree with the empirical baryonic v4 Tully-Fisher relation. Accurate rotation data in the nominally flat velocity range could distinguish between MOND, ΛCDM, and CG.
Mortality of patients with COPD participating in chronic disease management programmes: a happy end?
Peytremann-Bridevaux, I; Taffe, P; Burnand, B; Bridevaux, P O; Puhan, M A
2014-09-01
Concerns about increased mortality could question the role of COPD chronic disease management (CDM) programmes. We aimed at extending a recent Cochrane review to assess the effects of CDM on mortality in patients with COPD. Mortality data were available for 25 out of 29 trials identified in a COPD integrated care systematic review. Meta-analysis using random-effects models was performed, followed by subgroup analyses according to study length (3-12 months vs >12 months), main intervention component (exercise, self-management, structured follow-up) and use of an action plan. The meta-analysis showed no impact of CDM on mortality (pooled OR: 1.00, 95% CI 0.79 to 1.28). These results do not suggest that CDM programmes expose patients with COPD to excessive mortality risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
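The pooled odds ratio quoted above comes from a random-effects meta-analysis; a generic DerSimonian-Laird pooling of study-level log odds ratios looks like the sketch below (an illustration of the stated method using assumed inputs, not the authors' analysis code):

```python
import numpy as np

def pooled_or_random_effects(log_or, se):
    """DerSimonian-Laird random-effects pooling of study log odds ratios."""
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    w = 1.0 / se**2                                   # fixed-effect weights
    q = np.sum(w * (log_or - np.average(log_or, weights=w))**2)
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)      # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.average(log_or, weights=w_re)
    se_pooled = np.sqrt(1.0 / w_re.sum())
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci

# Example with made-up study data: log ORs and their standard errors.
print(pooled_or_random_effects([0.10, -0.25, 0.05, 0.30], [0.20, 0.25, 0.15, 0.30]))
```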
Parallel machine architecture and compiler design facilities
NASA Technical Reports Server (NTRS)
Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex
1990-01-01
The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility that allows rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D; Shende, Sameer
The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.
Matpar: Parallel Extensions for MATLAB
NASA Technical Reports Server (NTRS)
Springer, P. L.
1998-01-01
Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.
Relation of Parallel Discrete Event Simulation algorithms with physical models
NASA Astrophysics Data System (ADS)
Shchur, L. N.; Shchur, L. V.
2015-09-01
We extend the concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help to predict the behaviour of parallel algorithms.
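A toy version of the local-virtual-time picture (in the spirit of the PDES-to-surface-growth mapping; the update rule and quantities below are an illustrative assumption, not the authors' formulation) is a conservative simulation on a ring, where each processing element advances only when it does not run ahead of its neighbours:

```python
import numpy as np

def pdes_virtual_times(n_pe=1000, steps=10000, seed=0):
    """Conservative PDES toy model, one site per processing element on a ring:
    a PE may advance its local virtual time only if it does not exceed both
    neighbours' times; increments are exponential. The width of the resulting
    time surface is the quantity usually mapped onto surface-growth physics."""
    rng = np.random.default_rng(seed)
    tau = np.zeros(n_pe)
    utilisation = []
    for _ in range(steps):
        ok = (tau <= np.roll(tau, 1)) & (tau <= np.roll(tau, -1))
        tau[ok] += rng.exponential(1.0, ok.sum())
        utilisation.append(ok.mean())        # fraction of PEs doing useful work
    width = np.sqrt(np.mean((tau - tau.mean())**2))
    return np.mean(utilisation), width

print(pdes_virtual_times())  # (mean utilisation, final time-surface width)
```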
Experiences in Teaching a Graduate Course on Model-Driven Software Development
ERIC Educational Resources Information Center
Tekinerdogan, Bedir
2011-01-01
Model-driven software development (MDSD) aims to support the development and evolution of software intensive systems using the basic concepts of model, metamodel, and model transformation. In parallel with the ongoing academic research, MDSD is more and more applied in industrial practices. After being accepted both by a broad community of…
NASA Astrophysics Data System (ADS)
Laracuente, Nicholas; Grossman, Carl
2013-03-01
We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general-purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a 3.2 GHz Intel i5 CPU-based computer running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
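As an illustration of the multi-tau binning scheme mentioned above, the following is a minimal serial sketch in Python/NumPy; it is not the authors' GPU implementation, and the points-per-bin count and number of coarse-graining levels are generic placeholder choices. On a GPU, each lag channel (and each level) can be updated independently, which is the parallelism the abstract exploits.

```python
import numpy as np

def multi_tau_autocorr(counts, m=16, levels=8):
    """Serial multi-tau software correlator (illustrative sketch only).

    counts : 1D array of photon counts per base sampling interval.
    m      : lag channels per level before the bin width doubles.
    levels : number of coarse-graining levels.
    Returns (lags, g2), the normalized intensity autocorrelation.
    """
    lags, g2 = [], []
    x = np.asarray(counts, dtype=float)
    dt = 1                      # lag spacing at the current level (base bins)
    for level in range(levels):
        n = len(x)
        mean = x.mean()
        if mean == 0 or n <= m:
            break
        start = 1 if level == 0 else m // 2
        for k in range(start, m):
            if k >= n:
                break
            # <I(t) I(t+k)> / <I>^2 at this level's bin width
            lags.append(k * dt)
            g2.append(np.mean(x[:n - k] * x[k:]) / mean**2)
        # coarse-grain: average adjacent bins, doubling the bin width
        x = 0.5 * (x[:(n // 2) * 2:2] + x[1:(n // 2) * 2:2])
        dt *= 2
    return np.array(lags), np.array(g2)
```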
Testing cold dark matter models using Hubble flow variations
NASA Astrophysics Data System (ADS)
Shi, Xiangdong
1999-05-01
COBE-normalized flat (matter plus cosmological constant) and open cold dark matter (CDM) models are tested by comparing their expected Hubble flow variations with the observed variations in a Type Ia supernova sample and a Tully-Fisher cluster sample. The test provides a probe of the CDM power spectrum on scales of 0.02h Mpc^-1 ≲ k ≲ 0.2h Mpc^-1, free of the bias factor b. The results favour a universe with low matter content, or a flat matter-dominated universe with a very low Hubble constant and/or a very small spectral index n, with the best fits having Ω_0 ~ 0.3 to 0.4. The test is found to be more discriminating for the open CDM models than for the flat CDM models. For example, the test results are found to be compatible with those from the X-ray cluster abundance measurements at smaller length-scales, and consistent with the galaxy and cluster correlation analysis of Peacock & Dodds at similar length-scales, if our universe is flat; but the results are marginally incompatible with the X-ray cluster abundance measurements if our universe is open. The open CDM results are consistent with that of Peacock & Dodds only if the matter density of the universe is less than about 60 per cent of the critical density. The shortcomings of the test are discussed, as are ways to minimize them.
Constraints on Dark Energy from Baryon Acoustic Peak and Galaxy Cluster Gas Mass Measurements
NASA Astrophysics Data System (ADS)
Samushia, Lado; Ratra, Bharat
2009-10-01
We use baryon acoustic peak measurements by Eisenstein et al. and Percival et al., together with the Wilkinson Microwave Anisotropy Probe (WMAP) measurement of the apparent acoustic horizon angle, and galaxy cluster gas mass fraction measurements of Allen et al., to constrain a slowly rolling scalar field dark energy model, phiCDM, in which dark energy's energy density changes in time. We also compare our phiCDM results with those derived for two more common dark energy models: the time-independent cosmological constant model, ΛCDM, and the XCDM parameterization of dark energy's equation of state. For time-independent dark energy, the Percival et al. measurements effectively constrain spatial curvature and favor a model close to spatially flat, mostly due to the WMAP cosmic microwave background prior used in the analysis. In a spatially flat model the Percival et al. data constrain time-varying dark energy less effectively. The joint baryon acoustic peak and galaxy cluster gas mass constraints on the phiCDM model are consistent with, but tighter than, those derived from other data. A time-independent cosmological constant in a spatially flat model provides a good fit to the joint data, while the α parameter in the inverse power-law potential phiCDM model is constrained to be less than about 4 at the 3σ confidence level.
NASA Technical Reports Server (NTRS)
Larson, Robert E.; Mcentire, Paul L.; Oreilly, John G.
1993-01-01
The C Data Manager (CDM) is an advanced tool for creating an object-oriented database and for processing queries related to objects stored in that database. The CDM source code was purchased and will be modified over the course of the Arachnid project. In this report, the modified CDM is referred to as MCDM. Using MCDM, a detailed series of experiments was designed and conducted on a Sun Sparcstation. The primary results and analysis of the CDM experiment are provided in this report. The experiments involved creating the Long-form Faint Source Catalog (LFSC) database and then analyzing it with respect to following: (1) the relationships between the volume of data and the time required to create a database; (2) the storage requirements of the database files; and (3) the properties of query algorithms. The effort focused on defining, implementing, and analyzing seven experimental scenarios: (1) find all sources by right ascension--RA; (2) find all sources by declination--DEC; (3) find all sources in the right ascension interval--RA1, RA2; (4) find all sources in the declination interval--DEC1, DEC2; (5) find all sources in the rectangle defined by--RA1, RA2, DEC1, DEC2; (6) find all sources that meet certain compound conditions; and (7) analyze a variety of query algorithms. Throughout this document, the numerical results obtained from these scenarios are reported; conclusions are presented at the end of the document.
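For illustration only, the kind of range query exercised in scenario (5) can be sketched as follows. This is a hypothetical Python sketch of the access pattern (a catalog held as a list of (RA, DEC, id) tuples kept sorted by RA), not the MCDM query engine or its algorithms:

```python
import bisect

def sources_in_rectangle(catalog_sorted_by_ra, ra1, ra2, dec1, dec2):
    """Scenario (5) as a sketch: all sources with RA1 <= RA <= RA2 and
    DEC1 <= DEC <= DEC2.  The RA interval is located by binary search over
    the RA-sorted catalog; the DEC condition is a linear filter on that slice."""
    ras = [ra for ra, _, _ in catalog_sorted_by_ra]
    lo = bisect.bisect_left(ras, ra1)
    hi = bisect.bisect_right(ras, ra2)
    return [src for src in catalog_sorted_by_ra[lo:hi]
            if dec1 <= src[1] <= dec2]

# e.g. sources_in_rectangle(catalog, 10.0, 12.5, -30.0, -28.0)
```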
Myotonic dystrophy type 1: clinical manifestations in children and adolescents.
Ho, Genevieve; Carey, Kate A; Cardamone, Michael; Farrar, Michelle A
2018-06-05
Myotonic dystrophy type 1 (DM1) is an autosomal-dominant neuromuscular disease with variable severity affecting all ages; however, current care guidelines are adult-focused. The objective of the present study was to profile DM1 in childhood and propose a framework to guide paediatric-focused management. 40 children with DM1 (mean age 12.8 years; range 2-19) were studied retrospectively for a total of 513 follow-up years at Sydney Children's Hospital. 143 clinical parameters were recorded. The clinical spectrum of disease in childhood differs from adults, with congenital myotonic dystrophy (CDM1) having more severe health issues than childhood-onset/juvenile patients (JDM1). Substantial difficulties with intellectual (CDM1 25/26, 96.2%; JDM1 9/10, 90.0%), fine motor (CDM1 23/30, 76.6%; JDM1 6/10, 60.0%), gastrointestinal (CDM1 17/30, 70.0%; JDM1 3/10, 30.0%) and neuromuscular function (CDM1 30/30, 100.0%; JDM1 25/30, 83.3%) were evident. The health consequences of DM1 in childhood are diverse, highlighting the need for paediatric multidisciplinary management approaches that encompass key areas of cognition, musculoskeletal, gastrointestinal, respiratory, cardiac and sleep issues. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Dark matter deprivation in the field elliptical galaxy NGC 7507
NASA Astrophysics Data System (ADS)
Lane, Richard R.; Salinas, Ricardo; Richtler, Tom
2015-02-01
Context. Previous studies have shown that the kinematics of the field elliptical galaxy NGC 7507 do not necessarily require dark matter. This is troubling because, in the context of ΛCDM cosmologies, all galaxies should have a large dark matter component. Aims: Our aims are to determine the rotation and velocity dispersion profile out to larger radii than previous studies did and, therefore, to estimate the dark matter content of the galaxy more accurately. Methods: We use penalised pixel-fitting software to extract velocities and velocity dispersions from GMOS slit mask spectra. Using Jeans and MONDian modelling, we then produce models with the goal of fitting the velocity dispersion data. Results: NGC 7507 has a two-component stellar halo, with the outer halo counter-rotating with respect to the inner halo, with a kinematic boundary at a radius of ~110'' (~12.4 kpc). The velocity dispersion profile exhibits an increase at ~70'' (~7.9 kpc), reminiscent of several other elliptical galaxies. Our best fit models are those under mild anisotropy, which include ~100 times less dark matter than predicted by ΛCDM, although mildly anisotropic models that are completely dark matter free fit the measured dynamics almost equally well. Our MONDian models, both isotropic and anisotropic, systematically fail to reproduce the measured velocity dispersions at almost all radii. Conclusions: The counter-rotating outer halo implies a merger remnant, as does the increase in velocity dispersion at ~70''. From simulations it seems plausible that the merger that caused the increase in velocity dispersion was a spiral-spiral merger. Our Jeans models are completely consistent with a no dark matter scenario; however, some dark matter can be accommodated, although at much lower concentrations than predicted by ΛCDM simulations. This indicates that NGC 7507 may be a dark matter free elliptical galaxy. Regardless of whether NGC 7507 is completely dark matter free or very dark matter poor, it is at odds with predictions from current ΛCDM cosmological simulations. It may be possible that the observed velocity dispersions could be reproduced if the galaxy is significantly flattened along the line of sight (e.g. due to rotation); however, invoking this flattening is problematic. Based on observations taken at the Gemini Observatory, operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina).
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)
1997-01-01
Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among/within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
NASA Astrophysics Data System (ADS)
Hahn, T.
2016-10-01
The parallel version of the multidimensional numerical integration package Cuba is presented and achievable speed-ups are discussed. The parallelization is based on the fork/wait POSIX functions, needs no extra software installed, imposes almost no constraints on the integrand function, and works largely automatically.
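A minimal sketch of the fork/wait idea, assuming a POSIX system: each forked worker evaluates its share of integrand points and returns a partial sum through a pipe, and the parent waits and combines the results. This is an illustrative Monte Carlo integrator in Python, not the Cuba implementation; the pipe-based result passing and the sampling strategy are assumptions of the sketch.

```python
import os
import random
import struct

def parallel_mc_integral(f, n_samples, n_workers=4):
    """Estimate the integral of f over [0,1] with fork/wait parallelism.
    Any remainder of n_samples not divisible by n_workers is dropped."""
    per_worker = n_samples // n_workers
    pipes, pids = [], []
    for w in range(n_workers):
        read_fd, write_fd = os.pipe()
        pid = os.fork()
        if pid == 0:                       # child: evaluate its chunk
            os.close(read_fd)
            random.seed(w)                 # independent stream per worker
            partial = sum(f(random.random()) for _ in range(per_worker))
            os.write(write_fd, struct.pack("d", partial))
            os._exit(0)
        os.close(write_fd)
        pipes.append(read_fd)
        pids.append(pid)
    total = 0.0
    for read_fd in pipes:
        total += struct.unpack("d", os.read(read_fd, 8))[0]
        os.close(read_fd)
    for pid in pids:
        os.waitpid(pid, 0)                 # reap children (POSIX wait)
    return total / (per_worker * n_workers)

# Example: estimate the integral of x^2 over [0,1] (exact value 1/3)
# print(parallel_mc_integral(lambda x: x * x, 400000))
```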
Cedar Project---Original goals and progress to date
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cybenko, G.; Kuck, D.; Padua, D.
1990-11-28
This work encompasses a broad attack on high speed parallel processing. Hardware, software, applications development, and performance evaluation and visualization as well as research topics are proposed. Our goal is to develop practical parallel processing for the 1990's.
Previous studies from this laboratory have demonstrated significant deficits in cardiovascular function in rats exposed to the pesticide chlordimeform (CDM) when body core temperature (Tco) was maintained at 37 °C. To investigate the role of Tco in CDM toxicity, similar experiments...
Parallelization of Rocket Engine System Software (Press)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet
1996-01-01
The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using World Wide Web home pages. Considering the obviously expensive methods of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place it in a common format, assess and evaluate it, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, TCP/IP. Considering the heterogeneous nature of the present software (e.g., it first started as an expert system using LISP machines) which now involves FORTRAN code, the effort is expected to be quite challenging.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structures. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.
A Tutorial on Parallel and Concurrent Programming in Haskell
NASA Astrophysics Data System (ADS)
Peyton Jones, Simon; Singh, Satnam
This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs, with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
Tycho 2: A Proxy Application for Kinetic Transport Sweeps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, Charles Kristopher; Warsa, James S.
2016-09-14
Tycho 2 is a proxy application that implements discrete ordinates (SN) kinetic transport sweeps on unstructured, 3D, tetrahedral meshes. It has been designed to be small and require minimal dependencies to make collaboration and experimentation as easy as possible. Tycho 2 has been released as open source software. The software is currently in a beta release with plans for a stable release (version 1.0) before the end of the year. The code is parallelized via MPI across spatial cells and OpenMP across angles. Currently, several parallelization algorithms are implemented.
Parallel Processing with Digital Signal Processing Hardware and Software
NASA Technical Reports Server (NTRS)
Swenson, Cory V.
1995-01-01
The assembly and testing of a parallel processing system are described; the system allows a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described, followed by the installation procedure, research topics, and initial program development.
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
NASA Astrophysics Data System (ADS)
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in the H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better color subsampling patterns such as YUV 4:2:2 or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be used in several instances, catering to different performance needs; the DFM serves the data required by the different number of DFUs, and also manages all the neighboring data required for future data processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
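The macroblock-level parallelism mentioned above is often organized as a wavefront: a block can be filtered once its left and top neighbours are done, so all blocks on the same anti-diagonal are independent. The Python sketch below shows only that scheduling idea, with a thread pool and a placeholder per-block routine; it is not the HyperX/DFU architecture described in the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def filter_macroblock(mb_row, mb_col):
    # placeholder for the per-macroblock deblocking work
    pass

def deblock_wavefront(n_rows, n_cols, n_threads=4):
    """Process an n_rows x n_cols grid of macroblocks in anti-diagonal
    wavefronts: members of one wavefront run concurrently, and the next
    wavefront starts only after the previous one has finished."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for diag in range(n_rows + n_cols - 1):
            wave = [(r, diag - r) for r in range(n_rows)
                    if 0 <= diag - r < n_cols]
            futures = [pool.submit(filter_macroblock, r, c) for r, c in wave]
            for fut in futures:
                fut.result()   # barrier before the next wavefront
```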
2006-04-28
Figure 1 (color online): Photographs of EL emission from several devices: (a) green Alq3 baseline OLED at 25 V, 707 mA/cm², 590 cd/m², 0.35 cd/A; (b) green Alq3 BioLED with DNA EBL at 25 V, 308 mA/cm², 21 100 cd/m², 6.56 cd/A; (c) blue NPB baseline OLED at 20 V, 460 mA/cm², 700 cd/m², 0.14 cd/A; (d) blue... (et al., Appl. Phys. Lett. 88, 171109 (2006)). NPB: N,N'-bis(naphthalen-1-yl)-N,N'-bis(phenyl)benzidine hole transport layer (HTL); Alq3: tris(8-...
Parallel computing on Unix workstation arrays
NASA Astrophysics Data System (ADS)
Reale, F.; Bocchino, F.; Sciortino, S.
1994-12-01
We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular we have solved numerically a demanding test problem with a 2D hydrodynamic code, generally developed to study astrophysical flows, by executing it on arrays either of DECstations 5000/200 on an Ethernet LAN, or of DECstations 3000/400, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute, and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1), equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that the convenience depends essentially on the size of the computational domain as compared to the relative processor power and network bandwidth. We point out that for future perspectives a parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency times in the message-passing protocols. In conditions of significant gain in terms of speedup, such workstation arrays represent a cost-effective approach to massive parallel computations.
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
Implementation of highly parallel and large scale GW calculations within the OpenAtom software
NASA Astrophysics Data System (ADS)
Ismail-Beigi, Sohrab
The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana-Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.
Application of parallelized software architecture to an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam
2011-01-01
This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made it difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70ms. With all these improvements, Q placed 2nd in the autonomous challenge.
2011-01-01
…normalized to parallel controls. Flow Cytometry and Confocal Microscopy: Upon exposure to 10-ns EP, aliquots of the cellular suspension were added to a tube… Survival data were processed and plotted using Grapher software (Golden Software, Golden, Colorado). Flow cytometry results were processed in C6 software… Accuri Cytometers, Inc., Ann Arbor, MI) and FCSExpress software (DeNovo Software, Los Angeles, CA). Final analysis and presentation of flow cytometry…
Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sussman, Alan
2014-10-21
This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; an Mey, Dieter; Hatay, Ferhat F.
2003-01-01
With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
Real-time SHVC software decoding with multi-threaded parallel processing
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu
2014-09-01
This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.
CDM: Teaching Discrete Mathematics to Computer Science Majors
ERIC Educational Resources Information Center
Sutner, Klaus
2005-01-01
CDM, for computational discrete mathematics, is a course that attempts to teach a number of topics in discrete mathematics to computer science majors. The course abandons the classical definition-theorem-proof model, and instead relies heavily on computation as a source of motivation and also for experimentation and illustration. The emphasis on…
A Social Learning Theory of Career Decision Making.
ERIC Educational Resources Information Center
Mitchell, Anita M., Ed.; And Others
This report contains an analysis of career decision making (CDM), a synthesis of theories and empirical studies related to CDM, and identification of areas in need of further research and/or development. The study includes contributions from the fields of psychology, economics, sociology, guidance and education. An attempt has been made to…
The Effects of Gender on Career Decision Problems in Young Adults.
ERIC Educational Resources Information Center
Larson, Jeffry H.; And Others
1994-01-01
Investigated gender differences in psychological problems--decision anxiety, life-goal awareness, and others--in the career decision-making process (CDM) of 1,006 college students. Results indicated no gender differences in global levels of problems in CDM. Some specific difficulties, such as life-goal awareness and authority orientation, were…
ERIC Educational Resources Information Center
Rupp, André A.; van Rijn, Peter W.
2018-01-01
We review the GDINA and CDM packages in R for fitting cognitive diagnosis/diagnostic classification models. We first provide a summary of their core capabilities and then use both simulated and real data to compare their functionalities in practice. We found that the most relevant routines in the two packages appear to be more similar than…
Higgs-dilaton cosmology: An inflation-dark-energy connection and forecasts for future galaxy surveys
NASA Astrophysics Data System (ADS)
Casas, Santiago; Pauly, Martin; Rubio, Javier
2018-02-01
The Higgs-dilaton model is a scale-invariant extension of the Standard Model nonminimally coupled to gravity and containing just one additional degree of freedom on top of the Standard Model particle content. This minimalistic scenario predicts a set of measurable consistency relations between the inflationary observables and the dark-energy equation-of-state parameter. We present an alternative derivation of these consistency relations that highlights the connections and differences with the α-attractor scenario. We study how far these constraints allow one to distinguish the Higgs-dilaton model from ΛCDM and wCDM cosmologies. To this end we first analyze existing data sets using a Markov chain Monte Carlo approach. Second, we perform forecasts for future galaxy surveys using a Fisher matrix approach, both for galaxy clustering and weak lensing probes. Assuming that the best fit values in the different models remain comparable to the present ones, we show that both Euclid- and SKA2-like missions will be able to discriminate a Higgs-dilaton cosmology from ΛCDM and wCDM.
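For context, the Fisher-matrix forecasting referred to here follows the standard construction (quoted in its textbook form, not from this paper): the forecast parameter covariance is the inverse of

```latex
F_{ij} \;=\; \sum_{\ell} \frac{\partial \mathbf{O}_{\ell}}{\partial \theta_i}\,
\mathrm{Cov}_{\ell}^{-1}\,
\frac{\partial \mathbf{O}_{\ell}}{\partial \theta_j},
\qquad
\sigma(\theta_i) \;\geq\; \sqrt{\left(F^{-1}\right)_{ii}},
```

where the O_ℓ are the survey observables (e.g. galaxy-clustering and weak-lensing power spectra in multipole bins), Cov_ℓ is their covariance, θ_i are the model parameters, and the Cramér-Rao bound gives the forecast marginalized errors.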
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
The Case for Chronic Disease Management for Addiction
Saitz, Richard; Larson, Mary Jo; LaBelle, Colleen; Richardson, Jessica; Samet, Jeffrey H.
2009-01-01
Chronic disease (care) management (CDM) is a patient-centered model of care that involves longitudinal care delivery; integrated, and coordinated primary medical and specialty care; patient and clinician education; explicit evidence-based care plans; and expert care availability. The model, incorporating mental health and specialty addiction care, holds promise for improving care for patients with substance dependence who often receive no care or fragmented ineffective care. We describe a CDM model for substance dependence and discuss a conceptual framework, the extensive current evidence for component elements, and a promising strategy to reorganize primary and specialty health care to facilitate access for people with substance dependence. The CDM model goes beyond integrated case management by a professional, colocation of services, and integrated medical and addiction care—elements that individually can improve outcomes. Supporting evidence is presented that: 1) substance dependence is a chronic disease requiring longitudinal care, although most patients with addictions receive no treatment (eg, detoxification only) or short-term interventions, and 2) for other chronic diseases requiring longitudinal care (eg, diabetes, congestive heart failure), CDM has been proven effective. PMID:19809579
Simard, G.; et al.
2018-06-20
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg² of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find σ_8 Ω_m^0.25 = 0.598 ± 0.024 from the lensing data alone with relatively weak priors placed on the other ΛCDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the ΛCDM model. We find Ω_k = -0.012^{+0.021}_{-0.023} or M_…
NASA Astrophysics Data System (ADS)
Ji, Chang-Yan; Gu, Zheng-Tian; Kou, Zhi-Qi
2016-10-01
The electrical and optical properties of blue phosphorescent organic light-emitting diodes (PHOLEDs) can be affected by the structure of the confinement layers in the emitting layer (EML). A series of devices with different electron or hole confinement layers (TCTA or Bphen) were fabricated; charge-carrier injection is balanced more effectively in the device with the double electron confinement layer structure, whose power efficiency and luminance reach 17.7 lm/W (at 10³ cd/m²) and 3536 cd/m² (at 8 V). Keeping the same double electron confinement layers, another series of devices with different EML profiles was fabricated by changing the positions of the confinement layers; the power efficiency and luminance improve to 21.7 lm/W (at 10³ cd/m²) and 7674 cd/m² (at 8 V) when the thickness of the EML sections separated by the confinement layers increases gradually from the hole-injection side to the electron-injection side, and the driving voltage is also reduced.
Can the Λ CDM model reproduce MOND-like behavior?
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Lu, Chunyu
2017-12-01
It is usually believed that MOND can describe galactic rotation curves very well with only baryonic matter and without any dark matter, while the ΛCDM model is expected to have difficulty in reproducing MOND-like behavior. Here, we use EAGLE's data to learn whether the ΛCDM model can reproduce MOND-like behavior. EAGLE's simulation result clearly reproduces the MOND-like behavior for a_b ≳ 10^-12 m/s² at z = 0, although the acceleration constant, a0, is a little larger than the observational data indicate. We find that a0 increases with redshift in a way different from what Milgrom proposed (a0 ∝ H). Therefore, while galaxy rotation curves can be fitted by MOND's empirical function in the ΛCDM model, there is no clear connection between a0 and the Hubble constant. We also find that a0 at z ≳ 1 is well separated from a0 at z = 0. Once we have enough galaxies observed at high redshifts, we will be able to rule out the modified gravity model based on a MOND-like empirical function with a z-independent a0.
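For context, one commonly used form of the MOND-like empirical function relating the observed acceleration g_obs to the Newtonian (baryonic) acceleration g_bar is the radial-acceleration-relation fit below; the abstract does not state which interpolation function was fitted, so this particular form is an illustrative assumption only:

```latex
g_{\mathrm{obs}}(g_{\mathrm{bar}}) \;=\; \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/a_{0}}}},
```

which tends to g_bar for g_bar ≫ a_0 and to √(g_bar a_0) in the deep-MOND regime g_bar ≪ a_0, so a_0 is the single acceleration scale that fits such as the EAGLE comparison constrain.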
Chuen, Onn Chiu; Yusoff, Sumiani
2012-03-01
This study assessed the benefits of applying the Clean Development Mechanism (CDM) to the waste treatment system of a local palm oil industry in Malaysia. Life cycle assessment (LCA) was conducted to assess the environmental impacts of the greenhouse gas (GHG) reduction from the CDM application. Calculations of the emission reduction used the methodology based on AM002 (Avoided Wastewater and On-site Energy Use Emissions in the Industrial Sector) Version 4 published by the United Nations Framework Convention on Climate Change (UNFCCC). The results showed that the introduction of CDM in the palm oil mill, through conversion of the biogas captured from palm oil mill effluent (POME) treatment into power generation, was able to reduce emissions by approximately 0.12 tonnes of CO2 equivalent (tCO2e) and to generate 30 kWh of power per tonne of fresh fruit bunch processed. Thus, the application of the CDM methodology to palm oil mill wastewater treatment was able to reduce up to one quarter of the overall environmental impact generated in the palm oil mill.
Modeling of Stone-impact Resistance of Monolithic Glass Ply Using Continuum Damage Mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xin; Khaleel, Mohammad A.; Davies, Richard W.
2005-04-01
We study the stone-impact resistance of a monolithic glass ply using a combined experimental and computational approach. Instrumented stone impact tests were first carried out in a controlled environment. Explicit finite element analyses were then used to simulate the interactions of the indentor and the glass layer during the impact event, and a continuum damage mechanics (CDM) model was used to describe the constitutive behavior of glass. The experimentally measured strain histories for low velocity impact served as validation of the modeling procedures. Next, stair-stepping impact experiments were performed with two indentor sizes on two glass ply thicknesses, and the test results were used to calibrate the critical stress parameters used in the CDM constitutive model. The purpose of this study is to establish the modeling procedures and the CDM critical stress parameters under impact loading conditions. The modeling procedures and the CDM model will be used in our future studies to predict through-thickness damage evolution patterns for different laminated windshield designs in automotive applications.
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective: Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example: To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion: There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
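As a concrete, hedged illustration of an "analytic method that operates on the data model": the sketch below counts persons with a given outcome recorded within 30 days of a given drug exposure, using the standard OMOP CDM tables drug_exposure and condition_occurrence. It is not one of the methods developed in the study; the SQLite dialect, the ISO text date format, and the concept IDs passed in are assumptions of the example.

```python
import sqlite3

# Table and column names follow the OMOP CDM; concept IDs are placeholders.
QUERY = """
SELECT COUNT(DISTINCT de.person_id)
FROM drug_exposure de
JOIN condition_occurrence co
  ON co.person_id = de.person_id
 AND co.condition_start_date BETWEEN de.drug_exposure_start_date
                                 AND date(de.drug_exposure_start_date, '+30 days')
WHERE de.drug_concept_id = :drug_id
  AND co.condition_concept_id = :outcome_id
"""

def exposed_with_outcome(db_path, drug_id, outcome_id):
    """Count distinct persons with the outcome within 30 days of exposure."""
    with sqlite3.connect(db_path) as conn:
        (count,) = conn.execute(QUERY, {"drug_id": drug_id,
                                        "outcome_id": outcome_id}).fetchone()
    return count
```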
Lalmolda, C; Coll-Fernández, R; Martínez, N; Baré, M; Teixidó Colet, M; Epelde, F; Monsó, E
2017-01-01
Pulmonary rehabilitation (PR) is recommended after a severe COPD exacerbation, but its short- and long-term effects on health care utilization have not been fully established. The aims of this study were to evaluate patient compliance with a chronic disease management (CDM) program incorporating home-based exercise training as the main component after a severe COPD exacerbation and to determine its effects on health care utilization in the following year. COPD patients with a severe exacerbation were included in a case-cohort study at admission. An intervention group participated in a nurse-supervised CDM program during the 2 months after discharge, comprising home-based PR with exercise components directly supervised by a physiotherapist, while the remaining patients followed usual care. Nineteen of the twenty-one participants (90.5%) were compliant with the CDM program and were compared with 29 usual-care patients. Compliance with the program was associated with statistically significant reductions in admissions due to respiratory disease in the following year (median [interquartile range]: 0 [0-1] vs 1 [0-2.5]; P = 0.022) and in days of admission (0 [0-7] vs 7 [0-12]; P = 0.034), and multiple linear regression analysis confirmed the protective effect of the CDM program (β coefficient -0.785, P = 0.014, R² = 0.219). A CDM program incorporating exercise training for COPD patients without limiting comorbidities after a severe exacerbation achieves high compliance and reduces admissions in the year following the intervention.
England, Lucinda; Kotelchuck, Milton; Wilson, Hoyt G; Diop, Hafsatou; Oppedisano, Paul; Kim, Shin Y; Cui, Xiaohui; Shapiro-Mendoza, Carrie K
2015-10-01
Women with gestational diabetes mellitus (GDM) may be able to reduce their risk of recurrent GDM and progression to type 2 diabetes mellitus through lifestyle change; however, there is limited population-based information on GDM recurrence rates. We used data from a population of women delivering two sequential live singleton infants in Massachusetts (1998-2007) to estimate the prevalence of chronic diabetes mellitus (CDM) and GDM in parity one pregnancies and recurrence of GDM and progression from GDM to CDM in parity two pregnancies. We examined four diabetes classification approaches; birth certificate (BC) data alone, hospital discharge (HD) data alone, both sources hierarchically combined with a diagnosis of CDM from either source taking priority over a diagnosis of GDM, and both sources combined including only pregnancies with full agreement in diagnosis. Descriptive statistics were used to describe population characteristics, prevalence of CDM and GDM, and recurrence of diabetes in successive pregnancies. Diabetes classification agreement was assessed using the Kappa statistic. Associated maternal characteristics were examined through adjusted model-based t tests and Chi square tests. A total of 134,670 women with two sequential deliveries of parities one and two were identified. While there was only slight agreement on GDM classification across HD and BC records, estimates of GDM recurrence were fairly consistent; nearly half of women with GDM in their parity one pregnancy developed GDM in their subsequent pregnancy. While estimates of progression from GDM to CDM across sequential pregnancies were more variable, all approaches yielded estimates of ≤5 %. The development of either GDM or CDM following a parity one pregnancy with no diagnosis of diabetes was <3 % across approaches. Women with recurrent GDM were disproportionately older and foreign born. Recurrent GDM is a serious life course public health issue; the inter-pregnancy interval provides an important window for diabetes prevention.
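The agreement measure referred to above is Cohen's kappa; a minimal generic implementation (not the authors' analysis code, and with toy labels) is:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two classification sources,
    e.g. birth-certificate vs hospital-discharge diabetes coding."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# e.g. cohens_kappa(["GDM", "none", "CDM", "none"],
#                   ["GDM", "GDM", "CDM", "none"])
```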
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiaogang; Ding, Xuheng; Biesiada, Marek
2016-07-01
The two-point diagnostics Om(z_i, z_j) and Omh²(z_i, z_j) have been introduced as an interesting tool for testing the validity of the Λ cold dark matter (ΛCDM) model. Recently, Sahni et al. combined two independent measurements of H(z) from baryon acoustic oscillation (BAO) data with the value of the Hubble constant H_0, and used the second of these diagnostics to test the ΛCDM (a constant equation-of-state parameter for dark energy) model. Their result indicated a considerable tension between observations and predictions of the ΛCDM model. Since reliable data concerning the expansion rate of the universe at different redshifts H(z) are crucial for the successful application of this method, we investigate both two-point diagnostics on the most comprehensive set of N = 36 measurements of H(z) from BAOs and the differential ages (DAs) of passively evolving galaxies. We discuss the uncertainties of the two-point diagnostics and find that they are strongly non-Gaussian and follow patterns deeply rooted in their very construction. Therefore we propose that non-parametric median statistics is the most appropriate way of treating this problem. Our results support the claims that ΛCDM is in tension with H(z) data according to the two-point diagnostics developed by Shafieloo, Sahni, and Starobinsky. However, other alternatives to the ΛCDM model, such as the wCDM or Chevallier-Polarski-Linder models, perform even worse. We also note that there are serious systematic differences between the BAO and DA methods that ought to be better understood before H(z) measurements can compete with other probe methods.
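For reference, the two-point diagnostics used above are commonly written as (standard literature form; notation ours):

```latex
\mathrm{Om}(z_i, z_j) \;=\; \frac{E^{2}(z_i) - E^{2}(z_j)}{(1+z_i)^{3} - (1+z_j)^{3}},
\quad E(z) \equiv \frac{H(z)}{H_0};
\qquad
\mathrm{Omh}^{2}(z_i, z_j) \;=\; \frac{h^{2}(z_i) - h^{2}(z_j)}{(1+z_i)^{3} - (1+z_j)^{3}},
\quad h(z) \equiv \frac{H(z)}{100\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}.
```

For spatially flat ΛCDM these reduce to the constants Ω_m and Ω_m h² respectively, independent of the redshift pair chosen, which is what makes them a null test of the model.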
Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, T.M.C.; et al.
We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg² of griz imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat ΛCDM and wCDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for ΛCDM) or 7 (for wCDM) cosmological parameters including the neutrino mass density, and including the 457 × 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain S_8 ≡ σ_8(Ω_m/0.3)^0.5 = 0.783^{+0.021}_{-0.025} and Ω_m = 0.264^{+0.032}_{-0.019} for ΛCDM; for wCDM, we find S_8 = 0.794^{+0.029}_{-0.027}, Ω_m = 0.279^{+0.043}_{-0.022}, and w = -0.80^{+0.20}_{-0.22} at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for S_8 and Ω_m are lower than the central values from Planck ...
Rowland, Christopher R; Glass, Katherine A; Ettyreddy, Adarsh R; Gloss, Catherine C; Matthews, Jared R L; Huynh, Nguyen P T; Guilak, Farshid
2018-05-30
Cartilage-derived matrix (CDM) has emerged as a promising scaffold material for tissue engineering of cartilage and bone due to its native chondroinductive capacity and its ability to support endochondral ossification. Because it consists of native tissue, CDM can undergo cellular remodeling, which can promote integration with host tissue and enables it to be degraded and replaced by neotissue over time. However, enzymatic degradation of decellularized tissues can occur unpredictably and may not allow sufficient time for mechanically competent tissue to form, especially in the harsh inflammatory environment of a diseased joint. The goal of the current study was to engineer cartilage and bone constructs with the ability to inhibit aberrant inflammatory processes caused by the cytokine interleukin-1 (IL-1), through scaffold-mediated delivery of lentiviral particles containing a doxycycline-inducible IL-1 receptor antagonist (IL-1Ra) transgene on anatomically-shaped CDM constructs. Additionally, scaffold-mediated lentiviral gene delivery was used to facilitate spatial organization of simultaneous chondrogenic and osteogenic differentiation via site-specific transduction of a single mesenchymal stem cell (MSC) population to overexpress either chondrogenic, transforming growth factor-beta 3 (TGF-β3), or osteogenic, bone morphogenetic protein-2 (BMP-2), transgenes. Controlled induction of IL-1Ra expression protected CDM hemispheres from inflammation-mediated degradation, and supported robust bone and cartilage tissue formation even in the presence of IL-1. In the absence of inflammatory stimuli, controlled cellular remodeling was exploited as a mechanism for fusing concentric CDM hemispheres overexpressing BMP-2 and TGF-β3 into a single bi-layered osteochondral construct. Our findings demonstrate that site-specific delivery of inducible and tunable transgenes confers spatial and temporal control over both CDM scaffold remodeling and neotissue composition. Furthermore, these constructs provide a microphysiological in vitro joint organoid model with site-specific, tunable, and inducible protein delivery systems for examining the spatiotemporal response to pro-anabolic and/or inflammatory signaling across the osteochondral interface. Copyright © 2018 Elsevier Ltd. All rights reserved.
In the right order of brush strokes: a sketch of a software philosophy retrospective.
Pyshkin, Evgeny
2014-01-01
This paper follows a discourse, probably as old as software itself, on software recognized as a product of art and human creativity. A retrospective view of the development of computer science and software philosophy is introduced. In so doing we discover parallels between software and various branches of human creative expression. Aesthetic properties and the mutual dependency of the form and matter of works of art are examined in their application to software programs. While exploring some philosophical and even artistic reflections on software, we consider an extended comprehension of the technical sciences of programming and software engineering within the realm of the liberal arts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
The PIPER project set out to develop methodologies and software for measurement, analysis, attribution, and presentation of performance data for extreme-scale systems. Goals of the project were to support analysis of massive multi-scale parallelism, heterogeneous architectures, multi-faceted performance concerns, and to support both post-mortem performance analysis to identify program features that contribute to problematic performance and on-line performance analysis to drive adaptation. This final report summarizes the research and development activity at Rice University as part of the PIPER project. Producing a complete suite of performance tools for exascale platforms during the course of this project was impossible since both hardware and software for exascale systems are still a moving target. For that reason, the project focused broadly on the development of new techniques for measurement and analysis of performance on modern parallel architectures, enhancements to HPCToolkit’s software infrastructure to support our research goals or its use on sophisticated applications, engaging developers of multithreaded runtimes to explore how support for tools should be integrated into their designs, engaging operating system developers with feature requests for enhanced monitoring support, engaging vendors with requests that they add hardware measurement capabilities and software interfaces needed by tools as they design new components of HPC platforms including processors, accelerators and networks, and finally collaborations with partners interested in using HPCToolkit to analyze and tune scalable parallel applications.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian elimination on parallel computers; (3) three-dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
By Hand or Not By-Hand: A Case Study of Alternative Approaches to Parallelize CFD Applications
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Bailey, David (Technical Monitor)
1997-01-01
While parallel processing promises to speed up applications by several orders of magnitude, the performance achieved still depends upon several factors, including the multiprocessor architecture, system software, data distribution and alignment, as well as the methods used for partitioning the application and mapping its components onto the architecture. The existence of the Gordon Bell Prize given out at Supercomputing every year suggests that while good performance can be attained for real applications on general purpose multiprocessors, the large investment in man-power and time still has to be repeated for each application-machine combination. As applications and machine architectures become more complex, the cost and time-delays for obtaining performance by hand will become prohibitive. Computer users today can turn to three possible avenues for help: parallel libraries, parallel languages and compilers, and interactive parallelization tools. The success of these methodologies, in turn, depends on proper application of data dependency analysis, program structure recognition and transformation, and performance prediction, as well as exploitation of user-supplied knowledge. NASA has been developing multidisciplinary applications on highly parallel architectures under the High Performance Computing and Communications Program. Over the past six years, the transition of underlying hardware and system software has forced the scientists to spend a large effort to migrate and recode their applications. Various attempts to exploit software tools to automate the parallelization process have not produced favorable results. In this paper, we report our most recent experience with CAPTOOL, a package developed at Greenwich University. We have chosen CAPTOOL for three reasons: 1. CAPTOOL accepts a FORTRAN 77 program as input. This suggests its potential applicability to a large collection of legacy codes currently in use. 2. CAPTOOL employs domain decomposition to obtain parallelism. Although the fact that not all kinds of parallelism are handled may seem unappealing, many NASA applications in computational aerosciences as well as earth and space sciences are amenable to domain decomposition. 3. CAPTOOL generates code for a large variety of environments employed across NASA centers: MPI/PVM on networks of workstations to the IBM/SP2 and CRAY/T3D.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor; Smith, Robert E.
2010-01-01
We generalize the renormalized perturbation theory (RPT) formalism of Crocce and Scoccimarro [M. Crocce and R. Scoccimarro, Phys. Rev. D 73, 063519 (2006)] to deal with multiple fluids in the Universe and here we present the complete calculations up to the one-loop level in the RPT. We apply this approach to the problem of following the nonlinear evolution of baryon and cold dark matter (CDM) perturbations, evolving from distinct sets of initial conditions, from the high redshift post-recombination Universe right through to the present day. In current theoretical and numerical models of structure formation, it is standard practice to treat baryons and CDM as an effective single matter fluid—the so-called dark matter only modeling. In this approximation, one uses a weighted sum of late-time baryon and CDM transfer functions to set initial mass fluctuations. In this paper we explore whether this approach can be employed for high precision modeling of structure formation. We show that, even if we only follow the linear evolution, there is a large-scale scale-dependent bias between baryons and CDM for the currently favored WMAP5 ΛCDM model. This time evolving bias is significant (>1%) until the present day, when it is driven towards unity through gravitational relaxation processes. Using the RPT formalism we test this approximation in the nonlinear regime. We show that the nonlinear CDM power spectrum in the two-component fluid differs from that obtained from an effective mean-mass one-component fluid by ˜3% on scales of order k ~ 0.05 h Mpc^-1 at z=10, and by ˜0.5% at z=0. However, for the case of the nonlinear evolution of the baryons the situation is worse and we find that the power spectrum is suppressed, relative to the total matter, by ˜15% on scales k ~ 0.05 h Mpc^-1 at z=10, and by ˜3%-5% at z=0. Importantly, besides the suppression of the spectrum, the baryon acoustic oscillation (BAO) features are amplified for the baryon spectrum and slightly damped for the CDM spectrum. If we compare the total matter power spectra in the two- and one-component fluid approaches, then we find excellent agreement, with deviations being <0.5% throughout the evolution. Consequences: high precision modeling of the large-scale distribution of baryons in the Universe cannot be achieved through an effective mean-mass one-component fluid approximation; the detection significance of BAO will be amplified in probes that study baryonic matter, relative to probes that study the CDM or total mass only. The CDM distribution can be modeled accurately at late times and the total matter at all times. This is good news for probes that are sensitive to the total mass, such as weak gravitational lensing, as existing modeling techniques are good enough. Lastly, we identify an analytic approximation that greatly simplifies the evaluation of the full PT expressions, and it is accurate to better than 1% over the full range of scales and times considered.
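The "effective single matter fluid" practice criticized above sets initial fluctuations from a mass-weighted sum of the baryon and CDM transfer functions. The sketch below illustrates that weighting only; the density parameters are roughly WMAP5-like placeholders and the transfer-function values are made up, so this is not the RPT calculation itself.

```python
import numpy as np

def total_matter_transfer(T_cdm, T_b, Omega_c, Omega_b):
    """Mass-weighted total-matter transfer function, the 'effective single fluid'
    approximation discussed above (illustrative; not the RPT calculation itself)."""
    Omega_m = Omega_c + Omega_b
    return (Omega_c * np.asarray(T_cdm) + Omega_b * np.asarray(T_b)) / Omega_m

# Toy example with made-up transfer-function values at a few wavenumbers:
T_cdm = [1.00, 0.80, 0.45]
T_b   = [1.00, 0.70, 0.30]   # baryons suppressed on small scales
print(total_matter_transfer(T_cdm, T_b, Omega_c=0.233, Omega_b=0.046))
```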
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large granularity tasks such as machine vision and image processing. The system can achieve 20 Gigaflops (80 Gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external Read/Write Allocating Caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray scale dilation, erosion, opening, and closing.
Rapid assessment of assignments using plagiarism detection software.
Bischoff, Whitney R; Abrego, Patricia C
2011-01-01
Faculty members most often use plagiarism detection software to detect portions of students' written work that have been copied and/or not attributed to their authors. The rise in plagiarism has led to a parallel rise in software products designed to detect plagiarism. Some of these products are configurable for rapid assessment and teaching, as well as for plagiarism detection.
An approach to enhance pnetCDF performance in environmental modeling applications
Data intensive simulations are often limited by their I/O (input/output) performance, and "novel" techniques need to be developed in order to overcome this limitation. The software package pnetCDF (parallel network Common Data Form), which works with parallel file syste...
Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
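As a minimal illustration of the MPI domain-decomposition strategy described above (and not the actual ROMS code), the sketch below splits a 1-D grid across ranks with mpi4py and exchanges halo cells before applying a stencil update. Run with, for example, `mpirun -n 4 python halo_demo.py` (the file name is arbitrary).

```python
# Minimal sketch of 1-D domain decomposition with halo exchange using mpi4py.
# Illustrative of the general MPI strategy described above, not the actual ROMS code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 64                                # global number of grid points (toy value)
local = np.zeros(N // size + 2)       # interior points plus two halo cells
local[1:-1] = rank                    # fill interior with dummy data

left  = (rank - 1) % size             # periodic neighbours for simplicity
right = (rank + 1) % size

# Exchange halo cells with neighbouring subdomains.
comm.Sendrecv(sendbuf=local[1:2],   dest=left,  recvbuf=local[-1:], source=right)
comm.Sendrecv(sendbuf=local[-2:-1], dest=right, recvbuf=local[0:1], source=left)

# A single explicit update acting on interior points only, using the halos.
local[1:-1] = 0.5 * (local[:-2] + local[2:])
```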
The design and implementation of a parallel unstructured Euler solver using software primitives
NASA Technical Reports Server (NTRS)
Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.
1992-01-01
This paper is concerned with the implementation of a three-dimensional unstructured grid Euler-solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that their combined effect leads to roughly a factor of three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY-YMP vector supercomputer.
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
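A hedged sketch of the supervisor/worker split described above, using Python multiprocessing: one process per Processing Unit and a central loop collecting results. The function and variable names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a Central Unit supervising per-camera Processing Units
# with Python multiprocessing; names and structure are assumptions, not the authors' code.
from multiprocessing import Process, Queue

def processing_unit(camera_id: int, results: Queue) -> None:
    """Acquire frames for one camera and run a (dummy) detection step."""
    for frame in range(3):                      # stand-in for the acquisition loop
        detections = f"cam{camera_id}: objects in frame {frame}"
        results.put(detections)

if __name__ == "__main__":
    results = Queue()
    pus = [Process(target=processing_unit, args=(cam, results)) for cam in range(4)]
    for p in pus:                               # the Central Unit launches and supervises PUs
        p.start()
    for _ in range(4 * 3):
        print(results.get())                    # central collection of per-camera results
    for p in pus:
        p.join()
```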
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Denton, Carolyn A.; Fletcher, Jack M.
2015-01-01
Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability of LD classification decisions of the concordance/discordance method (C/DM) across different psychoeducational assessment batteries. C/DM criteria were…
Bioavailable Ferric Iron (BAFelll) Assay
2007-02-01
... citrate dithionite bicarbonate; CDBFe, citrate dithionite bicarbonate extractable iron; cDCE, cis-Dichloroethene; CDM, Camp Dresser & McKee Inc. ... Defense (DoD) installations. Camp Dresser & McKee Inc. (CDM), in cooperation with the Naval Facilities Engineering Services Center (NFESC), was the ... several upgradient and/or cross-gradient background soil samples. Duplicate analysis of samples is recommended. While these recommendations are not ...
Application of a Cognitive Diagnostic Model to a High-Stakes Reading Comprehension Test
ERIC Educational Resources Information Center
Ravand, Hamdollah
2016-01-01
General cognitive diagnostic models (CDM) such as the generalized deterministic input, noisy, "and" gate (G-DINA) model are flexible in that they allow for both compensatory and noncompensatory relationships among the subskills within the same test. Most of the previous CDM applications in the literature have been add-ons to simulation…
Is a massive tau neutrino just what cold dark matter needs?
NASA Technical Reports Server (NTRS)
Dodelson, Scott; Gyuk, Geza; Turner, Michael S.
1994-01-01
The cold dark matter (CDM) scenario for structure formation in the Universe is very attractive and has many successes; however, when its spectrum of density perturbations is normalized to the COBE anisotropy measurement the level of inhomogeneity predicted on small scales is too large. This can be remedied by a tau neutrino of mass 1 MeV - 10 MeV and lifetime 0.1 sec - 100 sec whose decay products include electron neutrinos, because it allows the total energy density in relativistic particles to be doubled without interfering with nucleosynthesis. The anisotropies predicted on the degree scale for 'tau CDM' are larger than those of standard CDM. Experiments at e^{+/-} colliders may be able to probe such a mass range.
Galaxy Cluster Bulk Flows and Collision Velocities in QUMOND
NASA Astrophysics Data System (ADS)
Katz, Harley; McGaugh, Stacy; Teuben, Peter; Angus, G. W.
2013-07-01
We examine the formation of clusters of galaxies in numerical simulations of a QUMOND cosmogony with massive sterile neutrinos. Clusters formed in these exploratory simulations develop higher velocities than those found in ΛCDM simulations. The bulk motions of clusters attain ~1000 km s^-1 by low redshift, comparable to observations, whereas ΛCDM simulated clusters tend to fall short. Similarly, high pairwise velocities are common in cluster-cluster collisions like the Bullet Cluster. There is also a propensity for the most massive clusters to be larger in QUMOND and to appear earlier than in ΛCDM, potentially providing an explanation for "pink elephants" like El Gordo. However, it is not obvious that the cluster mass function can be recovered.
NASA Technical Reports Server (NTRS)
Ryu, Dongsu; Vishniac, Ethan T.; Chiang, Wei-Hwan
1989-01-01
The spatial distributions of the cold-dark-matter (CDM) and baryonic components of CDM-dominated cosmological models are characterized, summarizing the results of recent theoretical investigations. The evolution and distribution of matter in an Einstein-de Sitter universe on length scales small enough so that the Newtonian approximation is valid is followed chronologically, assuming (1) that the galaxies, CDM, and the intergalactic medium (IGM) are coupled by gravity, (2) that galaxies form by taking mass and momentum from the IGM, and (3) that the IGM responds to the energy input from the galaxies. The results of the numerical computations are presented in extensive graphs and discussed in detail.
A non-parametric consistency test of the ΛCDM model with Planck CMB data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr
Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
Gravitational lensing in a cold dark matter universe
NASA Technical Reports Server (NTRS)
Narayan, Ramesh; White, Simon D. M.
1988-01-01
Gravitational lensing due to mass condensations in a biased cold dark matter (CDM) universe is investigated using the Press-Schechter (1974) theory with density fluctuation amplitudes taken from previous N-body work. Under the critical assumption that CDM haloes have small core radii, a distribution of image angular separations for high-z lensed quasars is obtained with a peak at about 1 arcsec and a half-width of a factor of about 10. Allowing for selection effects at small angular separations, this is in good agreement with the observed separations. The estimated frequency of lensing is somewhat lower than that observed, but the discrepancy can be removed by invoking amplification bias and by making a small upward adjustment to the density fluctuation amplitudes assumed in the CDM model.
Updated reduced CMB data and constraints on cosmological parameters
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Guo, Zong-Kuan; Tang, Bo
2015-07-01
We obtain the reduced CMB data {lA, R, z∗} from WMAP9, WMAP9+BKP, Planck+WP and Planck+WP+BKP for the ΛCDM and wCDM models with or without spatial curvature. We then use these reduced CMB data in combination with low-redshift observations to put constraints on cosmological parameters. We find that including BKP results in a higher value of the Hubble constant especially when the equation of state (EOS) of dark energy and curvature are allowed to vary. For the ΛCDM model with curvature, the estimate of the Hubble constant with Planck+WP+Lensing is inconsistent with the one derived from Planck+WP+BKP at about 1.2σ confidence level (CL).
Failure prediction during backward flow forming of Ti6Al4V alloy
NASA Astrophysics Data System (ADS)
Singh, Abhishek Kumar; Narasimhan, K.; Singh, Ramesh
2018-05-01
Flow forming is a tube spinning process in which the thickness of a tube is reduced with one or more spinning rollers while keeping the internal diameter unchanged. A 3D finite element model of the flow-formability test has been developed using the Abaqus/Explicit software. A coupled damage criterion based on continuum damage mechanics (CDM) has been studied in this research. The damage model is introduced through a FORTRAN-based VUMAT subroutine, which is developed using a stress integration algorithm. Further, the effect of reduction angle, friction coefficient, and coolant heat transfer coefficient on fracture has been studied. The results show that formability improves with increasing reduction angle. Both the equivalent plastic strain and the damage variable increase from the inner to the outer surface of the flow-formed tube.
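To make the CDM ingredients concrete, the sketch below shows a Lemaitre-style damage update of the kind an integration-point routine (such as a VUMAT) might perform: a damage variable grows with equivalent plastic strain and enters through the effective-stress concept. The thresholds and the linear evolution law are illustrative assumptions, not the coupled criterion calibrated in the paper.

```python
# Sketch of a Lemaitre-type continuum-damage update of the kind a VUMAT integration
# point routine might perform; the constants and the evolution law are illustrative
# assumptions, not the exact coupled criterion used in the paper.
import numpy as np

def update_damage(eps_p_eq, d_old, eps_threshold=0.05, eps_failure=0.60):
    """Damage grows linearly with equivalent plastic strain beyond a threshold."""
    d_new = (eps_p_eq - eps_threshold) / (eps_failure - eps_threshold)
    return float(np.clip(max(d_old, d_new), 0.0, 1.0))   # damage never heals

def effective_stress(sigma, d):
    """Effective-stress concept: the undamaged material carries sigma / (1 - d)."""
    return sigma / (1.0 - d) if d < 1.0 else np.inf

d = 0.0
for eps in (0.02, 0.10, 0.30, 0.55):
    d = update_damage(eps, d)
    print(f"eps_p_eq={eps:.2f}  D={d:.2f}  sigma_eff/sigma={effective_stress(1.0, d):.2f}")
```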
Parallel computations and control of adaptive structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)
1991-01-01
The equations of motion for structures with adaptive elements for vibration control are presented in a form suited to parallel computation, to be used as a software package for real-time control of flexible space structures. A brief introduction to the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for effective use of massively parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.
PP-SWAT: A Python-based computing software for efficient multiobjective calibration of SWAT
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g. Sun Sparc and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).
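The instrument-then-trace idea can be illustrated in a few lines: wrap each routine so that entry and exit events are appended to a trace with timestamps, much as a source instrumentor would insert event recorders automatically. This is a conceptual sketch in Python, not the AIMS toolkit (which instruments FORTRAN and C codes).

```python
# Minimal illustration of the instrument-then-trace idea: wrap functions so that
# entry/exit events are recorded with timestamps, as a source instrumentor might
# insert automatically. Conceptual sketch only, not the AIMS toolkit.
import functools, time

TRACE = []  # in-memory stand-in for a trace file

def instrument(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append(("enter", func.__name__, time.perf_counter()))
        result = func(*args, **kwargs)
        TRACE.append(("exit", func.__name__, time.perf_counter()))
        return result
    return wrapper

@instrument
def solve_block(n):
    return sum(i * i for i in range(n))

solve_block(100_000)
for event, name, t in TRACE:
    print(f"{t:.6f}  {event:5s}  {name}")
```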
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
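The benefit of a common data model is that one analytic query runs unchanged across databases. The toy sketch below builds an OMOP-CDM-style condition_occurrence table in SQLite and counts a cohort; the table and column names follow the public CDM specification, while the rows and the concept identifier are invented for the example.

```python
# Toy illustration of how a standardized schema enables one analytic query to run
# unchanged across databases: a cohort count against an OMOP-CDM-style
# condition_occurrence table. Data and concept id are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE condition_occurrence (
                    person_id INTEGER,
                    condition_concept_id INTEGER,
                    condition_start_date TEXT)""")
conn.executemany(
    "INSERT INTO condition_occurrence VALUES (?, ?, ?)",
    [(1, 201826, "2014-03-01"), (2, 201826, "2014-07-15"), (3, 4329847, "2014-02-20")],
)

cohort_sql = """SELECT COUNT(DISTINCT person_id)
                FROM condition_occurrence
                WHERE condition_concept_id = ?"""
print(conn.execute(cohort_sql, (201826,)).fetchone()[0])  # same SQL runs on any CDM instance
```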
Universal subhalo accretion in cold and warm dark matter cosmologies
NASA Astrophysics Data System (ADS)
Kubik, Bogna; Libeskind, Noam I.; Knebe, Alexander; Courtois, Hélène; Yepes, Gustavo; Gottlöber, Stefan; Hoffman, Yehuda
2017-12-01
The influence of the large-scale structure on host haloes may be studied by examining the angular infall pattern of subhaloes. In particular, since warm dark matter (WDM) and cold dark matter (CDM) cosmologies predict different abundances and internal properties for haloes at the low-mass end of the mass function, it is interesting to examine if there are differences in how these low-mass haloes are accreted. The accretion events are defined as the moment a halo becomes a substructure, namely when it crosses its host's virial radius. We quantify the cosmic web at each point by the shear tensor and examine where, with respect to its eigenvectors, such accretion events occur in ΛCDM and ΛWDM (1 keV sterile neutrino) cosmological models. We find that the CDM and WDM subhaloes are preferentially accreted along the principal axis of the shear tensor corresponding to the direction of weakest collapse. The beaming strength is modulated by the host and subhalo masses and by the redshift at which the accretion event occurs. Although strongest for the most massive hosts and subhaloes at high redshift, the preferential infall is found to be always aligned with the axis of weakest collapse, thus we say that it has universal nature. We compare the strength of beaming in the ΛWDM cosmology with the one found in the ΛCDM scenario. While the main findings remain the same, the accretion in the ΛWDM model for the most massive host haloes appears more beamed than in ΛCDM cosmology across all the redshifts.
Minimization of Retinal Slip Cannot Explain Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Null, Cynthia H. (Technical Monitor)
1998-01-01
Existing models assume that pursuit attempts a direct minimization of retinal image motion or "slip" (e.g. Robinson et al., 1986; Krauzlis & Lisberger, 1989). Using occluded line-figure stimuli, we have previously shown that humans can accurately pursue stimuli for which perfect tracking does not zero retinal slip (Neurologic ARCO). These findings are inconsistent with the standard control strategy of matching eye motion to a target-motion signal reconstructed by adding retinal slip and eye motion, but consistent with a visual front-end which estimates target motion via a global spatio-temporal integration for pursuit and perception. Another possible explanation is that pursuit simply attempts to minimize slip perpendicular to the segments (and neglects parallel "sliding" motion). To resolve this, 4 observers (3 naive) were asked to pursue the center of 2 types of stimuli with identical velocity-space descriptions and matched motion energy. The line-figure "diamond" stimulus was viewed through 2 invisible 3 deg-wide vertical apertures (38 cd/m2, equal to background) such that only the sinusoidal motion of 4 oblique line segments (44 cd/m2) was visible. The "cross" was identical except that the segments exchanged positions. Two trajectories (8's and infinity's) with 4 possible initial directions were randomly interleaved (1.25 cycles, 2.5s period, Ax = Ay = 1.4 deg). In 91% of trials, the diamond appeared rigid. Correspondingly, pursuit was vigorous (mean Hgain: 0.74) with a V/H aspect ratio approx. 1 (mean: 0.9). Despite a valid rigid solution, the cross, however, appeared rigid in only 8% of trials. Correspondingly, pursuit was weaker (mean Hgain: 0.38) with an incorrect aspect ratio (mean: 1.5). If pursuit were just minimizing perpendicular slip, performance would be the same in both conditions.
ERIC Educational Resources Information Center
Convertino, Christina
2016-01-01
This praxis article outlines the value of using a critical and dialogical model (CDM) to teach multicultural social justice education to preservice teachers. Based on practitioner research, the article draws on the author's own teaching experiences to highlight how key features of CDM can be used to help pre-service teachers move beyond thinking…
Isotropic vs. anisotropic components of BAO data: a tool for model selection
NASA Astrophysics Data System (ADS)
Haridasu, Balakrishna S.; Luković, Vladimir V.; Vittorio, Nicola
2018-05-01
We conduct a selective analysis of the isotropic (DV) and anisotropic (AP) components of the most recent Baryon Acoustic Oscillations (BAO) data. We find that these components provide significantly different constraints and could provide strong diagnostics for model selection, also in view of more precise data yet to arrive. For instance, in the ΛCDM model we find a mild tension of ~ 2σ between the Ωm estimates obtained using DV and AP separately. Considering both Ωk and w as free parameters, we find that the concordance model is in tension with the best-fit values provided by the BAO data alone at 2.2σ. We complement the BAO data with the Type Ia supernovae (SNIa) and observational Hubble datasets to perform a joint analysis of the ΛCDM model and its standard extensions. Assuming the ΛCDM scenario, we find that these data provide H0 = 69.4 ± 1.7 km/s Mpc‑1 as the best-fit value for the present expansion rate. In the kΛCDM scenario we find that the evidence for acceleration using the BAO data alone is more than ~ 5.8σ, which increases to 8.4σ in our joint analysis.
Enhanced peculiar velocities in brane-induced gravity
NASA Astrophysics Data System (ADS)
Wyman, Mark; Khoury, Justin
2010-08-01
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10^-11 and 3.6×10^-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10^4 times more probable than in ΛCDM cosmology.
Enhanced peculiar velocities in brane-induced gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyman, Mark; Khoury, Justin
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10^-11 and 3.6×10^-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10^4 times more probable than in ΛCDM cosmology.
Commensurate 4a0-period charge density modulations throughout the Bi2Sr2CaCu2O8+x pseudogap regime
Mesaros, Andrej; Fujita, Kazuhiro; Edkins, Stephen D.; Hamidian, Mohammad H.; Eisaki, Hiroshi; Uchida, Shin-ichi; Davis, J. C. Séamus; Lawler, Michael J.; Kim, Eun-Ah
2016-01-01
Theories based upon strong real space (r-space) electron–electron interactions have long predicted that unidirectional charge density modulations (CDMs) with four-unit-cell (4a0) periodicity should occur in the hole-doped cuprate Mott insulator (MI). Experimentally, however, increasing the hole density p is reported to cause the conventionally defined wavevector QA of the CDM to evolve continuously as if driven primarily by momentum-space (k-space) effects. Here we introduce phase-resolved electronic structure visualization for determination of the cuprate CDM wavevector. Remarkably, this technique reveals a virtually doping-independent locking of the local CDM wavevector at |Q0|=2π/4a0 throughout the underdoped phase diagram of the canonical cuprate Bi2Sr2CaCu2O8. These observations have significant fundamental consequences because they are orthogonal to a k-space (Fermi-surface)–based picture of the cuprate CDMs but are consistent with strong-coupling r-space–based theories. Our findings imply that it is the latter that provides the intrinsic organizational principle for the cuprate CDM state. PMID:27791157
What do parameterized Om(z) diagnostics tell us in light of recent observations?
NASA Astrophysics Data System (ADS)
Qi, Jing-Zhao; Cao, Shuo; Biesiada, Marek; Xu, Teng-Peng; Wu, Yan; Zhang, Si-Xuan; Zhu, Zong-Hong
2018-06-01
In this paper, we propose a new parametrization for Om(z) diagnostics and show how the most recent and significantly improved observations concerning the H(z) and SN Ia measurements can be used to probe the consistency or tension between the ΛCDM model and observations. Our results demonstrate that H0 plays a very important role in the consistency test of ΛCDM with H(z) data. Adopting the Hubble constant priors from Planck 2013 and Riess, one finds considerable tension between the current H(z) data and the ΛCDM model and confirms the conclusions obtained previously by others. However, with the Hubble constant prior taken from WMAP9, the discrepancy between H(z) data and ΛCDM disappears, i.e., the current H(z) observations still support the cosmological constant scenario. This conclusion is also supported by the results derived from the Joint Light-curve Analysis (JLA) SN Ia sample. The best-fit Hubble constant from the combination of H(z)+JLA (H0 = 68.81^{+1.50}_{-1.49} km s^-1 Mpc^-1) is very consistent with the results derived by both Planck 2013 and WMAP9, but is significantly different from the recent local measurement by Riess.
Copper and zinc uptake by rice and accumulation in soil amended with municipal solid waste compost
NASA Astrophysics Data System (ADS)
Bhattacharyya, P.; Chakraborty, A.; Chakrabarti, K.; Tripathy, S.; Powell, M. A.
2006-04-01
The effect of adding municipal solid waste compost (MSWC) on the copper (Cu) and zinc (Zn) contents of submerged rice paddies was studied. Experiments were conducted during the three consecutive wet seasons from 1997 to 1999 on rice grown under submergence at the Experimental Farm of Calcutta University, India. A sequential extraction method was used to determine the metal (Cu and Zn) fractions in MSWC and cow dung manure (CDM). Both metals were significantly bound to the organic matter and Fe and Mn oxides in MSWC and CDM. Metal content in rice straw was higher than in rice grain. Metal bound with Fe and Mn oxides in MSWC and CDM correlated best with straw and grain metal, followed by the exchangeable and water-soluble fractions. The carbonate, organic-matter-bound and residual fractions in MSWC and CDM did not correlate significantly with rice straw and grain metal. MSWC would be a valuable resource for agriculture if it can be used safely, but long-term field experiments with MSWC, with regular monitoring of metal loads, are needed to assess metal accumulation in soil and plants.
Small scale clustering of late forming dark matter
NASA Astrophysics Data System (ADS)
Agarwal, S.; Corasaniti, P.-S.; Das, S.; Rasera, Y.
2015-09-01
We perform a study of the nonlinear clustering of matter in the late-forming dark matter (LFDM) scenario in which dark matter results from the transition of a nonminimally coupled scalar field from radiation to collisionless matter. A distinct feature of this model is the presence of a damped oscillatory cutoff in the linear matter power spectrum at small scales. We use a suite of high-resolution N-body simulations to study the imprints of LFDM on the nonlinear matter power spectrum, the halo mass and velocity functions and the halo density profiles. The model largely satisfies high-redshift matter power spectrum constraints from Lyman-α forest measurements, while it predicts suppressed abundance of low-mass halos (~10^9-10^10 h^-1 M⊙) at all redshifts compared to a vanilla ΛCDM model. The analysis of the LFDM halo velocity function shows a better agreement than the ΛCDM prediction with the observed abundance of low-velocity galaxies in the local volume. Halos with mass M ≳ 10^11 h^-1 M⊙ show minor departures of the density profiles from ΛCDM expectations, while smaller-mass halos are less dense, consistent with the fact that they form later than their ΛCDM counterparts.
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1993-01-01
Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse of scales M between 10^9-10^10 solar masses and (1-5) x 10^14 solar masses. We compute the probabilities and comoving space densities of various scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow and an open universe.
[Infrastructure and contents of clinical data management plan].
Shen, Tong; Xu, Lie-dong; Fu, Hai-jun; Liu, Yan; He, Jia; Chen, Ping-yan; Song, Yu-fei
2015-11-01
Establishment of a quality management system (QMS) plays a critical role in clinical data management (CDM). The objectives of CDM are to ensure the quality and integrity of the trial data. Thus, every stage or element that may impact the quality outcomes of clinical studies should be managed in a controlled manner, covering the full life cycle of CDM from data collection and handling to the statistical analysis of trial data. Based on the QMS, this paper provides consensus on how to develop a compliant clinical data management plan (CDMP). According to the essential requirements of CDM, the CDMP should encompass each process of data collection, data capture and cleaning, medical coding, data verification and reconciliation, database monitoring and management, external data transmission and integration, data documentation and data quality assurance, and so on. Creating and following the data management plan at each designed data management step, and dynamically recording the systems used, actions taken and parties involved, will build and confirm regulated data management processes, standard operating procedures and effective quality metrics in all data management activities. The CDMP is one of the most important data management documents and is the solid foundation for clinical data quality.
Mesaros, Andrej; Fujita, Kazuhiro; Edkins, Stephen D.; ...
2016-10-20
Theories based upon strong real space (r-space) electron–electron interactions have long predicted that unidirectional charge density modulations (CDMs) with four-unit-cell (4a0) periodicity should occur in the hole-doped cuprate Mott insulator (MI). But, increasing the hole density p is reported to cause the conventionally defined wavevector QA of the CDM to evolve continuously as if driven primarily by momentum-space (k-space) effects. We introduce phase-resolved electronic structure visualization for determination of the cuprate CDM wavevector. Remarkably, this technique reveals a virtually doping-independent locking of the local CDM wavevector at |Q0|=2π/4a0 throughout the underdoped phase diagram of the canonical cuprate Bi2Sr2CaCu2O8. Our observations have significant fundamental consequences because they are orthogonal to a k-space (Fermi-surface)–based picture of the cuprate CDMs but are consistent with strong-coupling r-space–based theories. Our findings imply that it is the latter that provides the intrinsic organizational principle for the cuprate CDM state.
NASA Astrophysics Data System (ADS)
Wang, Deng
2018-06-01
To explore whether there is new physics going beyond the standard cosmological model or not, we constrain seven cosmological models by combining the latest and largest Pantheon Type Ia supernovae sample with the data combination of baryonic acoustic oscillations, cosmic microwave background radiation, Planck lensing and cosmic chronometers. We find that a spatially flat universe is preferred in the framework of ΛCDM cosmology, that the constrained equation of state of dark energy is very consistent with the cosmological constant hypothesis in the ωCDM model, that there is no evidence of dynamical dark energy in the dark energy density-parametrization model, that there is no hint of interaction between dark matter and dark energy in the dark sector of the universe in the decaying vacuum model, and that there is no evidence for a sterile neutrino in the neutrino sector of the universe in the ΛCDM model. We also give the 95% upper limit on the total mass of the three active neutrinos, Σmν < 0.178 eV, under the assumption of the ΛCDM scenario. It is clear that there is no departure from the standard cosmological model based on current observational datasets.
NASA Astrophysics Data System (ADS)
Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie
2018-04-01
Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) in the redshift range 0 < z ≤ 2.36 to make a cosmological model-independent test of the ΛCDM model with the two-point Omh^2(z2;z1) diagnostic. In the ΛCDM model, with equation of state (EoS) w=-1, the two-point diagnostic relation Omh^2 ≡ Ωmh^2 holds, where Ωm is the present matter density parameter, and h is the Hubble parameter divided by 100 km s^-1 Mpc^-1. We utilize two methods, the weighted mean and median statistics, to bin the OHD to increase the signal-to-noise ratio of the measurements. The binning methods turn out to be promising and are considered to be robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh^2 fluctuate as the continuous redshift intervals change, on average they are consistent with being constant within the 1σ confidence interval. Therefore, we conclude that the ΛCDM model cannot be ruled out.
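For reference, the standard form of the two-point diagnostic used above can be written directly from two H(z) measurements; in flat ΛCDM with w = -1 it reduces to Ωm h² for any redshift pair. The snippet below states that form and checks it on an exact toy ΛCDM expansion history (the specific H0 and Ωm values are placeholders).

```python
def omh2_two_point(H1, z1, H2, z2):
    """Two-point Omh^2 diagnostic built from two Hubble-parameter measurements
    (H in km/s/Mpc). In flat LambdaCDM with w = -1 this reduces to Omega_m h^2
    for any pair of redshifts, consistent with the relation quoted above."""
    h1, h2 = H1 / 100.0, H2 / 100.0
    return (h2**2 - h1**2) / ((1 + z2)**3 - (1 + z1)**3)

# Toy check with an exactly flat LambdaCDM expansion history (H0=70, Omega_m=0.3):
H = lambda z: 70.0 * ((0.3 * (1 + z)**3 + 0.7) ** 0.5)
print(omh2_two_point(H(0.2), 0.2, H(1.5), 1.5))   # ~0.147 = 0.3 * 0.7^2
```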
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
NASA Astrophysics Data System (ADS)
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors, and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs specific software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the High Performance Computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We present our PYMSES software in more detail, together with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first one is a splatting technique, and the second one is a custom ray tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI runs. The load balancing strategy has to be smartly defined in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
The AIS-5000 parallel processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, L.A.; Wilson, S.S.
1988-05-01
The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.
Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4
2017-12-20
... by first-principles methods in the software package ACES, using large parallel computers, growing to the exascale. Subject terms: computer modeling, excited states, optical properties, structure, stability, activation barriers, first-principles methods, parallel computing. ... Progress with new density functional methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargupta, H.; Stafford, B.; Hamzaoglu, I.
This paper describes an experimental parallel/distributed data mining system, PADMA (PArallel Data Mining Agents), that uses software agents for local data access and analysis and a web-based interface for interactive data visualization. It also presents the results of applying PADMA to detecting patterns in unstructured texts of postmortem reports and laboratory test data for Hepatitis C patients.
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
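As a hedged illustration of the generating-set-search idea (not HOPSPACK itself, whose framework is far richer), the sketch below runs a basic compass search: trial points along the plus and minus coordinate directions are evaluated in parallel with a process pool, an improving point is accepted, and otherwise the step is contracted.

```python
# A minimal compass-search sketch of the generating-set-search (GSS) idea:
# evaluate trial points along +/- coordinate directions (in parallel with a
# process pool), accept an improving point, otherwise contract the step.
# Illustrative only; HOPSPACK's actual framework and algorithm are far richer.
from multiprocessing import Pool

def objective(x):                       # a smooth derivative-free test problem
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def compass_search(x0, step=1.0, tol=1e-4, max_iter=200):
    x, fx = list(x0), objective(x0)
    with Pool() as pool:
        for _ in range(max_iter):
            trials = []
            for i in range(len(x)):
                for s in (+step, -step):
                    t = list(x); t[i] += s
                    trials.append(t)
            values = pool.map(objective, trials)      # parallel function evaluations
            best = min(range(len(trials)), key=values.__getitem__)
            if values[best] < fx:
                x, fx = trials[best], values[best]    # accept improving trial point
            else:
                step *= 0.5                           # contract the generating set
                if step < tol:
                    break
    return x, fx

if __name__ == "__main__":
    print(compass_search([5.0, 5.0]))                 # converges near (1, -2)
```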
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.
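For context, the kind of small, loop-based algorithmic description that a Python-to-HDL flow might target looks like the snippet below (a multiply-accumulate/FIR kernel). Whether this exact subset is accepted by the compiler described above is an assumption; the snippet is ordinary runnable Python.

```python
# The sort of small, loop-based algorithmic description a Python-to-HDL flow
# might target (a multiply-accumulate / FIR filter kernel). Support for this
# exact subset by the compiler described above is an assumption.
def fir_filter(samples, coeffs):
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]   # multiply-accumulate, easily parallelized in hardware
        out.append(acc)
    return out

print(fir_filter([1, 2, 3, 4], [1, 1, 1]))   # -> [1, 3, 6, 9]
```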
GALAXY CLUSTER BULK FLOWS AND COLLISION VELOCITIES IN QUMOND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, Harley; McGaugh, Stacy; Teuben, Peter
We examine the formation of clusters of galaxies in numerical simulations of a QUMOND cosmogony with massive sterile neutrinos. Clusters formed in these exploratory simulations develop higher velocities than those found in ΛCDM simulations. The bulk motions of clusters attain ~1000 km s^-1 by low redshift, comparable to observations, whereas ΛCDM simulated clusters tend to fall short. Similarly, high pairwise velocities are common in cluster-cluster collisions like the Bullet Cluster. There is also a propensity for the most massive clusters to be larger in QUMOND and to appear earlier than in ΛCDM, potentially providing an explanation for "pink elephants" like El Gordo. However, it is not obvious that the cluster mass function can be recovered.
Integration experiences and performance studies of A COTS parallel archive systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-01-01
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
Integration experiments and performance studies of a COTS parallel archive system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-06-16
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.
Chikkagoudar, Satish; Wang, Kai; Li, Mingyao
2011-05-26
Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
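The partitioning strategy described above is straightforward to illustrate. The Python sketch below mirrors only the fragment decomposition (within-fragment pairs plus cross-fragment pairs, processed in parallel); it is not GENIE's actual code, and the per-pair statistical test is omitted.

```python
# Sketch of the fragment-partitioning idea: split the SNP list into
# non-overlapping fragments, then enumerate pairs within each fragment and
# between fragment pairs in parallel. Illustrative only.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

def pairs_within(fragment):
    # all SNP pairs inside one fragment
    return list(combinations(fragment, 2))

def pairs_between(frag_pair):
    # all SNP pairs spanning two different fragments
    frag_a, frag_b = frag_pair
    return [(a, b) for a in frag_a for b in frag_b]

def all_interaction_pairs(snps, fragment_size, workers=4):
    frags = [snps[i:i + fragment_size] for i in range(0, len(snps), fragment_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        within = list(pool.map(pairs_within, frags))
        between = list(pool.map(pairs_between, combinations(frags, 2)))
    return [p for chunk in within + between for p in chunk]

if __name__ == "__main__":
    snps = [f"rs{i}" for i in range(8)]
    print(len(all_interaction_pairs(snps, fragment_size=4)))  # C(8,2) = 28 pairs
```

In the real package each enumerated pair would be scored with an interaction test on the genotype and phenotype data rather than simply listed.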
Photonic content-addressable memory system that uses a parallel-readout optical disk
NASA Astrophysics Data System (ADS)
Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.
1995-11-01
We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.
Parallel Event Analysis Under Unix
NASA Astrophysics Data System (ADS)
Looney, S.; Nilsson, B. S.; Oest, T.; Pettersson, T.; Ranjard, F.; Thibonnier, J.-P.
The ALEPH experiment at LEP, the CERN CN division and Digital Equipment Corp. have, in a joint project, developed a parallel event analysis system. The parallel physics code is identical to ALEPH's standard analysis code, ALPHA; only the organisation of input/output is changed. The user may switch between sequential and parallel processing by simply changing one input "card". The initial implementation runs on an 8-node DEC 3000/400 farm, using the PVM software, and exhibits a near-perfect speed-up linearity, reducing the turn-around time by a factor of 8.
Lowrie, Richard; McConnachie, Alex; Williamson, Andrea E; Kontopantelis, Evangelos; Forrest, Marie; Lannigan, Norman; Mercer, Stewart W; Mair, Frances S
2017-04-11
The inverse equity hypothesis asserts that new health policies initially widen inequality, then attenuate inequalities over time. Since 2004, the UK's pay-for-performance scheme for chronic disease management (CDM) in primary care general practices (the Quality and Outcomes Framework) has permitted practices to except (exclude) patients from attending annual CDM reviews, without financial penalty. Informed dissent (ID) is one component of exception rates, applied to patients who have not attended due to refusal or non-response to invitations. 'Population achievement' describes the proportion receiving care, in relation to those eligible to receive it, including excepted patients. Examination of exception reporting (including ID) and population achievement enables the equity impact of the UK pay-for-performance contract to be assessed. We conducted a longitudinal analysis of practice-level rates and of predictors of ID, overall exceptions and population achievement for CDM to examine whether the inverse equity hypothesis holds true. We carried out a retrospective, longitudinal study using routine primary care data, analysed by multilevel logistic regression. Data were extracted from 793 practices (83% of Scottish general practices) serving 4.4 million patients across Scotland from 2010/2011 to 2012/2013, for 29 CDM indicators covering 11 incentivised diseases. This provided 68,991 observations, representing a total of 15 million opportunities for exception reporting. Across all observations, the median overall exception reporting rate was 7.0% (7.04% in 2010-2011; 7.02% in 2011-2012 and 6.92% in 2012-2013). The median non-attendance rate due to ID was 0.9% (0.76% in 2010-2011; 0.88% in 2011-2012 and 0.96% in 2012-2013). Median population achievement was 83.5% (83.51% in 2010-2011; 83.41% in 2011-2012 and 83.63% in 2012-2013). The odds of ID reporting in 2012/2013 were 16.0% greater than in 2010/2011 (p < 0.001). Practices in Scotland's most deprived communities were twice as likely to report non-attendance due to ID (odds ratio 2.10, 95% confidence interval 1.83-2.40, p < 0.001) compared with those in the least deprived; rural practices reported lower levels of non-attendance due to ID. These predictors were also independently associated with overall exceptions. Rates of population achievement did not change over time, with higher levels (higher remuneration) associated with increased rates of overall and ID exception and more affluent practices. Non-attendance for CDM due to ID has risen over time, and higher rates are seen in patients from practices located in disadvantaged areas. This suggests that CDM incentivisation does not conform to the inverse equity hypothesis, because inequalities are widening over time with lower uptake of anticipatory care health checks and CDM reviews noted among those most in need. Incentivised CDM needs to include incentives for engaging with the 'hard to reach' if inequalities in healthcare delivery are to be tackled.
NASA Astrophysics Data System (ADS)
Bonvin, V.; Courbin, F.; Suyu, S. H.; Marshall, P. J.; Rusu, C. E.; Sluse, D.; Tewes, M.; Wong, K. C.; Collett, T.; Fassnacht, C. D.; Treu, T.; Auger, M. W.; Hilbert, S.; Koopmans, L. V. E.; Meylan, G.; Rumbaugh, N.; Sonnenfeld, A.; Spiniello, C.
2017-03-01
We present a new measurement of the Hubble Constant H0 and other cosmological parameters based on the joint analysis of three multiply imaged quasar systems with measured gravitational time delays. First, we measure the time delay of HE 0435-1223 from 13-yr light curves obtained as part of the COSMOGRAIL project. Companion papers detail the modelling of the main deflectors and line-of-sight effects, and how these data are combined to determine the time-delay distance of HE 0435-1223. Crucially, the measurements are carried out blindly with respect to cosmological parameters in order to avoid confirmation bias. We then combine the time-delay distance of HE 0435-1223 with previous measurements from systems B1608+656 and RXJ1131-1231 to create a Time Delay Strong Lensing probe (TDSL). In flat Λ cold dark matter (ΛCDM) with free matter and energy density, we find H0 = 71.9 (+2.4, -3.0) km s⁻¹ Mpc⁻¹ and Ω_Λ = 0.62 (+0.24, -0.35). This measurement is completely independent of, and in agreement with, the local distance ladder measurements of H0. We explore more general cosmological models combining TDSL with other probes, illustrating its power to break degeneracies inherent to other methods. The joint constraints from TDSL and Planck are H0 = 69.2 (+1.4, -2.2) km s⁻¹ Mpc⁻¹, Ω_Λ = 0.70 ± 0.01 and Ω_k = 0.003 (+0.004, -0.006) in open ΛCDM, and H0 = 79.0 (+4.4, -4.2) km s⁻¹ Mpc⁻¹, Ω_de = 0.77 (+0.02, -0.03) and w = -1.38 (+0.14, -0.16) in flat wCDM. In combination with Planck and baryon acoustic oscillation data, when relaxing the constraints on the number of relativistic species we find N_eff = 3.34 ± 0.21 in N_eff ΛCDM, and when relaxing the total mass of neutrinos we find Σm_ν ≤ 0.182 eV in m_ν ΛCDM. Finally, in an open wCDM in combination with Planck and cosmic microwave background lensing, we find H0 = 77.9 (+5.0, -4.2) km s⁻¹ Mpc⁻¹, Ω_de = 0.77 ± 0.03, Ω_k = -0.003 (+0.004, -0.004) and w = -1.37 (+0.18, -0.23).
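For readers unfamiliar with time-delay cosmography, the sensitivity to H0 comes from the time-delay distance. In its standard textbook form (not quoted from the paper above):

```latex
\Delta t_{ij} \;=\; \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij},
\qquad
D_{\Delta t} \;\equiv\; (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}} \;\propto\; \frac{1}{H_0},
```

where Δφ_ij is the Fermat potential difference between images i and j, z_d is the deflector redshift, and D_d, D_s and D_ds are angular-diameter distances to the deflector, to the source, and between them. Measured delays and a lens model therefore yield D_Δt, and hence H0, independently of the distance ladder.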
Hierarchical Petascale Simulation Framework For Stress Corrosion Cracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grama, Ananth
2013-12-18
A number of major accomplishments resulted from the project. These include: • Data Structures, Algorithms, and Numerical Methods for Reactive Molecular Dynamics. We have developed a range of novel data structures, algorithms, and solvers (amortized ILU, Spike) for use with ReaxFF and charge equilibration. • Parallel Formulations of Reactive MD (Purdue Reactive Molecular Dynamics Package: PuReMD, PuReMD-GPU, and PG-PuReMD) for Messaging, GPU, and GPU Cluster Platforms. We have developed efficient serial, parallel (MPI), GPU (CUDA), and GPU Cluster (MPI/CUDA) implementations. Our implementations have been demonstrated to be significantly better than the state of the art, both in terms of performance and scalability. • Comprehensive Validation in the Context of Diverse Applications. We have demonstrated the use of our software in diverse systems, including silica-water, silicon-germanium nanorods, and, as part of other projects, extended it to applications ranging from explosives (RDX) to lipid bilayers (biomembranes under oxidative stress). • Open Source Software Packages for Reactive Molecular Dynamics. All versions of our software have been released into the public domain. There are over 100 major research groups worldwide using our software. • Implementation into the Department of Energy LAMMPS Software Package. We have also integrated our software into the Department of Energy LAMMPS software package.
NASA Astrophysics Data System (ADS)
Huang, Fa Peng; Kadota, Kenji; Sekiguchi, Toyokazu; Tashiro, Hiroyuki
2018-06-01
We study the conditions for the adiabatic resonant conversion of the cold dark matter (CDM) axions into photons in the astrophysically sourced strong magnetic fields such as those in the neutron star magnetosphere. We demonstrate the possibility that the forthcoming radio telescopes such as the SKA (Square Kilometre Array) can probe those photon signals from the CDM axions.
Operationalizing clean development mechanism baselines: A case study of China's electrical sector
NASA Astrophysics Data System (ADS)
Steenhof, Paul A.
The global carbon market is rapidly developing as the first commitment period of the Kyoto Protocol draws closer and Parties to the Protocol with greenhouse gas (GHG) emission reduction targets seek alternative ways to reduce their emissions. The Protocol includes the Clean Development Mechanism (CDM), a tool that encourages project-based investments to be made in developing nations that will lead to an additional reduction in emissions. Due to China's economic size and rate of growth, technological characteristics, and its reliance on coal, it contains a large proportion of the global CDM potential. As China's economy modernizes, more technologies and processes are requiring electricity and demand for this energy source is accelerating rapidly. Relatively inefficient technology to generate electricity in China thereby results in the electrical sector having substantial GHG emission reduction opportunities as related to the CDM. In order to ensure the credibility of the CDM in leading to a reduction in GHG emissions, it is important that the baseline method used in the CDM approval process is scientifically sound and accessible for both others to use and for evaluation purposes. Three different methods for assessing CDM baselines and environmental additionality are investigated in the context of China's electrical sector: a method based on a historical perspective of the electrical sector (factor decomposition), a method structured upon a current perspective (operating and build margins), and a simulation of the future (dispatch analysis). Assessing future emission levels for China's electrical sector is a very challenging task given the complexity of the system, its dynamics, and that it is heavily influenced by internal and external forces, but of the different baseline methods investigated, dispatch modelling is best suited for the Chinese context as it is able to consider the important regional and temporal dimensions of its economy and its future development. For China, the most promising options for promoting sustainable development, one of the goals of the Kyoto Protocol, appear to be tied to increasing electrical end-use and generation efficiency, particularly clean coal technology for electricity generation since coal will likely continue to be a dominant primary fuel.
NASA Astrophysics Data System (ADS)
Kim, G. E.; Pradal, M.-A.; Gnanadesikan, A.
2015-08-01
Light attenuation by colored detrital material (CDM) was included in a fully coupled Earth system model (ESM). This study presents a modified parameterization for shortwave attenuation, which is an empirical relationship between 244 concurrent measurements of the diffuse attenuation coefficient for downwelling irradiance, chlorophyll concentration and light absorption by CDM. Two ESM model runs using this parameterization were conducted, with and without light absorption by CDM. The light absorption coefficient for CDM was prescribed as the average of annual composite MODIS Aqua satellite data from 2002 to 2013. Comparing results from the two model runs shows that changes in light limitation associated with the inclusion of CDM decoupled trends between surface biomass and nutrients. Increases in surface biomass were expected to accompany greater nutrient uptake and therefore diminish surface nutrients. Instead, surface chlorophyll, biomass and nutrients increased together. These changes can be attributed to the different impact of light limitation on surface productivity versus total productivity. Chlorophyll and biomass increased near the surface but decreased at greater depths when CDM was included. The net effect over the euphotic zone was less total biomass leading to higher nutrient concentrations. Similar results were found in a regional analysis of the oceans by biome, investigating the spatial variability of response to changes in light limitation using a single parameterization for the surface ocean. In coastal regions, surface chlorophyll increased by 35 % while total integrated phytoplankton biomass diminished by 18 %. The largest relative increases in modeled surface chlorophyll and biomass in the open ocean were found in the equatorial biomes, while the largest decreases in depth-integrated biomass and chlorophyll were found in the subpolar and polar biomes. This mismatch of surface and subsurface trends and their regional dependence was analyzed by comparing the competing factors of diminished light availability and increased nutrient availability on phytoplankton growth in the upper 200 m. Understanding changes in biological productivity requires both surface and depth-resolved information. Surface trends may be minimal or of the opposite sign than depth-integrated amounts, depending on the vertical structure of phytoplankton abundance.
Strong gravitational lensing statistics as a test of cosmogonic scenarios
NASA Technical Reports Server (NTRS)
Cen, Renyue; Gott, J. Richard, III; Ostriker, Jeremiah P.; Turner, Edwin L.
1994-01-01
Gravitational lensing statistics can provide a direct and powerful test of cosmic structure formation theories. Since lensing tests, directly, the magnitude of the nonlinear mass density fluctuations on lines of sight to distant objects, no issues of 'bias' (of mass fluctuations with respect to galaxy density fluctuations) exist here, although lensing observations provide their own ambiguities of interpretation. We develop numerical techniques for generating model density distributions with the very large spatial dynamic range required by lensing considerations and for identifying regions of the simulations capable of multiple image lensing in a conservative and computationally efficient way that should be accurate for splittings significantly larger than 3 arcsec. Applying these techniques to existing standard cold dark matter (CDM) (Ω = 1) and Primeval Baryon Isocurvature (PBI) (Ω = 0.2) simulations (normalized to the Cosmic Background Explorer Satellite (COBE) amplitude), we find that the CDM model predicts large-splitting (greater than 8 arcsec) lensing events roughly an order of magnitude more frequently than the PBI model. Under the reasonable but idealized assumption that lensing structures can be modeled as singular isothermal spheres (SIS), the predictions can be directly compared to observations of lensing events in quasar samples. Several large-splitting (Δθ > 8 arcsec) cases are predicted in the standard CDM model (the exact number being dependent on the treatment of amplification bias), whereas none is observed. In a formal sense, the comparison excludes the CDM model at high confidence (essentially for the same reason that CDM predicts excessive small-scale cosmic velocity dispersions). A very rough assessment of a low-density but flat CDM model (Ω = 0.3, Λ/3H₀² = 0.7) indicates a far lower and probably acceptable level of lensing. The PBI model is consistent with, but not strongly tested by, the available lensing data, and other open models would presumably do as well as PBI. These preliminary conclusions and the assumptions on which they are based can be tested, and the analysis can be applied to other cosmogonic models by straightforward extension of the work presented here.
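The link between the predicted image splittings and the velocity dispersions of collapsed structures follows from the standard singular isothermal sphere result (a textbook relation, stated here for context rather than quoted from the paper):

```latex
\Delta\theta \;=\; 2\,\theta_{\rm E}
\;=\; 8\pi\left(\frac{\sigma_v}{c}\right)^{2}\frac{D_{\rm ls}}{D_{\rm s}},
```

where σ_v is the line-of-sight velocity dispersion of the lens and D_ls, D_s are angular-diameter distances from lens to source and from observer to source. Splittings above 8 arcsec thus require cluster-scale velocity dispersions, which is why the excess of such events in the Ω = 1 CDM model parallels its excess of small-scale velocities.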
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
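For orientation, a linear-Gaussian state space model of the kind estimated here can be written in its generic textbook form (the paper's exact parameterization and constraints may differ):

```latex
x_{t} = F\,x_{t-1} + w_{t}, \qquad w_{t} \sim \mathcal{N}(0, Q),
\qquad
y_{t} = H\,x_{t} + v_{t}, \qquad v_{t} \sim \mathcal{N}(0, R),
```

where y_t is the observed expression vector at time t, x_t is a lower-dimensional hidden state, and the estimated system matrices encode the temporal regulatory dependencies between genes.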
Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation
NASA Technical Reports Server (NTRS)
Leachman, Jonathan
2010-01-01
A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
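The master/worker pattern described above is easy to sketch in outline. The Python fragment below is illustrative only (the actual system processes radar range gates from digital receiver cards); the names and the placeholder per-block computation are invented, and the point is simply that the pool size is derived from the available cores and data blocks are streamed to the workers.

```python
# Minimal sketch of a master that sizes a worker pool to the CPU and keeps
# feeding it data blocks for parallel processing. Illustrative only.
import os
from concurrent.futures import ThreadPoolExecutor

def process_block(block):
    # placeholder for per-block signal processing (e.g. filtering, moments)
    return sum(block) / len(block)

def acquisition_loop(blocks):
    workers = max(1, (os.cpu_count() or 2) - 1)   # leave one core for acquisition
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(process_block, blocks):
            yield result

if __name__ == "__main__":
    fake_blocks = [[i, i + 1, i + 2] for i in range(5)]
    print(list(acquisition_loop(fake_blocks)))
```

As the abstract notes, the benefit of this structure is that the same software automatically exploits additional cores as newer PCs become available, without changing the processing code.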
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
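The basic ingredient of such information-theoretic inference is a pairwise mutual-information matrix over the measured variables. The numpy sketch below shows only that ingredient on discretized data; PREMER's actual algorithms (entropy reduction, causality assignment, missing-data imputation, the FORTRAN/OpenMP core) go well beyond this.

```python
# Hedged sketch: pairwise mutual information from binned data as the starting
# point for network inference. Not PREMER's code; illustrative only.
import numpy as np

def mutual_information(x, y, bins=8):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mi_matrix(data, bins=8):
    n = data.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_information(data[:, i], data[:, j], bins)
    return mi

rng = np.random.default_rng(0)
x = rng.normal(size=500)
data = np.column_stack([x, x + 0.1 * rng.normal(size=500), rng.normal(size=500)])
print(np.round(mi_matrix(data), 2))   # the correlated pair shows high MI
```

Distinguishing direct from indirect links and assigning directionality then requires higher-dimensional entropy estimates, which is exactly the computational bottleneck that motivates the compiled, parallel implementation.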
Optimization of the coherence function estimation for multi-core central processing unit
NASA Astrophysics Data System (ADS)
Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.
2017-02-01
The paper considers the use of parallel processing on a multi-core central processing unit for optimizing the coherence function evaluation arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural, and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization have been significantly improved, showing a high degree of parallelism in the constructed calculation functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
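The quantity being optimized is the magnitude-squared coherence C_xy(f) = |P_xy(f)|² / (P_xx(f) P_yy(f)), built from Welch-averaged cross- and auto-spectra. As a reference point (not the paper's optimized multi-core implementation), the same estimate can be computed in a few lines with scipy:

```python
# Reference computation of the magnitude-squared coherence between two
# signals using Welch averaging; the delayed/noisy copy shows a peak near
# the shared 500 Hz component.
import numpy as np
from scipy.signal import coherence

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 500 * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 500 * t + 0.3) + 0.5 * rng.normal(size=t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
print(f[np.argmax(Cxy)], Cxy.max())   # peak coherence near 500 Hz
```

The segment-wise FFTs and averaging in this estimator are exactly the parts that parallelize well across CPU cores, which is what the optimization study above exploits.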
Scalable Performance Environments for Parallel Systems
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Olson, Robert D.; Aydt, Ruth A.; Madhyastha, Tara M.; Birkett, Thomas; Jensen, David W.; Nazief, Bobby A. A.; Totty, Brian K.
1991-01-01
As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.
Influence of calcium depletion on iron-binding properties of milk.
Mittal, V A; Ellis, A; Ye, A; Das, S; Singh, H
2015-04-01
We investigated the effects of calcium depletion on the binding of iron in milk. A weakly acidic cation-exchange resin was used to remove 3 different levels (18-22, 50-55, and 68-72%) of calcium from milk. Five levels of iron (5, 10, 15, 20, and 25 mM) were added to each of these calcium-depleted milks (CDM) and the resultant milks were analyzed for particle size, microstructure, and the distribution of protein and minerals between the colloidal and soluble phases. The depletion of calcium affected the distribution of protein and minerals in normal milk. Iron added to normal milk and low-CDM (~20% calcium depletion) bound mainly to the colloidal phase (material sedimented at 100,000 × g for 1 h at 20 °C), with little effect on the integrity of the casein micelles. Depletion of ~70% of the calcium from milk resulted in almost complete disintegration of the casein micelles, as indicated by all the protein remaining in the soluble phase upon ultracentrifugation. Addition of up to ~20 mM iron to high CDM resulted in the formation of small fibrous structures that remained in the soluble phase of milk. It appeared that the iron bound to soluble (nonsedimentable) caseins in high-CDM. We observed a decrease in the aqueous phosphorus content of all milks upon iron addition, irrespective of their calcium content. We considered the interaction between aqueous phosphorus and added iron to be responsible for the high iron-binding capacity of the proteins in milk. The soluble protein-iron complexes formed in high-CDM (~70% calcium depletion) could be used as an effective iron fortificant for a range of food products because of their good solubility characteristics. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Disentangling interacting dark energy cosmologies with the three-point correlation function
NASA Astrophysics Data System (ADS)
Moresco, Michele; Marulli, Federico; Baldi, Marco; Moscardini, Lauro; Cimatti, Andrea
2014-10-01
We investigate the possibility of constraining coupled dark energy (cDE) cosmologies using the three-point correlation function (3PCF). Making use of the CODECS N-body simulations, we study the statistical properties of cold dark matter (CDM) haloes for a variety of models, including a fiducial ΛCDM scenario and five models in which dark energy (DE) and CDM mutually interact. We measure both the halo 3PCF, ζ(θ), and the reduced 3PCF, Q(θ), at different scales (2 < r [h⁻¹ Mpc] < 40) and redshifts (0 ≤ z ≤ 2). In all cDE models considered in this work, Q(θ) appears flat at small scales (for all redshifts) and at low redshifts (for all scales), while it builds up the characteristic V-shape anisotropy at increasing redshifts and scales. With respect to the ΛCDM predictions, cDE models show lower (higher) values of the halo 3PCF for perpendicular (elongated) configurations. The effect is also scale-dependent, with differences between ΛCDM and cDE models that increase at large scales. We made use of these measurements to estimate the halo bias, which results in fair agreement with the one computed from the two-point correlation function (2PCF). The main advantage of using both the 2PCF and 3PCF is to break the bias-σ8 degeneracy. Moreover, we find that our bias estimates are approximately independent of the assumed strength of DE coupling. This study demonstrates the power of a higher order clustering analysis in discriminating between alternative cosmological scenarios, for both present and forthcoming galaxy surveys, such as e.g. the Baryon Oscillation Spectroscopic Survey and Euclid.
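For context, the reduced three-point correlation function is commonly defined (in the Groth-Peebles convention usually adopted in such analyses; the paper's exact triangle parameterization may differ in detail) as

```latex
Q(\theta) \;=\; \frac{\zeta(r_{12}, r_{13}, \theta)}
{\xi(r_{12})\,\xi(r_{13}) \;+\; \xi(r_{12})\,\xi(r_{23}) \;+\; \xi(r_{13})\,\xi(r_{23})},
```

where ζ is the connected 3PCF for triangles with two sides r_12, r_13 and opening angle θ, and ξ is the 2PCF. Because Q divides out products of two-point functions, it largely removes the leading dependence on linear bias and σ8, which is why combining 2PCF and 3PCF measurements breaks that degeneracy.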
NASA Astrophysics Data System (ADS)
Bemelmans, Frédéric; Rashidnasab, Alaleh; Chesterman, Frédérique; Kimpe, Tom; Bosmans, Hilde
2016-03-01
Purpose: To evaluate lesion detectability and reading time as a function of the luminance level of the monitor. Material and Methods: 3D mass models and microcalcification clusters were simulated into ROIs of unprocessed ('for processing') mammograms. Randomly selected ROIs were subdivided into three groups according to their background glandularity: high (>30%), medium (15-30%) and low (<15%). Six non-spiculated masses (9-11 mm), six spiculated masses (5-7 mm) and six microcalcification clusters (2-4 mm) were scaled in 3D to create a range of sizes. The linear attenuation coefficient of the masses was adjusted from 100% glandular tissue to 90%, 80% and 70%, to create different contrasts. Six physicists read the full database on Barco's Coronis Uniti monitor at four different luminance levels (300, 800, 1000 and 1200 cd/m2), using a 4-AFC tool. Percentage correct (PC) and reading time were computed for all conditions. A paired t-test was performed to evaluate the effect of luminance on PC and time. A multi-factorial analysis was performed using MANOVA. Results: The paired t-test indicated a statistically significant difference in the average time per session between 300 and 1200, 800 and 1200, and 1000 and 1200 cd/m2, for all participants combined. There was no effect on PC. MANOVA showed significantly lower reading times for high-glandularity images at 1200 cd/m2. In the contrast study, both types of masses were detected significantly faster at 1200 cd/m2. In the size study, microcalcification clusters and spiculated masses had a significantly higher detection rate at 1200 cd/m2. Conclusion: These results demonstrate a significant decrease in reading time, while detectability remained constant.
Wu, Jun-Yi; Chen, Show-An
2018-02-07
We use a mixed host, 2,6-bis[3-(carbazol-9-yl)phenyl]pyridine blended with 20 wt % tris(4-carbazoyl-9-ylphenyl)amine, to lower the hole-injection barrier, along with the bipolar, high-photoluminescence-quantum-yield (Φ_p = 84%) blue thermally activated delayed fluorescence (TADF) material 9,9-dimethyl-9,10-dihydroacridine-2,4,6-triphenyl-1,3,5-triazine (DMAC-TRZ) as a blue dopant to compose the emission layer for the fabrication of a TADF blue organic light-emitting diode (BOLED). The device is highly efficient, with the following performance parameters: maximum brightness (B_max) = 57586 cd/m2, maximum current efficiency (CE_max) = 35.3 cd/A, maximum power efficiency (PE_max) = 21.4 lm/W, maximum external quantum efficiency (EQE_max) = 14.1%, and CIE coordinates (0.18, 0.42). This device has the best performance recorded among the reported solution-processed TADF BOLEDs and has a low efficiency roll-off: at brightness values of 1000 and 5000 cd/m2, its CEs are close, being 35.1 and 30.1 cd/A, respectively. Upon further doping of the red phosphor Ir(dpm)PQ_2 (emission peak λ_max = 595 nm) into the blue emission layer, we obtained a TADF-phosphor hybrid white organic light-emitting diode (T-P hybrid WOLED) with high performance: B_max = 43594 cd/m2, CE_max = 28.8 cd/A, PE_max = 18.1 lm/W, and CIE coordinates (0.38, 0.44). This B_max of 43594 cd/m2 is better than that of the vacuum-deposited WOLED with a blue TADF emitter, 10000 cd/m2. This is also the first report of a T-P hybrid WOLED with a solution-processed emitting layer.
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
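The vocabulary-mapping step that produces the 90-99% mapping rates quoted above can be illustrated with a toy sketch. The mapping table and the way it is stored here are invented for illustration; real OMOP ETLs draw on the OHDSI standard vocabularies and database tooling rather than an in-memory dictionary.

```python
# Toy sketch of mapping source condition codes to standard concepts during an
# ETL into a common data model. Concept IDs and the mapping table are
# hypothetical; unmapped records are assigned concept_id 0.
SOURCE_TO_CONCEPT = {
    ("ICD9CM", "250.00"): 201826,   # hypothetical standard concept id
    ("ICD9CM", "401.9"): 320128,
}

def map_condition(vocabulary, code):
    concept_id = SOURCE_TO_CONCEPT.get((vocabulary, code))
    return {
        "condition_concept_id": concept_id or 0,   # 0 marks an unmapped record
        "condition_source_value": code,            # original code retained for audit
    }

records = [("ICD9CM", "250.00"), ("ICD9CM", "999.99")]
mapped = [map_condition(v, c) for v, c in records]
rate = sum(r["condition_concept_id"] != 0 for r in mapped) / len(mapped)
print(f"mapping rate: {rate:.0%}")   # the statistic reported per database
```

Keeping the source value alongside the mapped concept is what allows information-loss checks of the kind reported in the study.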
Propulsion Physics Using the Chameleon Density Model
NASA Technical Reports Server (NTRS)
Robertson, Glen A.
2011-01-01
To grow as a space-faring race, future spaceflight systems will require a new theory of propulsion, specifically one that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004 and so named because it is hidden within known physics: the Chameleon field represents a scalar field within and about an object, even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, so that thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass, and the interactions between these systems cause a differential coupling to the local gravity environment of the Earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust nearly equivalent to the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements, even though embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.
Hidden from view: coupled dark sector physics and small scales
NASA Astrophysics Data System (ADS)
Elahi, Pascal J.; Lewis, Geraint F.; Power, Chris; Carlesi, Edoardo; Knebe, Alexander
2015-09-01
We study cluster-mass dark matter (DM) haloes, their progenitors and surroundings in a coupled dark matter-dark energy (DE) model and compare it to quintessence and Λ cold dark matter (ΛCDM) models with adiabatic zoom simulations. When comparing cosmologies with different expansion histories, growth functions and power spectra, care must be taken to identify unambiguous signatures of alternative cosmologies. Shared cosmological parameters, such as σ8, need not be the same for optimal fits to observational data. We choose to set our parameters to ΛCDM z = 0 values. We find that in coupled models, where DM decays into DE, haloes appear remarkably similar to ΛCDM haloes despite DM experiencing an additional frictional force. Density profiles are not systematically different, and the subhalo populations have similar mass, spin, and spatial distributions, although (sub)haloes are less concentrated on average in coupled cosmologies. However, given the scatter in related observables (V_max, R_{V_max}), this difference is unlikely to distinguish between coupled and uncoupled DM. Observations of satellites of the Milky Way and M31 indicate that a significant subpopulation resides in a plane. Coupled models do produce planar arrangements of satellites of higher statistical significance than ΛCDM models; however, in all models these planes are dynamically unstable. In general, the non-linear dynamics within and near large haloes masks the effects of a coupled dark sector. The sole environmental signature we find is that small haloes residing in the outskirts are more deficient in baryons than their ΛCDM counterparts. The lack of a pronounced signal for a coupled dark sector strongly suggests that such a phenomenon would be effectively hidden from view.
Bruschi, Michele; Krömer, Jens O; Steen, Jennifer A; Nielsen, Lars K
2014-08-19
Peptides are increasingly used in industry as highly functional materials. Bacterial production of recombinant peptides has the potential to provide large amounts of renewable and low-cost peptides; however, achieving high product titers from Chemically Defined Media (CDM) supplemented with simple sugars remains challenging. In this work, the short peptide surfactant DAMP4 was used as a model peptide to investigate production in Escherichia coli BL21(DE3), a classical strain used for protein production. Under the same fermentation conditions, switching production of DAMP4 from rich complex media to CDM resulted in a reduction in yield that could be attributed to the reduction in final cell density more so than to a significant reduction in specific productivity. To maximize product titer, cell density at induction was maximized using a fed-batch approach. In fed-batch mode, the DAMP4 product titer increased 9-fold compared with batch, while maintaining 60% of the specific productivity. Under the fed-batch conditions, the final product titer of DAMP4 reached more than 7 g/L, which is the highest titer of DAMP4 reported to date. To investigate production from sucrose, sucrose metabolism was engineered into BL21(DE3) using a simple plasmid approach. Using this strain, the growth and DAMP4 production characteristics obtained from CDM supplemented with sucrose were similar to those obtained when culturing the parent strain on CDM supplemented with glucose. Production of a model peptide was increased to several grams per liter using a CDM medium with either glucose or sucrose feedstock. It is hoped that this work will contribute to cost reduction in the production of designer peptide surfactants and facilitate their commercial application.
Parallel Performance of a Combustion Chemistry Simulation
Skinner, Gregg; Eigenmann, Rudolf
1995-01-01
We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.
ΛCDM is Consistent with SPARC Radial Acceleration Relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, B. W.; Wadsley, J. W., E-mail: kellerbw@mcmaster.ca
2017-01-20
Recent analysis of the Spitzer Photometry and Accurate Rotation Curve (SPARC) galaxy sample found a surprisingly tight relation between the radial acceleration inferred from the rotation curves and the acceleration due to the baryonic components of the disk. It has been suggested that this relation may be evidence for new physics, beyond ΛCDM. In this Letter, we show that 32 galaxies from the MUGS2 simulations match the SPARC acceleration relation. These cosmological simulations of star-forming, rotationally supported disks were simulated with a WMAP3 ΛCDM cosmology, and match the SPARC acceleration relation with less scatter than the observational data. These results show that this acceleration relation is a consequence of dissipative collapse of baryons, rather than being evidence for exotic dark-sector physics or new dynamical laws.
Nawrotzki, Raphael J.; Jiang, Leiwen
2015-01-01
Although data on the total number of international migrant flows are now available, no global dataset concerning demographic characteristics, such as the age and gender composition of migrant flows, exists. This paper reports on the methods used to generate the CDM-IM dataset of age- and gender-specific profiles of bilateral net (not gross) migrant flows. We employ raw data from the United Nations Global Migration Database and estimate net migrant flows by age and gender between two time points around the year 2000, accounting for various demographic processes (fertility, mortality). The dataset contains information on 3,713 net migrant flows. Validation analyses against existing datasets and the historical, geopolitical context demonstrate that the CDM-IM dataset is of reasonably high quality. PMID:26692590
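The idea of estimating net migration as a residual after accounting for fertility and mortality can be stated in its simplest textbook form (the CDM-IM estimation is more elaborate, working cohort by cohort and by gender, but rests on the same accounting identity):

```latex
M \;=\; \big[\,P(t+n) - P(t)\,\big] \;-\; \big(B - D\big),
```

where P(t) and P(t+n) are the populations at the two time points, B and D are births and deaths over the interval, and M is the implied net number of migrants. Applied separately to each age-gender cohort of a country pair, this residual logic yields the age and gender profiles of net flows described above.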
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
NASA Technical Reports Server (NTRS)
Wright, E. L.; Meyer, S. S.; Bennett, C. L.; Boggess, N. W.; Cheng, E. S.; Hauser, M. G.; Kogut, A.; Lineweaver, C.; Mather, J. C.; Smoot, G. F.
1992-01-01
The large-scale cosmic background anisotropy detected by the COBE Differential Microwave Radiometer (DMR) instrument is compared to the sensitive previous measurements on various angular scales, and to the predictions of a wide variety of models of structure formation driven by gravitational instability. The observed anisotropy is consistent with all previously measured upper limits and with a number of dynamical models of structure formation. For example, the data agree with an unbiased cold dark matter (CDM) model with H0 = 50 km s⁻¹ Mpc⁻¹ and ΔM/M = 1 in a 16 Mpc radius sphere. Other models, such as CDM plus massive neutrinos (hot dark matter (HDM)), or CDM with a nonzero cosmological constant are also consistent with the COBE detection and can provide the extra power seen on 5-10,000 km/s scales.
Software Tools for Design and Performance Evaluation of Intelligent Systems
2004-08-01
Applying Parallel Processing Techniques to Tether Dynamics Simulation
NASA Technical Reports Server (NTRS)
Wells, B. Earl
1996-01-01
The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.
More About Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
Edmonds, Iarina
2007-01-01
A document presents some additional information on the subject matter of "Integrated Hardware and Software for No-Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.
Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan
2017-01-01
Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully explore the capability for data transmission in computers. We applied TDat in large-volume data rigid registration and neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
Reference datasets for bioequivalence trials in a two-group parallel design.
Fuglsang, Anders; Schütz, Helmut; Labes, Detlew
2015-03-01
In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total of 11 datasets into the public domain, along with a proposed consensus obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
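The statistic these reference datasets are meant to validate is the 90% confidence interval for the ratio of geometric means in a two-group parallel design, computed on log-transformed data with either a pooled-variance or a Welch (unequal-variance) t-interval. The Python sketch below is a minimal reference computation under standard assumptions; it is not taken from any of the packages named above.

```python
# Minimal reference computation: 90% CI for the test/reference ratio of
# geometric means in a two-group parallel design, with pooled (equal-variance)
# and Welch (unequal-variance) options on log-transformed data.
import numpy as np
from scipy import stats

def ratio_ci_90(test, ref, equal_var=True):
    lt, lr = np.log(test), np.log(ref)
    diff = lt.mean() - lr.mean()
    n1, n2 = lt.size, lr.size
    if equal_var:
        sp2 = ((n1 - 1) * lt.var(ddof=1) + (n2 - 1) * lr.var(ddof=1)) / (n1 + n2 - 2)
        se, df = np.sqrt(sp2 * (1 / n1 + 1 / n2)), n1 + n2 - 2
    else:
        v1, v2 = lt.var(ddof=1) / n1, lr.var(ddof=1) / n2
        se = np.sqrt(v1 + v2)
        df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))  # Welch-Satterthwaite
    t = stats.t.ppf(0.95, df)                    # two-sided 90% interval
    return np.exp(diff - t * se), np.exp(diff + t * se)

rng = np.random.default_rng(42)
test = np.exp(rng.normal(4.00, 0.3, 24))
ref = np.exp(rng.normal(4.05, 0.3, 24))
print(ratio_ci_90(test, ref))
print(ratio_ci_90(test, ref, equal_var=False))
```

Running such a reference implementation against published datasets with agreed-upon results is exactly the kind of software qualification exercise the paper is designed to support.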
Real time software tools and methodologies
NASA Technical Reports Server (NTRS)
Christofferson, M. J.
1981-01-01
Real-time systems are characterized by high-speed processing and throughput as well as asynchronous event processing requirements. These requirements give rise to particular implementations of parallel or pipeline multitasking structures, of intertask or interprocess communications mechanisms, and finally of message (buffer) routing or switching mechanisms. These mechanisms or structures, along with the data structure, describe the essential character of the system. These common structural elements and mechanisms are identified, and their implementation in the form of routines, tasks or macros - in other words, tools - is formalized. The tools developed support or make available the following: reentrant task creation, generalized message routing techniques, generalized task structures/task families, standardized intertask communications mechanisms, and pipeline and parallel processing architectures in a multitasking environment. Tools development raises some interesting prospects in the areas of software instrumentation and software portability. These issues are discussed following the description of the tools themselves.
Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. With the great progress made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Because of its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS Parallel Benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun
2016-10-01
Misalignment of the binocular optical axes of a photoelectric instrument directly degrades observation. A digital calibration system for binocular optical-axis parallelism is designed. Based on the principle of optical-axis calibration for binocular photoelectric instruments, the system scheme is designed and the digital calibration system is realized. It comprises four modules: a multiband parallel light tube, optical-axis translation, an image acquisition system and a software system. According to the different characteristics of the thermal infrared imager and the low-light-level night viewer, different algorithms are used to localize the center of the cross reticle. Binocular optical-axis parallelism calibration is thereby realized for both low-light-level night viewers and thermal infrared imagers.
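The record does not state which algorithms were used for the two sensor types; as a generic illustration only, a cross-reticle center in a clean image can be estimated by thresholding and taking an intensity-weighted centroid, as sketched below.

```python
import numpy as np

def reticle_center(img, thresh_frac=0.5):
    """Estimate the cross-reticle center as the intensity-weighted centroid
    of pixels brighter than a fraction of the maximum (generic approach only)."""
    img = img.astype(float)
    mask = img > thresh_frac * img.max()
    ys, xs = np.nonzero(mask)
    w = img[ys, xs]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Synthetic test image: a bright cross whose arms intersect at (x, y) = (64, 48)
img = np.zeros((96, 128))
img[48, :] = 1.0
img[:, 64] = 1.0
print(reticle_center(img))   # close to (64, 48)
```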
NASA Technical Reports Server (NTRS)
Silk, Joseph; Stebbins, Albert
1993-01-01
A study is conducted of cold dark matter (CDM) models in which clumpiness is inherent, using cosmic strings and textures suited to galaxy formation. CDM clumps with densities of about 10 million solar masses per cubic parsec are generated at roughly the redshift of matter-radiation equality, z(eq), with a sizable fraction surviving. Observable implications include dark matter cores in globular clusters and in galactic nuclei. Results from terrestrial dark matter detection experiments may be affected by clumpiness in the Galactic halo.
Penn State University ground software support for X-ray missions.
NASA Astrophysics Data System (ADS)
Townsley, L. K.; Nousek, J. A.; Corbet, R. H. D.
1995-03-01
The X-ray group at Penn State is charged with two software development efforts in support of X-ray satellite missions. As part of the ACIS instrument team for AXAF, the authors are developing part of the ground software to support the instrument's calibration. They are also designing a translation program for Ginga data, to change it from the non-standard FRF format, which closely parallels the original telemetry format, to FITS.
Petry, Sandrine; Furlan, Sylviane; Crepeau, Marie-Jeanne; Cerning, Jutta; Desmazeaud, Michel
2000-01-01
We developed a chemically defined medium (CDM) containing lactose or glucose as the carbon source that supports growth and exopolysaccharide (EPS) production of two strains of Lactobacillus delbrueckii subsp. bulgaricus. The factors found to affect EPS production in this medium were oxygen, pH, temperature, and medium constituents, such as orotic acid and the carbon source. EPS production was greatest during the stationary phase. Composition analysis of EPS isolated at different growth phases and produced under different fermentation conditions (varying carbon source or pH) revealed that the component sugars were the same. The EPS from strain L. delbrueckii subsp. bulgaricus CNRZ 1187 contained galactose and glucose, and that of strain L. delbrueckii subsp. bulgaricus CNRZ 416 contained galactose, glucose, and rhamnose. However, the relative proportions of the individual monosaccharides differed, suggesting that repeating unit structures can vary according to specific medium alterations. Under pH-controlled fermentation conditions, L. delbrueckii subsp. bulgaricus strains produced as much EPS in the CDM as in milk. Furthermore, the relative proportions of individual monosaccharides of EPS produced in pH-controlled CDM or in milk were very similar. The CDM we developed may be a useful model and an alternative to milk in studies of EPS production. PMID:10919802
NASA Research on an Integrated Concept for Airport Surface Operations Management
NASA Technical Reports Server (NTRS)
Gupta, Gautam
2012-01-01
Surface operations at airports in the US are based on tactical operations, where departure aircraft primarily queue up and wait at the departure runways. There have been attempts to address the resulting inefficiencies with both strategic and tactical tools for metering departure aircraft. This presentation gives an overview of the Spot And Runway Departure Advisor with Collaborative Decision Making (SARDA-CDM): an integrated strategic and tactical system for improving surface operations by metering departure aircraft. SARDA-CDM is the augmentation of ground and local controller advisories through sharing of flight movement and related operations information between airport operators, flight operators and air traffic control at the airport. The goal is to enhance the efficiency of airport surface operations by exchanging information between air traffic control and airline operators, while minimizing adverse effects on stakeholders and passengers. The presentation motivates the need for departure metering and provides a brief background on previous work on SARDA. The concept of operations for SARDA-CDM is then described, followed by preliminary results from testing the concept in a real-time automated simulation environment. Results indicate benefits such as reductions in taxiing delay and fuel consumption. Further, the preliminary implementation of SARDA-CDM appears robust to delays of up to two minutes in gate push-back times.
Rieck, Allison Margaret
2014-09-01
To improve collaboration in Australian primary health care, there is a need to understand aspects of the general practitioner (GP)/community pharmacist relationship, its influence on collaborative chronic disease management (CDM), and whether this influence can be explained by a pre-existing theory or concept. Adopting a grounded theory approach, 22 semi-structured interviews with GPs and 22 with community pharmacists were undertaken. Analysis of the transcripts identified common themes regarding the GP/community pharmacist relationship. Trustworthiness of the themes identified was tested through negative case analysis and member checking. Hofstede's (1980) concept of power distance was employed to illuminate the nature of GP/community pharmacist relations. The majority of GPs and community pharmacists described the characteristics of this phenomenon. The power distance was based on knowledge and expertise and was shown to be a barrier to collaboration between GPs and community pharmacists, because GPs perceived that community pharmacists did not have the expertise required to improve CDM beyond what the GP could deliver alone. Power distance exists within the GP/community pharmacist relationship and has a negative influence on GP/community pharmacist collaborative CDM. Understanding and improving GP awareness of community pharmacist expertise has important implications for the future success of collaborative CDM.
Ricciardi, A; Ianniello, R G; Parente, E; Zotta, T
2015-09-01
Members of the Lactobacillus casei and Lactobacillus plantarum groups are capable of aerobic and respiratory growth. However, they grow poorly in aerobiosis in the currently available chemically defined media, suggesting that aerobic and respiratory growth require further supplementation. The effect of Tween 80, L-alanine, L-asparagine, L-aspartate, L-proline and L-serine on anaerobic and respiratory growth of Lact. casei N87 was investigated using a 2^5 factorial design. The effectiveness of modified CDM (mCDM) was validated on 21 strains of Lact. casei and Lact. plantarum groups. Tween 80 supplementation did not affect anaerobic growth, but improved respiratory growth. L-asparagine, L-proline and L-serine were stimulatory for respiring cells, while the presence of L-aspartate, generally, impaired biomass production. mCDM promoted the growth of Lact. casei and Lact. plantarum, with best results for strains showing a respiratory phenotype. The nutritional requirements of anaerobic and respiratory cultures of members of the Lact. casei and Lact. plantarum groups differ. Tween 80 and selected amino acids derived from pathways related to the TCA cycle, pyruvate conversion and NADH recycling are required for respiration. The availability of mCDM will facilitate the study of aerobic metabolism of lactobacilli under controlled conditions. © 2015 The Society for Applied Microbiology.
Hierarchy of N-point functions in the ΛCDM and ReBEL cosmologies
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Juszkiewicz, Roman; van de Weygaert, Rien
2010-11-01
In this work we investigate higher-order statistics for the ΛCDM and ReBEL scalar-interacting dark matter models by analyzing ensembles of 180 h^-1 Mpc dark matter N-body simulations. The N-point correlation functions and the related hierarchical amplitudes, such as skewness and kurtosis, are computed using the counts-in-cells method. Our studies demonstrate that the hierarchical amplitudes S_n of the scalar-interacting dark matter model deviate significantly from the ΛCDM values on scales comparable to and smaller than the screening length r_s of a given scalar-interacting model. The corresponding additional forces, which enhance the total attractive force exerted on dark matter particles at galaxy scales, lower the values of the hierarchical amplitudes S_n. We conclude that hypothetical additional exotic interactions in the dark matter sector should leave detectable markers in the higher-order correlation statistics of the density field. We focus in detail on the redshift evolution of the dark matter field's skewness and kurtosis. From this investigation we find that the deviations from the canonical ΛCDM model introduced by the presence of the "fifth" force attain a maximum value at redshifts 0.5
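For reference, the hierarchical amplitudes mentioned above are conventionally defined from the moments of the smoothed density contrast δ_R measured by counts in cells; in standard notation (given as background, not quoted from this record):

```latex
S_n \equiv \frac{\langle \delta_R^{\,n}\rangle_c}{\langle \delta_R^{\,2}\rangle^{\,n-1}},
\qquad
S_3=\frac{\langle \delta_R^{3}\rangle}{\langle \delta_R^{2}\rangle^{2}}\ \ (\text{skewness}),
\qquad
S_4=\frac{\langle \delta_R^{4}\rangle-3\langle \delta_R^{2}\rangle^{2}}{\langle \delta_R^{2}\rangle^{3}}\ \ (\text{kurtosis}),
```

where ⟨·⟩_c denotes the connected moment.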
Muramoto, Hideyuki; Shimamoto, Kazuhiro; Ikeda, Mitsuru; Koyama, Kazuyuki; Fukushima, Hiromichi; Ishigaki, Takeo
2006-06-01
The influence of monitor brightness and room illumination on soft-copy diagnosis with both a cathode-ray tube (CRT) monitor and a liquid crystal display (LCD) was evaluated and compared using a contrast-detail phantom. Nine observers (7 radiologists and 2 radiological technicians) interpreted six types of electronically generated contrast-detail phantom images using a 21-inch CRT (2,048x2,560) and a 21-inch LCD (2,048x2,560) under 6 kinds of viewing conditions, i.e. monitor brightness of 330 cd/m2 or 450 cd/m2, and room illumination of 20, 100 or 420 lux at the center of the display. Observers were requested to determine the visible borderline of the objects. Between 330 cd/m2 and 450 cd/m2, no significant difference in the visible area was found under any of the three lighting conditions. However, in two low-contrast phantom images, the visible area on the LCD was significantly larger than that on the CRT, independent of both monitor brightness and room illumination (p<0.05). The effect of room illumination was not significant, suggesting that the use of LCDs at high room illumination is acceptable.
The Most Massive Galaxies and Black Holes Allowed by ΛCDM
NASA Astrophysics Data System (ADS)
Behroozi, Peter; Silk, Joseph
2018-04-01
Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected ΛCDM halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST and WFIRST will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass to stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.
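The baryon-fraction bound invoked in the first sentence can be written compactly. With f_b = Ω_b/Ω_m the cosmic baryon fraction and ε ≤ 1 the integrated efficiency of converting baryons into stars (the symbols here are standard conventions, not notation taken from this record):

```latex
M_{\star} \le \epsilon\, f_b\, M_{\rm halo}
\quad\Longrightarrow\quad
M_{\rm halo} \ge \frac{M_{\star}}{\epsilon\, f_b} \ge \frac{M_{\star}}{f_b},
\qquad f_b=\frac{\Omega_b}{\Omega_m}\approx 0.16,
```

so an observed stellar mass immediately implies a minimum host halo mass, and hence a minimum halo number density to compare against the ΛCDM mass function.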
Cloutier, Denise; Cox, Amy; Kampen, Ruth; Kobayashi, Karen; Cook, Heather; Taylor, Deanne; Gaspard, Gina
2016-01-01
Residential, long-term care serves vulnerable older adults in a facility-based environment. A new care delivery model (CDM) designed to promote more equitable care for residents was implemented in a health region in Western Canada. Leaders and managers faced challenges in implementing this model alongside other concurrent changes. This paper explores the question: How did leadership style influence team functioning with the implementation of the CDM? Qualitative data from interviews with leadership personnel (directors and managers, residential care coordinators and clinical nurse educators), and direct care staff (registered nurses, licensed practical nurses, health care aides, and allied health therapists), working in two different facilities comprise the main sources of data for this study. The findings reveal that leaders with a servant leadership style were better able to create and sustain the conditions to support successful model implementation and higher team functioning, compared to a facility in which the leadership style was less inclusive and proactive, and more resistant to the change. Consequently, staff at the second facility experienced a greater sense of overload with the implementation of the CDM. This study concludes that strong leadership is key to facilitating team work and job satisfaction in a context of change. PMID:27417591
BarraCUDA - a fast short read sequence aligner using graphics processing units
2012-01-01
Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput when compared to a CPU core, while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497
NASA Technical Reports Server (NTRS)
Katz, Daniel
2004-01-01
PVM Wrapper is a software library that makes it possible for code that utilizes the Parallel Virtual Machine (PVM) software library to run using the Message Passing Interface (MPI) software library, without needing to rewrite the entire code. PVM and MPI are the two most common software libraries used for applications that involve passing of messages among parallel computers. Since about 1996, MPI has been the de facto standard. Codes written when PVM was popular often feature patterns of {"initsend," "pack," "send"} and {"receive," "unpack"} calls. In many cases, these calls are not contiguous, and one set of calls may even span multiple subroutines. These characteristics make it difficult to obtain equivalent functionality via a single MPI "send" call. Because PVM Wrapper is written to run with MPI-1.2, some PVM functions are not permitted and must be replaced - a task that requires some programming expertise. The "pvm_spawn" and "pvm_parent" function calls are not replaced, but a programmer can use "mpirun" and knowledge of the ranks of parent and child tasks, with the supplied macroinstructions, to enable execution of codes that use "pvm_spawn" and "pvm_parent."
A microkernel design for component-based parallel numerical software systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.
1999-01-13
What is the minimal software infrastructure, and what type of conventions are needed, to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships) and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
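As a loose, high-level analogy to the services listed above (dynamic loading, reference counting, and object observation), and not the ALICE implementation, a toy registry might look like the following; Python's importlib stands in for loading dynamic link libraries.

```python
import importlib

class Microkernel:
    """Toy registry offering dynamic loading, reference counting, and viewing."""
    def __init__(self):
        self._objects = {}      # handle -> [object, reference count]

    def load(self, module_name, factory_name, handle):
        """Load a component at run time from a module on the search path."""
        mod = importlib.import_module(module_name)
        obj = getattr(mod, factory_name)()
        self._objects[handle] = [obj, 1]
        return obj

    def retain(self, handle):
        self._objects[handle][1] += 1

    def release(self, handle):
        entry = self._objects[handle]
        entry[1] -= 1
        if entry[1] == 0:               # "destruction" when no references remain
            del self._objects[handle]

    def view(self):
        """Observation service: report live objects and their reference counts."""
        return {h: (type(o).__name__, rc) for h, (o, rc) in self._objects.items()}

kernel = Microkernel()
kernel.load("collections", "OrderedDict", handle="solverA")   # stand-in component
kernel.retain("solverA")
print(kernel.view())
kernel.release("solverA"); kernel.release("solverA")
print(kernel.view())
```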
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
NASA Astrophysics Data System (ADS)
Frickenhaus, Stephan; Hiller, Wolfgang; Best, Meike
The portable software FoSSI is introduced that, in combination with additional free solver software packages, allows for an efficient and scalable parallel solution of the large sparse linear systems of equations arising in finite-element model codes. FoSSI is intended to support rapid model code development, completely hiding the complexity of the underlying solver packages. In particular, the model developer need not be an expert in parallelization and is free to switch between different solver packages by simple modifications of the interface call. FoSSI offers an efficient and easy, yet flexible, interface to several parallel solvers, most of them available on the web, such as PETSC, AZTEC, MUMPS, PILUT and HYPRE. FoSSI makes use of the concept of handles for vectors, matrices, preconditioners and solvers that is frequently used in solver libraries. Hence, FoSSI allows for a flexible treatment of several linear systems of equations and associated preconditioners at the same time, even in parallel on separate MPI communicators. The second special feature in FoSSI is the task specifier, a combination of keywords, each configuring a certain phase of the solver setup. This enables the user to control a solver through one unique subroutine. Furthermore, FoSSI has rather similar features for all solvers, making a fast solver intercomparison or exchange an easy task. FoSSI is community software, proven in an adaptive 2D atmosphere model and a 3D primitive-equation ocean model, both formulated in finite elements. The present paper discusses perspectives for an OpenMP implementation of parallel iterative solvers based on domain decomposition methods. This approach to OpenMP solvers is rather attractive, as the code for domain-local operations of factorization, preconditioning and matrix-vector products can be readily taken from a sequential implementation that is also suitable for use in an MPI variant. Code development in this direction is in an advanced state under the name ScOPES: the Scalable Open Parallel sparse linear Equations Solver.
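The handle-plus-task-specifier idea can be sketched abstractly. The interface below is purely hypothetical, not the actual FoSSI Fortran API; it only illustrates how a single entry point driven by a keyword string could configure and run different backends behind one handle.

```python
import numpy as np

class SolverHandle:
    """Hypothetical handle bundling a matrix, a preconditioner choice, and a backend."""
    def __init__(self, matrix):
        self.matrix = matrix
        self.backend = "cg"
        self.precond = None

def fossi_like(handle, task, rhs=None):
    """Single entry point; the task specifier is a combination of keywords,
    each configuring one phase of the solver setup (hypothetical syntax)."""
    for key in task.split():
        if key.startswith("backend="):
            handle.backend = key.split("=", 1)[1]
        elif key.startswith("precond="):
            handle.precond = key.split("=", 1)[1]
        elif key == "solve":
            A = handle.matrix
            if handle.precond == "jacobi":           # simple diagonal (left) preconditioning
                d = np.diag(A)
                A, rhs = A / d[:, None], rhs / d
            return np.linalg.solve(A, rhs)           # stand-in for PETSC/AZTEC/... calls

A = np.array([[4.0, 1.0], [1.0, 3.0]])
h = SolverHandle(A)
print(fossi_like(h, "backend=cg precond=jacobi solve", rhs=np.array([1.0, 2.0])))
```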
Concurrency-based approaches to parallel programming
NASA Technical Reports Server (NTRS)
Kale, L.V.; Chrisochoides, N.; Kohl, J.; Yelick, K.
1995-01-01
The inevitable transition to parallel programming can be facilitated by appropriate tools, including languages and libraries. After describing the needs of applications developers, this paper presents three specific approaches aimed at the development of efficient and reusable parallel software for irregular and dynamic-structured problems. A salient feature of all three approaches is their exploitation of concurrency within a processor. The benefits of individual approaches such as these can be leveraged by an interoperability environment which permits modules written using different approaches to co-exist in single applications.
Gooding, Owen W
2004-06-01
The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.
1992-01-01
An efficient three dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between two differing architectures are made.
Parallelization of Rocket Engine Simulator Software (PRESS)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet
1997-01-01
The Parallelization of Rocket Engine Simulator Software (PRESS) project is part of a collaborative effort with Southern University at Baton Rouge (SUBR), the University of West Florida (UWF), and Jackson State University (JSU). The second-year funding, which supports two graduate students enrolled in our new Master's program in Computer Science at Hampton University and the principal investigator, has been obtained for the period from October 19, 1996 through October 18, 1997. The key part of the interim report was new directions for the second-year funding. This came about from discussions during the Rocket Engine Numeric Simulator (RENS) project meeting in Pensacola on January 17-18, 1997. At that time, a software agreement between Hampton University and NASA Lewis Research Center had already been concluded. That agreement concerns off-NASA-site experimentation with the PUMPDES/TURBDES software. Before this agreement, during the first year of the project, another large-scale FORTRAN-based software package, Two-Dimensional Kinetics (TDK), was being used for translation to an object-oriented language and for parallelization experiments. However, that package proved to be too complex and lacked sufficient documentation for an effective translation effort to object-oriented C++ source code. The focus, this time with the better-documented and more manageable PUMPDES/TURBDES package, was still on translation to C++ with design improvements. At the RENS meeting, however, the new impetus for the RENS projects in general, and PRESS in particular, shifted in two important ways. One was closer alignment with the work on the Numerical Propulsion System Simulator (NPSS) through cooperation and collaboration with the LERC ACLU organization. The other was to see whether and how NASA's various rocket design software can be run over local networks and intranets without any radical efforts for redesign and translation into object-oriented source code. There were also suggestions that the Fortran-based code be encapsulated in C++ code, thereby facilitating reuse without undue development effort. The details are covered in the aforementioned section of the interim report filed on April 28, 1997.
Cosmic string with a light massive neutrino
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Stebbins, Albert
1992-01-01
We have estimated the power spectra of density fluctuations produced by cosmic strings with neutrino hot dark matter (HDM). Normalizing at 8 h^-1 Mpc, we find that the spectrum has more power on small scales than HDM + inflation, less than cold dark matter (CDM) + inflation, and significantly less than CDM + strings. With HDM, large wakes give a significant contribution to the power on the galaxy scale and may give rise to large sheets of galaxies.
Sustainable waste management in Africa through CDM projects.
Couth, R; Trois, C
2012-11-01
Only a few Clean Development Mechanism (CDM) projects (traditionally focused on landfill gas combustion) have been registered in Africa compared with similar developing countries. The waste hierarchy adopted by many African countries clearly shows that waste recycling and composting projects are generally the most sustainable. This paper undertakes a sustainability assessment of practical waste treatment and disposal scenarios for Africa and makes recommendations for consideration. The appraisal in this paper demonstrates that mechanical biological treatment of waste becomes more financially attractive if established through the CDM process. Waste will continue to be dumped in Africa, with increasing greenhouse gas emissions produced, unless industrialised countries (Annex 1) fund carbon emission reduction schemes through a replacement to the Kyoto Protocol. Such a replacement should calculate all of the direct and indirect carbon emission savings and seek to promote public-private partnerships through concerted support of the informal sector. Copyright © 2012 Elsevier Ltd. All rights reserved.
A ROBUST MEASURE OF DARK MATTER HALO ELLIPTICITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evslin, Jarah
2016-08-01
In simulations of the standard cosmological model (ΛCDM), dark matter halos are aspherical. However, so far the asphericity of an individual galaxy's halo has never been robustly established. We use the Jeans equations to define a quantity that robustly characterizes a deviation from rotational symmetry. This quantity is essentially the gravitational torque and it roughly provides the ellipticity projected along the line of sight. We show that the Thirty Meter Telescope (TMT), with a single epoch of observations combined with those of the Gaia Space Telescope, can distinguish the ΛCDM value of the torque from zero for each Sculptor-like dwarf galaxy with a confidence between 0 and 5 σ, depending on the orientation of each halo. With two epochs of observations, TMT will achieve a 5 σ discovery of torque and thus asphericity for most such galaxies, thus providing a new and powerful test of the ΛCDM model.
NASA Astrophysics Data System (ADS)
Chen, Sun-Zen; Peng, Shiang-Hau; Ting, Tzu-Yu; Wu, Po-Shien; Lin, Chun-Hao; Chang, Chin-Yeh; Shyue, Jing-Jong; Jou, Jwo-Huei
2012-10-01
We demonstrate the feasibility of using direct contact-printing in the fabrication of monochromatic and polychromatic organic light-emitting diodes (OLEDs). Bright devices with red, green, blue, and white contact-printed light-emitting layers with a respective maximum luminance of 29 000, 29 000, 4000, and 18 000 cd/m2 were obtained with sound film integrity by blending a polymeric host into a molecular host. For the red OLED as example, the maximum luminance was decreased from 29 000 to 5000 cd/m2 as only the polymeric host was used, or decreased to 7000 cd/m2 as only the molecular host was used. The markedly improved device performance achieved in the devices with blended hosts may be attributed to the employed polymeric host that contributed a good film-forming character, and the molecular host that contributed a good electroluminescence character.
NASA Astrophysics Data System (ADS)
Wright, Bill S.; Winther, Hans A.; Koyama, Kazuya
2017-10-01
The effect of massive neutrinos on the growth of cold dark matter perturbations acts as a scale-dependent Newton's constant and leads to scale-dependent growth factors, just as we often find in models of gravity beyond General Relativity. We show how to compute growth factors for ΛCDM and general modified gravity cosmologies combined with massive neutrinos in Lagrangian perturbation theory for use in COLA and extensions thereof. We implement this together with the grid-based massive neutrino method of Brandbyge and Hannestad in MG-PICOLA and compare COLA simulations to full N-body simulations of ΛCDM and f(R) gravity with massive neutrinos. Our implementation is computationally cheap if the underlying cosmology already has scale-dependent growth factors, and it is shown to produce results that match N-body simulations to percent-level accuracy for both the total and CDM matter power spectra up to k ≲ 1 h/Mpc.
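The scale dependence referred to here enters the linear growth equation through an effective Newton's constant; in the standard schematic form (quoted as background, not from this record), with μ(k,a) = G_eff/G:

```latex
\frac{d^{2}D(k,a)}{d\ln a^{2}}
+\left(2+\frac{d\ln H}{d\ln a}\right)\frac{dD(k,a)}{d\ln a}
=\frac{3}{2}\,\Omega_{m}(a)\,\mu(k,a)\,D(k,a),
\qquad \mu(k,a)\equiv\frac{G_{\rm eff}(k,a)}{G}.
```

In ΛCDM without massive neutrinos μ = 1 and D depends on time only; massive neutrinos or modified gravity make μ, and hence the growth factor D, scale dependent.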
Nursing Student Perceptions Regarding Simulation Experience Sequencing.
Woda, Aimee A; Gruenke, Theresa; Alt-Gehrman, Penny; Hansen, Jamie
2016-09-01
The use of simulated learning experiences (SLEs) has increased within nursing curricula, with positive learning outcomes for nursing students. The purpose of this study is to explore nursing students' perceptions of their clinical decision making (CDM) related to the block sequencing of different patient care experiences, SLEs versus hospital-based learning experiences (HLEs). A qualitative descriptive design used open-ended survey questions to generate information about the block sequencing of SLEs and its impact on nursing students' perceived CDM. Three themes emerged from the data: Preexperience Anxiety, Real-Time Decision Making, and Increased Patient Care Experiences. Nursing students identified that having SLEs prior to HLEs provided several benefits. Even when students preferred SLEs prior to HLEs, the sequence did not impact their CDM. This suggests that alternating block sequencing can be used without impacting students' perceptions of their ability to make decisions. [J Nurs Educ. 2016;55(9):528-532.]. Copyright 2016, SLACK Incorporated.
Beyond Λ CDM: Problems, solutions, and the road ahead
NASA Astrophysics Data System (ADS)
Bull, Philip; Akrami, Yashar; Adamek, Julian; Baker, Tessa; Bellini, Emilio; Beltrán Jiménez, Jose; Bentivegna, Eloisa; Camera, Stefano; Clesse, Sébastien; Davis, Jonathan H.; Di Dio, Enea; Enander, Jonas; Heavens, Alan; Heisenberg, Lavinia; Hu, Bin; Llinares, Claudio; Maartens, Roy; Mörtsell, Edvard; Nadathur, Seshadri; Noller, Johannes; Pasechnik, Roman; Pawlowski, Marcel S.; Pereira, Thiago S.; Quartin, Miguel; Ricciardone, Angelo; Riemer-Sørensen, Signe; Rinaldi, Massimiliano; Sakstein, Jeremy; Saltas, Ippocratis D.; Salzano, Vincenzo; Sawicki, Ignacy; Solomon, Adam R.; Spolyar, Douglas; Starkman, Glenn D.; Steer, Danièle; Tereno, Ismael; Verde, Licia; Villaescusa-Navarro, Francisco; von Strauss, Mikael; Winther, Hans A.
2016-06-01
Despite its continued observational successes, there is a persistent (and growing) interest in extending cosmology beyond the standard model, Λ CDM. This is motivated by a range of apparently serious theoretical issues, involving such questions as the cosmological constant problem, the particle nature of dark matter, the validity of general relativity on large scales, the existence of anomalies in the CMB and on small scales, and the predictivity and testability of the inflationary paradigm. In this paper, we summarize the current status of Λ CDM as a physical theory, and review investigations into possible alternatives along a number of different lines, with a particular focus on highlighting the most promising directions. While the fundamental problems are proving reluctant to yield, the study of alternative cosmologies has led to considerable progress, with much more to come if hopes about forthcoming high-precision observations and new theoretical ideas are fulfilled.
Meng, Mei; Song, Wook; Kim, You-Hyun; Lee, Sang-Youn; Jhun, Chul-Gyu; Zhu, Fu Rong; Ryu, Dae Hyun; Kim, Woo-Young
2013-01-01
High-efficiency blue organic light-emitting diodes (OLEDs), based on 2-methyl-9,10-di(2-naphthyl)anthracene (MADN) doped with 4,4'-bis(9-ethyl-3-carbazovinylene)-1,1'-biphenyl (BCzVBi), were fabricated using two different electron transport layers (ETLs): tris(8-hydroxyquinolino)aluminum (Alq3) and 4,7-diphenyl-1,10-phenanthroline (Bphen). The Bphen ETL favored efficient hole-electron recombination in the emissive layer of the BCzVBi-doped blue OLEDs, leading to a high luminous efficiency of 8.34 cd/A at 100 mA/cm2 and a quantum efficiency of 5.73% at 100 cd/m2. The maximum luminance of the blue OLEDs with Bphen and Alq3 ETLs was 10670 cd/m2, and the CIExy coordinates of the blue OLEDs were (0.180, 0.279) and (0.155, 0.212) at 100 cd/m2.
Redshift space clustering of galaxies and cold dark matter model
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt
1993-01-01
The distorting effect of peculiar velocities on the power spectrum and correlation function of IRAS and optical galaxies is studied. The observed redshift-space power spectra and correlation functions of IRAS and optical galaxies over the entire range of scales are directly compared with the corresponding redshift-space distributions from large-scale computer simulations of cold dark matter (CDM) models, in order to study the distortion effect of peculiar velocities on the power spectrum and correlation function of the galaxies. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.
Application of the Consumer Decision-Making Model to Hearing Aid Adoption in First-Time Users
Amlani, Amyn M.
2016-01-01
Since 1980, hearing aid adoption rates have remained essentially the same, increasing at a rate equal to the organic growth of the population. Researchers have used theoretical models from psychology and sociology to determine those factors or constructs that lead to the adoption of hearing aids by first-time impaired listeners entering the market. In this article, a theoretical model, the Consumer Decision-Making Model (CDM), premised on the neobehavioral approach that considers an individual's psychological and cognitive emphasis toward a product or service, is described. Three theoretical models (i.e., transtheoretical, social model of disability, Health Belief Model), and their relevant findings to the hearing aid market, are initially described. The CDM is then presented, along with supporting evidence of the model's various factors from the hearing aid literature. Future applications of the CDM to hearing health care also are discussed. PMID:27516718
New observational constraints on f ( T ) gravity from cosmic chronometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nunes, Rafael C.; Pan, Supriya; Saridakis, Emmanuel N., E-mail: nunes@ecm.ub.edu, E-mail: span@iiserkol.ac.in, E-mail: Emmanuel_Saridakis@baylor.edu
2016-08-01
We use the local value of the Hubble constant recently measured with 2.4% precision, as well as the latest compilation of cosmic chronometers data, together with standard probes such as Supernovae Type Ia and Baryon Acoustic Oscillation distance measurements, in order to impose constraints on the viable and most used f(T) gravity models, where T is the torsion scalar in teleparallel gravity. In particular, we consider three f(T) models with two parameters, out of which one is independent, and we quantify their deviation from ΛCDM cosmology through a sole parameter. Our analysis reveals that for one of the models a small but non-zero deviation from ΛCDM cosmology is slightly favored, while for the other models the best fit is very close to the ΛCDM scenario. Clearly, f(T) gravity is consistent with observations, and it can serve as a candidate for modified gravity.
Chronic disease management in general practice: results from a national study.
Darker, C; Martin, C; O'Dowd, T; O'Kelly, F; O'Shea, B
2012-04-01
The aim of this study was to provide baseline data on chronic disease management (CDM) provision in Irish general practice. The survey instrument had previously been used in a study of primary care physicians in 11 countries, thus allowing international comparisons. The response rate was 72% (380/527). The majority of GPs (240/380; 63%) reported that significant changes are needed in our health care system to make CDM work better. Small numbers of routine clinical audits are being performed (95/380; 25%). Irish GPs use evidence-based guidelines for the treatment of diabetes (267/380; 71%), asthma/COPD (279/380; 74%) and hypertension (297/380; 79%) to the same extent as their international counterparts. Barriers to delivering chronic care include increased workload (379/380; 99%) and lack of appropriate funding (286/380; 76%), with GPs interested in targeted payments (244/380; 68%). This study provides baseline data against which to assess future changes in CDM.
Santos, M M O; van Elk, A G P; Romanel, C
2015-12-01
Solid waste disposal sites (SWDS) - especially landfills - are a significant source of methane, a greenhouse gas. Although it has the potential to be captured and used as a fuel, most of the methane formed in SWDS is emitted to the atmosphere, mainly in developing countries. Methane emissions have to be estimated in national inventories. To help this task, the Intergovernmental Panel on Climate Change (IPCC) has published three sets of guidelines. In addition, the Kyoto Protocol established the Clean Development Mechanism (CDM) to assist developed countries to offset their own greenhouse gas emissions by assisting other countries to achieve sustainable development while reducing emissions. Based on methodologies provided by the IPCC regarding SWDS, the CDM Executive Board has issued a tool to be used by project developers for estimating baseline methane emissions in their project activities - either burning biogas from landfills or preventing biomass from being landfilled and so avoiding methane emissions. Some inconsistencies in the first two IPCC guidelines have already been pointed out in an annex of the latest IPCC edition, although with hidden details. The CDM tool uses a model for methane estimation that takes on board parameters, factors and assumptions provided in the latest IPCC guidelines, while using as its core equation the one from the second IPCC edition, with its shortcoming, as well as allowing a misunderstanding of the time variable. The consequences of wrong ex-ante estimation of baseline emissions in CDM project activities can be economic or environmental. An example of the first type is the overestimation of 18% in an actual project on biogas from landfill in Brazil, which harms its developers; of the second type, the overestimation of 35% in a project preventing municipal solid waste from being landfilled in China, which harms the environment, not because of the project per se but because of the unduly generated carbon credits. In a simulated landfill - receiving the same amount of waste each year for 20 years - the error would be an overestimation of 25% if the CDM project activity starts from the very first year, or an underestimation of 15% if it starts just after the landfill closure. Therefore, a correction is needed in the tool adopted by the CDM Executive Board to calculate emissions from landfills. Moreover, in countries not using the latest IPCC guidelines, which provide clear formulas to prevent misunderstandings, inventory compilers can also benefit from this paper by obtaining more accurate results in national GHG inventories related to solid waste disposal, especially when increasing amounts of waste are landfilled, which is the case in developing countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
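To make the kind of ex-ante estimate discussed here concrete, the following sketch implements a first-order decay calculation of yearly methane generation from yearly waste deposits, in the spirit of the IPCC waste model; the parameter values (decay rate, degradable organic carbon fractions, methane fraction) are illustrative assumptions, not the values prescribed by the guidelines or by the CDM tool.

```python
import math

def fod_methane(deposits, k=0.09, doc=0.15, doc_f=0.5, mcf=1.0, f=0.5):
    """First-order decay sketch: yearly CH4 generated (tonnes) from yearly waste deposits (tonnes).
    deposits[x] is the waste landfilled in year x; all parameters are illustrative."""
    ddoc_accumulated = 0.0
    ch4 = []
    for deposited in deposits:
        decomposed = ddoc_accumulated * (1.0 - math.exp(-k))   # DDOCm decomposing this year
        ddoc_accumulated = ddoc_accumulated * math.exp(-k) + deposited * doc * doc_f * mcf
        ch4.append(decomposed * f * 16.0 / 12.0)               # convert decomposed carbon to CH4 mass
    return ch4

# Simulated landfill: the same amount of waste (100 000 t) deposited every year for 20 years
generation = fod_methane([100_000.0] * 20)
print([round(g) for g in generation[:5]])
```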
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
NASA Technical Reports Server (NTRS)
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. With the great progress made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and to achieve good performance that exceeds that of some commercial tools.
UFMulti: A new parallel processing software system for HEP
NASA Astrophysics Data System (ADS)
Avery, Paul; White, Andrew
1989-12-01
UFMulti is a multiprocessing software package designed for general-purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines, with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.
Parallel-Processing Software for Creating Mosaic Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric
2008-01-01
A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
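The slice-per-CPU decomposition described above can be sketched in a few lines. The code below is a schematic stand-in with a trivial placeholder "warp" and invented image sizes, not the flight software.

```python
import numpy as np
from multiprocessing import Pool

MOSAIC_SHAPE = (512, 2048)     # assumed final mosaic size (rows, columns)
N_SLICES = 8                   # one slice per worker/CPU

def build_slice(idx):
    """Warp source imagery into the reference frame of one horizontal mosaic slice.
    Here the 'warp' is a placeholder that just fills the slice with its index."""
    rows = MOSAIC_SHAPE[0] // N_SLICES
    return idx, np.full((rows, MOSAIC_SHAPE[1]), idx, dtype=np.float32)

if __name__ == "__main__":
    mosaic = np.zeros(MOSAIC_SHAPE, dtype=np.float32)
    rows = MOSAIC_SHAPE[0] // N_SLICES
    with Pool(processes=N_SLICES) as pool:
        for idx, strip in pool.map(build_slice, range(N_SLICES)):
            mosaic[idx*rows:(idx+1)*rows, :] = strip   # gather results into the final mosaic
    print(mosaic.shape, mosaic[::64, 0])
```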
Bright high z SnIa: A challenge for ΛCDM
NASA Astrophysics Data System (ADS)
Perivolaropoulos, L.; Shafieloo, A.
2009-06-01
It has recently been pointed out by Kowalski et al. [Astrophys. J. 686, 749 (2008)] that there is "an unexpected brightness of the SnIa data at z>1." We quantify this statement by constructing a new statistic which is applicable directly to the type Ia supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa data points with respect to a best-fit cosmological model at high redshifts. It is based on binning the normalized differences between the SnIa distance moduli and the corresponding best-fit values in the context of a specific cosmological model (e.g. ΛCDM). These differences are normalized by the standard errors of the observed distance moduli. We then focus on the highest redshift bin and extend its size toward lower redshifts until the binned normalized difference (BND) changes sign (crosses 0) at a redshift zc (bin size Nc). The bin size Nc of this crossing (the statistical variable) is then compared with the corresponding crossing bin size Nmc for Monte Carlo data realizations based on the best-fit model. We find that the crossing bin size Nc obtained from the Union08 and Gold06 data with respect to the best-fit ΛCDM model is anomalously large compared to Nmc of the corresponding Monte Carlo data sets obtained from the best-fit ΛCDM model in each case. In particular, only 2.2% of the Monte Carlo ΛCDM data sets are consistent with the Gold06 value of Nc, while the corresponding probability for the Union08 value of Nc is 5.3%. Thus, according to this statistic, the probability that the high-redshift brightness bias of the Union08 and Gold06 data sets is realized in the context of a (w0,w1)=(-1,0) model (ΛCDM cosmology) is less than 6%. The corresponding realization probability in the context of a (w0,w1)=(-1.4,2) model is more than 30% for both the Union08 and the Gold06 data sets, indicating a much better consistency for this model with respect to the BND statistic.
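The statistic is simple enough to state in code: starting from the highest-redshift point, grow the bin toward lower redshift until the mean normalized residual changes sign, and record the bin size at which that happens. The sketch below follows that description; the data and the "best-fit" model are invented placeholders.

```python
import numpy as np

def crossing_bin_size(z, mu_obs, mu_err, mu_model):
    """Binned normalized difference (BND) crossing size N_c.
    Extend the highest-redshift bin toward lower z until the mean of
    (mu_obs - mu_model)/mu_err changes sign; return the bin size at the crossing."""
    order = np.argsort(z)[::-1]                      # highest redshift first
    norm_diff = (mu_obs - mu_model)[order] / mu_err[order]
    sign0 = np.sign(norm_diff[0])
    for n in range(1, len(norm_diff) + 1):
        if np.sign(norm_diff[:n].mean()) != sign0:   # BND crosses zero
            return n
    return len(norm_diff)                            # no crossing within the sample

# Placeholder data: residuals of invented distance moduli around an assumed best fit
rng = np.random.default_rng(0)
z = rng.uniform(0.01, 1.7, 300)
mu_model = 5 * np.log10(z) + 43.0                    # stand-in "best-fit" moduli
mu_err = np.full_like(z, 0.2)
mu_obs = mu_model + rng.normal(0.0, 0.2, z.size)
print(crossing_bin_size(z, mu_obs, mu_err, mu_model))
```

The Monte Carlo comparison described in the abstract then amounts to repeating this calculation on many synthetic realizations drawn from the best-fit model and asking how often the synthetic crossing size is as large as the observed one.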
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics.
Taylor, Ronald C
2010-12-21
Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms.
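For readers unfamiliar with the programming style, the sketch below shows the MapReduce pattern in plain Python on a toy sequencing-flavored task (counting k-mers across reads); it illustrates only the map/shuffle/reduce structure, not Hadoop's distributed implementation.

```python
from collections import defaultdict

def map_phase(read, k=4):
    """Map: emit (k-mer, 1) pairs for one read."""
    return [(read[i:i+k], 1) for i in range(len(read) - k + 1)]

def reduce_phase(key, values):
    """Reduce: sum the counts for one k-mer."""
    return key, sum(values)

reads = ["ACGTACGTGG", "TTACGTACGA", "GGGACGTACG"]   # toy input records

# Shuffle: group mapper output by key, as the framework would between the two phases
groups = defaultdict(list)
for read in reads:
    for key, value in map_phase(read):
        groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(sorted(counts.items(), key=lambda kv: -kv[1])[:3])
```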
Software Techniques for Balancing Computation & Communication in Parallel Systems
1994-07-01
Because past versions of all files were saved and documented within SCCS, software developers were able to roll back to various combinations of
Multitasking scheduler works without OS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, D.M.
1982-09-15
Z80 control applications requiring parallel execution of multiple software tasks can use the executive routine described and listed in this article when multitasking is not available via an operating system (OS). Although the routine is not as capable or as transparent to software as the multitasking in a full-scale OS, it is simple to understand and use.
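As a rough modern analogy to such an executive routine (clearly not the Z80 assembly listing from the article), a cooperative round-robin scheduler can be expressed with generators: each task yields control back to the executive, which cycles through the ready queue.

```python
from collections import deque

def blink(label, steps):
    """A toy task: does one unit of work per scheduling slot, then yields."""
    for i in range(steps):
        print(f"{label}: step {i}")
        yield                      # hand control back to the executive

def executive(tasks):
    """Minimal round-robin executive: no operating system, just a ready queue."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)             # run the task until its next yield
            ready.append(task)     # still alive: back to the end of the queue
        except StopIteration:
            pass                   # task finished: drop it

executive([blink("motor", 3), blink("display", 2), blink("sensor", 4)])
```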
Case Studies of Software Development Tools for Parallel Architectures
1993-06-01
... autonomous entities, each with its own state and set of behaviors, as in simulation, tracking, or Battle Management. Because C2 applications are often ... simulation, that is used to help the developer solve the problems. The new tool/problem solution matrix is structured in terms of the software development
Learning and Best Practices for Learning in Open-Source Software Communities
ERIC Educational Resources Information Center
Singh, Vandana; Holt, Lila
2013-01-01
This research is about participants who use open-source software (OSS) discussion forums for learning. Learning in online communities of education as well as non-education-related online communities has been studied under the lens of social learning theory and situated learning for a long time. In this research, we draw parallels between these two…
Image Understanding Architecture
1991-09-01
...architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize... [Report keywords: Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor.] ...In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers...
Adversarial Collaboration Decision-Making: An Overview of Social Quantum Information Processing
2002-01-01
...collaborative decision-making (CDM) to solve problems is an aspect of human behavior least yielding to rational predictions. To reduce the complexity of CDM... increases. Implications for C2 decision-making are discussed. Overview of research: Game theory was one of the first rational approaches to the study of... [Reference fragments: Psychologist, 36, 343-356; Lawless, W.F. (2001), The quantum of social action and the function of emotion in decision-making, Proceedings...]
NASA Astrophysics Data System (ADS)
Du, Xiaoyang; Tao, Silu; Huang, Yun; Yang, Xiaoxia; Ding, Xulin; Zhang, Xiaohong
2015-11-01
Efficient fluorescence/phosphorescence hybrid white organic light-emitting diodes (OLEDs) with a single doped co-host structure have been fabricated. A device using 9-naphthyl-10-(4-triphenylamine)anthracene as the fluorescent dopant and Ir(ppy)3 and Ir(2-phq)3 as the green and orange phosphorescent dopants shows a luminous efficiency of 12.4% (17.6 lm/W, 27.5 cd/A) at 1000 cd/m2. Most importantly, the efficiency-brightness roll-off of the device was very mild. With the brightness rising to 5000 and 10 000 cd/m2, the efficiency was kept at 11.8% (14.0 lm/W, 26.5 cd/A) and 11.0% (11.8 lm/W, 25.0 cd/A), respectively. The Commission Internationale de L'Eclairage (CIE) coordinates and color rendering index (CRI) were measured to be (0.45, 0.48) and 65, respectively, and remained the same over a large range of brightness (1000-10 000 cd/m2), which is rare among reported white OLEDs. The performance of the device at high luminance (5000 and 10 000 cd/m2) was among the best reported results, including fluorescence/phosphorescence hybrid and all-phosphorescent white OLEDs. Moreover, the CRI of the white OLED can be improved to 83 by using a yellow-green emitter (Ir(ppy)2bop) in the device.
Rosen, Gerald
2011-06-01
Recent observations and theoretical studies have shown that non-baryonic Cold Dark Matter (CDM), which constitutes about 84% of all matter in the Universe, may feature a complex scalar field that carries particles of mass ≅ 2.47 x 10^-3 eV with the associated Compton range m^-1 ≅ 8.02 x 10^-3 cm, a distance on the scale of extended bionucleic acids and living cells. Such a complex scalar field can enter a weak-isospin Lorentz-invariant interaction that generates the flow of right-handed electrons and induces a chirality-imbued quantum chemistry on the m^-1 scale. A phenomenological Volterra-type equation is proposed for the CDM-impacted time development of N, the number of base pairs in the most advanced organism at Earth-age t. The solution to this equation suggests that the boosts in N at t ≅ 1.1 Gyr (advent of the first living prokaryotic cells), at t ≅ 2.9 Gyr (advent of eukaryotic single-celled organisms) and finally at t ≅ 4.0 Gyr (the Cambrian explosion) may be associated with three multi-Myr-duration cosmic showers of the complex-scalar-field CDM particles. If so, the signature of the particles may be detectable in Cambrian rocks.
The Rh = ct universe in alternative theories of gravity
NASA Astrophysics Data System (ADS)
Sultana, Joseph; Kazanas, Demosthenes
2017-12-01
The Λ cold dark matter (ΛCDM) model (one comprising a cosmological constant Λ and cold dark matter) is generally considered the standard model in cosmology. One of the alternatives that has received attention in the last few years is the Rh = ct universe, which provides an age for the Universe similar to that of ΛCDM and whose (vanishing) deceleration parameter is apparently not inconsistent with observations. Like ΛCDM, the Rh = ct universe is based on a Friedmann-Robertson-Walker cosmology, with the total energy density ρ and pressure p of the cosmic fluid satisfying the simple equation of state ρ + 3p = 0, i.e. a vanishing total active gravitational mass. In an earlier paper, we examined the possible sources for the Rh = ct universe within general relativity, and we showed that it still contains a dark energy component, albeit not in the form of a cosmological constant. The growing interest in cosmology in gravitational theories alternative to Einstein's general relativity is mainly driven by the need for cosmological models that attain a late-time accelerated expansion without the presence of a cosmological constant as in ΛCDM, thereby avoiding the problems associated with it. In this paper, we discuss some of these common alternative theories and show that Rh = ct is also a solution to some of them.
Can f(T) gravity theories mimic ΛCDM cosmic history
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setare, M.R.; Mohammadipour, N., E-mail: rezakord@ipm.ir, E-mail: N.Mohammadipour@uok.ac.ir
2013-01-01
Recently the teleparallel Lagrangian density described by the torsion scalar T has been extended to a function of T. The f(T) modified teleparallel gravity has been proposed as a natural gravitational alternative to dark energy to explain the late-time acceleration of the universe. In order to reconstruct the function f(T) by demanding a background ΛCDM cosmology we assume that (i) the background cosmic history is provided by the flat ΛCDM model (the radiation era with ω_eff = 1/3, and the matter and de Sitter eras with ω_eff = 0 and ω_eff = −1, respectively), and (ii) radiation dominates in the radiation era with Ω_0r = 1 and matter dominates during the matter phase with Ω_0m = 1. We find the cosmological dynamical system which can obey the ΛCDM cosmic history. In each era, we find a critical line; the radiation-dominated and matter-dominated solutions are points on these lines in the radiation and matter phases, respectively. We also derive the cosmological viability condition for these models. We investigate the stability with respect to homogeneous scalar perturbations in each era and obtain the stability conditions for the fixed points in each era. Finally, we reconstruct the function f(T) which mimics the cosmic expansion history.
Model Selection with Strong-lensing Systems
NASA Astrophysics Data System (ADS)
Leaf, Kyle; Melia, Fulvio
2018-05-01
In this paper, we use an unprecedentedly large sample (158) of confirmed strong lens systems for model selection, comparing five well studied Friedmann-Robertson-Walker cosmologies: ΛCDM, wCDM (the standard model with a variable dark-energy equation of state), the Rh = ct universe, the (empty) Milne cosmology, and the classical Einstein-de Sitter (matter dominated) universe. We first use these sources to optimize the parameters in the standard model and show that they are consistent with Planck, though the quality of the best fit is not satisfactory. We demonstrate that this is likely due to under-reported errors, or to errors yet to be included in this kind of analysis. We suggest that the missing dispersion may be due to scatter about a pure single isothermal sphere (SIS) model that is often assumed for the mass distribution in these lenses. We then use the Bayes information criterion, with the inclusion of a suggested SIS dispersion, to calculate the relative likelihoods and ranking of these models, showing that Milne and Einstein-de Sitter are completely ruled out, while Rh = ct is preferred over ΛCDM/wCDM with a relative probability of ˜73% versus ˜24%. The recently reported sample of new strong lens candidates by the Dark Energy Survey, if confirmed, may be able to demonstrate which of these two models is favoured over the other at a level exceeding 3σ.
Cold dark matter. 2: Spatial and velocity statistics
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We examine high-resolution gravitational N-body simulations of the Ω = 1 cold dark matter (CDM) model in order to determine whether there is any normalization of the initial density fluctuation spectrum that yields acceptable results for galaxy clustering and velocities. Dense dark matter halos in the evolved mass distribution are identified with luminous galaxies; the most massive halos are also considered as sites for galaxy groups, with a range of possibilities explored for the group mass-to-light ratios. We verify the earlier conclusions of White et al. (1987) for the low-amplitude (high-bias) CDM model: the galaxy correlation function is marginally acceptable, but there are too many galaxies. We also show that the peak biasing method does not accurately reproduce the results obtained using dense halos identified in the simulations themselves. The Cosmic Background Explorer (COBE) anisotropy implies a higher normalization, resulting in problems with excessive pairwise galaxy velocity dispersion unless a strong velocity bias is present. Although we confirm the strong velocity bias of halos reported by Couchman & Carlberg (1992), we show that the galaxy motions are still too large on small scales. We find no amplitude for which the CDM model can simultaneously reconcile the galaxy correlation function, the low pairwise velocity dispersion, and the richness distribution of groups and clusters. With the normalization implied by COBE, the CDM spectrum has too much power on small scales if Ω = 1.
Reconstruction, thermodynamics and stability of the ΛCDM model in f(T,{ T }) gravity
NASA Astrophysics Data System (ADS)
Junior, Ednaldo L. B.; Rodrigues, Manuel E.; Salako, Ines G.; Houndjo, Mahouton J. S.
2016-06-01
We reconstruct the ΛCDM model for f(T, 𝒯) theory, where T is the torsion scalar and 𝒯 the trace of the energy-momentum tensor. The result shows that the action of ΛCDM is a combination of a linear term, a constant (−2Λ) and a nonlinear term given by the product √(−T) F_g[(𝒯^(1/3)/16πG)(16πG𝒯 + T + 8Λ)], with F_g being a generic function. We show that, to maintain conservation of the energy-momentum tensor, F_g[y] must be linear in the trace 𝒯. This reconstruction reduces to f(T) theory for F_g ≡ Q, with Q a constant. Our reconstruction describes the cosmological eras up to the present time. The model exhibits stability under the geometric and matter perturbations for the choice F_g = y, where y = (𝒯^(1/3)/16πG)(16πG𝒯 + T + 8Λ), except for the geometric part in the de Sitter model. We impose the first and second laws of thermodynamics on ΛCDM and find the condition under which they are satisfied, namely T_A, G_eff > 0; where this is not possible in the cases we consider, it leads to a breakdown of positive entropy and Misner-Sharp energy.
Angular Baryon Acoustic Oscillation measure at z=2.225 from the SDSS quasar survey
NASA Astrophysics Data System (ADS)
de Carvalho, E.; Bernui, A.; Carvalho, G. C.; Novaes, C. P.; Xavier, H. S.
2018-04-01
Following a quasi model-independent approach we measure the transversal BAO mode at high redshift using the two-point angular correlation function (2PACF). The analyses done here are only possible now with the quasar catalogue from the twelfth data release (DR12Q) from the Sloan Digital Sky Survey, because it is spatially dense enough to allow the measurement of the angular BAO signature with moderate statistical significance and acceptable precision. Our analyses with quasars in the redshift interval z in [2.20,2.25] produce the angular BAO scale θBAO = 1.77° ± 0.31° with a statistical significance of 2.12 σ (i.e., 97% confidence level), calculated through a likelihood analysis performed using the theoretical covariance matrix sourced by the analytical power spectra expected in the ΛCDM concordance model. Additionally, we show that the BAO signal is robust—although with less statistical significance—under diverse bin-size choices and under small displacements of the quasars' angular coordinates. Finally, we also performed cosmological parameter analyses comparing the θBAO predictions for wCDM and w(a)CDM models with angular BAO data available in the literature, including the measurement obtained here, jointly with CMB data. The constraints on the parameters ΩM, w0 and wa are in excellent agreement with the ΛCDM concordance model.
Parallel computers - Estimate errors caused by imprecise data
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne
1991-01-01
A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, in which the computer is capable of computing the errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of a program capable of computing both some quantity and its error estimate over the range of possible measurement errors is demonstrated.
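As a hedged illustration of this general idea (not the paper's actual system), the sketch below propagates measurement-error bounds through a computation with naive interval arithmetic, so the result carries its worst-case error range; the class and values are purely illustrative.

```python
# Toy interval arithmetic: each value carries its lower and upper bound,
# and arithmetic propagates the bounds through the computation.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Measured values 2.0 and 3.0, each known only to +/- 0.1:
x = Interval(1.9, 2.1)
y = Interval(2.9, 3.1)
print(x * y + x)   # the result together with its worst-case error bounds
```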
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
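As a toy sketch of the kind of dynamic programming such frameworks apply (not INDDGO code), the following computes the maximum weighted independent set on a tree, the simplest case of a decomposition-based recursion; the graph and weights are illustrative.

```python
# Dynamic programming for the maximum weighted independent set on a tree.
def max_weighted_independent_set(tree, weights, root=0):
    """tree: undirected adjacency dict {node: [neighbors...]}."""
    include = {}  # best subtree weight if the node is in the set
    exclude = {}  # best subtree weight if the node is not in the set

    def solve(node, parent):
        include[node] = weights[node]
        exclude[node] = 0
        for child in tree.get(node, []):
            if child == parent:
                continue
            solve(child, node)
            include[node] += exclude[child]                    # neighbors of a chosen node are excluded
            exclude[node] += max(include[child], exclude[child])

    solve(root, None)
    return max(include[root], exclude[root])

# Example: path 0-1-2-3 with weights 3, 2, 5, 4 -> best set {0, 2} of weight 8.
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
weights = {0: 3, 1: 2, 2: 5, 3: 4}
print(max_weighted_independent_set(tree, weights))  # -> 8
```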
Device USB interface and software development for electric parameter measuring instrument
NASA Astrophysics Data System (ADS)
Li, Deshi; Chen, Jian; Wu, Yadong
2003-09-01
Aimed at general device development, this paper discusses the design of a USB interface and the accompanying software. Taking the PDIUSBD12, which supports a parallel interface, as an example, the paper analyzes its technical characteristics. Different interface circuits are designed with an 80C52 single-chip microcomputer and a TMS320C54-series digital signal processor, and the address allocation and register access are analyzed. Following the USB 1.1 standard protocol, the device software and application-layer protocol are designed. The paper also defines the data exchange protocol and implements the system functions.
A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.
Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong
2015-12-01
SOAPsnv is software used for identifying single nucleotide variation in cancer genes. However, its performance has yet to match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm. The original pileup algorithm's I/O process is time-consuming and inefficient at reading input files. Moreover, the scalability of the pileup algorithm is also poor. Therefore, we designed a new algorithm, named BamPileup, aiming to improve the performance of sequential reads, and the new pileup algorithm implements a parallel read mode based on an index. Using this method, each thread can directly read data starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the processing time of the algorithm is reduced to 3.9 s and the application program can achieve a speedup of up to 100×. Moreover, the scalability of the new algorithm is also satisfactory.
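The sketch below is a simplified illustration of an index-based parallel read pattern (not the BamPileup implementation): each worker seeks directly to a byte offset recorded in an index rather than scanning the file sequentially; the function names and index layout are assumptions for the example.

```python
# Index-based parallel file reading: workers jump straight to pre-recorded offsets.
from multiprocessing import Pool

def read_chunk(args):
    path, start, length = args
    with open(path, "rb") as f:
        f.seek(start)              # jump directly to this worker's region
        return f.read(length)

def parallel_read(path, index, workers=4):
    """index: list of (start_offset, length) pairs, e.g. from a precomputed index file."""
    tasks = [(path, start, length) for start, length in index]
    with Pool(workers) as pool:
        return pool.map(read_chunk, tasks)
```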
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, recent gains are largely due to the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the Message Passing Interface (MPI) protocol and shared-memory parallelization using Open Multi-Processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
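A generic sketch of the distributed-memory pattern (not PHITS code, and assuming the mpi4py package is available): each MPI rank simulates a share of the Monte Carlo histories and the partial tallies are summed on rank 0; the pi estimate stands in for a real particle-transport tally.

```python
# Distributed-memory Monte Carlo: split histories across MPI ranks, reduce tallies.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL_HISTORIES = 1_000_000
local_histories = TOTAL_HISTORIES // size   # each rank takes an equal share

random.seed(rank)                            # independent streams per rank
hits = sum(1 for _ in range(local_histories)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)

total_hits = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~", 4.0 * total_hits / (local_histories * size))
```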
Implementing Shared Memory Parallelism in MCBEND
NASA Astrophysics Data System (ADS)
Bird, Adam; Long, David; Dobson, Geoff
2017-09-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To more effectively utilise parallel hardware, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
A framework for grand scale parallelization of the combined finite discrete element method in 2d
NASA Astrophysics Data System (ADS)
Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.
2014-09-01
Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, rock mass strength characterization problems, etc. The reality is that most of these were accomplished in a 2D and/or single-processor realm. In this work a hardware-independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM (V-FDEM). With V-FDEM, parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
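As a toy illustration of image-space data decomposition and image assembly (not taken from the article), the sketch below renders horizontal bands of an image in parallel worker processes and stitches them together; the shading function and image size are illustrative placeholders.

```python
# Image-space decomposition: each worker renders one horizontal band.
from multiprocessing import Pool

WIDTH, HEIGHT, BANDS = 640, 480, 4

def shade(x, y):
    # Stand-in for a real shading computation.
    return (x * 255 // WIDTH, y * 255 // HEIGHT, 128)

def render_band(band):
    y0 = band * HEIGHT // BANDS
    y1 = (band + 1) * HEIGHT // BANDS
    return [[shade(x, y) for x in range(WIDTH)] for y in range(y0, y1)]

if __name__ == "__main__":
    with Pool(BANDS) as pool:
        bands = pool.map(render_band, range(BANDS))
    image = [row for band in bands for row in band]   # image assembly step
    print(len(image), "rows rendered")
```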
Parallel computing for probabilistic fatigue analysis
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.
1993-01-01
This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared- and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single- or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.
Parallelization of ARC3D with Computer-Aided Tools
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using computer-aided tools (CAPTools). The steps of parallelizing this code and the requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example, a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu
Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3–8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.
Multiresolution molecular mechanics: Implementation and efficiency
NASA Astrophysics Data System (ADS)
Biyikli, Emre; To, Albert C.
2017-01-01
Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
Yang, Yu; Zhou, Xiaofeng; Gao, Shuangqing; Lin, Hongbo; Xie, Yanming; Feng, Yuji; Huang, Kui; Zhan, Siyan
2018-01-01
Electronic healthcare databases (EHDs) are used increasingly for post-marketing drug safety surveillance and pharmacoepidemiology in Europe and North America. However, few studies have examined the potential of these data sources in China. Three major types of EHDs in China (i.e., a regional community-based database, a national claims database, and an electronic medical records [EMR] database) were selected for evaluation. Forty core variables were derived based on the US Mini-Sentinel (MS) Common Data Model (CDM) as well as the data features in China that would be desirable to support drug safety surveillance. An email survey of these core variables and eight general questions as well as follow-up inquiries on additional variables was conducted. These 40 core variables across the three EHDs and all variables in each EHD along with those in the US MS CDM and Observational Medical Outcomes Partnership (OMOP) CDM were compared for availability and labeled based on specific standards. All of the EHDs' custodians confirmed their willingness to share their databases with academic institutions after appropriate approval was obtained. The regional community-based database contained 1.19 million people in 2015 with 85% of core variables. Resampled annually nationwide, the national claims database included 5.4 million people in 2014 with 55% of core variables, and the EMR database included 3 million inpatients from 60 hospitals in 2015 with 80% of core variables. Compared with MS CDM or OMOP CDM, the proportion of variables across the three EHDs available or able to be transformed/derived from the original sources are 24-83% or 45-73%, respectively. These EHDs provide potential value to post-marketing drug safety surveillance and pharmacoepidemiology in China. Future research is warranted to assess the quality and completeness of these EHDs or additional data sources in China.
Radner, Wolfgang; Radner, Stephan; Raunig, Valerian; Diendorfer, Gabriela
2014-03-01
To evaluate reading performance of patients with monofocal intraocular lenses (IOLs) (Acrysof SN60WF) with or without reading glasses under bright and dim light conditions. Austrian Academy of Ophthalmology, Vienna, Austria. Evaluation of a diagnostic test or technology. In pseudophakic patients, the spherical refractive error was limited to between +0.50 diopter (D) and -0.75 D with astigmatism of 0.75 D (mean spherical equivalent: right eye, -0.08 ± 0.43 [SD]; left eye, -0.15 ± 0.35). Near addition was +2.75 D. Reading performance was assessed binocularly with or without reading glasses at an illumination of 100 candelas (cd)/m(2) and 4 cd/m(2) using the Radner Reading Charts. In the 25 patients evaluated, binocularly, the mean corrected distance visual acuity was -0.07 ± 0.06 logMAR and the mean uncorrected distance visual acuity was 0.01 ± 0.11 logMAR. The mean reading acuity with reading glasses was 0.02 ± 0.10 logRAD at 100 cd/m(2) and 0.12 ± 0.14 logRAD at 4 cd/m(2). Without reading glasses, it was 0.44 ± 0.13 logRAD and 0.56 ± 0.16 logRAD, respectively (P < .05). Without reading glasses and at 100 cd/m(2), 40% of patients read 0.4 logRAD at more than 80 words per minute (wpm), 68% exceeded this limit at 0.5 logRAD, and 92% exceeded it at 0.6 logRAD. The mean reading speed at 0.5 logRAD was 134.76 ± 48.22 wpm; with reading glasses it was 167.65 ± 32.77 wpm (P < .05). A considerable percentage of patients with monofocal IOLs read newspaper print size without glasses under good light conditions. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
A COMPARATIVE ANALYSIS OF THE SUPERNOVA LEGACY SURVEY SAMPLE WITH ΛCDM AND THE R_h = ct UNIVERSE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio
The use of Type Ia supernovae (SNe Ia) has thus far produced the most reliable measurement of the expansion history of the universe, suggesting that ΛCDM offers the best explanation for the redshift–luminosity distribution observed in these events. However, analysis of other kinds of sources, such as cosmic chronometers, gamma-ray bursts, and high-z quasars, conflicts with this conclusion, indicating instead that the constant expansion rate implied by the R_h = ct universe is a better fit to the data. The central difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing supernova (SN) luminosities simultaneously with the parameters of an expansion model. Hence, in comparing competing models, one must reduce the data independently for each. We carry out such a comparison of ΛCDM and the R_h = ct universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its individually reduced data very well. However, since R_h = ct has only one free parameter (the Hubble constant), it follows from a standard model selection technique that it is to be preferred over ΛCDM, the minimalist version of which has three (the Hubble constant, the scaled matter density, and either the spatial curvature constant or the dark energy equation-of-state parameter). We estimate using the Bayes Information Criterion that in a pairwise comparison, the likelihood of R_h = ct is ∼90%, compared with only ∼10% for a minimalist form of ΛCDM, in which dark energy is simply a cosmological constant. Compared to R_h = ct, versions of the standard model with more elaborate parametrizations of dark energy are judged to be even less likely.
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
Parallel Multiscale Algorithms for Astrophysical Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Norman, Michael L.
1997-01-01
Our goal is to develop software libraries and applications for astrophysical fluid dynamics simulations in multidimensions that will enable us to resolve the large spatial and temporal variations that inevitably arise due to gravity, fronts and microphysical phenomena. The software must run efficiently on parallel computers and be general enough to allow the incorporation of a wide variety of physics. Cosmological structure formation with realistic gas physics is the primary application driver in this work. Accurate simulations of e.g. galaxy formation require a spatial dynamic range (i.e., ratio of system scale to smallest resolved feature) of 10^4 or more in three dimensions in arbitrary topologies. We take this as our technical requirement. We have achieved, and in fact, surpassed these goals.
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in this development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
Weighted Ensemble Simulation: Review of Methodology, Applications, and Software
Zuckerman, Daniel M.; Chong, Lillian T.
2018-01-01
The weighted ensemble (WE) methodology orchestrates quasi-independent parallel simulations run with intermittent communication that can enhance sampling of rare events such as protein conformational changes, folding, and binding. The WE strategy can achieve superlinear scaling—the unbiased estimation of key observables such as rate constants and equilibrium state populations to greater precision than would be possible with ordinary parallel simulation. WE software can be used to control any dynamics engine, such as standard molecular dynamics and cell-modeling packages. This article reviews the theoretical basis of WE and goes on to describe successful applications to a number of complex biological processes—protein conformational transitions, (un)binding, and assembly processes, as well as cell-scale processes in systems biology. We furthermore discuss the challenges that need to be overcome in the next phase of WE methodological development. Overall, the combined advances in WE methodology and software have enabled the simulation of long-timescale processes that would otherwise not be practical on typical computing resources using standard simulation. PMID:28301772
Weighted Ensemble Simulation: Review of Methodology, Applications, and Software.
Zuckerman, Daniel M; Chong, Lillian T
2017-05-22
The weighted ensemble (WE) methodology orchestrates quasi-independent parallel simulations run with intermittent communication that can enhance sampling of rare events such as protein conformational changes, folding, and binding. The WE strategy can achieve superlinear scaling-the unbiased estimation of key observables such as rate constants and equilibrium state populations to greater precision than would be possible with ordinary parallel simulation. WE software can be used to control any dynamics engine, such as standard molecular dynamics and cell-modeling packages. This article reviews the theoretical basis of WE and goes on to describe successful applications to a number of complex biological processes-protein conformational transitions, (un)binding, and assembly processes, as well as cell-scale processes in systems biology. We furthermore discuss the challenges that need to be overcome in the next phase of WE methodological development. Overall, the combined advances in WE methodology and software have enabled the simulation of long-timescale processes that would otherwise not be practical on typical computing resources using standard simulation.
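As a toy sketch of the split/merge bookkeeping at the heart of the WE strategy (not the production WE software), the following keeps the number of trajectories ("walkers") in a bin fixed by splitting heavy walkers and merging light ones while conserving total probability weight; the walker states and target count are illustrative.

```python
# Toy weighted-ensemble resampling step for a single bin.
import random

def resample_bin(walkers, target):
    """walkers: list of (state, weight). Return exactly `target` walkers
    with the same total weight (split heavy walkers, merge light ones)."""
    walkers = list(walkers)
    # Split: replicate the heaviest walker until we have at least `target`.
    while len(walkers) < target:
        state, w = max(walkers, key=lambda sw: sw[1])
        walkers.remove((state, w))
        walkers += [(state, w / 2), (state, w / 2)]
    # Merge: combine the two lightest walkers until we have exactly `target`.
    while len(walkers) > target:
        walkers.sort(key=lambda sw: sw[1])
        (s1, w1), (s2, w2) = walkers[0], walkers[1]
        keep = s1 if random.random() < w1 / (w1 + w2) else s2   # keep one, weighted choice
        walkers = [(keep, w1 + w2)] + walkers[2:]
    return walkers

walkers = [("conf_A", 0.5), ("conf_B", 0.125), ("conf_C", 0.125)]
print(resample_bin(walkers, target=2))   # total weight stays 0.75
```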
Execution of parallel algorithms on a heterogeneous multicomputer
NASA Astrophysics Data System (ADS)
Isenstein, Barry S.; Greene, Jonathon
1995-04-01
Many aerospace/defense sensing and dual-use applications require high-performance computing, extensive high-bandwidth interconnect and realtime deterministic operation. This paper will describe the architecture of a scalable multicomputer that includes DSP and RISC processors. A single chassis implementation is capable of delivering in excess of 10 GFLOPS of DSP processing power with 2 Gbytes/s of realtime sensor I/O. A software approach to implementing parallel algorithms called the Parallel Application System (PAS) is also presented. An example of applying PAS to a DSP application is shown.
Parallel-In-Time For Moving Meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falgout, R. D.; Manteuffel, T. A.; Southworth, B.
2016-02-04
With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto
2014-07-05
The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms. It is thus unique software that computes these tensor-based indices. These descriptors establish relations among two, three, and four atoms by using several (dis-)similarity metrics or multimetrics, matrix transformations, cutoffs, local calculations and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of chemical structures and the calculation of atomic properties. The software is composed of a user-friendly desktop interface and an Application Programming Interface library. The former was created to simplify the configuration of the different options of the MDs, whereas the library was designed to allow its easy integration into other software for chemoinformatics applications. The program provides functionalities for data cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs through the use of all available processors in current computers. Complexity studies of the main algorithms demonstrate that they were implemented efficiently with respect to a trivial implementation. Lastly, the performance tests reveal that the software behaves well as the number of processors is increased. Therefore, the QuBiLS-MIDAS software constitutes a useful application for the computation of molecular indices based on N-linear algebraic maps, and it can be used freely to perform chemoinformatics studies. Copyright © 2014 Wiley Periodicals, Inc.
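As a toy illustration of the bilinear (two-linear) form underlying the simplest of these descriptors (not QuBiLS-MIDAS code, and assuming numpy), a descriptor of this family can be seen as contracting two atomic-property vectors with an atom-relation matrix; the matrix and property values below are invented for the example.

```python
# Toy bilinear molecular descriptor: p^T M q over atomic properties.
import numpy as np

def bilinear_descriptor(M, p, q):
    """M: n x n atom-relation matrix (e.g., adjacency); p, q: atomic-property vectors."""
    return float(p @ M @ q)

# Example: 3-atom chain (adjacency matrix), properties = masses and partial charges.
M = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
masses = np.array([12.0, 16.0, 1.0])
charges = np.array([-0.1, -0.4, 0.3])
print(bilinear_descriptor(M, masses, charges))
```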
ERIC Educational Resources Information Center
Read, Brock
2008-01-01
A parallel between plagiarism and corporate crime raises eyebrows--and ire-- on campuses, but for John Barrie, the comparison is a perfectly natural one. In the 10 years since he founded iParadigms, which sells the antiplagiarism software Turnitin, he has argued--forcefully, and at times combatively--that academic plagiarism is growing, and that…
PuLP/XtraPuLP : Partitioning Tools for Extreme-Scale Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slota, George M; Rajamanickam, Sivasankaran; Madduri, Kamesh
2017-09-21
PuLP/XtraPuLP is software for partitioning graphs arising in several real-world problems. Graphs occur in many real-world settings, from road networks and social networks to scientific simulations. For efficient parallel processing, these graphs have to be partitioned (split) with respect to metrics such as computation and communication costs. Our software allows such partitioning for massive graphs.
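As a toy illustration of the partitioning objective (not PuLP/XtraPuLP code), the sketch below greedily bipartitions a graph, trading off balanced part sizes (a proxy for computation) against cut edges (a proxy for communication); the heuristic and example graph are illustrative only.

```python
# Greedy graph bipartitioning: balance part sizes while limiting cut edges.
def greedy_bipartition(adj):
    """adj: dict {vertex: set(neighbors)}. Returns (parts, cut_edge_count)."""
    parts = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):        # place high-degree vertices first
        score = [0, 0]
        for u in adj[v]:
            if u in parts:
                score[parts[u]] += 1                          # prefer the part holding more neighbors
        sizes = [sum(1 for p in parts.values() if p == s) for s in (0, 1)]
        choice = 0 if score[0] >= score[1] else 1
        if sizes[choice] > sizes[1 - choice] + 1:             # but keep the parts roughly balanced
            choice = 1 - choice
        parts[v] = choice
    cut = sum(1 for v in adj for u in adj[v] if parts[v] != parts[u]) // 2
    return parts, cut

# Example: a 4-cycle 0-1-2-3-0; prints the assignment and the number of cut edges.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_bipartition(adj))
```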
Ogawa, Yasushi; Fawaz, Farah; Reyes, Candice; Lai, Julie; Pungor, Erno
2007-01-01
Parameter settings of a parallel line analysis procedure were defined by applying statistical analysis procedures to the absorbance data from a cell-based potency bioassay for a recombinant adenovirus, Adenovirus 5 Fibroblast Growth Factor-4 (Ad5FGF-4). The parallel line analysis was performed with commercially available software, PLA 1.2. The software performs a Dixon outlier test on replicates of the absorbance data, performs linear regression analysis to define the linear region of the absorbance data, and tests parallelism between the linear regions of the standard and the sample. The width of the Fiducial limit, expressed as a percentage of the measured potency, was developed as a criterion for rejecting assay data and significantly improving the reliability of the assay results. With the linear range-finding criteria of the software set to a minimum of 5 consecutive dilutions and the best statistical outcome, and in combination with a Fiducial limit width acceptance criterion of <135%, 13% of the assay results were rejected. With these criteria applied, the assay was found to be linear over the range of 0.25 to 4 relative potency units, defined as the potency of the sample normalized to the potency of an Ad5FGF-4 standard containing 6 x 10^6 adenovirus particles/mL. The overall precision of the assay was estimated to be 52%. Without the application of the Fiducial limit width criterion, the assay results were not linear over the range, and an overall precision of 76% was calculated from the data. An absolute unit of potency for the assay was defined by using the parallel line analysis procedure as the amount of Ad5FGF-4 that results in an absorbance value that is 121% of the average absorbance readings of wells containing cells not infected with the adenovirus.
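As a hedged sketch of the parallel-line idea itself (not the PLA 1.2 software, and assuming numpy), the following fits the standard and sample dose-response data with a common slope and reads the relative potency from the horizontal shift between the two parallel lines; the function name, data layout, and synthetic numbers are illustrative assumptions.

```python
# Parallel-line relative potency: common slope, separate intercepts, potency from the shift.
import numpy as np

def relative_potency(log_dose_std, resp_std, log_dose_smp, resp_smp):
    x = np.concatenate([log_dose_std, log_dose_smp])
    is_smp = np.concatenate([np.zeros(len(log_dose_std)), np.ones(len(log_dose_smp))])
    A = np.column_stack([x, np.ones_like(x), is_smp])       # common slope, intercept, sample offset
    y = np.concatenate([resp_std, resp_smp])
    slope, intercept_std, delta = np.linalg.lstsq(A, y, rcond=None)[0]
    # Horizontal distance between the parallel lines gives log10 relative potency.
    return 10 ** (delta / slope)

# Example with synthetic data: the sample is twice as potent as the standard.
log_d = np.array([0.0, 0.5, 1.0, 1.5])
std = 10 + 5 * log_d
smp = 10 + 5 * (log_d + np.log10(2))
print(relative_potency(log_d, std, log_d, smp))   # ~2.0
```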
Bilingual parallel programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; Overbeek, R.
1990-01-01
Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
Galaxy clusters and cold dark matter - A low-density unbiased universe?
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue
1992-01-01
Large-scale simulations of a universe dominated by cold dark matter (CDM) are tested against two fundamental properties of clusters of galaxies: the cluster mass function and the cluster correlation function. We find that standard biased CDM models are inconsistent with these observations for any bias parameter b. A low-density, low-bias CDM-type model, with or without a cosmological constant, appears to be consistent with both the cluster mass function and the cluster correlations. The low-density model agrees well with the observed correlation function of the Abell, Automatic Plate Measuring Facility (APM), and Edinburgh-Durham cluster catalogs. The model is in excellent agreement with the observed dependence of the correlation strength on cluster mean separation, reproducing the measured universal dimensionless cluster correlation. The low-density model is also consistent with other large-scale structure observations, including the APM angular galaxy correlations and, for λ = 1 - Ω, the COBE results on microwave background radiation fluctuations.
Semantic Enhancement for Enterprise Data Management
NASA Astrophysics Data System (ADS)
Ma, Li; Sun, Xingzhi; Cao, Feng; Wang, Chen; Wang, Xiaoyuan; Kanellos, Nick; Wolfson, Dan; Pan, Yue
Taking customer data as an example, the paper presents an approach to enhance the management of enterprise data by using Semantic Web technologies. Customer data is the most important kind of core business entity a company uses repeatedly across many business processes and systems, and customer data management (CDM) is becoming critical for enterprises because it keeps a single, complete and accurate record of customers across the enterprise. Existing CDM systems focus on integrating customer data from all customer-facing channels and front and back office systems through multiple interfaces, as well as publishing customer data to different applications. To make effective use of the CDM system, this paper investigates semantic query and analysis over the integrated and centralized customer data, enabling automatic classification and relationship discovery. We have implemented these features over IBM Websphere Customer Center and shown the prototype to our clients. We believe that our study and experiences are valuable for both the Semantic Web and data management communities.
What do gas-rich galaxies actually tell us about modified Newtonian dynamics?
Foreman, Simon; Scott, Douglas
2012-04-06
It has recently been claimed that measurements of the baryonic Tully-Fisher relation (BTFR), a power-law relationship between the observed baryonic masses and outer rotation velocities of galaxies, support the predictions of modified Newtonian dynamics for the slope and scatter in the relation, while challenging the cold dark matter (CDM) paradigm. We investigate these claims, and find that (1) the scatter in the data used to determine the BTFR is in conflict with observational uncertainties on the data, (2) these data do not make strong distinctions regarding the best-fit BTFR parameters, (3) the literature contains a wide variety of measurements of the BTFR, many of which are discrepant with the recent results, and (4) the claimed CDM "prediction" for the BTFR is a gross oversimplification of the complex galaxy-scale physics involved. We conclude that the BTFR is currently untrustworthy as a test of CDM. © 2012 American Physical Society
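The BTFR discussed above is a power law between baryonic mass and rotation velocity; the hedged sketch below shows how the slope and scatter of such a relation are typically estimated by a linear fit in log-log space. The data are synthetic placeholders, not the measurements analysed in the paper.

```python
# Illustrative sketch: fit a power law M_b = A * v^alpha (the BTFR form) by
# linear regression of log10(M_b) on log10(v). All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(42)
v = np.array([60., 80., 120., 150., 200., 250.])        # rotation velocities, km/s
m_b = 50.0 * v**4 * rng.lognormal(0.0, 0.2, v.size)     # fake baryonic masses

slope, intercept = np.polyfit(np.log10(v), np.log10(m_b), 1)
scatter = np.std(np.log10(m_b) - (slope * np.log10(v) + intercept))
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, scatter = {scatter:.2f} dex")
```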
Curvaton as dark matter with secondary inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Jinn-Ouk; Kitajima, Naoya; Terada, Takahiro, E-mail: jinn-ouk.gong@apctp.org, E-mail: naoya.kitajima@apctp.org, E-mail: terada@kias.re.kr
2017-03-01
We consider a novel cosmological scenario in which a curvaton is long-lived and plays the role of cold dark matter (CDM) in the presence of a short, secondary inflation. Non-trivial evolution of the large scale cosmological perturbation in the curvaton scenario can affect the duration of the short term inflation, resulting in the inhomogeneous end of inflation. Non-linear parameters of the curvature perturbation are predicted to be f_NL ≈ 5/4 and g_NL ≈ 0. The curvaton abundance can be well diluted by the short-term inflation and accordingly, it does not have to decay into the Standard Model particles. Then the curvaton can account for the present CDM with the isocurvature perturbation being sufficiently suppressed because both the adiabatic and CDM isocurvature perturbations have the same origin. As an explicit example, we consider the thermal inflation scenario and a string axion as a candidate for this curvaton-dark matter. We further discuss possibilities to identify the curvaton-dark matter with the QCD axion.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
DNA bases thymine and adenine in bio-organic light emitting diodes.
Gomez, Eliot F; Venkatraman, Vishak; Grote, James G; Steckl, Andrew J
2014-11-24
We report on the use of nucleic acid bases (NBs) in organic light emitting diodes (OLEDs). NBs are small molecules that are the basic building blocks of the larger DNA polymer. NBs readily thermally evaporate and integrate well into the vacuum-deposited OLED fabrication. Adenine (A) and thymine (T) were deposited as electron-blocking/hole-transport layers (EBL/HTL) that resulted in increases in performance over the reference OLED containing the standard EBL material NPB. A-based OLEDs reached a peak current efficiency and luminance performance of 48 cd/A and 93,000 cd/m^2, respectively, while T-based OLEDs had a maximum of 76 cd/A and 132,000 cd/m^2. By comparison, the reference OLED yielded 37 cd/A and 113,000 cd/m^2. The enhanced performance of T-based devices is attributed to a combination of energy levels and structured surface morphology that causes more efficient and controlled hole current transport to the emitting layer.
The tangential velocity of M31: CLUES from constrained simulations
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent
2016-07-01
Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment on our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of v_tan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, which overlook the environmental element.
Entropy corrected holographic dark energy models in modified gravity
NASA Astrophysics Data System (ADS)
Jawad, Abdul; Azhar, Nadeem; Rani, Shamaila
We consider the power law and the entropy corrected holographic dark energy (HDE) models with the Hubble horizon in dynamical Chern-Simons modified gravity. We explore various cosmological parameters and planes in this framework. The Hubble parameter lies within the consistent range at the present and later epochs for both entropy corrected models. The deceleration parameter explains the accelerated expansion of the universe. The equation of state (EoS) parameter corresponds to the quintessence and cold dark matter (ΛCDM) limits. The ωΛ-ωΛ' plane approaches the ΛCDM limit and the freezing region in both entropy corrected models. The statefinder parameters are consistent with the ΛCDM limit and dark energy (DE) models. The generalized second law of thermodynamics remains valid for all values of the interaction parameter. It is interesting to mention here that our results for the Hubble parameter, the EoS parameter and the ωΛ-ωΛ' plane show consistency with present observations such as Planck, WP, BAO, H0, SNLS and the nine-year WMAP.
Mackenzie, Lynette; Clemson, Lindy
2014-04-01
Exercise and home modifications are effective interventions for preventing falls. Chronic disease management (CDM) items are one way for general practitioners (GPs) to access these interventions. This study aimed to evaluate the outcomes and feasibility of using CDM items for occupational therapy (OT) and physiotherapy (PT) sessions to address falls risk. A pre-post pilot study design was used to evaluate five collaborative sessions shared by a private OT and PT using CDM items and a GP management plan. Pre- and post-intervention measures were used to evaluate outcomes for eight patients aged ≥75 years from two GP practices. At 2 months post-intervention there were significant improvements in everyday functioning (P = 0.04), physical capacity (P = 0.01) and falls efficacy (P = 0.01). Adherence to the intervention was excellent. Falls prevention interventions can be effective in primary care settings, and sustainable pathways need to be developed to ensure access for older people at risk.
Language Classification using N-grams Accelerated by FPGA-based Bloom Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, A; Gokhale, M
N-gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs at 85x the speed of comparable software and 1.45x that of the competing hardware design.
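A purely software analogue of the approach described above, as a hedged sketch with illustrative parameter choices rather than those of the paper: hash character n-grams of training text into one Bloom filter per language, then classify a document by counting n-gram hits per filter.

```python
# Conceptual sketch of n-gram language classification via Bloom filters.
# Filter size, hash count and n are illustrative, not the paper's values.
import hashlib

class BloomFilter:
    def __init__(self, bits=1 << 16, hashes=3):
        self.bits, self.hashes, self.array = bits, hashes, bytearray(bits)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p] = 1

    def __contains__(self, item):
        return all(self.array[p] for p in self._positions(item))

def ngrams(text, n=4):
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def train(samples, n=4):
    filters = {}
    for lang, text in samples.items():
        bf = BloomFilter()
        for g in ngrams(text.lower(), n):
            bf.add(g)
        filters[lang] = bf
    return filters

def classify(text, filters, n=4):
    grams = ngrams(text.lower(), n)
    return max(filters, key=lambda lang: sum(g in filters[lang] for g in grams))

filters = train({"en": "the quick brown fox jumps over the lazy dog",
                 "de": "der schnelle braune fuchs springt ueber den faulen hund"})
print(classify("the lazy dog sleeps", filters))   # expected: 'en'
```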
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
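A minimal sketch of the hybrid paradigm compared above, assuming mpi4py and an MPI runtime are available: ranks communicate by message passing, while a thread pool stands in for OpenMP within each node. For pure-Python CPU-bound work the GIL limits intra-node thread speedup, so the sketch only illustrates the structure, not the performance.

```python
# Launch with, e.g.:  mpirun -np 4 python hybrid_sum.py
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Message passing distributes the global index range [0, 10**6) across ranks.
n_total = 10**6
items = list(range(rank * n_total // size, (rank + 1) * n_total // size))

def partial_sum(indices):
    return sum(i * i for i in indices)

# Intra-node parallelism: a thread pool plays the role of OpenMP within a rank.
with ThreadPoolExecutor(max_workers=4) as pool:
    local = sum(pool.map(partial_sum, [items[i::4] for i in range(4)]))

# Inter-node parallelism: combine per-rank results with an MPI reduction.
total = comm.allreduce(local, op=MPI.SUM)
if rank == 0:
    print("sum of squares over [0, 10**6):", total)
```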
Topical perspective on massive threading and parallelism.
Farber, Robert M
2011-09-01
Unquestionably computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases, three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as in the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts -- be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world -- is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism with some insight into the future. Published by Elsevier Inc.
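One quantitative way to see why "not all computational problems will scale to massive parallelism" is Amdahl's law; the short sketch below (illustrative numbers only) shows how even a 1% serial fraction caps achievable speedup near 100x regardless of thread count.

```python
# Amdahl's law: speedup of a workload with serial fraction s on p processors.
def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (8, 256, 16384, 1_300_000):             # up to Blue Waters-scale thread counts
    print(p, round(amdahl_speedup(0.01, p), 1))  # 1% serial work caps speedup near 100x
```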
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)
1997-01-01
The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.
Software environment for implementing engineering applications on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K. A.; Schiff, S.
1990-01-01
In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.
STGT program: Ada coding and architecture lessons learned
NASA Technical Reports Server (NTRS)
Usavage, Paul; Nagurney, Don
1992-01-01
STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power over ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.
Canine diabetes mellitus risk factors: A matched case-control study.
Pöppl, Alan Gomes; de Carvalho, Guilherme Luiz Carvalho; Vivian, Itatiele Farias; Corbellini, Luis Gustavo; González, Félix Hilário Díaz
2017-10-01
Different subtypes of canine diabetes mellitus (CDM) have been described based on their aetiopathogenesis. Therefore, manifold risk factors may be involved in CDM development. This study aims to investigate canine diabetes mellitus risk factors. Owners of 110 diabetic dogs and 136 healthy controls matched by breed, sex, and age were interviewed concerning aspects related to diet, weight, physical activity, oral health, reproductive history, pancreatitis, and exposure to exogenous glucocorticoids. Two multivariable statistical models were created: the UMod included males and females without variables related to the oestrous cycle, while the FMod included only females with all analysed variables. In the UMod, "Not exclusively commercial diet" (OR 4.86, 95%CI 2.2-10.7, P<0.001) and "Overweight" (OR 3.51, 95%CI 1.6-7.5, P=0.001) were statistically significant, while in the FMod, "Not exclusively commercial diet" (OR 4.14, 95%CI 1.3-12.7, P=0.01), "Table scraps abuse" (OR 3.62, 95%CI 1.1-12.2, P=0.03), "Overweight" (OR 3.91, 95%CI 1.2-12.6, P=0.02), and "Dioestrus" (OR 5.53, 95%CI 1.9-16.3, P=0.002) were statistically significant. The findings of this study support a diet that is not exclusively commercial, overweight, table-scrap abuse, and dioestrus as the main CDM risk factors. Moreover, these results provide support for preventive care studies against CDM development. Copyright © 2017 Elsevier Ltd. All rights reserved.
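For readers unfamiliar with the statistics reported above, the sketch below shows how an odds ratio and its 95% confidence interval are computed from a 2x2 exposure-by-case table; the counts are placeholders, not the study's data.

```python
# Illustrative odds ratio with a Wald 95% CI from a 2x2 table (placeholder counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(40, 30, 70, 106))   # e.g. exposure = "not exclusively commercial diet"
```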
Do Processing Patterns of Strengths and Weaknesses Predict Differential Treatment Response?
Miciak, Jeremy; Williams, Jacob L; Taylor, W Pat; Cirino, Paul T; Fletcher, Jack M; Vaughn, Sharon
2016-08-01
No previous empirical study has investigated whether the LD identification decisions of proposed methods to operationalize processing strengths and weaknesses (PSW) approaches for LD identification are associated with differential treatment response. We investigated whether the identification decisions of the concordance/discordance model (C/DM; Hale & Fiorello, 2004) and the Cross-Battery Assessment approach (XBA method; Flanagan, Ortiz, & Alfonso, 2007) were consistent and whether they predicted intervention response beyond that accounted for by pretest performance on measures of reading. Psychoeducational assessments were administered at pretest to 203 fourth graders with low reading comprehension, and individual results were utilized to identify students who met LD criteria according to the C/DM and XBA methods and students who did not. Resulting group status permitted an investigation of agreement between the identification methods and of whether group status at pretest (LD or not LD) was associated with differential response to an intensive reading intervention. The LD identification decisions of the XBA and C/DM demonstrated poor agreement with one another (κ = -.10). Comparisons of posttest performance for students who met LD criteria and those who did not were largely null, with small effect sizes across all measures. LD status, as identified through the C/DM and XBA approaches, was not associated with differential treatment response and did not contribute educationally meaningful information about how students would respond to intensive reading intervention. These results do not support the value of cognitive assessment utilized in this way as part of the LD identification process.
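The agreement statistic quoted above (κ = -.10) is Cohen's kappa; a small sketch of its computation for two binary LD/not-LD decision vectors follows, using made-up placeholder data.

```python
# Cohen's kappa: chance-corrected agreement between two binary classifications.
def cohens_kappa(x, y):
    n = len(x)
    p_observed = sum(a == b for a, b in zip(x, y)) / n
    p_x, p_y = sum(x) / n, sum(y) / n
    p_expected = p_x * p_y + (1 - p_x) * (1 - p_y)   # agreement expected by chance
    return (p_observed - p_expected) / (1 - p_expected)

cdm = [1, 0, 0, 1, 0, 0, 1, 0]   # 1 = meets LD criteria under the C/DM (placeholder)
xba = [0, 0, 1, 0, 0, 1, 0, 0]   # 1 = meets LD criteria under the XBA (placeholder)
print(round(cohens_kappa(cdm, xba), 2))
```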
Klein, M; Birch, D G
2009-12-01
To determine whether the Diagnosys full-field stimulus threshold (D-FST) is a valid, sensitive and repeatable psychophysical method of measuring and following visual function in low-vision subjects. Fifty-three affected eyes of 42 subjects with severe retinal degenerative diseases (RDDs) were tested with achromatic stimuli on the D-FST. Included were subjects who were either unable to perform a static perimetric field or had non-detectable or sub-microvolt electroretinograms (ERGs). A subset of 21 eyes of 17 subjects was tested on both the D-FST and the FST2, a previously established full-field threshold test. Seven eyes of 7 normal control subjects were tested on both the D-FST and the FST2. Results for the two methods were compared with the Bland-Altman test. On the D-FST, a threshold could successfully be determined for 13 of 14 eyes with light perception (LP) only (median 0.9 +/- 1.4 log cd/m^2) and for all eyes determined to be counting fingers (CF; median 0.3 +/- 1.8 log cd/m^2). The median full-field threshold for the normal controls was -4.3 +/- 0.6 log cd/m^2 on the D-FST and -4.8 +/- 0.9 log cd/m^2 on the FST2. The D-FST offers a commercially available method with a robust psychophysical algorithm and is a useful tool for following visual function in low-vision subjects.
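The Bland-Altman comparison mentioned above reduces to a bias (mean difference) and 95% limits of agreement between the two tests; the sketch below illustrates that computation on placeholder threshold values, not the study's measurements.

```python
# Bland-Altman summary for paired measurements from two methods (placeholder data,
# in log cd/m^2): bias and 95% limits of agreement.
import numpy as np

dfst = np.array([-4.1, -4.5, -4.0, -4.6, -4.3, -4.2, -4.4])
fst2 = np.array([-4.6, -4.9, -4.5, -5.0, -4.8, -4.7, -4.9])

diff = dfst - fst2
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```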
The role of Sudanese community pharmacists in patients' self-care.
Mohamed, Sumia S; Mahmoud, Adil A; Ali, Abdulazim A
2014-04-01
To describe the current and potential roles of Sudanese community pharmacists in responding to symptoms (RTS) and chronic disease management (CDM), and to identify perceived barriers. The setting was community pharmacies in Khartoum State. A structured, self-administered, piloted questionnaire was administered to the pharmacists in charge of 274 randomly selected community pharmacies. Closed-ended questions and a 5-point Likert-type scale were used to measure responses. The main outcome measures were respondents' demographics, their current activities, attitude and involvement in RTS and CDM, and potential barriers. The response rate was 67%. The majority of respondents (>90%) reported that they are involved in RTS activities but have negative views regarding practice standards. They lack specific lists of minor conditions and their treatment (87.4%), a recorded counseling procedure (84.7%), and referral forms (85.8%). Almost all community pharmacists see an important role for themselves in CDM (4.54 ± 0.74, 95.3%) and accept team work with other health care providers (4.46 ± 0.74, 87.5%). Lack of proper knowledge and training, time, space, patients' acceptance, and official recognition of pharmacists' new role were some of the identified barriers. Sudanese community pharmacists provide RTS and CDM services; however, needs for clinical knowledge, training, and well-defined national practice standards were identified. The current product-focused activities need to be refined to include more patient-focused services. For improved patient self-care services, a number of obstacles identified by the surveyed pharmacists need to be resolved. This requires collaboration of different parties, including academics, governmental bodies and professional organizations.
Distinguishing CDM dwarfs from SIDM dwarfs in baryonic simulations
NASA Astrophysics Data System (ADS)
Strickland, Emily; Fitts, Alex B.; Boylan-Kolchin, Michael
2017-06-01
Dwarf galaxies in the nearby Universe are the most dark-matter-dominated systems known. They are therefore natural probes of the nature of dark matter, which remains unknown. Our collaboration has performed several high-resolution cosmological zoom-in simulations of isolated dwarf galaxies. We simulate each galaxy in standard cold dark matter (ΛCDM) as well as self-interacting dark matter (SIDM, with a cross section of σ/m ~ 1 cm^2/g), both with and without baryons, in order to identify distinguishing characteristics between the two. The simulations are run using GIZMO, a meshless-finite-mass hydrodynamical code, and are part of the Feedback in Realistic Environments (FIRE) project. By analyzing both the global properties and inner structure of the dwarfs under varying dark matter prescriptions, we provide a side-by-side comparison of isolated, dark-matter-dominated galaxies at the mass scale where differences between the two models of dark matter are thought to be the most obvious. We find that the boundary between classical dwarfs and ultra-faint dwarfs (at stellar masses of ~10^5 solar masses) provides the clearest window for distinguishing between the two theories. At these low masses, our SIDM galaxies have a cored inner density profile, while their CDM counterparts have “cuspy” centers. The SIDM versions of each galaxy also have measurably lower stellar velocity dispersions than their CDM counterparts. Future observations of ultra-faint dwarfs with JWST and 30-m telescopes will be able to discern whether such alternate theories of dark matter are viable.
Culha, Mustafa; Schell, Fred M; Fox, Shannon; Green, Thomas; Betts, Thomas; Sepaniak, Michael J
2004-01-22
A new class of highly charged cyclodextrin (CD) derivatives, (6-O-carboxymethyl-2,3-di-O-methyl)cyclomaltoheptaoses (CDM-beta-CDs), was synthesized and characterized as anionic reagents for capillary electrophoresis (CE) in an electrokinetic chromatography mode of separation. Substitution with dimethyl groups at the secondary hydroxyl sites of the CD is aimed at influencing the magnitude and selectivity of analyte-CD interactions, while substitution by carboxymethyl groups at the primary hydroxyl sites provides for high charge and electrophoretic mobility. Full regioselective methylation at the secondary hydroxyl sites was achieved in this work, while substitution at the primary hydroxyl sites generated a mixture of multiply charged products. The separation performance of CDM-beta-CD was evaluated using a variety of analyte mixtures. The results obtained from commercially available negatively charged cyclodextrins, heptakis(2,3-di-O-methyl-6-O-sulfo)cyclomaltoheptaose (HDMS-beta-CD) and O-(carboxymethyl)cyclomaltoheptaose (CM-beta-CD) with an average degree of substitution of one (DS 1), were compared to CDM-beta-CD using a sample composed of eight positional isomers of dihydroxynaphthalene. Four hydroxylated polychlorobiphenyl derivatives, a group of chiral and isomeric catechins, and chiral binaphthyl compounds were also separated with CDM-beta-CD. The effect of adding neutral beta-cyclodextrin (beta-CD) to the running buffer containing charged cyclodextrins was investigated and provided evidence of significant inter-CD interactions. Under certain running buffer conditions, the charged cyclodextrins also appear to adsorb to the capillary walls to various degrees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Story, K. T.; Keisler, R.; Benson, B. A.
2013-12-10
We present a measurement of the cosmic microwave background (CMB) temperature power spectrum using data from the recently completed South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. This measurement is made from observations of 2540 deg^2 of sky with arcminute resolution at 150 GHz, and improves upon previous measurements using the SPT by tripling the sky area. We report CMB temperature anisotropy power over the multipole range 650 < ℓ < 3000. We fit the SPT bandpowers, combined with the 7 yr Wilkinson Microwave Anisotropy Probe (WMAP7) data, with a six-parameter ΛCDM cosmological model and find that the two datasets are consistent and well fit by the model. Adding SPT measurements significantly improves ΛCDM parameter constraints; in particular, the constraint on θ_s tightens by a factor of 2.7. The impact of gravitational lensing is detected at 8.1σ, the most significant detection to date. This sensitivity of the SPT+WMAP7 data to lensing by large-scale structure at low redshifts allows us to constrain the mean curvature of the observable universe with CMB data alone to be Ω_k = -0.003 (+0.014, -0.018). Using the SPT+WMAP7 data, we measure the spectral index of scalar fluctuations to be n_s = 0.9623 ± 0.0097 in the ΛCDM model, a 3.9σ preference for a scale-dependent spectrum with n_s < 1. The SPT measurement of the CMB damping tail helps break the degeneracy that exists between the tensor-to-scalar ratio r and n_s in large-scale CMB measurements, leading to an upper limit of r < 0.18 (95% C.L.) in the ΛCDM+r model. Adding low-redshift measurements of the Hubble constant (H_0) and the baryon acoustic oscillation (BAO) feature to the SPT+WMAP7 data leads to further improvements. The combination of SPT+WMAP7+H_0+BAO constrains n_s = 0.9538 ± 0.0081 in the ΛCDM model, a 5.7σ detection of n_s < 1, and places an upper limit of r < 0.11 (95% C.L.) in the ΛCDM+r model. These new constraints on n_s and r have significant implications for our understanding of inflation, which we discuss in the context of selected single-field inflation models.
Department of Defense High Performance Computing Modernization Program. 2006 Annual Report
2007-03-01
... Department. We successfully completed several software development projects that introduced parallel, scalable production software now in use across the ... imagined. They are developing and deploying weather and ocean models that allow our soldiers, sailors, marines and airmen to plan missions more effectively ... and to navigate adverse environments safely. They are modeling molecular interactions leading to the development of higher energy fuels, munitions ...
Real-Time Monitoring of Scada Based Control System for Filling Process
NASA Astrophysics Data System (ADS)
Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi
2008-10-01
This paper presents a design for real-time monitoring of a filling system using Supervisory Control and Data Acquisition (SCADA). The monitoring of the production process is described in real time using Visual Basic.NET programming under Visual Studio 2005, without dedicated SCADA software. The software integrators are programmed to obtain the required information for the configuration screens. Simulation of the components is displayed on the computer screen, using the parallel port to link the computer and the filling devices. Programs for real-time simulation of the filling process, drawn from the pure drinking water industry, are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
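A minimal dense-matrix sketch of the MLEM update that the paper ports to Spark/GraphX, using toy sizes and synthetic data (the real application works with large sparse system matrices):

```python
# MLEM iteration: x_{k+1} = x_k * [A^T (y / (A x_k))] / (A^T 1), all element-wise.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((32, 16))            # toy system matrix: 32 detector bins x 16 voxels
x_true = rng.random(16)
y = A @ x_true                      # noiseless toy projection data

x = np.ones(16)                     # uniform, strictly positive initial image
sensitivity = A.T @ np.ones(32)     # A^T 1
for _ in range(200):
    ratio = y / (A @ x)             # measured vs. forward-projected data
    x *= (A.T @ ratio) / sensitivity  # multiplicative MLEM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```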
Ramses-GPU: Second order MUSCL-Handcock finite volume fluid solver
NASA Astrophysics Data System (ADS)
Kestener, Pierre
2017-10-01
RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) which drops the adaptive mesh refinement (AMR) features to optimize 3D uniform grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI software implementation of a second order MUSCL-Handcock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and Parallel-NetCDF.
Static analysis of the hull plate using the finite element method
NASA Astrophysics Data System (ADS)
Ion, A.
2015-11-01
This paper presents the static analysis for two levels of a container ship's construction: the first level is at the girder/hull plate, and the second is carried out on the entire strength hull of the vessel. This article describes the work for the static analysis of a hull plate. We use the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 CPUs at 3.33 GHz and 32 GB of installed memory. In terms of software, the shared memory parallel version of ANSYS refers to running ANSYS across multiple cores on an SMP system. The distributed memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP or DMP systems.
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Scheduling for Locality in Shared-Memory Multiprocessors
1993-05-01
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. ... architecture on parallel program performance, explain the implications of this trend on popular parallel programming models, and propose system software to ... decomposition and scheduling algorithms. Subject terms: shared-memory multiprocessors; architecture trends; loop scheduling. Number of pages: 110.
Parallelizing Data-Centric Programs
2013-09-25
... results than current techniques, such as ImageWebs [HGO+10], given the same budget of matches performed. (4.2 Scalable Parallel Similarity Search) The work ... algorithms. (5 Data-Driven Applications in the Cloud) In this project, we investigated what happens when data-centric software is moved from expensive custom ... returns appropriate answer tuples. Figure 9(b) shows the mutual constraint satisfaction that takes place in answering for 122. The intent is that ...
Sierra Structural Dynamics User's Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reese, Garth M.
2015-10-19
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a users guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munday, Lynn Brendon; Day, David M.; Bunting, Gregory
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a users guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, Richard; Voter, Arthur; Uberuaga, Bla
2017-10-23
The SpecTAD software represents a refactoring of the Temperature Accelerated Dynamics (TAD2) code authored by Arthur F. Voter and Blas P. Uberuaga (LA-CC-02-05). SpecTAD extends the capabilities of TAD2, by providing algorithms for both temporal and spatial parallelism. The novel algorithms for temporal parallelism include both speculation and replication based techniques. SpecTAD also offers the optional capability to dynamically link to the open-source LAMMPS package.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
The look-ahead dynamic simulation software system incorporates high performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation using parallel computing, and outputs the transient response of the power system in real time.
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.
Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques
2008-09-08
Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
Examining the architecture of cellular computing through a comparative study with a computer.
Wang, Degeng; Gribskov, Michael
2005-06-22
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.
NASA Astrophysics Data System (ADS)
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
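As an illustration of the kind of data-parallel decomposition studied above, the sketch below scales a toy image by nearest-neighbour interpolation with the output rows divided among worker processes; it does not reproduce the paper's specific interpolation methods or Windows GUI.

```python
# Parallel nearest-neighbour image scaling: each worker computes a band of
# output rows from the full input image. The "image" is a toy array.
import numpy as np
from multiprocessing import Pool

def scale_band(args):
    """Nearest-neighbour scaling of output rows [r0, r1) of an (out_h, out_w) result."""
    image, r0, r1, out_h, out_w = args
    in_h, in_w = image.shape
    rows = np.arange(r0, r1) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows][:, cols]

if __name__ == "__main__":
    image = np.arange(12, dtype=np.uint8).reshape(3, 4)   # toy 3x4 "image"
    out_h, out_w, workers = 6, 8, 2
    bounds = np.linspace(0, out_h, workers + 1, dtype=int)
    tasks = [(image, b0, b1, out_h, out_w) for b0, b1 in zip(bounds[:-1], bounds[1:])]
    with Pool(workers) as pool:
        scaled = np.vstack(pool.map(scale_band, tasks))
    print(scaled.shape)   # (6, 8)
```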
Parallel Gaussian elimination of a block tridiagonal matrix using multiple microcomputers
NASA Technical Reports Server (NTRS)
Blech, Richard A.
1989-01-01
The solution of a block tridiagonal matrix using parallel processing is demonstrated. The multiprocessor system on which results were obtained and the software environment used to program that system are described. Theoretical partitioning and resource allocation for the Gaussian elimination method used to solve the matrix are discussed. The results obtained from running 1-, 2- and 3-processor versions of the block tridiagonal solver are presented. The Pascal source code for these solvers is given in the appendix and may be transportable to other shared memory parallel processors, provided that the synchronization routines are reproduced on the target system.
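The scalar analogue of the solver described above is the Thomas algorithm (Gaussian elimination specialised to tridiagonal systems); a block tridiagonal solver has the same two-sweep structure, with the scalar divisions replaced by small matrix solves. A hedged sketch:

```python
# Thomas algorithm: forward elimination followed by back substitution.
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 4x4 example with main diagonal 2 and off-diagonals -1; the solution is [1, 2, 3, 4].
print(thomas(a=[0, -1, -1, -1], b=[2, 2, 2, 2], c=[-1, -1, -1, 0], d=[0, 0, 0, 5]))
```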
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
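A small sketch of a predictor-corrector step of the general kind evaluated in such studies: an explicit Euler predictor followed by a trapezoidal corrector (Heun's method), applied here to a scalar test equation. This is illustrative only and not the paper's algorithm.

```python
# One predictor-corrector step applied repeatedly to dy/dt = -2y, y(0) = 1.
def heun_step(f, t, y, h):
    y_pred = y + h * f(t, y)                             # predictor (explicit Euler)
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))    # corrector (trapezoidal rule)

f = lambda t, y: -2.0 * y
t, y, h = 0.0, 1.0, 0.05
for _ in range(20):                                      # integrate to t = 1
    y = heun_step(f, t, y, h)
    t += h
print(y)   # compare with the exact solution exp(-2) ~= 0.1353
```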
Exact analytic solution for non-linear density fluctuation in a ΛCDM universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul; Gong, Jinn-Ouk, E-mail: jyoo@physik.uzh.ch, E-mail: jinn-ouk.gong@apctp.org
We derive the exact third-order analytic solution of the matter density fluctuation in the proper-time hypersurface in a ΛCDM universe, accounting for the explicit time-dependence and clarifying the relation to the initial condition. Furthermore, we compare our analytic solution to the previous calculation in the comoving gauge, and to the standard Newtonian perturbation theory by providing Fourier kernels for the relativistic effects. Our results provide an essential ingredient for a complete description of galaxy bias in the relativistic context.
Probing dark energy using convergence power spectrum and bi-spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinda, Bikash R., E-mail: bikash@ctp-jamia.res.in
Weak lensing convergence statistics is a powerful tool to probe dark energy. Dark energy plays an important role in structure formation, and its effects can be detected through the convergence power spectrum, bi-spectrum, etc. One of the most promising and simplest dark energy models is ΛCDM. However, it is worth investigating different dark energy models with an evolving dark energy equation of state. In this work, the detectability of different dark energy models relative to the ΛCDM model has been explored through the convergence power spectrum and bi-spectrum.
NASA Astrophysics Data System (ADS)
Gerke, Kirill; Vasilyev, Roman; Khirevich, Siarhei; Karsanina, Marina; Collins, Daniel; Korost, Dmitry; Mallants, Dirk
2015-04-01
In this contribution we introduce novel free software which solves the Stokes equation to obtain velocity fields for low Reynolds-number flows within externally generated 3D pore geometries. Provided with velocity fields, one can calculate permeability for known pressure gradient boundary conditions via Darcy's equation. Finite-difference schemes of 2nd and 4th order of accuracy are used together with an artificial compressibility method to iteratively converge to a steady-state solution of Stokes' equation. This numerical approach is much faster and less computationally demanding than the majority of open-source or commercial software packages employing other algorithms (finite elements/volumes, lattice Boltzmann, etc.). The software consists of two parts: 1) a pre- and post-processing graphical interface, and 2) a solver. The latter is efficiently parallelized to use any number of available cores (the speedup on 16 threads was up to 10-12, depending on hardware). Due to parallelization and memory optimization, our software can be used to obtain solutions for 300x300x300 voxel geometries on modern desktop PCs. The software was successfully verified by testing it against lattice Boltzmann simulations and analytical solutions. To illustrate the software's applicability to numerous problems in Earth Sciences, a number of case studies have been developed: 1) identifying the representative elementary volume for permeability determination within a sandstone sample, 2) derivation of permeability/hydraulic conductivity values for rock and soil samples and comparing those with experimentally obtained values, 3) revealing the influence of the amount of fine-textured material such as clay on the filtration properties of sandy soil. This work was partially supported by RSF grant 14-17-00658 (pore-scale modelling) and RFBR grants 13-04-00409-a and 13-05-01176-a.
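The permeability step described above follows Darcy's law once a steady-state velocity field is available; the sketch below illustrates that post-processing with placeholder numbers and does not reproduce the Stokes solver itself.

```python
# Darcy's law post-processing: k = mu * <u> * L / dP. All values are illustrative.
import numpy as np

mu = 1.0e-3            # dynamic viscosity of water, Pa*s
length = 100e-6        # sample length along the flow axis, m (100 voxels at 1 um)
delta_p = 100.0        # applied pressure drop, Pa

# Stand-in for the solver's x-velocity field (m/s); solid voxels would hold zeros.
u = np.full((100, 100, 100), 1.0e-4)
mean_u = u.mean()      # superficial (Darcy) velocity averaged over the whole domain

k = mu * mean_u * length / delta_p   # permeability, m^2
print(f"k = {k:.3e} m^2 (~{k / 9.87e-13:.2f} darcy)")
```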
NASA Technical Reports Server (NTRS)
2013-01-01
Topics include: Cloud Absorption Radiometer Autonomous Navigation System - CANS, Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis, Discrete Data Qualification System and Method Comprising Noise Series Fault Detection, Simple Laser Communications Terminal for Downlink from Earth Orbit at Rates Exceeding 10 Gb/s, Application Program Interface for the Orion Aerodynamics Database, Hyperspectral Imager-Tracker, Web Application Software for Ground Operations Planning Database (GOPDb) Management, Software Defined Radio with Parallelized Software Architecture, Compact Radar Transceiver with Included Calibration, Software Defined Radio with Parallelized Software Architecture, Phase Change Material Thermal Power Generator, The Thermal Hogan - A Means of Surviving the Lunar Night, Micromachined Active Magnetic Regenerator for Low-Temperature Magnetic Coolers, Nano-Ceramic Coated Plastics, Preparation of a Bimetal Using Mechanical Alloying for Environmental or Industrial Use, Phase Change Material for Temperature Control of Imager or Sounder on GOES Type Satellites in GEO, Dual-Compartment Inflatable Suitlock, Modular Connector Keying Concept, Genesis Ultrapure Water Megasonic Wafer Spin Cleaner, Piezoelectrically Initiated Pyrotechnic Igniter, Folding Elastic Thermal Surface - FETS, Multi-Pass Quadrupole Mass Analyzer, Lunar Sulfur Capture System, Environmental Qualification of a Single-Crystal Silicon Mirror for Spaceflight Use, Planar Superconducting Millimeter-Wave/Terahertz Channelizing Filter, Qualification of UHF Antenna for Extreme Martian Thermal Environments, Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project, ISS Live!, Space Operations Learning Center (SOLC) iPhone/iPad Application, Software to Compare NPP HDF5 Data Files, Planetary Data Systems (PDS) Imaging Node Atlas II, Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit, Translating MAPGEN to ASPEN for MER, Support Routines for In Situ Image Processing, and Semi-Supervised Eigenbasis Novelty Detection.
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Reconstructing evolutionary trees in parallel for massive sequences.
Zou, Quan; Wan, Shixiang; Zeng, Xiangxiang; Ma, Zhanshan Sam
2017-12-14
Building evolutionary trees for massive unaligned DNA sequences is challenging and crucial; reconstructing an evolutionary tree for ultra-large sequence sets is hard, and massive multiple sequence alignment is likewise challenging and time/space consuming. Hadoop and Spark have been developed recently, and they bring new opportunities for classical computational biology problems. In this paper, we tried to solve multiple sequence alignment and evolutionary reconstruction in parallel. HPTree, which is developed in this paper, can deal with big DNA sequence files quickly. It works well on files larger than 1 GB and achieves better performance than other evolutionary reconstruction tools. Users can use HPTree for reconstructing evolutionary trees on computer clusters or cloud platforms (e.g., Amazon Cloud). HPTree could help in population evolution research and metagenomics analysis. In this paper, we employ the Hadoop and Spark platforms and design an evolutionary tree reconstruction software tool for unaligned massive DNA sequences. Clustering and multiple sequence alignment are done in parallel. A neighbour-joining model was employed for building the evolutionary tree. We have released our software together with the source code at http://lab.malab.cn/soft/HPtree/ .
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
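The MPPA expresses its work farms with its own structural object model of processors and self-synchronising channels, which is not shown here. The following Python sketch only illustrates the work-farm pattern itself (one ordered input stream fanned out to a parallel set of identical workers and merged back into one output stream); the worker task and all names are invented for the example.

# Work-farm sketch: one input stream, a parallel set of identical workers, one
# ordered output stream. This illustrates the design pattern described for the
# MPPA's structural object programming model, not the MPPA platform itself.
from multiprocessing import Pool

def worker(block):
    # Stand-in for a per-block task such as compressing one video macroblock.
    return sum(block) / len(block)

def work_farm(blocks, n_workers=4):
    with Pool(processes=n_workers) as pool:
        # imap preserves stream order, mimicking the single ordered output stream.
        yield from pool.imap(worker, blocks)

if __name__ == "__main__":
    stream_in = [list(range(i, i + 8)) for i in range(0, 64, 8)]
    print(list(work_farm(stream_in)))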
Integrating the Apache Big Data Stack with HPC for Big Data
NASA Astrophysics Data System (ADS)
Fox, G. C.; Qiu, J.; Jha, S.
2014-12-01
There is perhaps a broad consensus on the important issues in practical parallel computing as applied to large-scale simulations; this consensus is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. The same is not yet true for data-intensive computing, even though commercial clouds devote far more resources to data analytics than supercomputers devote to simulations. We examine a sample of over 50 big data applications to identify the characteristics of data-intensive applications and to deduce the runtimes and architectures they need. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache Big Data Stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL (Scalable Parallel Interoperable Data Analytics Library), built on the system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas, including Polar Science.
Implementation and Testing of VLBI Software Correlation at the USNO
NASA Technical Reports Server (NTRS)
Fey, Alan; Ojha, Roopesh; Boboltz, Dave; Geiger, Nicole; Kingham, Kerry; Hall, David; Gaume, Ralph; Johnston, Ken
2010-01-01
The Washington Correlator (WACO) at the U.S. Naval Observatory (USNO) is a dedicated VLBI processor built on custom ASIC hardware. The WACO is now over 10 years old and is nearing the end of its expected lifetime. Plans for implementation and testing of software correlation at the USNO are currently being considered. The VLBI correlation process is, by its very nature, well suited to a parallelized computing environment. Commercial off-the-shelf computer hardware has advanced in processing power to the point where software correlation is now both economically and technologically feasible. The advantages of software correlation are manifold and include flexibility, scalability, and easy adaptability to changing environments and requirements. We discuss our experience with and plans for the use of software correlation at USNO, with emphasis on the DiFX software correlator.
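A production software correlator such as DiFX applies station delay models, fringe rotation and many other steps that are omitted here. As a toy illustration of why the FX correlation process parallelises so naturally (independent time segments and frequency channels), the sketch below cross-correlates two simulated station voltage streams with NumPy; the signal model and parameters are invented for the example.

# Toy single-baseline FX correlation: channelise each station's voltage stream
# with an FFT, cross-multiply, and accumulate per frequency channel. Each time
# segment is an independent unit of work, which is what makes the problem well
# suited to parallel and distributed hardware.
import numpy as np

def fx_correlate(x, y, nchan=256):
    nseg = min(len(x), len(y)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for s in range(nseg):                      # each segment is independent work
        X = np.fft.fft(x[s * nchan:(s + 1) * nchan])
        Y = np.fft.fft(y[s * nchan:(s + 1) * nchan])
        acc += X * np.conj(Y)                  # cross-power spectrum
    return acc / nseg

rng = np.random.default_rng(0)
common = rng.normal(size=4096)                 # signal seen by both stations
a = common + 0.5 * rng.normal(size=4096)       # station A: signal plus noise
b = common + 0.5 * rng.normal(size=4096)       # station B: signal plus noise
spectrum = fx_correlate(a, b)
print("mean cross-power amplitude:", np.abs(spectrum).mean())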
Barboni, Mirella Telles Salgueiro; Martins, Cristiane Maria Gomes; Nagy, Balázs Vince; Tsai, Tina; Damico, Francisco Max; da Costa, Marcelo Fernandes; de Cassia, Rita; Pavanello, M; Lourenço, Naila Cristina Vilaça; de Cerqueira, Antonia Maria Pereira; Zatz, Mayana; Kremers, Jan; Ventura, Dora Fix
2016-07-01
Visual information is processed in parallel pathways in the visual system. Parallel processing begins at the synapse between the photoreceptors and their postreceptoral neurons in the human retina. The integrity of this first neural connection is vital for normal visual processing downstream. Among the numerous elements necessary for proper functioning of this synaptic contact, the dystrophin proteins in the eye play an important role. Deficiency of muscle dystrophin causes Duchenne muscular dystrophy (DMD), an X-linked disease that impairs muscle function and reduces life expectancy. In DMD patients, the postreceptoral retinal mechanisms underlying scotopic and photopic vision and ON- and OFF-pathway responses are also altered. In this study, we recorded the electroretinogram (ERG) while preferentially activating either the (red-green) opponent pathway or the luminance pathway, and compared data from healthy participants (n = 16) with those of DMD patients (n = 10). The stimuli were heterochromatic sinusoidal modulations at a mean luminance of 200 cd/m². The recordings also allowed us to analyze ON and OFF cone-driven retinal responses. We found significant differences in 12-Hz response amplitudes and phases between controls and DMD patients: conditions with a large luminance content produced larger response amplitudes in DMD patients than in controls, whereas DMD patients' responses were smaller under purely chromatic modulation. The results suggest that dystrophin is required for the proper function of luminance and red-green cone-opponent mechanisms in the human retina.
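The abstract specifies only the 12-Hz frequency and the 200 cd/m² mean luminance of the heterochromatic stimuli. As a generic illustration of how the relative phase of red and green sinusoids trades luminance content against red-green chromatic content, the sketch below generates such modulations; the contrast value and the equal luminance split between the two primaries are arbitrary assumptions, not parameters from the study.

# Illustration of heterochromatic sinusoidal modulation: red and green primaries
# are modulated at the same temporal frequency, and their relative phase sets
# how much luminance versus red-green chromatic content the stimulus carries
# (in phase -> luminance modulation; counterphase -> chromatic modulation).
import numpy as np

def heterochromatic_stimulus(phase_deg, freq_hz=12.0, mean_lum=200.0,
                             contrast=0.3, duration_s=1.0, fs=1000.0):
    t = np.arange(0.0, duration_s, 1.0 / fs)
    red = 0.5 * mean_lum * (1 + contrast * np.sin(2 * np.pi * freq_hz * t))
    green = 0.5 * mean_lum * (1 + contrast * np.sin(
        2 * np.pi * freq_hz * t + np.deg2rad(phase_deg)))
    luminance = red + green                    # summed luminance signal
    chromatic = red - green                    # crude red-green opponent signal
    return luminance, chromatic

lum_in_phase, _ = heterochromatic_stimulus(phase_deg=0)
lum_counter, _ = heterochromatic_stimulus(phase_deg=180)
print("luminance modulation depth, in-phase:", np.ptp(lum_in_phase) / 2)
print("luminance modulation depth, counterphase:", np.ptp(lum_counter) / 2)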
Parallels in Computer-Aided Design Framework and Software Development Environment Efforts.
1992-05-01
design kits, and tool and design management frameworks. Also, books about software engineering environments [Long 91] and electronic design...tool integration [Zarrella 90], and agreement upon a universal design automation framework, such as the CAD Framework Initiative (CFI) [Malasky 91...ments: identification, control, status accounting, and audit and review. The paper by Dart extracts 15 CM concepts from existing SDEs and tools
Parallel Worlds: Agile and Waterfall Differences and Similarities
2013-10-01
development model, and it is deliberately shorter than the Agile Overview as most readers are assumed to be from the Traditional World. For a more in...process of DODI 5000 does not forbid the iterative incremental software development model with frequent end-user interaction, it requires heroics on...added). Today, many of the DOD's large IT programs therefore continue to adopt program structures and software development models closely
CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 6
2006-06-01
improvement methods. The total volume of projects studied now exceeds 12,000. Software Productivity Research, LLC Phone: (877) 570-5459 (973) 273-5829...While performing quality consulting, Olson has helped organizations measurably improve quality and productivity, save millions of dollars in costs of...This article draws parallels between the outrageous events on the Jerry Springer Show and problems faced by process improvement programs. by Paul
Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2002-01-01
Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.
Pellejero-Ibanez, Marco; Chuang, Chia-Hsun; Rubino-Martin, J. A.; ...
2016-03-28
Here, we develop a new methodology called double-probe analysis with the aim of minimizing informative priors in the estimation of cosmological parameters. We extract dark-energy-model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and the Planck cosmic microwave background (CMB) measurement. We measure the mean values and covariance matrix of {R, l_a, Ω_b h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)}, which give an efficient summary of the Planck data and the 2-point statistics from the BOSS galaxy sample, where R = √(Ω_m H_0²) and l_a = π r(z*)/r_s(z*); z* is the redshift at the last scattering surface, and r(z*) and r_s(z*) denote our comoving distance to z* and the sound horizon at z*, respectively. The advantage of this method is that we do not need to put informative priors on the cosmological parameters that galaxy clustering is not able to constrain well, i.e. Ω_b h² and n_s. Using our double-probe results, we obtain Ω_m = 0.304 ± 0.009, H_0 = 68.2 ± 0.7, and σ_8 = 0.806 ± 0.014 assuming ΛCDM; and Ω_k = 0.002 ± 0.003 and w = -1.00 ± 0.07 assuming owCDM. The results show no tension with the flat ΛCDM cosmological paradigm. By comparing with full-likelihood analyses with fixed dark energy models, we demonstrate that the double-probe method provides robust cosmological parameter constraints which can be conveniently used to study dark energy models. We extend our study to measure the sum of neutrino masses and obtain Σm_ν < 0.10/0.22 (68%/95%) assuming ΛCDM and Σm_ν < 0.26/0.52 (68%/95%) assuming wCDM. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS.
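As a purely numerical illustration of the distance summary statistics quoted above, the sketch below evaluates the acoustic scale l_a = π r(z*)/r_s(z*) for a flat ΛCDM background. The decoupling redshift z* = 1090 and sound horizon r_s(z*) = 144.6 Mpc are assumed fiducial values chosen only for the example, not numbers taken from the paper, and radiation is neglected in the expansion rate.

# Sketch: acoustic scale l_a = pi * r(z*) / r_s(z*) for an assumed flat LCDM
# background. z* and r_s(z*) are fiducial stand-in values for illustration.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                      # speed of light in km/s

def comoving_distance(z, h0=68.2, omega_m=0.304):
    """Flat LCDM comoving distance r(z) in Mpc (radiation neglected)."""
    omega_l = 1.0 - omega_m
    integrand = lambda zz: 1.0 / np.sqrt(omega_m * (1 + zz) ** 3 + omega_l)
    integral, _ = quad(integrand, 0.0, z)
    return C_KM_S / h0 * integral

z_star, r_s = 1090.0, 144.6              # assumed decoupling redshift and sound horizon
r = comoving_distance(z_star)
l_a = np.pi * r / r_s
print(f"r(z*) = {r:.0f} Mpc, l_a = {l_a:.1f}")   # l_a comes out near ~300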
Clustering of galaxies in a hierarchical universe - I. Methods and results at z=0
NASA Astrophysics Data System (ADS)
Kauffmann, Guinevere; Colberg, Jorg M.; Diaferio, Antonaldo; White, Simon D. M.
1999-02-01
We introduce a new technique for following the formation and evolution of galaxies in cosmological N-body simulations. Dissipationless simulations are used to track the formation and merging of dark matter haloes as a function of redshift. Simple prescriptions, taken directly from semi-analytic models of galaxy formation, are adopted for gas cooling, star formation, supernova feedback and the merging of galaxies within the haloes. This scheme enables us to explore the clustering properties of galaxies, and to investigate how selection by luminosity, colour or type influences the results. In this paper we study the properties of the galaxy distribution at z = 0. These include B- and K-band luminosity functions, two-point correlation functions, pairwise peculiar velocities, cluster mass-to-light ratios, B-V colours, and star formation rates. We focus on two variants of a cold dark matter (CDM) cosmology: a high-density (Ω = 1) model with shape parameter Γ = 0.21 (τCDM), and a low-density model with Ω = 0.3 and Λ = 0.7 (ΛCDM). Both models are normalized to reproduce the I-band Tully-Fisher relation of Giovanelli et al. near a circular velocity of 220 km s^-1. Our results depend strongly both on this normalization and on the adopted prescriptions for star formation and feedback. Very different assumptions are required to obtain an acceptable model in the two cases. For τCDM, efficient feedback is required to suppress the growth of galaxies, particularly in low-mass field haloes. Without it, there are too many galaxies and the correlation function exhibits a strong turnover on scales below 1 Mpc. For ΛCDM, feedback must be weaker, otherwise too few L_* galaxies are produced and the correlation function is too steep. Although neither model is perfect, both come close to reproducing most of the data. Given the uncertainties in modelling some of the critical physical processes, we conclude that it is not yet possible to draw firm conclusions about the values of cosmological parameters from studies of this kind. Further observational work on global star formation and feedback effects is required to narrow the range of possibilities.
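The paper's semi-analytic machinery is far more involved than can be sketched here. As a toy illustration of the two-point correlation function statistic it uses to characterise galaxy clustering, the sketch below applies the simple DD/RR - 1 estimator to points in a periodic box; the box size, point counts and bins are arbitrary, and a uniform random "catalogue" is used so the expected result is close to zero.

# Toy estimate of a two-point correlation function xi(r) in a periodic box,
# using the simple DD/RR - 1 estimator. Illustrative only; not the paper's
# semi-analytic galaxy-formation pipeline.
import numpy as np

def pair_counts(pos, box, bins):
    """Histogram of pair separations using periodic (minimum-image) distances."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                      # minimum-image convention
    r = np.sqrt((d ** 2).sum(axis=-1))
    iu = np.triu_indices(len(pos), k=1)               # count each pair once
    return np.histogram(r[iu], bins=bins)[0]

rng = np.random.default_rng(1)
box, n, bins = 100.0, 500, np.linspace(1.0, 20.0, 11)
galaxies = rng.uniform(0, box, size=(n, 3))           # stand-in galaxy catalogue
randoms = rng.uniform(0, box, size=(n, 3))
dd = pair_counts(galaxies, box, bins)
rr = pair_counts(randoms, box, bins)
xi = dd / np.maximum(rr, 1) - 1.0                     # near 0 for an unclustered catalogue
print(np.round(xi, 2))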