Savundranayagam, Marie Y; Moore-Nielsen, Kelsey
2015-10-01
There are many recommended language-based strategies for effective communication with persons with dementia. What is unknown is whether effective language-based strategies are also person centered. Accordingly, the objective of this study was to examine whether language-based strategies for effective communication with persons with dementia overlapped with the following indicators of person-centered communication: recognition, negotiation, facilitation, and validation. Conversations (N = 46) between staff-resident dyads were audio-recorded during routine care tasks over 12 weeks. Staff utterances were coded twice, using 21 language-based categories and 4 person-centered categories. Of the 5,800 utterances transcribed, 2,409 contained no indicators, 1,699 were coded as either language-based or person-centered, and 1,692 were coded as both (overlapping). For recognition, 26% of utterances were greetings, 21% were affirmations, 13% were questions (yes/no and open-ended), and 15% involved rephrasing. Questions (yes/no, choice, and open-ended) comprised 74% of utterances coded as negotiation. A similar pattern was observed for facilitation, where 51% of utterances coded as facilitation were yes/no questions, open-ended questions, and choice questions; however, 21% of facilitative utterances were affirmations and 13% involved rephrasing. Finally, 89% of utterances coded as validation were affirmations. The findings identify specific language-based strategies that support person-centered communication. However, for each person-centered indicator, only between one and four of the 21 possible language-based strategies overlapped with at least 10% of its coded utterances. This finding suggests that staff need training to use more diverse language strategies that support the personhood of residents with dementia.
Full-f version of GENE for turbulence in open-field-line systems
NASA Astrophysics Data System (ADS)
Pan, Q.; Told, D.; Shi, E. L.; Hammett, G. W.; Jenko, F.
2018-06-01
Unique properties of plasmas in the tokamak edge, such as large amplitude fluctuations and plasma-wall interactions in the open-field-line regions, require major modifications of existing gyrokinetic codes originally designed for simulating core turbulence. To this end, the global version of the 3D2V gyrokinetic code GENE, so far employing a δf-splitting technique, is extended to simulate electrostatic turbulence in straight open-field-line systems. The major extensions are the inclusion of the velocity-space nonlinearity, the development of a conducting-sheath boundary, and the implementation of the Lenard-Bernstein collision operator. With these developments, the code can be run as a full-f code and can handle particle loss to and reflection from the wall. The extended code is applied to modeling turbulence in the Large Plasma Device (LAPD), with a reduced mass ratio and a much lower collisionality. Similar to turbulence in a tokamak scrape-off layer, LAPD turbulence involves collisions, parallel streaming, cross-field turbulent transport with steep profiles, and particle loss at the parallel boundary.
ERIC Educational Resources Information Center
Panettieri, Joseph C.
2007-01-01
This article discusses open source projects which may free universities from expensive, rigid commercial software. But will the rewards outweigh the potential risks? The Kuali Project involves multiple universities writing and sharing code for their financial and operational systems. Another, the Sakai Project, is a community source platform for…
PlasmaPy: beginning a community developed Python package for plasma physics
NASA Astrophysics Data System (ADS)
Murphy, Nicholas A.; Huang, Yi-Min; PlasmaPy Collaboration
2016-10-01
In recent years, researchers in several disciplines have collaborated on community-developed open source Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data analysis and visualization, and educational tools. We propose that our community begin the development of PlasmaPy: a new open source core Python package for plasma physics. PlasmaPy could include commonly used functions in plasma physics, easy-to-use plasma simulation codes, Grad-Shafranov solvers, eigenmode solvers, and tools to analyze both simulations and experiments. The development will include modern programming practices such as version control, embedding documentation in the code, unit tests, and avoiding premature optimization. We will describe early code development on PlasmaPy, and discuss plans moving forward. The success of PlasmaPy depends on active community involvement and a welcoming and inclusive environment, so anyone interested in joining this collaboration should contact the authors.
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast, using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics makes it possible to simulate real engineering problems that involve complex geometries at high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data from a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, spurious reflections from the wavemaker are removed using an active wave absorption technique.
Involvement of Family Members and Professionals in Older Women's Post-Fall Decision Making.
Bergeron, Caroline D; Hilfinger Messias, DeAnne K; Friedman, Daniela B; Spencer, S Melinda; Miller, Susan C
2018-03-01
This exploratory, descriptive study examined involvement of family members and professionals in older women's post-fall decision making. We conducted semistructured interviews with 17 older women who had recently fallen and 11 individuals these women identified as being engaged in their post-fall decision-making processes. Qualitative data analysis involved open and axial coding and development of themes. After experiencing a fall, these older women's openness to others' opinions and advice; their assessments of types and credibility of potential information sources; and the communication practices they established with these sources influenced how they accessed, accepted, or rejected information from family members and professionals. Increased awareness of the involvement of others in post-fall decision making could enhance communication with older women who fall. Developing and implementing practical strategies to help family members and professionals initiate and engage in conversations about falls and their consequences could lead to more open decision making and improved post-fall quality of life among older women.
Implementing Shared Memory Parallelism in MCBEND
NASA Astrophysics Data System (ADS)
Bird, Adam; Long, David; Dobson, Geoff
2017-09-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
PlasmaPy: initial development of a Python package for plasma physics
NASA Astrophysics Data System (ADS)
Murphy, Nicholas; Leonard, Andrew J.; Stańczak, Dominik; Haggerty, Colby C.; Parashar, Tulasi N.; Huang, Yi-Min; PlasmaPy Community
2017-10-01
We report on initial development of PlasmaPy: an open source community-driven Python package for plasma physics. PlasmaPy seeks to provide core functionality that is needed for the formation of a fully open source Python ecosystem for plasma physics. PlasmaPy prioritizes code readability, consistency, and maintainability while using best practices for scientific computing such as version control, continuous integration testing, embedding documentation in code, and code review. We discuss our current and planned capabilities, including features presently under development. The development roadmap includes features such as fluid and particle simulation capabilities, a Grad-Shafranov solver, a dispersion relation solver, atomic data retrieval methods, and tools to analyze simulations and experiments. We describe several ways to contribute to PlasmaPy. PlasmaPy has a code of conduct and is being developed under a BSD license, with a version 0.1 release planned for 2018. The success of PlasmaPy depends on active community involvement, so anyone interested in contributing to this project should contact the authors. This work was partially supported by the U.S. Department of Energy.
Doctoral Students' Identity Positioning in Networked Learning Environments
ERIC Educational Resources Information Center
Koole, Marguerite; Stack, Sara
2016-01-01
In this study, the authors explored identity positioning as perceived by doctoral learners in online, networked-learning environments. The study examined two distance doctoral programs at a Canadian university. It was a qualitative study based on methodologies involving open coding and discourse analysis. The social positioning cycle, based on…
Stodden, Victoria; Guo, Peixuan; Ma, Zhaokun
2013-01-01
Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher and find higher impact journals more likely to have open data and code policies and scientific societies more likely to have open data and code policies than commercial publishers. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one-year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.
The neuromodulator of exploration: A unifying theory of the role of dopamine in personality
DeYoung, Colin G.
2013-01-01
The neuromodulator dopamine is centrally involved in reward, approach behavior, exploration, and various aspects of cognition. Variations in dopaminergic function appear to be associated with variations in personality, but exactly which traits are influenced by dopamine remains an open question. This paper proposes a theory of the role of dopamine in personality that organizes and explains the diversity of findings, utilizing the division of the dopaminergic system into value coding and salience coding neurons (Bromberg-Martin et al., 2010). The value coding system is proposed to be related primarily to Extraversion and the salience coding system to Openness/Intellect. Global levels of dopamine influence the higher order personality factor, Plasticity, which comprises the shared variance of Extraversion and Openness/Intellect. All other traits related to dopamine are linked to Plasticity or its subtraits. The general function of dopamine is to promote exploration, by facilitating engagement with cues of specific reward (value) and cues of the reward value of information (salience). This theory constitutes an extension of the entropy model of uncertainty (EMU; Hirsh et al., 2012), enabling EMU to account for the fact that uncertainty is an innate incentive reward as well as an innate threat. The theory accounts for the association of dopamine with traits ranging from sensation and novelty seeking, to impulsivity and aggression, to achievement striving, creativity, and cognitive abilities, to the overinclusive thinking characteristic of schizotypy. PMID:24294198
Freeing Worldview's development process: Open source everything!
NASA Astrophysics Data System (ADS)
Gunnoe, T.
2016-12-01
Freeing your code and your project are important steps for creating an inviting environment for collaboration, with the added side effect of keeping a good relationship with your users. NASA Worldview's codebase was released with the open source NOSA (NASA Open Source Agreement) license in 2014, but this is only the first step. We also have to free our ideas, empower our users by involving them in the development process, and open channels that lead to the creation of a community project. There are many highly successful examples of Free and Open Source Software (FOSS) projects of which we can take note: the Linux kernel, Debian, GNOME, etc. These projects owe much of their success to having a passionate mix of developers/users with a great community and a common goal in mind. This presentation will describe the scope of this openness and how Worldview plans to move forward with a more community-inclusive approach.
Deterministic Design Optimization of Structures in OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Coroneos, Rula M.; Pai, Shantaram S.
2012-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers, along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a code developed at NASA GRC. The reliability and efficiency of the OpenMDAO framework were assessed and are reported here.
Genetic coding and gene expression - new Quadruplet genetic coding model
NASA Astrophysics Data System (ADS)
Shankar Singh, Rama
2012-07-01
The successful completion of the Human Genome Project has opened the door not only to developing personalized medicine and cures for genetic diseases, but possibly also to answering the complex and difficult question of the origin of life; it may make the 21st century a century of the biological sciences as well. According to the central dogma of biology, genetic codons, in conjunction with tRNA, play a key role in translating RNA bases into the sequence of amino acids of a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation processes remain the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its potential applications, including its relevance to the origin of life, will be presented.
ERIC Educational Resources Information Center
Aulls, Mark W.; Ibrahim, Ahmed
2012-01-01
This multiple case study examined pre-service teachers' perceptions of effective post-secondary instruction. Pre-service teachers were asked to write essays describing an effective teacher of their choice. Twenty-one essays were randomly selected. Data analysis involved open coding of each essay, content analysis of each essay using Anderson and…
Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures
Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C
2009-01-01
Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment, involving digital band-pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow on a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four dual-core AMD Opteron shared memory processors to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration achieved a speedup factor of 46.66 compared with the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 were achieved during the reconstruction process and oximetry computation, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and is presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through a local intranet (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power for obtaining biophysical parameters from 3D EPR oximetric imaging, almost in real time.
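To make the parallelization pattern concrete, the sketch below shows the kind of loop-level OpenMP parallelism the abstract describes: many independent 1D signals filtered concurrently on a shared-memory machine. It is a minimal sketch under stated assumptions; the 3-tap smoother and all names are illustrative stand-ins, not the paper's actual band-pass filter or code.

```cpp
// Hedged sketch: loop-level OpenMP parallelism over independent projections.
// The simple 3-tap smoother stands in for the digital band-pass filter used
// in the EPR pipeline; function and variable names are hypothetical.
#include <omp.h>
#include <vector>

// Filter one signal (boundary samples left at zero for brevity).
static void filterSignal(std::vector<double>& s) {
    std::vector<double> out(s.size());
    for (std::size_t i = 1; i + 1 < s.size(); ++i)
        out[i] = 0.25 * s[i - 1] + 0.5 * s[i] + 0.25 * s[i + 1];
    s.swap(out);
}

void filterAllProjections(std::vector<std::vector<double>>& projections) {
    // Each projection is independent, so the loop parallelizes trivially;
    // this is the pattern that yields large speedups on shared-memory CPUs.
    #pragma omp parallel for schedule(static)
    for (long p = 0; p < static_cast<long>(projections.size()); ++p)
        filterSignal(projections[p]);
}
```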
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Pengcheng; Mcclure, Mark; Shiozawa, Sogo
A series of experiments performed at the Fenton Hill hot dry rock site after stage 2 drilling of the Phase I reservoir provided intriguing field observations of the reservoir's responses to injection and venting under various conditions. Two teams participating in the US DOE Geothermal Technologies Office (GTO)'s Code Comparison Study (CCS) used different numerical codes to model these five experiments with the objective of inferring the hydraulic stimulation mechanism involved. The codes used by the two teams are based on different numerical principles, and the assumptions made were also different, due to intrinsic limitations in the codes and the modelers' personal interpretations of the field observations. Both sets of models were able to reproduce the most important field observations, and both found that it was the combination of the vertical gradient of the fracture opening pressure, injection volume, and the use/absence of proppant that yielded the different outcomes of the five experiments.
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
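As a rough illustration of the descriptive, directive-based style this talk discusses, the sketch below annotates a loop with OpenACC so the compiler decides how to map it onto the target machine. This is a sketch under assumptions: the toy axpy kernel and its names are illustrative, not code from the LQCD application.

```cpp
// Hedged sketch of the OpenACC style: the same annotated loop can compile
// for multi-core CPUs or GPUs without source changes. The axpy kernel is a
// stand-in for the far more complex LQCD kernels.
#include <cstddef>

void axpy(std::size_t n, double a, const double* x, double* y) {
    // The directive describes available parallelism and data movement; the
    // compiler chooses how to map gangs/workers/vectors onto the hardware.
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```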
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
2006-11-01
…engines will involve a family of common components, consisting of a real-time operating system and partitioned application software (AS)…The system will employ a standard hardware and software architecture, consisting of a real-time operating system and partitioned application software…Inputs (enables large cost reduction)…Software: FAA-certified auto code, commercial real-time operating system…
2015-06-01
…abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010]…relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with…Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional "Big Up-Front…
Locking Down the Software Development Environment
2014-12-01
OpenSSL code [13]. The OpenSSL software is, as the name implies, open source, a result of many developers coding beginning in 1998 using the C...programming language to build crypto services. OpenSSL is used widely both on the Internet and in firmware [13], further delaying the ability of many
Using Quick Response Codes in the Classroom: Quality Outcomes.
Zurmehly, Joyce; Adams, Kellie
2017-10-01
With smart device technology emerging, educators are challenged with redesigning teaching strategies using technology to allow students to participate dynamically and provide immediate answers. To facilitate integration of technology and to actively engage students, quick response codes were included in a medical surgical lecture. Quick response codes are two-dimensional square patterns that enable the coding or storage of more than 7000 characters and that can be accessed via a quick response code scanning application. The aim of this quasi-experimental study was to explore quick response code use in a lecture and measure students' satisfaction (met expectations, increased interest, helped understand, and provided practice and prompt feedback) and engagement (liked most, liked least, wanted changed, and kept involved), assessed using an investigator-developed instrument. Although there was no statistically significant correlation between quick response code use and examination scores, satisfaction scores were high, and there was a small yet positive association between how students perceived their learning with quick response codes and overall examination scores. Furthermore, on open-ended survey questions, students responded that they were satisfied with the use of quick response codes, appreciated the immediate feedback, and planned to use them in the clinical setting. Quick response codes offer a way to integrate technology into the classroom to provide students with instant positive feedback.
What makes computational open source software libraries successful?
NASA Astrophysics Data System (ADS)
Bangerth, Wolfgang; Heister, Timo
2013-01-01
Software is the backbone of scientific computing. Yet, while we regularly publish detailed accounts about the results of scientific software, and while there is a general sense of which numerical methods work well, our community is largely unaware of best practices in writing the large-scale, open source scientific software upon which our discipline rests. This is particularly apparent in the commonly held view that writing successful software packages is largely the result of simply ‘being a good programmer’ when in fact there are many other factors involved, for example the social skill of community building. In this paper, we consider what we have found to be the necessary ingredients for successful scientific software projects and, in particular, for software libraries upon which the vast majority of scientific codes are built today. In particular, we discuss the roles of code, documentation, communities, project management and licenses. We also briefly comment on the impact on academic careers of engaging in software projects.
MicroRNAs in cancer therapeutics: "from the bench to the bedside".
Monroig-Bosque, Paloma del C; Rivera, Carlos A; Calin, George A
2015-01-01
MicroRNAs (miRNAs) are non-coding RNA transcripts that regulate physiological processes by directly targeting proteins. Their involvement in research has been robust, and evidence of their regulatory functions has earned them the title of master regulators of the human genome. In cancer, they are considered important therapeutic agents, because their aberrant expression contributes to disease development, progression, metastasis, therapeutic response and patient overall survival. This has driven the biomedical sciences to invest thoroughly in developing and exploiting miRNA-based therapeutics. Herein we highlight relevant ongoing/open clinical trials involving miRNAs and cancer.
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
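To make the last point concrete, the sketch below shows one common way to combine the C preprocessor with OpenCL's JIT compilation, as the abstract describes: a single kernel source specialized per platform through build options. This is a hedged illustration; the kernel, the VEC_WIDTH option, and the preferVectors flag are hypothetical, not code from this paper.

```cpp
// Hedged sketch: one kernel body, specialized at JIT time via preprocessor
// options passed to clBuildProgram (e.g. "-DVEC_WIDTH=4" on hardware that
// prefers explicit vectors). The embedded string is OpenCL C.
static const char* kScaleKernel = R"CLC(
#ifndef VEC_WIDTH
#define VEC_WIDTH 1
#endif
#if VEC_WIDTH == 4
__kernel void scale(__global float4* data, float a) {
    size_t i = get_global_id(0);
    data[i] *= a;               // explicit 4-wide vector operation
}
#else
__kernel void scale(__global float* data, float a) {
    size_t i = get_global_id(0);
    data[i] *= a;               // scalar form; compiler may auto-vectorize
}
#endif
)CLC";
// Host side (sketch): pass the option string when JIT-compiling, e.g.
//   clBuildProgram(program, 1, &device,
//                  preferVectors ? "-DVEC_WIDTH=4" : "", nullptr, nullptr);
```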
Open Rotor Noise Prediction Methods at NASA Langley- A Technology Review
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, Mark H.; Tinetti, Ana F.; Nark, Douglas M.
2009-01-01
Open rotors are once again under consideration for propulsion of future airliners because of their high efficiency. The noise generated by these propulsion systems must meet today's stringent noise standards to reduce community impact. In this paper we review the open rotor noise prediction methods available at NASA Langley. We discuss three codes called ASSPIN (Advanced Subsonic-Supersonic Propeller Induced Noise), FW-Hpds (Ffowcs Williams-Hawkings with penetrable data surface) and the FSC (Fast Scattering Code). The first two codes work in the time domain and the third is a frequency-domain code. The capabilities of these codes and the input data requirements as well as the output data are presented. Plans for further improvements of these codes are discussed. In particular, a method based on equivalent sources is outlined to eliminate spurious signals in the FW-Hpds code.
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.
Padula, William V; McQueen, Robert Brett; Pronovost, Peter J
2017-11-01
The Second Panel on Cost-Effectiveness in Health and Medicine's recommendations for the conduct, methodological practices, and reporting of cost-effectiveness analyses leave a number of questions unanswered with respect to implementing a transparent, open source code interface for economic models. Making economic model source code open could be positive and progressive for the field; however, several unintended consequences of this system should be considered before its complete implementation. First, there is the concern regarding the intellectual property rights that modelers have to their analyses. Second, open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system for open source code such that the model originators maintain control of the code's use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming about teaching cost-effectiveness analysis in medical and health services education, so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These unintended consequences need to be fully considered before the field moves into an era of model transparency with open source code.
ANNA: A Convolutional Neural Network Code for Spectroscopic Analysis
NASA Astrophysics Data System (ADS)
Lee-Brown, Donald; Anthony-Twarog, Barbara J.; Twarog, Bruce A.
2018-01-01
We present ANNA, a Python-based convolutional neural network code for the automated analysis of stellar spectra. ANNA provides a flexible framework that allows atmospheric parameters such as temperature and metallicity to be determined with accuracies comparable to those of established but less efficient techniques. ANNA performs its parameterization extremely quickly; typically several thousand spectra can be analyzed in less than a second. Additionally, the code incorporates features which greatly speed up the training process necessary for the neural network to measure spectra accurately, resulting in a tool that can easily be run on a single desktop or laptop computer. Thus, ANNA is useful in an era when spectrographs increasingly have the capability to collect dozens to hundreds of spectra each night. This talk will cover the basic features included in ANNA and demonstrate its performance in two use cases: an open cluster abundance analysis involving several hundred spectra, and a metal-rich field star study. Applicability of the code to large survey datasets will also be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results from the NESTLE code (USA), performed in the geometry and with the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These kinetics computations were performed in two formulations, both developed in collaboration with NIKIET. The first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole, with imitation of control and protection system (CPS) control movement in a core.
openQ*D simulation code for QCD+QED
NASA Astrophysics Data System (ADS)
Campos, Isabel; Fritzsch, Patrick; Hansen, Martin; Krstić Marinković, Marina; Patella, Agostino; Ramos, Alberto; Tantalo, Nazario
2018-03-01
The openQ*D code for the simulation of QCD+QED with C* boundary conditions is presented. This code is based on openQCD-1.6, from which it inherits the core features that ensure its efficiency: the locally-deflated SAP-preconditioned GCR solver, the twisted-mass frequency splitting of the fermion action, the multilevel integrator, the 4th order OMF integrator, the SSE/AVX intrinsics, etc. The photon field is treated as fully dynamical and C* boundary conditions can be chosen in the spatial directions. We discuss the main features of openQ*D, and we show basic test results and performance analysis. An alpha version of this code is publicly available and can be downloaded from http://rcstar.web.cern.ch/.
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
mdFoam+: Advanced molecular dynamics in OpenFOAM
NASA Astrophysics Data System (ADS)
Longshaw, S. M.; Borg, M. K.; Ramisetti, S. B.; Zhang, J.; Lockerby, D. A.; Emerson, D. R.; Reese, J. M.
2018-03-01
This paper introduces mdFoam+, which is an MPI parallelised molecular dynamics (MD) solver implemented entirely within the OpenFOAM software framework. It is open-source and released under the same GNU General Public License (GPL) as OpenFOAM. The source code is released as a publicly open software repository that includes detailed documentation and tutorial cases. Since mdFoam+ is designed entirely within the OpenFOAM C++ object-oriented framework, it inherits a number of key features. The code is designed for extensibility and flexibility, so it is aimed first and foremost as an MD research tool, in which new models and test cases can be developed and tested rapidly. Implementing mdFoam+ in OpenFOAM also enables easier development of hybrid methods that couple MD with continuum-based solvers. Setting up MD cases follows the standard OpenFOAM format, as mdFoam+ also relies upon the OpenFOAM dictionary-based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of an MD simulation is not typical of most OpenFOAM applications. Results show that mdFoam+ compares well to another well-known MD code (e.g. LAMMPS) in terms of benchmark problems, although it also has additional functionality that does not exist in other open-source MD codes.
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Regulated expression of the lncRNA TERRA and its impact on telomere biology.
Oliva-Rico, Diego; Herrera, Luis A
2017-10-01
The telomere protects against genomic instability by minimizing the accelerated end resection of the genetic material, a phenomenon that results in severe chromosome instability that could favor the transformation of a cell by enabling the emergence of tumor-promoting mutations. Some mechanisms that avoid this fate, such as capping and loop formation, have been very well characterized; however, telomeric non-coding transcripts, such as long non-coding RNAs (lncRNAs), should also be considered in this context because they play roles in the organization of telomere dynamics, involving processes such as replication, degradation, extension, and heterochromatin stabilization. Although the mechanism through which the expression of telomeric transcripts regulates telomere dynamics is not yet clear, a non-coding RNA component opens the research options in telomere biology and the impact that it can have on telomere-associated diseases such as cancer.
SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data.
Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A
2015-01-01
There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation process is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA's applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA.
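To illustrate the shortlisting idea in the spirit of the n-gram matching described above, the sketch below scores a raw data value against a candidate ontology term by character-trigram overlap (Dice coefficient). It is a minimal sketch under assumptions: SORTA's real pipeline (Lucene indexing, learning from curator choices) is considerably richer, and all names here are hypothetical.

```cpp
// Hedged sketch: rank candidate codes by character-trigram similarity.
#include <algorithm>
#include <iterator>
#include <set>
#include <string>
#include <vector>

// Extract the set of character trigrams of a padded string.
static std::set<std::string> trigrams(const std::string& s) {
    std::set<std::string> grams;
    const std::string padded = "  " + s + "  ";   // pad so edges form grams
    for (std::size_t i = 0; i + 3 <= padded.size(); ++i)
        grams.insert(padded.substr(i, 3));
    return grams;
}

// Dice coefficient in [0,1]: 2 * |common| / (|A| + |B|).
double ngramScore(const std::string& value, const std::string& term) {
    const std::set<std::string> a = trigrams(value), b = trigrams(term);
    std::vector<std::string> common;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(common));
    if (a.empty() && b.empty()) return 0.0;
    return 2.0 * common.size() / static_cast<double>(a.size() + b.size());
}
```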
Runtime Detection of C-Style Errors in UPC Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirkelbauer, P; Liao, C; Panas, T
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng
2015-01-01
For vehicle positioning with Global Navigation Satellite Systems (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and the code phase measurement accuracy can be largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the accuracy of the pseudo range derived from the code phase measurement. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measurement errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulations and real tests confirm that the pseudo range and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683
Automatic Coding of Short Text Responses via Clustering in Educational Assessment
ERIC Educational Resources Information Center
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank
2016-01-01
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
NASA Astrophysics Data System (ADS)
Couvreur, A.
2009-05-01
The theory of algebraic-geometric codes was developed in the early 1980s following a paper by V. D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first one, called "functional", uses rational functions on X, and the second one, called "differential", involves rational 1-forms on this curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction to algebraic varieties of arbitrary dimension was given by Y. Manin in an article of 1984. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. Afterwards, we study the properties of these codes and particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears. Indeed, while in the case of curves a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes which are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that answers to some open problems "à la Bertini" could give very interesting information on the parameters of these codes.
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and an associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test it with benchmark codes and a cloud modeling code.
Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.
Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile
2016-01-01
This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
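To make the evaluation loop concrete, here is a toy re-creation under stated assumptions: a trivial word-overlap scorer stands in for the real full-text engines, and the three ICD-10 entries are illustrative only.

```python
# Toy version of the study's loop: grow word combinations until the
# desired code ranks first. The scorer and code texts are assumptions.
from itertools import combinations

icd10 = {
    "J45": "asthma",
    "J45.0": "predominantly allergic asthma",
    "J46": "status asthmaticus",
}

def score(query_words, text):
    """Crude relevance: count query words present in the code text."""
    words = text.split()
    return sum(w in words for w in query_words)

def min_words_to_match(code):
    """Smallest word combination that ranks `code` first."""
    words = icd10[code].split()
    for k in range(1, len(words) + 1):
        for combo in combinations(words, k):
            best = max(icd10, key=lambda c: score(combo, icd10[c]))
            if best == code and score(combo, icd10[code]) > 0:
                return combo
    return None

print(min_words_to_match("J45.0"))   # ('predominantly',) suffices here
```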
SolTrace | Concentrating Solar Power | NREL
SolTrace is available as an NREL packaged distribution or from source code at the SolTrace open source project website. The code uses Monte Carlo ray-tracing methodology.
Scalar collapse in AdS with an OpenCL open source code
NASA Astrophysics Data System (ADS)
Liebling, Steven L.; Khanna, Gaurav
2017-10-01
We study the spherically symmetric collapse of a scalar field in anti-de Sitter spacetime using a newly constructed, open-source code which parallelizes over heterogeneous architectures using the open standard OpenCL. An open question for this scenario concerns how to tell, a priori, whether some form of initial data will be stable or will instead develop under the turbulent instability into a black hole in the limit of vanishing amplitude. Previous work suggested the existence of islands of stability around quasi-periodic solutions, and we use this new code to examine the stability properties of approximately quasi-periodic solutions which balance energy transfer to higher modes with energy transfer to lower modes. The evolutions provide some evidence, though not conclusive, for stability of initial data sufficiently close to quasi-periodic solutions.
Pitney, William A; Ehlers, Greg G
2004-01-01
Objective: To gain insight regarding the mentoring processes involving students enrolled in athletic training education programs and to create a mentoring model. Design and Setting: We conducted a grounded theory study with students and mentors currently affiliated with 1 of 2 athletic training education programs accredited by the Commission on Accreditation of Allied Health Education Programs. Participants: Sixteen interviews were conducted, 13 with athletic training students and 3 with individuals identified as mentors. The students ranged in age from 20 to 24 years, with an average age of 21.6 years. The mentors ranged from 24 to 38 years of age, with an average of 33.3 years. Participants were purposefully selected based on theoretical sampling and availability. Data Analysis: The transcribed interviews were analyzed using open-, axial-, and selective-coding procedures. Member checks, peer debriefings, and triangulation were used to ensure trustworthiness. Results: Students who acknowledged having a mentor overwhelmingly identified their clinical instructor in this role. The open-coding procedures produced 3 categories: (1) mentoring prerequisites, (2) interpersonal foundations, and (3) educational dimensions. Mentoring prerequisites included accessibility, approachability, and protégé initiative. Interpersonal foundations involved the mentor and protégé having congruent values, trust, and a personal relationship. The educational dimensions category involved the mentor facilitating knowledge and skill development, encouraging professional perspectives, and individualizing learning. Although a student-certified athletic trainer relationship can be grounded in either interpersonal or educational aspects, the data support the occurrence of an authentic mentoring relationship when these dimensions coalesced. Conclusions: Potential mentors must not only be accessible but also approachable by a prospective protégé. Mentoring takes initiative on the part of both the student and the mentor. A mentoring relationship is complex and involves the coalescence of both interpersonal and educational aspects of an affiliation. As a professional-socialization tactic, mentoring offers students a way to anticipate the future professional role in a very personal and meaningful way. PMID:15592607
Albertini, A M; Caramori, T; Crabb, W D; Scoffone, F; Galizzi, A
1991-01-01
We cloned and sequenced 8.3 kb of Bacillus subtilis DNA corresponding to the flaA locus involved in flagellar biosynthesis, motility, and chemotaxis. The DNA sequence revealed the presence of 10 complete and 2 incomplete open reading frames. Comparison of the deduced amino acid sequences to data banks showed similarities of nine of the deduced products to a number of proteins of Escherichia coli and Salmonella typhimurium for which a role in flagellar functioning has been directly demonstrated. In particular, the sequence data suggest that the flaA operon codes for the M-ring protein, components of the motor switch, and the distal part of the basal-body rod. The gene order is remarkably similar to that described for region III of the enterobacterial flagellar regulon. One of the open reading frames was translated into a protein with 48% amino acid identity to S. typhimurium FliI and 29% identity to the beta subunit of E. coli ATP synthase. PMID:1828465
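Since the entry hinges on locating open reading frames in the sequenced locus, a minimal single-strand ORF scan is easy to sketch; this is a generic illustration, not the authors' annotation pipeline, and the length threshold is arbitrary.

```python
# Generic ORF scan of the kind used to annotate loci like flaA.
# Scans one strand in all three reading frames.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=50):
    """Return (start, end) of ATG-to-stop ORFs of at least min_codons."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                       # first start codon in frame
            elif codon in STOPS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))  # include the stop codon
                start = None
    return orfs

print(find_orfs("ATG" + "GCT" * 60 + "TAA"))    # one 61-codon ORF
```

A complete scan would also process the reverse complement, which is how both complete and incomplete reading frames on either strand are usually tallied.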
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable across architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
The Particle Accelerator Simulation Code PyORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M
2015-01-01
The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.
RNA editing differently affects protein-coding genes in D. melanogaster and H. sapiens.
Grassi, Luigi; Leoni, Guido; Tramontano, Anna
2015-07-14
When an RNA editing event occurs within a coding sequence, it can lead to a different encoded amino acid. The biological significance of these events remains an open question: they can modulate protein functionality, increase the complexity of transcriptomes or arise from a loose specificity of the involved enzymes. We analysed the editing events in coding regions that do or do not produce a change in the encoded amino acid (nonsynonymous and synonymous events, respectively) in D. melanogaster and in H. sapiens and compared them with the appropriate random models. Interestingly, our results show that the phenomenon has rather different characteristics in the two organisms. For example, we confirm the observation that editing events occur more frequently in non-coding than in coding regions, and report that this effect is much more evident in H. sapiens. Additionally, in the latter organism, editing events tend to affect less conserved residues. The less frequent editing events in Drosophila tend to avoid drastic amino acid changes. Interestingly, we find that, in Drosophila, changes from less frequently used codons to more frequently used ones are favoured, while this is not the case in H. sapiens.
Role of the Integrin-Linked Kinase, ILK, in Mammary Carcinogensis
2000-08-01
…have been implicated in environmental stress responses in yeasts, plants and mammals, as well as in regulating abscisic acid signal transduction (cf. a protein phosphatase 2C involved in abscisic acid signal transduction in higher plants; Proc. Natl Acad. Sci. USA, 95, 975-980). …The clones contain a 450 bp open reading frame, coding for 149 amino acids, and a poly(A) tail 245 bp downstream of the stop codon, although no polyadenylation site…
2014-05-01
…fusion, space and astrophysical plasmas, but still the general picture can be presented quite well with the fluid approach [6, 7]. The microscopic… …general-purpose computing CPU for algorithms where processing of large blocks of data is done in parallel. The reason for that is the GPU's highly effective parallel structure. Most of the image and video processing computations involve heavy matrix and vector operations over large amounts of data.
Alu-mediated deletion of SOX10 regulatory elements in Waardenburg syndrome type 4
Bondurand, Nadége; Fouquet, Virginie; Baral, Viviane; Lecerf, Laure; Loundon, Natalie; Goossens, Michel; Duriez, Benedicte; Labrune, Philippe; Pingault, Veronique
2012-01-01
Waardenburg syndrome type 4 (WS4) is a rare neural crest disorder defined by the combination of Waardenburg syndrome (sensorineural hearing loss and pigmentation defects) and Hirschsprung disease (intestinal aganglionosis). Three genes are known to be involved in this syndrome, that is, EDN3 (endothelin-3), EDNRB (endothelin receptor type B), and SOX10. However, 15–35% of WS4 remains unexplained at the molecular level, suggesting that other genes could be involved and/or that mutations within known genes may have escaped previous screenings. Here, we searched for deletions within recently identified SOX10 regulatory sequences and describe the first characterization of a WS4 patient presenting with a large deletion encompassing three of these enhancers. Analysis of the breakpoint region suggests a complex rearrangement involving three Alu sequences that could be mediated by a FosTes/MMBIR replication mechanism. Taken together with recent reports, our results demonstrate that the disruption of highly conserved non-coding elements located within or at a long distance from the coding sequences of key genes can result in several neurocristopathies. This opens up new routes to the molecular dissection of neural crest disorders. PMID:22378281
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
Letter order is not coded by open bigrams
Kinoshita, Sachiko; Norris, Dennis
2013-01-01
Open bigram (OB) models (e.g., SERIOL: Whitney, 2001, 2008; Binary OB, Grainger & van Heuven, 2003; Overlap OB, Grainger et al., 2006; Local combination detector model, Dehaene et al., 2005) posit that letter order in a word is coded by a set of ordered letter pairs. We report three experiments using bigram primes in the same-different match task, investigating the effects of order reversal and the number of letters intervening between the letters in the target. Reversed bigrams (e.g., fo-OF, ob-ABOLISH) produced robust priming, in direct contradiction to the assumption that letter order is coded by the presence of ordered letter pairs. Also in contradiction to the core assumption of current open bigram models, non-contiguous bigrams spanning three letters in the target (e.g., bs-ABOLISH) showed robust priming effects, equivalent in size to contiguous bigrams (e.g., bo-ABOLISH). These results question the role of open bigrams in coding letter order. PMID:23914048
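The core claim is easy to state computationally. A minimal sketch of an open-bigram code, assuming the common convention of allowing up to two intervening letters (gap and weighting conventions vary across the models cited), shows why the reported priming results are problematic:

```python
# Minimal open-bigram code: a word is represented by its ordered letter
# pairs with at most `max_gap` intervening letters. Conventions vary
# across OB models; this gap limit is a common illustrative assumption.
def open_bigrams(word, max_gap=2):
    word = word.lower()
    return {(word[i], word[j])
            for i in range(len(word))
            for j in range(i + 1, min(i + 2 + max_gap, len(word)))}

def match(prime, target):
    """Fraction of the prime's bigrams contained in the target's code."""
    p, t = open_bigrams(prime), open_bigrams(target)
    return len(p & t) / len(p)

# Under this scheme only 'bo' should prime ABOLISH: 'fo' shares no
# ordered pair with OF, and 'bs' spans three letters (gap > max_gap).
print(match("fo", "of"), match("bs", "abolish"), match("bo", "abolish"))
# -> 0.0 0.0 1.0, yet the experiments found robust priming in all cases
```

The mismatch between the zeros predicted here and the robust priming actually observed is precisely the evidence the paper marshals against open-bigram coding of letter order.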
Sexual involvement with patients.
Kirstein, L
1978-04-01
Three cases of sexual activity between patients and staff members were presented, and the determinants and consequences of this type of acting-out behavior were discussed. Patients' sexual behavior was in part motivated by a need to avoid feelings of loneliness and anxiety, and one consequence of the sexual behavior was the recurrence of symptoms and behaviors noted upon admission. Staff members were noted to become more self-preoccupied and less involved with both staff and patients following the sexual behavior. The role of the ward psychiatrist in preventing such patient-staff interactions includes taking responsibility for the educational and supervisory needs of the staff, being involved in the creation and maintenance of the ward's moral code, and being aware of group and organizational factors that may impede open staff communication.
Computational Infrastructure for Geodynamics (CIG)
NASA Astrophysics Data System (ADS)
Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.
2004-12-01
Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts, and although this approach has proven successful, it is now starting to show its limitations as we try to share codes and algorithms or recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community, from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.
Application Side Casing on Open Deck RoRo to Improve Ship Stability
NASA Astrophysics Data System (ADS)
Hasanudin; K. A. P Utama, I.; Chen, Jeng-Horng
2018-03-01
RoRo is a vessel type that can transport passengers, cargo, containers and cars. The open car deck RoRo is a favourite in developing countries due to its small gross tonnage, small tax and spacious car deck, but it has poor stability survivability. Many accidents involving open car deck RoRo vessels have caused fatalities. To ensure the safety of ships, IMO applied the intact stability criteria of the IS Code 2008, adapted from Rahola's research, and subsequently improved the criteria to the probabilistic damage stability of SOLAS 2009. The open car deck RoRo has a wide breadth (B), small draft (D) and small freeboard, and it has difficulty satisfying the ship stability criteria. Side casings, which have been applied in some RoRo vessels, are known to improve the effective freeboard and ship safety. This paper investigates the effect of side casings on the survivability of intact and damaged ship stability. Calculations were conducted for four ships with no, existing, and full side casings. The results show that the stability deficiencies of open deck RoRo vessels can be reduced by fitting side casings.
TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks
DOE Office of Scientific and Technical Information (OSTI.GOV)
French, S; Nazareth, D; Bellor, M
Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compiling configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8-10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
The MOLDY short-range molecular dynamics package
NASA Astrophysics Data System (ADS)
Ackland, G. J.; D'Mellow, K.; Daraszewicz, S. L.; Hepburn, D. J.; Uhrin, M.; Stratford, K.
2011-12-01
We describe a parallelised version of the MOLDY molecular dynamics program. This Fortran code is aimed at systems which may be described by short-range potentials and specifically those which may be addressed with the embedded atom method. This includes a wide range of transition metals and alloys. MOLDY provides a range of options in terms of the molecular dynamics ensemble used and the boundary conditions which may be applied. A number of standard potentials are provided, and the modular structure of the code allows new potentials to be added easily. The code is parallelised using OpenMP and can therefore be run on shared memory systems, including modern multicore processors. Particular attention is paid to the updates required in the main force loop, where synchronisation is often required in OpenMP implementations of molecular dynamics. We examine the performance of the parallel code in detail and give some examples of applications to realistic problems, including the dynamic compression of copper and carbon migration in an iron-carbon alloy. Program summary: Program title: MOLDY. Catalogue identifier: AEJU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJU_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 2. No. of lines in distributed program, including test data, etc.: 382 881. No. of bytes in distributed program, including test data, etc.: 6 705 242. Distribution format: tar.gz. Programming language: Fortran 95/OpenMP. Computer: Any. Operating system: Any. Has the code been vectorised or parallelized?: Yes; OpenMP is required for parallel execution. RAM: 100 MB or more. Classification: 7.7. Nature of problem: MOLDY addresses the problem of many atoms (of order 10^6) interacting via a classical interatomic potential on a timescale of microseconds. It is designed for problems where statistics must be gathered over a number of equivalent runs, such as measuring thermodynamic properties, diffusion, radiation damage, fracture, twinning deformation, nucleation and growth of phase transitions, sputtering etc. In the vast majority of materials, the interactions are non-pairwise, and the code must be able to deal with many-body forces. Solution method: Molecular dynamics involves integrating Newton's equations of motion. MOLDY uses Verlet (for good energy conservation) or predictor-corrector (for accurate trajectories) algorithms. It is parallelised using OpenMP. It also includes a static minimisation routine to find the lowest energy structure. Boundary conditions for surfaces, clusters, grain boundaries, thermostat (Nose), barostat (Parrinello-Rahman), and externally applied strain are provided. The initial configuration can be either a repeated unit cell or have all atoms given explicitly. Initial velocities are generated internally, but it is also possible to specify the velocity of a particular atom. A wide range of interatomic force models are implemented, including embedded atom, Morse or Lennard-Jones. Thus the program is especially well suited to calculations of metals. Restrictions: The code is designed for short-ranged potentials, and there is no Ewald sum. Thus for long-range interactions where all particles interact with all others, the order-N scaling will fail. Different interatomic potential forms require recompilation of the code. Additional comments: There is a set of associated open-source analysis software for postprocessing and visualisation.
This includes local crystal structure recognition and identification of topological defects. Running time: A set of test modules for running time are provided. The code scales as order N. The parallelisation shows near-linear scaling with number of processors in a shared memory environment. A typical run of a few tens of nanometers for a few nanoseconds will run on a timescale of days on a multiprocessor desktop.
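As a pointer to what the integration core does, here is a minimal velocity-Verlet step of the kind MOLDY builds on, sketched in Python with a toy harmonic force; MOLDY itself is Fortran and uses many-body EAM forces, so this is purely illustrative.

```python
# Velocity-Verlet integration sketch (the "good energy conservation"
# option mentioned in the summary), with an illustrative harmonic force.
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, steps):
    f = force_fn(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # first half kick
        pos += dt * vel              # drift
        f = force_fn(pos)            # recompute forces at new positions
        vel += 0.5 * dt * f / mass   # second half kick
    return pos, vel

# Toy test: one particle in a harmonic well, F = -x, m = 1.
pos, vel = np.array([1.0]), np.array([0.0])
pos, vel = velocity_verlet(pos, vel, lambda x: -x, 1.0, 0.01, 1000)
print(pos, vel)   # stays on the unit-energy orbit to good accuracy
```

In a production MD code the force loop over neighbour lists dominates the cost, which is why the abstract highlights the synchronisation of exactly that loop under OpenMP.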
Comparison and correlation of Simple Sequence Repeats distribution in genomes of Brucella species
Kiran, Jangampalli Adi Pradeep; Chakravarthi, Veeraraghavulu Praveen; Kumar, Yellapu Nanda; Rekha, Somesula Swapna; Kruti, Srinivasan Shanthi; Bhaskar, Matcha
2011-01-01
Computational genomics is an important tool for understanding the distribution of features such as simple sequence repeats (SSRs) across closely related genomes, which gives valuable information regarding genetic variation. The central objective of the present study was to screen the SSRs distributed in coding and non-coding regions among different human-pathogenic Brucella species, which are involved in a range of pathological disorders. Computational analysis of the SSRs in Brucella genomes indicates few deviations from expected random models. Statistical analysis also reveals that tri-nucleotide SSRs are overrepresented and tetra-nucleotide SSRs underrepresented in Brucella genomes. The data suggest that the overrepresented tri-nucleotide SSRs in genomic and coding regions might be responsible for generating functional variation in the expressed proteins, which in turn may lead to differences in pathogenicity, virulence determinants, stress-response genes, transcription regulators and host-adaptation proteins across Brucella genomes. Abbreviations: SSRs - Simple Sequence Repeats, ORFs - Open Reading Frames. PMID:21738309
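For readers who want to reproduce an SSR screen in miniature, a back-referencing regular expression suffices; the motif length and repeat threshold below are illustrative assumptions, not the parameters of the Brucella study.

```python
# Tiny SSR screen using a back-referencing regex; thresholds are
# illustrative, not those of the study.
import re

def find_ssrs(seq, unit_len=3, min_repeats=4):
    """Find perfect tandem repeats: (start, repeat unit, copy count)."""
    # Lookahead makes overlapping repeat frames visible too.
    pattern = re.compile(r"(?=(([ACGT]{%d})\2{%d,}))"
                         % (unit_len, min_repeats - 1))
    return [(m.start(), m.group(2), len(m.group(1)) // unit_len)
            for m in pattern.finditer(seq.upper())]

# Two overlapping frames of the same tri-nucleotide repeat are reported.
print(find_ssrs("TTGACGACGACGACGTT"))   # [(2, 'GAC', 4), (3, 'ACG', 4)]
```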
ERIC Educational Resources Information Center
Olsen, Florence
2003-01-01
Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)
OpenFLUID: an open-source software environment for modelling fluxes in landscapes
NASA Astrophysics Data System (ADS)
Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc
2013-04-01
Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these are mainly physical processes, including hydrological processes, biological processes, and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines and developed by different teams. In order to support collaborative work involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture allowing users to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multiple scales, spatial heterogeneities and the connectivity of landscape objects, and iii) run and explore simulations in many ways: using the OpenFLUID software interfaces (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, based on an open source toolchain including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, and modelling of surface-subsurface water exchanges. At the LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images, to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators such as AMD graphics cards, ARM-based processors and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming for all computational kernels in forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
The Open Spectral Database: an open platform for sharing and searching spectral data.
Chalk, Stuart J
2016-01-01
A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and the data can be difficult to reuse if they are compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and that is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PhpStorm), a PHP framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities needed to easily develop REST-based websites for ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
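A hypothetical sketch of consuming such a REST interface follows; the route and query parameter below are assumptions for illustration, not the documented OSDB API, which should be checked at http://osdb.info/.

```python
# Hypothetical REST client sketch. The /spectra route and 'inchikey'
# parameter are assumptions, not the documented OSDB endpoints.
import json
from urllib.request import urlopen

def search_spectra(inchikey, base="http://osdb.info/api"):
    """Fetch spectra records for a chemical identifier (illustrative)."""
    url = f"{base}/spectra?inchikey={inchikey}"   # assumed route
    with urlopen(url) as resp:
        return json.load(resp)

# Example call, commented out to keep the sketch network-free:
# records = search_spectra("UHOVQNZJYSORNB-UHFFFAOYSA-N")  # benzene
```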
Horgan, John; Shortland, Neil; Abbasciano, Suzzette; Walsh, Shaun
2016-09-01
Involvement in terrorism has traditionally been discussed in relatively simplistic ways with little effort spent on developing a deeper understanding of what involvement actually entails, and how it differs from person to person. In this paper, we present the results of a three-year project focused on 183 individuals associated with the global jihadist movement who were convicted in the United States, for terrorist offenses, between 1995 and 2012. These data were developed by a large-scale, open-source data collection activity that involved a coding dictionary of more than 120 variables. We identify and explore the diversity of behaviors that constitute involvement in terrorism. We also compare lone actors and those who acted as part of a group, finding that lone actors differed from group-based actors in key demographic attributes and were more likely to be involved in attack execution behaviors. Implications for counterterrorism are then discussed. © 2016 American Academy of Forensic Sciences.
Delimitation of essential genes of cassava latent virus DNA 2.
Etessami, P; Callis, R; Ellwood, S; Stanley, J
1988-01-01
Insertion and deletion mutagenesis of both extended open reading frames (ORFs) of cassava latent virus DNA 2 destroys infectivity. Infectivity is restored by coinoculating constructs that contain single mutations within different ORFs. Although frequent intermolecular recombination produces dominant parental-type virus, mutants can be retained within the virus population, indicating that they are competent for replication and suggesting that rescue can occur by complementation of trans-acting gene products. By cloning specific fragments into DNA 1 coat protein deletion vectors we have delimited the DNA 2 coding regions and provide substantive evidence that both are essential for virus infection. Although a DNA 2 component is unique to whitefly-transmitted geminiviruses, the results demonstrate that neither coding region is involved solely in insect transmission. The requirement for a bipartite genome for whitefly-transmitted geminiviruses is discussed. PMID:3387209
2009-06-06
…written in Standard ML, and comprises nearly 7,000 lines of code. OpenSSL is used for all cryptographic operations. Because the front end tools are used… …be managed. Macrobenchmarks: to understand the performance of PCFS in practice, we also ran two simple macrobenchmarks. The first (called OpenSSL in the table below) untars the OpenSSL source code, compiles it and deletes it. The other (called Fuse in the table below) performs similar operations…
QSL Squasher: A Fast Quasi-separatrix Layer Map Calculator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Savcheva, Antonia, E-mail: svetlin.tassev@cfa.harvard.edu
Quasi-Separatrix Layers (QSLs) are a useful proxy for the locations where current sheets can develop in the solar corona, and give valuable information about the connectivity in complicated magnetic field configurations. However, calculating QSL maps, even for two-dimensional slices through three-dimensional models of coronal magnetic fields, is a non-trivial task, as it usually involves tracing out millions of magnetic field lines with immense precision. Thus, extending QSL calculations to three dimensions has rarely been done until now. In order to address this challenge, we present QSL Squasher—a public, open-source code, which is optimized for calculating QSL maps in both two and three dimensions on graphics processing units. The code achieves large processing speeds for three reasons, each of which results in an order-of-magnitude speed-up. (1) The code is parallelized using OpenCL. (2) The precision requirements for the QSL calculation are drastically reduced by using perturbation theory. (3) A new boundary detection criterion between quasi-connectivity domains is used, which quickly identifies possible QSL locations that need to be finely sampled by the code. That boundary detection criterion relies on finding the locations of abrupt field-line length changes, which we do by introducing a new Field-line Length Edge (FLEDGE) map. We find FLEDGE maps useful on their own as a quick-and-dirty substitute for QSL maps. QSL Squasher allows construction of high-resolution 3D FLEDGE maps in a matter of minutes, which is two orders of magnitude faster than calculating the corresponding 3D QSL maps. We include a sample of calculations done using QSL Squasher to demonstrate its capabilities as a QSL calculator, as well as to compare QSL and FLEDGE maps.
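The FLEDGE idea can be illustrated compactly: given a precomputed 2D map of field-line lengths, flag cells where the length changes abruptly. The gradient stencil and threshold below are assumptions for illustration, not QSL Squasher's actual criterion.

```python
# Sketch of the FLEDGE boundary-detection idea on a 2D map of
# field-line lengths; stencil and threshold are illustrative.
import numpy as np

def fledge_candidates(lengths, rel_jump=0.5):
    """Boolean mask where field-line length changes abruptly."""
    gy, gx = np.gradient(lengths)
    jump = np.hypot(gx, gy) / np.maximum(lengths, 1e-30)
    return jump > rel_jump

# Toy map: two connectivity domains with very different line lengths.
L = np.ones((64, 64)); L[:, 32:] = 10.0
print(np.argwhere(fledge_candidates(L))[:3])   # boundary column flagged
```

Only the flagged cells would then need the expensive, finely sampled field-line tracing, which is where the reported order-of-magnitude saving comes from.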
Scalable nanohelices for predictive studies and enhanced 3D visualization.
Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P
2014-11-12
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries, one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code written particularly to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization of, and interaction with, the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
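The carving step can be sketched directly from the parametric helix equations x = r cos t, y = r sin t, z = p t / 2π; the dimensions and the nearest-point distance test below are illustrative assumptions, not the paper's AWK or C++ tools.

```python
# Carving a nanospring from a bulk atom list: keep atoms within
# wire_r of a sampled helical path. Dimensions are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def carve_helix(atoms, radius=20.0, pitch=15.0, wire_r=5.0, turns=3):
    t = np.linspace(0.0, 2 * np.pi * turns, 2000)
    path = np.stack([radius * np.cos(t),
                     radius * np.sin(t),
                     pitch * t / (2 * np.pi)], axis=1)
    d, _ = cKDTree(path).query(atoms)   # distance to nearest path point
    return atoms[d < wire_r]

rng = np.random.default_rng(0)
bulk = rng.uniform([-30, -30, 0], [30, 30, 45], size=(20000, 3))
print(len(carve_helix(bulk)), "atoms kept")
```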
PARAVT: Parallel Voronoi tessellation code
NASA Astrophysics Data System (ADS)
González, R. E.
2016-10-01
In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is focused on astrophysical purposes, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. Code implementation and a user guide are publicly available at https://github.com/regonzar/paravt.
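A serial sketch of the per-particle quantities described here can be written using the same Qhull library through SciPy; the MPI domain decomposition and periodic boundaries are omitted, and this is not PARAVT's code.

```python
# Serial illustration of Voronoi neighbors, cell volumes and densities
# via Qhull (through SciPy); PARAVT adds MPI domain decomposition.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

pts = np.random.rand(500, 3)
vor = Voronoi(pts)

# Natural neighbors: particles whose cells share a Voronoi ridge.
neighbors = [[] for _ in pts]
for p, q in vor.ridge_points:
    neighbors[p].append(q)
    neighbors[q].append(p)

def cell_volume(i):
    """Volume of particle i's Voronoi cell (inf if unbounded)."""
    region = vor.regions[vor.point_region[i]]
    if -1 in region or len(region) == 0:
        return np.inf                  # cell touches the open boundary
    return ConvexHull(vor.vertices[region]).volume

vols = np.array([cell_volume(i) for i in range(len(pts))])
print(len(neighbors[0]), "neighbors;",
      np.isfinite(vols).sum(), "bounded cells")
# Voronoi density of a bounded cell is simply 1 / volume.
```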
A Clustering-Based Approach to Enriching Code Foraging Environment.
Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu
2016-09-01
Developers often spend valuable time navigating and seeking relevant code in software maintenance. Currently, there is a lack of theoretical foundations to guide the design and evaluation of tools that best shape the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.
PD5: a general purpose library for primer design software.
Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda
2013-01-01
Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third-party applications, and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third-party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantees access to source code and allows redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
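As a flavor of the building blocks such a library collects, here are two standard primer-analysis calculations sketched in Python; PD5 itself is C++, and these textbook formulas (GC content and the Wallace rule) are generic approximations, not PD5's API.

```python
# Two standard primer-analysis building blocks; textbook formulas,
# not PD5's API (PD5 is an object-oriented C++ library).
def gc_content(primer):
    """GC percentage of an oligo."""
    p = primer.upper()
    return 100.0 * (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer):
    """2(A+T) + 4(G+C) melting-temperature rule for short oligos."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGTACGTTAG"
print(gc_content(primer), wallace_tm(primer))
```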
ERIC Educational Resources Information Center
Davis, Colin J.; Bowers, Jeffrey S.
2006-01-01
Five theories of how letter position is coded are contrasted: position-specific slot-coding, Wickelcoding, open-bigram coding (discrete and continuous), and spatial coding. These theories make different predictions regarding the relative similarity of three different types of pairs of letter strings: substitution neighbors,…
Open Education and the Open Science Economy
ERIC Educational Resources Information Center
Peters, Michael A.
2009-01-01
Openness as a complex code word for a variety of digital trends and movements has emerged as an alternative mode of "social production" based on the growing and overlapping complexities of open source, open access, open archiving, open publishing, and open science. This paper argues that the openness movement with its reinforcing structure of…
Raus, Kasper; Brown, Jayne; Seale, Clive; Rietjens, Judith A C; Janssens, Rien; Bruinsma, Sophie; Mortier, Freddy; Payne, Sheila; Sterckx, Sigrid
2014-02-20
Continuous sedation is increasingly used as a way to relieve symptoms at the end of life. Current research indicates that some physicians, nurses, and relatives involved in this practice experience emotional and/or moral distress. This study aims to provide insight into what may influence how professional and/or family carers cope with such distress. It is an international qualitative interview study involving physicians, nurses, and relatives of deceased patients in the UK, The Netherlands and Belgium (the UNBIASED study), each interviewed about a case of continuous sedation at the end of life in which they were recently involved. All interviews were transcribed verbatim and analysed by staying close to the data using open coding. Next, codes were combined into larger themes and categories of codes, resulting in a four-point scheme that captured all of the data. Finally, our findings were compared with others and explored in relation to theories in ethics and sociology. The participants' responses can be captured as different dimensions of 'closeness', i.e. the degree to which one feels connected or 'close' to a certain decision or event. We distinguished four types of 'closeness': emotional, physical, decisional, and causal. Using these four dimensions it became possible to describe how physicians, nurses, and relatives experience their involvement in cases of continuous sedation until death. More specifically, this shed light on the everyday moral reasoning employed by care providers and relatives in the context of continuous sedation, and how it affected the emotional impact of being involved in sedation, as well as the perception of their own moral responsibility. Findings from this study demonstrate that various factors are reported to influence the degree of closeness to continuous sedation (and thus the extent to which carers feel morally responsible), and that some of these factors help care providers and relatives to distinguish continuous sedation from euthanasia.
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
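The idea of iterating to convergence inside each implicit stage can be shown on a toy scalar problem; the sketch below uses the implicit midpoint rule (a one-stage diagonally implicit RK scheme) with plain fixed-point iteration as a stand-in for the paper's iterated PISO-like stage solve, so it is an analogy rather than the authors' algorithm.

```python
# One diagonally implicit RK stage solved by iteration, analogous to
# iterating a PISO-like procedure inside each stage of the scheme.
import math

def dirk_midpoint_step(f, y, dt, iters=20):
    """Implicit midpoint: solve k = f(y + dt/2 * k), then advance."""
    k = f(y)                          # initial guess for the stage slope
    for _ in range(iters):            # fixed-point iteration on the stage
        k = f(y + 0.5 * dt * k)
    return y + dt * k

y, dt = 1.0, 0.1
for _ in range(10):                   # integrate y' = -y up to t = 1
    y = dirk_midpoint_step(lambda u: -u, y, dt)
print(y, "vs exact", math.exp(-1.0))  # second-order accurate
```

In the incompressible-flow setting the stage unknown is the coupled velocity-pressure state, so each fixed-point sweep is replaced by a pressure-correction (PISO) sweep with the Rhie-Chow interpolation keeping the collocated fields coupled.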
Binary Code Extraction and Interface Identification for Security Applications
2009-10-02
… the functions extracted during the end-to-end applications and, at the bottom, some additional functions extracted from the OpenSSL library … as mentioned in Section 5.1 through Section 5.3, and some additional functions that we extract from the OpenSSL library for evaluation purposes … For the OpenSSL functions, the false positives and negatives are measured by comparison with the original C source code. For the malware samples, no source is …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolb, C.E.; Yousefian, V.; Wormhoudt, J.
1978-01-30
Research has included theoretical modeling of important plasma chemical effects such as: conductivity reductions due to condensed slag/electron interactions; conductivity and generator efficiency reductions due to the formation of slag-related negative ion species; and the loss of alkali seed due to chemical combination with condensed slag. A summary of the major conclusions in each of these areas is presented. A major output of the modeling effort has been the development of an MHD plasma chemistry core flow model. This model has been formulated into a computer program designated the PACKAGE code (Plasma Analysis, Chemical Kinetics, And Generator Efficiency). The PACKAGE code is designed to calculate the effect of coal rank, ash percentage, ash composition, air preheat temperatures, equivalence ratio, and various generator channel parameters on the overall efficiency of open-cycle, coal-fired MHD generators. A complete description of the PACKAGE code and a preliminary version of the PACKAGE user's manual are included. A laboratory measurements program involving direct, mass spectrometric sampling of the positive and negative ions formed in a one-atmosphere coal combustion plasma was also completed during the contract's initial phase. The relative ion concentrations formed in a plasma due to the methane-augmented combustion of pulverized Montana Rosebud coal with potassium carbonate seed and preheated air are summarized. Positive ions measured include K⁺, KO⁺, Na⁺, Rb⁺, Cs⁺, and CsO⁺, while negative ions identified include PO₃⁻, PO₂⁻, BO₂⁻, OH⁻, SH⁻, and probably HCrO₃⁻, HMoO₄⁻, and HWO₃⁻. Comparisons of the measurements with PACKAGE code predictions are presented. Preliminary design considerations for a mass spectrometric sampling probe capable of characterizing coal combustion plasmas from full-scale combustors and flow trains are presented and discussed.
NASA Astrophysics Data System (ADS)
Rueda, Antonio J.; Noguera, José M.; Luque, Adrián
2016-02-01
In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactoring of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code essentially only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data have to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of GPU technology. Our aim in this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. Although OpenACC cannot match the performance of an optimized CUDA implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with a far simpler code and less implementation effort.
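The appeal of the directive approach is easiest to see in code. Below is a hedged sketch of one gather pass of D8 flow accumulation annotated with OpenACC; deleting the pragma leaves plain serial C++, which is exactly the low-effort portability the abstract describes. The kernel formulation, data clauses, and neighbour encoding are illustrative assumptions, not the paper's benchmarked implementation.

```cpp
// One gather pass of D8 flow accumulation, offloaded with a single OpenACC
// directive: cell (x, y) adds the accumulated flow of every neighbour whose
// D8 pointer drains into it. Iterating the pass to a fixed point yields the
// flow-accumulation grid.
void accumulate_pass(const int* dir, const float* acc, float* next,
                     int nx, int ny) {
    #pragma acc parallel loop collapse(2) \
        copyin(dir[0:nx*ny], acc[0:nx*ny]) copyout(next[0:nx*ny])
    for (int y = 0; y < ny; ++y) {
        for (int x = 0; x < nx; ++x) {
            // Offsets for the 8 neighbour directions (one assumed encoding).
            const int dx[8] = {1, 1, 0, -1, -1, -1, 0, 1};
            const int dy[8] = {0, 1, 1, 1, 0, -1, -1, -1};
            float sum = 1.0f;  // every cell contributes its own unit area
            for (int k = 0; k < 8; ++k) {
                int xn = x + dx[k], yn = y + dy[k];
                if (xn < 0 || xn >= nx || yn < 0 || yn >= ny) continue;
                // Neighbour drains into (x, y) if its code points back here.
                if (dir[yn * nx + xn] == (k + 4) % 8) sum += acc[yn * nx + xn];
            }
            next[y * nx + x] = sum;
        }
    }
}
```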
Chirp- and random-based coded ultrasonic excitation for localized blood-brain barrier opening
Kamimura, HAS; Wang, S; Wu, S-Y; Karakatsani, ME; Acosta, C; Carneiro, AAO; Konofagou, EE
2015-01-01
Chirp- and random-based coded excitation methods have been proposed to reduce standing wave formation and improve focusing of transcranial ultrasound. However, no clear evidence has been shown to support the benefits of these ultrasonic excitation sequences in vivo. This study evaluates the chirp and periodic selection of random frequency (PSRF) coded-excitation methods for opening the blood-brain barrier (BBB) in mice. Three groups of mice (n=15) were injected with polydisperse microbubbles and sonicated in the caudate putamen using the chirp/PSRF coded (bandwidth: 1.5-1.9 MHz, peak negative pressure: 0.52 MPa, duration: 30 s) or standard ultrasound (frequency: 1.5 MHz, pressure: 0.52 MPa, burst duration: 20 ms, duration: 5 min) sequences. T1-weighted contrast-enhanced MRI scans were performed to quantitatively analyze focused ultrasound induced BBB opening. The mean opening volumes evaluated from the MRI were 9.38±5.71 mm³, 8.91±3.91 mm³ and 35.47±5.10 mm³ for the chirp, random and regular sonications, respectively. The mean cavitation levels were 55.40±28.43 V·s, 63.87±29.97 V·s and 356.52±257.15 V·s for the chirp, random and regular sonications, respectively. The chirp and PSRF coded pulsing sequences improved the BBB opening localization by inducing lower cavitation levels and smaller opening volumes compared to results of the regular sonication technique. Larger bandwidths were associated with more focused targeting but were limited by the frequency response of the transducer, the skull attenuation and the microbubbles' optimal frequency range. The coded methods could therefore facilitate highly localized drug delivery as well as benefit other transcranial ultrasound techniques that use higher pressure levels and higher precision to induce the necessary bioeffects in a brain region while avoiding damage to the surrounding healthy tissue. PMID:26394091
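For readers unfamiliar with chirp excitation, a linear chirp like the 1.5-1.9 MHz sweep used above is straightforward to synthesise: the instantaneous frequency ramps linearly, so the phase grows quadratically. The sketch below is a generic illustration; the sampling rate and burst length are assumptions, not the study's driving parameters.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Linear chirp: the instantaneous frequency sweeps f0 -> f1 over `dur`
// seconds, so the phase is 2*pi*(f0*t + 0.5*k*t^2) with k = (f1 - f0)/dur.
std::vector<double> linear_chirp(double f0, double f1, double dur, double fs) {
    const double pi = std::acos(-1.0);
    const double k = (f1 - f0) / dur;                // sweep rate [Hz/s]
    std::vector<double> s(static_cast<size_t>(dur * fs));
    for (size_t i = 0; i < s.size(); ++i) {
        const double t = static_cast<double>(i) / fs;
        s[i] = std::sin(2.0 * pi * (f0 * t + 0.5 * k * t * t));
    }
    return s;
}

int main() {
    // 100-microsecond burst sweeping 1.5 -> 1.9 MHz, sampled at 50 MHz;
    // both the burst length and sampling rate here are illustrative only.
    const auto s = linear_chirp(1.5e6, 1.9e6, 100e-6, 50e6);
    std::printf("%zu samples generated\n", s.size());
    return 0;
}
```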
dsmcFoam+: An OpenFOAM based direct simulation Monte Carlo solver
NASA Astrophysics Data System (ADS)
White, C.; Borg, M. K.; Scanlon, T. J.; Longshaw, S. M.; John, B.; Emerson, D. R.; Reese, J. M.
2018-03-01
dsmcFoam+ is a direct simulation Monte Carlo (DSMC) solver for rarefied gas dynamics, implemented within the OpenFOAM software framework, and parallelised with MPI. It is open-source and released under the GNU General Public License in a publicly available software repository that includes detailed documentation and tutorial DSMC gas flow cases. This release of the code includes many features not found in standard dsmcFoam, such as molecular vibrational and electronic energy modes, chemical reactions, and subsonic pressure boundary conditions. Since dsmcFoam+ is designed entirely within OpenFOAM's C++ object-oriented framework, it benefits from a number of key features: the code emphasises extensibility and flexibility so it is aimed first and foremost as a research tool for DSMC, allowing new models and test cases to be developed and tested rapidly. All DSMC cases are as straightforward as setting up any standard OpenFOAM case, as dsmcFoam+ relies upon the standard OpenFOAM dictionary based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of a DSMC simulation is not typical of most OpenFOAM applications. We show that dsmcFoam+ compares well to other well-known DSMC codes and to analytical solutions in terms of benchmark results.
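As a taste of what such a solver computes at its core, the sketch below implements one textbook DSMC ingredient: a hard-sphere binary collision with isotropic scattering. dsmcFoam+ of course wraps this in cell indexing, NTC pair selection, particle weighting, and the boundary and energy-mode models listed above; the code here is illustrative, not extracted from dsmcFoam+.

```cpp
#include <array>
#include <cmath>
#include <random>

using Vec3 = std::array<double, 3>;

// Hard-sphere binary collision for equal-mass molecules: the centre-of-mass
// velocity is conserved and the relative velocity is redirected isotropically,
// so momentum and kinetic energy are preserved exactly.
void collide(Vec3& v1, Vec3& v2, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    Vec3 vcm;
    double g2 = 0.0;
    for (int k = 0; k < 3; ++k) {
        vcm[k] = 0.5 * (v1[k] + v2[k]);
        const double d = v1[k] - v2[k];
        g2 += d * d;
    }
    const double g = std::sqrt(g2);             // relative speed, conserved
    const double c = 2.0 * uni(rng) - 1.0;      // cos(theta), uniform on [-1,1]
    const double s = std::sqrt(1.0 - c * c);
    const double phi = 2.0 * std::acos(-1.0) * uni(rng);
    const Vec3 gnew = {g * c, g * s * std::cos(phi), g * s * std::sin(phi)};
    for (int k = 0; k < 3; ++k) {
        v1[k] = vcm[k] + 0.5 * gnew[k];
        v2[k] = vcm[k] - 0.5 * gnew[k];
    }
}

int main() {
    std::mt19937 rng(42);
    Vec3 a = {300.0, 0.0, 0.0}, b = {-300.0, 0.0, 0.0};
    collide(a, b, rng);   // post-collision velocities, randomly oriented
    return 0;
}
```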
Nurturing reliable and robust open-source scientific software
NASA Astrophysics Data System (ADS)
Uieda, L.; Wessel, P.
2017-12-01
Scientific results are increasingly the product of software. The reproducibility and validity of published results cannot be ensured without access to the source code of the software used to produce them. Therefore, the code itself is a fundamental part of the methodology and must be published along with the results. With such a reliance on software, it is troubling that most scientists do not receive formal training in software development. Tools such as version control, continuous integration, and automated testing are routinely used in industry to ensure the correctness and robustness of software. However, many scientists do not even know of their existence (although efforts like Software Carpentry are having an impact on this issue; software-carpentry.org). Publishing the source code is only the first step in creating an open-source project. For a project to grow it must provide documentation, participation guidelines, and a welcoming environment for new contributors. Expanding the project community is often more challenging than the technical aspects of software development. Maintainers must invest time to enforce the rules of the project and to onboard new members, which can be difficult to justify in the context of the "publish or perish" mentality. This problem will continue as long as software contributions are not recognized as valid scholarship by hiring and tenure committees. Furthermore, there are still unsolved problems in providing attribution for software contributions. Many journals and metrics of academic productivity do not recognize citations to sources other than traditional publications. Thus, some authors choose to publish an article about the software and use it as a citation marker. One issue with this approach is that updating the reference to include new contributors involves writing and publishing a new article. A better approach would be to cite a permanent archive of individual versions of the source code in services such as Zenodo (zenodo.org). However, citations to these sources are not always recognized when computing citation metrics. In summary, the widespread development of reliable and robust open-source software relies on the creation of formal training programs in software development best practices and the recognition of software as a valid form of scholarship.
OpenFOAM: Open source CFD in research and industry
NASA Astrophysics Data System (ADS)
Jasak, Hrvoje
2009-12-01
The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, CFD research aims to extend the boundaries of practical engineering use in "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equation in software, with code functionality provided in library form. The Open Source deployment and development model allows the user to achieve desired versatility in physical modelling without the sacrifice of complex geometry support and execution efficiency.
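The "equation mimicking" mentioned above is usually illustrated by how a scalar transport equation is expressed. The fragment below follows that well-known OpenFOAM style; the surrounding solver scaffolding (mesh, field creation, time loop) is omitted, and the fragment is illustrative rather than quoted from the paper.

```cpp
// Equation mimicking in the OpenFOAM idiom: a convection-diffusion equation
// for a scalar field T is written nearly as the PDE reads on paper. The
// fvm:: operators contribute implicit matrix coefficients; this assumes the
// usual solver context where T, the flux field phi, and the diffusivity DT
// have already been created.
solve
(
    fvm::ddt(T)                  // time derivative, discretised implicitly
  + fvm::div(phi, T)             // convection by the flux field phi
  - fvm::laplacian(DT, T)        // diffusion with diffusivity DT
);
```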
Portable multi-node LQCD Monte Carlo simulations using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization across multiple computing nodes, using OpenACC to manage parallelism within the node and OpenMPI to manage parallelism among the nodes. We first discuss the strategies available to maximize performance, then describe selected relevant details of the code, and finally measure the level of performance and scaling performance that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly higher level of performance for this application, but also compares with results measured on other processors.
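The multi-node layering described above can be sketched generically: MPI decomposes the lattice and exchanges halos between ranks, while an OpenACC directive parallelises the site loop within each rank. The 1D decomposition, field names, and trivial stencil below are illustrative assumptions, not the structure of the actual LQCD code.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int nlocal = 1 << 20;               // local sites, plus 2 halo slots
    std::vector<double> phi(nlocal + 2, 1.0), out(nlocal + 2, 0.0);
    double* p = phi.data();
    double* o = out.data();

    // Halo exchange between neighbouring ranks (periodic ring of ranks).
    int up = (rank + 1) % nranks, dn = (rank + nranks - 1) % nranks;
    MPI_Sendrecv(&p[nlocal], 1, MPI_DOUBLE, up, 0,
                 &p[0],      1, MPI_DOUBLE, dn, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&p[1],          1, MPI_DOUBLE, dn, 1,
                 &p[nlocal + 1], 1, MPI_DOUBLE, up, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // On-node parallelism: OpenACC offloads the stencil over local sites.
    #pragma acc parallel loop copyin(p[0:nlocal+2]) copyout(o[1:nlocal])
    for (int i = 1; i <= nlocal; ++i)
        o[i] = 0.5 * (p[i - 1] + p[i + 1]);

    MPI_Finalize();
    return 0;
}
```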
Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model
2014-09-19
… the numerical code OpenFOAM® … Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in… performed using the OpenFOAM-2.2.2 computational code utilizing a cell-center-based finite volume method on an unstructured numerical grid … the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM, to …
A case study of infant health promotion and corporate marketing of milk substitutes.
Mendoza, Roger Lee
2012-06-01
The mismatch between the demand for, and supply of, health products has led to the increasing involvement of courts worldwide in health promotion and marketing. This study critically examines the implementation of one country's Milk Code within the framework of the International Code of Marketing of Breast-Milk Substitutes, and the efficacy of the judicial process in balancing corporate marketing and state regulatory objectives. Drawing upon the Philippine experience with its own Milk Code, it evaluates the capacities of courts to determine policy costs and risks against the benefits of delineating and containing corporate marketing strategies for milk substitutes and supplements. The study finds that the methodological and information-based challenges faced by courts in resolving multi-dimensional health issues may not be overcome without serious questions concerning the legitimacy of the judicial process itself. Despite the deficiencies of litigation and adjudication, the study notes the catalytic potential of a judicial decision in opening up vital policy space for future renegotiations among rival parties and interests. Third-party intervention is explored relative to this catalytic function.
Using grounded theory to create a substantive theory of promoting schoolchildren's mental health.
Puolakka, Kristiina; Haapasalo-Pesu, Kirsi-Maria; Kiikkala, Irma; Astedt-Kurki, Päivi; Paavilainen, Eija
2013-01-01
To discuss the creation of a substantive theory using grounded theory. This article provides an example of generating theory from a study of mental health promotion at a high school in Finland. Grounded theory is a method for creating explanatory theory. It is a valuable tool for health professionals when studying phenomena that affect patients' health, offering a deeper understanding of nursing methods and knowledge. Interviews with school employees, students and parents, and verbal responses to the 'school wellbeing profile survey', as well as working group memos related to the development activities. Participating children were aged between 12 and 15. The analysis was conducted by applying the grounded theory method and involved open coding of the material, constant comparison, axial coding and selective coding after identifying the core category. The analysis produced concepts about mental health promotion in school and assumptions about relationships. Grounded theory proved to be an effective means of eliciting people's viewpoints on mental health promotion. The personal views of different parties make it easier to identify an action applicable to practice.
Opening our science: Open science and cyanobacterial research at the US EPA
In this blog post we introduce the idea of Open Science and discuss multiple ways we are implementing these concepts in our cyanobacteria research. We give examples of our open access publications, open source code that supports our research, and provide open access to our research…
QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.
Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M
2009-09-30
QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew
2014-11-01
We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)
NASA Technical Reports Server (NTRS)
Votava, Petr; Michaelis, Andrew; Spaulding, Ryan; Becker, Jeffrey C.
2016-01-01
NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on the Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is the semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system to Docker, an open-source Linux container software. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format. We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, L; Du, X; Liu, T
Purpose: As a module of ARCHER -- Accelerated Radiation-transport Computations in Heterogeneous EnviRonments -- ARCHERRT is designed for RadioTherapy (RT) dose calculation. This paper describes the application of ARCHERRT to patient-dependent TomoTherapy and patient-independent IMRT. It also conducts a fair comparison of different GPUs and a multicore CPU. Methods: The source input used for patient-dependent TomoTherapy is a phase space file (PSF) generated from the optimized plan. For patient-independent IMRT, the open-field PSF is used for different cases. The intensity modulation is simulated by a fluence map. The GEANT4 code is used as the benchmark. DVH and gamma index tests are employed to evaluate the accuracy of the ARCHERRT code. Some previous studies reported misleading speedups by comparing GPU code with serial CPU code. To perform a fairer comparison, we wrote multi-threaded code with OpenMP to fully exploit the computing potential of the CPU. The hardware involved in this study is a 6-core Intel E5-2620 CPU and 6 NVIDIA M2090 GPUs, a K20 GPU and a K40 GPU. Results: Dosimetric results from ARCHERRT and GEANT4 show good agreement. The 2%/2mm gamma test pass rates for different clinical cases are 97.2% to 99.7%. A single M2090 GPU needs 50-79 seconds for the simulation to achieve a statistical error of 1% in the PTV. The K40 card is about 1.7-1.8 times faster than the M2090 card. Using 6 M2090 cards, the simulation can be finished in about 10 seconds. For comparison, the Intel E5-2620 needs 507-879 seconds for the same simulation. Conclusion: We successfully applied ARCHERRT to TomoTherapy and patient-independent IMRT, and conducted a fair comparison between GPU and CPU performance. The ARCHERRT code is both accurate and efficient and may be used towards clinical applications.
Open Rotor Noise Prediction at NASA Langley - Capabilities, Research and Development
NASA Technical Reports Server (NTRS)
Farassat, Fereidoun
2010-01-01
The high fuel prices of recent years have caused the operating costs of the airlines to soar. In an effort to bring down fuel consumption, the major aircraft engine manufacturers are now taking a fresh look at open rotors for the propulsion of future airliners. Open rotors, also known as propfans or unducted fans, can offer up to 30 per cent improvement in efficiency compared to high-bypass engines of 1980 vintage currently in use in most civilian aircraft. NASA Langley researchers have contributed significantly to the development of aeroacoustic technology for open rotors. This report discusses the current noise prediction technology at Langley and reviews the input data requirements, strengths and limitations of each method, as well as the associated problems in need of attention by researchers. We present a brief history of research on the aeroacoustics of rotating-blade machinery at Langley Research Center. We then discuss the available noise prediction codes for open rotors developed at NASA Langley and their capabilities. In particular, we present the two useful formulations used for the computation of noise from subsonic and supersonic surfaces. Here we discuss the open rotor noise prediction codes ASSPIN and one based on the Ffowcs Williams-Hawkings equation with a penetrable data surface (FW-Hpds). The scattering of sound from surfaces near the rotor is calculated using the fast scattering code (FSC), which is also discussed in this report. Plans for further improvements of these codes are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-09
PV_LIB comprises a library of Matlab® code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.
Effective Vectorization with OpenMP 4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham
This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple cores capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C/C++ and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
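As a minimal illustration of the directives discussed above, the following example vectorises a loop with "omp simd" and marks a function as vector-callable with "declare simd"; the kernel itself is a trivial stand-in chosen for clarity, not code from the paper.

```cpp
#include <cstdio>

// declare simd asks the compiler to generate a vector variant of this
// function so it can be called from within vectorised loop iterations.
#pragma omp declare simd
inline float axpy(float a, float x, float y) { return a * x + y; }

int main() {
    const int n = 1024;
    float x[n], y[n];
    for (int i = 0; i < n; ++i) { x[i] = static_cast<float>(i); y[i] = 1.0f; }

    // Without the directive, the call site or possible aliasing may block
    // vectorisation; the hint asserts the iterations are independent.
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        y[i] = axpy(2.0f, x[i], y[i]);

    std::printf("y[10] = %f\n", y[10]);  // expect 21.0
    return 0;
}
```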
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
Cultural and Technological Issues and Solutions for Geodynamics Software Citation
NASA Astrophysics Data System (ADS)
Heien, E. M.; Hwang, L.; Fish, A. E.; Smith, M.; Dumit, J.; Kellogg, L. H.
2014-12-01
Computational software and custom-written codes play a key role in scientific research and teaching, providing tools to perform data analysis and forward modeling through numerical computation. However, development of these codes is often hampered by the fact that there is no well-defined way for the authors to receive credit or professional recognition for their work through the standard methods of scientific publication and subsequent citation of the work. This in turn may discourage researchers from publishing their codes or making them easier for other scientists to use. We investigate the issues involved in citing software in a scientific context, and introduce features that should be components of a citation infrastructure, particularly oriented towards the codes and scientific culture in the area of geodynamics research. The codes used in geodynamics are primarily specialized numerical modeling codes for continuum mechanics problems; they may be developed by individual researchers, teams of researchers, geophysicists in collaboration with computational scientists and applied mathematicians, or by coordinated community efforts such as the Computational Infrastructure for Geodynamics. Some but not all geodynamics codes are open-source. These characteristics are common to many areas of geophysical software development and use. We provide background on the problem of software citation and discuss some of the barriers preventing adoption of such citations, including social/cultural barriers, insufficient technological support infrastructure, and an overall lack of agreement about what a software citation should consist of. We suggest solutions in an initial effort to create a system to support citation of software and promotion of scientific software development.
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, Shawn A.
The Mercury Monte Carlo particle transport code is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. In the proposed Trinity Open Science calculations, I will investigate computer science aspects of the code which are relevant to convergence of the simulation quantities with increasing Monte Carlo particle counts.
Open-access programs for injury categorization using ICD-9 or ICD-10.
Clark, David E; Black, Adam W; Skavdahl, David H; Hallagan, Lee D
2018-04-09
The article introduces Programs for Injury Categorization, using the International Classification of Diseases (ICD) and R statistical software (ICDPIC-R). Starting with ICD-8, methods have been described to map injury diagnosis codes to severity scores, especially the Abbreviated Injury Scale (AIS) and Injury Severity Score (ISS). ICDPIC was originally developed for this purpose using Stata, and ICDPIC-R is an open-access update that accepts both ICD-9 and ICD-10 codes. Data were obtained from the National Trauma Data Bank (NTDB), Admission Year 2015. ICDPIC-R derives CDC injury mechanism categories and an approximate ISS ("RISS") from either ICD-9 or ICD-10 codes. For ICD-9-coded cases, RISS is derived similar to the Stata package (with some improvements reflecting user feedback). For ICD-10-coded cases, RISS may be calculated in several ways: The "GEM" methods convert ICD-10 to ICD-9 (using General Equivalence Mapping tables from CMS) and then calculate ISS with options similar to the Stata package; a "ROCmax" method calculates RISS directly from ICD-10 codes, based on diagnosis-specific mortality in the NTDB, maximizing the C-statistic for predicting NTDB mortality while attempting to minimize the difference between RISS and ISS submitted by NTDB registrars (ISSAIS). Findings were validated using data from the National Inpatient Survey (NIS, 2015). NTDB contained 917,865 cases, of which 86,878 had valid ICD-10 injury codes. For a random 100,000 ICD-9-coded cases in NTDB, RISS using the GEM methods was nearly identical to ISS calculated by the Stata version, which has been previously validated. For ICD-10-coded cases in NTDB, categorized ISS using any version of RISS was similar to ISSAIS; for both NTDB and NIS cases, increasing ISS was associated with increasing mortality. Prediction of NTDB mortality was associated with C-statistics of 0.81 for ISSAIS, 0.75 for RISS using the GEM methods, and 0.85 for RISS using the ROCmax method; prediction of NIS mortality was associated with C-statistics of 0.75-0.76 for RISS using the GEM methods, and 0.78 for RISS using the ROCmax method. Instructions are provided for accessing ICDPIC-R at no cost. The ideal methods of injury categorization and injury severity scoring involve trained personnel with access to injured persons or their medical records. ICDPIC-R may be a useful substitute when this ideal cannot be obtained.
Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehe, Remi; Kirchen, Manuel; Jalas, Soeren
The Fourier-Bessel Particle-In-Cell code is a scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition makes it possible to combine the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) and those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.
Mercado-Blanco, J; García, F; Fernández-López, M; Olivares, J
1993-01-01
Melanin production by Rhizobium meliloti GR4 is linked to the nonsymbiotic plasmid pRmeGR4b (140 MDa). Transfer of this plasmid to GR4-cured derivatives or to Agrobacterium tumefaciens enables these bacteria to produce melanin. Sequence analysis of a 3.5-kb PstI fragment of plasmid pRmeGR4b has revealed the presence of a 1,481-bp open reading frame that codes for a protein whose sequence shows strong homology to two conserved regions involved in copper binding in tyrosinases and hemocyanins. In vitro coupled transcription-translation experiments showed that this open reading frame codes for a 55-kDa polypeptide. Melanin production in GR4 is not under the control of the RpoN-NifA regulatory system, unlike that in R. leguminosarum bv. phaseoli 8002. The GR4 tyrosinase gene could be expressed in Escherichia coli under the control of the lacZ promoter. To avoid confusion with mel genes (for melibiose), a change of the name of the previously reported mel genes of R. leguminosarum bv. phaseoli and other organisms to mep genes (for melanin production) is proposed. PMID:8366027
NASA Astrophysics Data System (ADS)
Harijishnu, R.; Jayakumar, J. S.
2017-09-01
The main objective of this paper is to study the heat transfer rate of thermal radiation in participating media. For that purpose, a collimated beam was generated and passed through a two-dimensional slab model of flint glass with a refractive index of 2; both the polar and azimuthal angles were varied to generate such a beam. The temperature of the slab and Snell's law were validated using the Radiation Transfer Equation (RTE) in OpenFOAM (Open Field Operation And Manipulation), a CFD package that is a major computational tool in industry and research; its source code was modified to add the radiation heat transfer equation to the case, and different radiation heat transfer models were utilized. This work concentrates on the numerical strategies involving both transparent and participating media. Since the RTE is difficult to solve, the existing solver buoyantSimpleFoam was extended to solve the radiation model in the participating media by modifying and recompiling the source code, and the heat transfer rate inside the slab was obtained by varying the intensity of radiation. The Finite Volume Method (FVM) is applied to solve the RTE governing the physical phenomena described above.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified that accounts for over 97% of the total computational time using GPROF. Adding a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
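The abstract's first step, adding a few directive lines to a profiled hot loop, looks like the following in practice. The per-cell kernel below is a stand-in, since the actual HGC5 loop is not shown in the abstract; only the directive pattern is the point.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int ncells = 2000000;
    std::vector<double> conc(ncells, 1.0), rate(ncells);

    // The single loop that profiling (e.g., gprof) identified as dominant:
    // the per-cell work is independent, so one directive parallelises it
    // across all cores of the node.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < ncells; ++i)
        rate[i] = std::exp(-conc[i]) * conc[i];   // illustrative kernel only

    std::printf("rate[0] = %f\n", rate[0]);
    return 0;
}
```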
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
Grijalvo, Santiago; Alagia, Adele
2018-01-01
Oligonucleotide-based therapy has become an alternative to classical approaches in the search for novel therapeutics for gene-related diseases. Several mechanisms have been described that demonstrate the pivotal role of oligonucleotides in modulating gene expression. Antisense oligonucleotides (ASOs) and, more recently, siRNAs and miRNAs have made important contributions either in reducing aberrant protein levels by sequence-specific targeting of messenger RNAs (mRNAs) or in restoring the anomalous levels of non-coding RNAs (ncRNAs) that are involved in a good number of diseases, including cancer. In addition to formulation approaches, which have contributed to accelerating the presence of ASOs, siRNAs and miRNAs in clinical trials, the covalent linkage between non-viral vectors and nucleic acids has also added value and opened new perspectives for the development of promising nucleic acid-based therapeutics. This review article is mainly focused on the strategies carried out for covalently modifying siRNA and miRNA molecules. Examples involving cell-penetrating peptides (CPPs), carbohydrates, polymers, lipids and aptamers are discussed for the synthesis of siRNA conjugates, whereas in the case of miRNA-based drugs, this review article places special emphasis on the use of antagomiRs, locked nucleic acids (LNAs) and peptide nucleic acids (PNAs), as well as nanoparticles. The biomedical applications of siRNA and miRNA conjugates are also discussed. PMID:29415514
Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG
NASA Astrophysics Data System (ADS)
Rohe, Daniel
2016-10-01
The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question of how far High Performance Computing (HPC) can leverage the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single-instruction-multiple-data). Results are provided for two distinct High Performance Computing (HPC) platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can actually be accomplished by introducing only moderate changes to the code, such that this strategy may serve as a guideline for other researchers looking to likewise improve the efficiency of their codes.
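A compact sketch of how the three levels nest reads as follows; the loop bounds and the kernel are placeholders chosen only to show MPI rank slicing, OpenMP threading, and SIMD vectorisation layered in one computation, not the fRG flow equations themselves.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int nouter = 4096, ninner = 4096;
    std::vector<double> result(nouter, 0.0);

    // Level 1 (MPI): each rank owns a contiguous slice of the outer index.
    // Level 2 (OpenMP): threads on the node split that slice.
    #pragma omp parallel for schedule(static)
    for (int i = rank * nouter / nranks;
         i < (rank + 1) * nouter / nranks; ++i) {
        double sum = 0.0;
        // Level 3 (SIMD): the inner reduction is vectorised per thread.
        #pragma omp simd reduction(+ : sum)
        for (int j = 0; j < ninner; ++j)
            sum += 1.0 / (1.0 + i + j);   // stand-in for the loop kernel
        result[i] = sum;
    }
    // A real code would now gather or reduce the per-rank slices via MPI.

    MPI_Finalize();
    return 0;
}
```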
Zhang, Yunfang; Zhang, Xudong; Shi, Junchao; Tuorto, Francesca; Li, Xin; Liu, Yusheng; Liebers, Reinhard; Zhang, Liwen; Qu, Yongcun; Qian, Jingjing; Pahima, Maya; Liu, Ying; Yan, Menghong; Cao, Zhonghong; Lei, Xiaohua; Cao, Yujing; Peng, Hongying; Liu, Shichao; Wang, Yue; Zheng, Huili; Woolsey, Rebekah; Quilici, David; Zhai, Qiwei; Li, Lei; Zhou, Tong; Yan, Wei; Lyko, Frank; Zhang, Ying; Zhou, Qi; Duan, Enkui; Chen, Qi
2018-05-01
The discovery of RNAs (for example, messenger RNAs, non-coding RNAs) in sperm has opened the possibility that sperm may function by delivering additional paternal information aside from solely providing the DNA [1]. Increasing evidence now suggests that sperm small non-coding RNAs (sncRNAs) can mediate intergenerational transmission of paternally acquired phenotypes, including mental stress [2,3] and metabolic disorders [4-6]. How sperm sncRNAs encode paternal information remains unclear, but the mechanism may involve RNA modifications. Here we show that deletion of a mouse tRNA methyltransferase, DNMT2, abolished sperm sncRNA-mediated transmission of high-fat-diet-induced metabolic disorders to offspring. Dnmt2 deletion prevented the elevation of RNA modifications (m5C, m2G) in sperm 30-40 nt RNA fractions that are induced by a high-fat diet. Also, Dnmt2 deletion altered the sperm small RNA expression profile, including levels of tRNA-derived small RNAs and rRNA-derived small RNAs, which might be essential in composing a sperm RNA 'coding signature' that is needed for paternal epigenetic memory. Finally, we show that Dnmt2-mediated m5C contributes to the secondary structure and biological properties of sncRNAs, implicating sperm RNA modifications as an additional layer of paternal hereditary information.
Preface to the Special Issue on TOUGH Symposium 2015
NASA Astrophysics Data System (ADS)
Blanco-Martín, Laura
2017-11-01
The TOUGH Symposium 2015 was held in Berkeley, California, September 28-30, 2015. The TOUGH family of codes, developed at the Energy Geosciences Division of Lawrence Berkeley National Laboratory (LBNL), is a suite of computer programs for the simulation of multiphase and multicomponent fluid and heat flows in porous and fractured media with applications in many geosciences fields, such as geothermal reservoir engineering, nuclear waste disposal, geological carbon sequestration, oil and gas reservoirs, gas hydrate research, vadose zone hydrology and environmental remediation. Since the first release in the 1980s, many modifications and enhancements have been continuously made to TOUGH and its various descendants (iTOUGH2, TOUGH+, TOUGH-MP, TOUGHREACT, TOUGH+HYDRATE, TMVOC...), at LBNL and elsewhere. Today, these codes are used worldwide in academia, government organizations and private companies in problems involving coupled hydrological, thermal, biogeochemical and geomechanical processes. The Symposia, organized every 2-3 years, bring together developers and users for an open exchange on recent code enhancements and applications. In 2015, the Symposium was attended by one hundred participants, representing thirty-four nationalities. This Special Issue in Computers & Geosciences gathers extended versions of selected Symposium proceedings related to (i) recent enhancements to the TOUGH family of codes and (ii) coupled flow and geomechanics processes modeling.
Validation of OpenFoam for heavy gas dispersion applications.
Mack, A; Spruijt, M P N
2013-11-15
In the present paper heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data were validated against experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity-driven environment (slope), where the heavy gas induced the turbulence. For the code-to-code comparison, a hypothetical heavy gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases applying the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in strong damping of the turbulence and therefore reduced heavy gas mixing. This interaction in particular, based on buoyancy effects, was studied in order to ensure that the turbulence-buoyancy coupling is the main driver for the reduced mixing and not the global behaviour of the turbulence modelling. For both test cases, comparisons were performed between the OpenFoam and Fluent solutions, which were mostly in good agreement with each other. Besides steady-state solutions, the time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of both codes were in good agreement with each other and with the experimental data, while the turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. In the strong-turbulence environment, both codes showed excellent agreement with each other.
NASA Astrophysics Data System (ADS)
Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.
2015-12-01
The confluence of 3-dimensional printing, low-cost solid-state sensors, low-cost low-power digital controllers (e.g., Arduinos), and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an online forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test-bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein students and scientists worldwide can publish peer-reviewed (DOI-assigned) novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be addressed in person or virtually, creating a truly global venue for advancement in monitoring Earth's environment and agricultural systems. In this talk we will present, as an example of the design and publication process, the design and data from the OPENS-Permeameter (a sketch of the kind of controller code involved appears below). The publication includes 3-D printing code, Arduino (or other control/logging platform) operational code, sample data sets, and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.
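As a flavour of what the Arduino "operational code" in such a publication might look like, here is a hypothetical minimal logger sketch; the pin assignment, sampling period, and voltage scaling are invented for illustration and are not the OPENS-Permeameter firmware.

```cpp
// Hypothetical minimal environmental-sensor logger for an Arduino-class
// controller: read one analog sensor once per second and stream timestamped
// readings over USB serial for capture on a host computer.
const int SENSOR_PIN = A0;            // illustrative analog input choice
const unsigned long PERIOD_MS = 1000; // illustrative sampling period

void setup() {
    Serial.begin(9600);               // open the serial link to the host
}

void loop() {
    int raw = analogRead(SENSOR_PIN);         // 10-bit ADC count, 0-1023
    float volts = raw * (5.0 / 1023.0);       // assuming a 5 V reference
    Serial.print(millis());                   // milliseconds since reset
    Serial.print(",");
    Serial.println(volts, 3);                 // CSV line: time,voltage
    delay(PERIOD_MS);
}
```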
Status and future plans for open source QuickPIC
NASA Astrophysics Data System (ADS)
An, Weiming; Decyk, Viktor; Mori, Warren
2017-10-01
QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wakefield can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be thousands of times faster than a conventional PIC code when simulating a PBA. It uses a hybrid MPI/OpenMP parallel algorithm, which can run on anything from a laptop to the largest supercomputers. The open-source QuickPIC is an object-oriented program with high-level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
Mohanty, Sujit Kumar; Yu, Chi-Li; Das, Shuvendu; Louie, Tai Man; Gakhar, Lokesh
2012-01-01
The molecular basis of the ability of bacteria to live on caffeine via the C-8 oxidation pathway is unknown. The first step of this pathway, caffeine to trimethyluric acid (TMU), has been attributed to poorly characterized caffeine oxidases and a novel quinone-dependent caffeine dehydrogenase. Here, we report the detailed characterization of the second enzyme, a novel NADH-dependent trimethyluric acid monooxygenase (TmuM), a flavoprotein that catalyzes the conversion of TMU to 1,3,7-trimethyl-5-hydroxyisourate (TM-HIU). This product spontaneously decomposes to racemic 3,6,8-trimethylallantoin (TMA). TmuM prefers trimethyluric acids and, to a lesser extent, dimethyluric acids as substrates, but it exhibits no activity on uric acid. Homology models of TmuM against the uric acid oxidase HpxO (which catalyzes the conversion of uric acid to 5-hydroxyisourate) reveal a much bigger and more hydrophobic cavity to accommodate the larger substrates. Genes involved in the caffeine C-8 oxidation pathway are located in a 25.2-kb genomic DNA fragment of CBB1, including cdhABC (coding for caffeine dehydrogenase) and tmuM (coding for TmuM). Comparison of this gene cluster to the uric acid-metabolizing gene cluster and pathway of Klebsiella pneumoniae revealed two major open reading frames coding for the conversion of TM-HIU to S-(+)-trimethylallantoin [S-(+)-TMA]. The first one, designated tmuH, codes for a putative TM-HIU hydrolase, which catalyzes the conversion of TM-HIU to 3,6,8-trimethyl-2-oxo-4-hydroxy-4-carboxy-5-ureidoimidazoline (TM-OHCU). The second one, designated tmuD, codes for a putative TM-OHCU decarboxylase which catalyzes the conversion of TM-OHCU to S-(+)-TMA. Based on a combination of enzymology and gene analysis, a new degradative pathway for caffeine has been proposed via TMU, TM-HIU and TM-OHCU to S-(+)-TMA. PMID:22609920
Vascular tone pathway polymorphisms in relation to primary open-angle glaucoma.
Kang, J H; Loomis, S J; Yaspan, B L; Bailey, J C; Weinreb, R N; Lee, R K; Lichter, P R; Budenz, D L; Liu, Y; Realini, T; Gaasterland, D; Gaasterland, T; Friedman, D S; McCarty, C A; Moroi, S E; Olson, L; Schuman, J S; Singh, K; Vollrath, D; Wollstein, G; Zack, D J; Brilliant, M; Sit, A J; Christen, W G; Fingert, J; Forman, J P; Buys, E S; Kraft, P; Zhang, K; Allingham, R R; Pericak-Vance, M A; Richards, J E; Hauser, M A; Haines, J L; Wiggs, J L; Pasquale, L R
2014-06-01
Vascular perfusion may be impaired in primary open-angle glaucoma (POAG); thus, we evaluated a panel of markers in vascular tone-regulating genes in relation to POAG. We used Illumina 660W-Quad array genotype data and pooled P-values from 3108 POAG cases and 3430 controls from the combined National Eye Institute Glaucoma Human Genetics Collaboration consortium and Glaucoma Genes and Environment studies. Using information from previous literature and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, we compiled single-nucleotide polymorphisms (SNPs) in 186 vascular tone-regulating genes. We used the 'Pathway Analysis by Randomization Incorporating Structure' analysis software, which performed 1000 permutations to compare the overall pathway and selected genes with comparable randomly generated pathways and genes in their association with POAG. The vascular tone pathway was not associated with POAG overall or with POAG subtypes defined by the type of visual field loss (early paracentral loss (n=224 cases) or only peripheral loss (n=993 cases)) (permuted P≥0.20). In gene-based analyses, eight genes were associated with POAG overall at permuted P<0.001: PRKAA1, CAV1, ITPR3, EDNRB, GNB2, DNM2, HFE, and MYL9. Notably, six of these eight (the first six listed) code for factors involved in endothelial nitric oxide synthase activity, and three of these six (CAV1, ITPR3, and EDNRB) were also associated with early paracentral loss at P<0.001, whereas none of the six genes reached P<0.001 for peripheral loss only. Although the assembled vascular tone SNP set was not associated with POAG, genes that code for local factors involved in setting vascular tone were associated with POAG.
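The permutation logic described above can be sketched as follows; this is a minimal illustration of comparing a pathway against random gene sets of equal size, with the scoring rule and names being our assumptions rather than the cited software's actual algorithm:

```python
import numpy as np

def pathway_permutation_p(gene_pvals, pathway_genes, n_perm=1000, rng=None):
    """Compare a pathway's association signal to random gene sets of equal size.

    gene_pvals: dict mapping gene -> per-gene association p-value
    pathway_genes: genes in the candidate pathway
    Returns the fraction of random sets scoring at least as well (permuted p).
    """
    rng = rng or np.random.default_rng(0)
    all_genes = list(gene_pvals)
    # Illustrative pathway statistic: sum of -log10(p) over member genes.
    score = lambda genes: sum(-np.log10(gene_pvals[g]) for g in genes)
    observed = score(pathway_genes)
    k = len(pathway_genes)
    hits = sum(
        score(rng.choice(all_genes, size=k, replace=False)) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)  # add-one to avoid a permuted p of zero
```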
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFLOPS on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we report initial performance results for OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
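The column/slice decomposition can be sketched at the top level as follows, with plain Python/NumPy standing in for the Cell-specific DMA, SIMD, and SPE scheduling, and with a placeholder in place of the real MHD stencil update:

```python
import numpy as np

def process_subdomain(u, col_shape=(8, 8)):
    """Sweep one processor's 3D subdomain the way the abstract describes:
    split the local grid into small 2D columns, then advance each column
    slice by slice along z, mimicking a per-SPE limited working set."""
    nx, ny, nz = u.shape
    cx, cy = col_shape
    for i0 in range(0, nx, cx):            # loop over columns ...
        for j0 in range(0, ny, cy):
            col = u[i0:i0 + cx, j0:j0 + cy, :]
            for k in range(nz):            # ... and over z-slices per column
                # Placeholder for the MHD stencil update on one slice;
                # a real solver would also read neighbouring slices.
                col[:, :, k] *= 0.5
    return u

u = np.ones((32, 32, 64))
process_subdomain(u)
```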
Degenerate quantum codes and the quantum Hamming bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarvepalli, Pradeep; Klappenecker, Andreas
2010-03-15
The parameters of a nondegenerate quantum code must obey the Hamming bound. An important open problem in quantum coding theory is whether the parameters of a degenerate quantum code can violate this bound for nondegenerate quantum codes. In this article we show that Calderbank-Shor-Steane (CSS) codes, over a prime power alphabet q ≥ 5, cannot beat the quantum Hamming bound. We prove a quantum version of the Griesmer bound for the CSS codes, which allows us to strengthen Rains' bound that an [[n,k,d
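For reference, the quantum Hamming bound invoked here takes the standard form (our transcription, since the abstract is truncated): a nondegenerate [[n,k,d]] code over a q-dimensional alphabet must satisfy

\[ \sum_{j=0}^{\lfloor (d-1)/2 \rfloor} \binom{n}{j} \left( q^2 - 1 \right)^j \;\le\; q^{\,n-k}. \]

The open problem the abstract describes is whether degeneracy lets a code evade this counting argument.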
Dere, E; Zheng-Fischhöfer, Q; Viggiano, D; Gironi Carnevale, U A; Ruocco, L A; Zlomuzica, A; Schnichels, M; Willecke, K; Huston, J P; Sadile, A G
2008-05-02
Neuronal gap junctions in the brain, providing intercellular electrotonic signal transfer, have been implicated in physiological and behavioral correlates of learning and memory. In connexin31.1 (Cx31.1) knockout (KO) mice the coding region of the Cx31.1 gene was replaced by a LacZ reporter gene. We investigated the impact of Cx31.1 deficiency on open-field exploration, the behavioral response to an odor, non-selective attention, learning and memory performance, and the levels of memory-related proteins in the hippocampus, striatum and the piriform cortex. In terms of behavior, the deletion of the Cx31.1 coding DNA in the mouse led to increased exploratory behaviors in a novel environment, and impaired one-trial object recognition at all delays tested. Despite strong Cx31.1 expression in the peripheral and central olfactory system, Cx31.1 KO mice exhibited normal behavioral responses to an odor. We found increased levels of acetylcholine esterase (AChE) and cAMP response element-binding protein (CREB) in the striatum of Cx31.1 KO mice. In the piriform cortex the Cx31.1 KO mice had an increased heterogeneity of CREB expression among neurons. In conclusion, gap-junctions featuring the Cx31.1 protein might be involved in open-field exploration as well as object memory and modulate levels of AChE and CREB in the striatum and piriform cortex.
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems are limited in their ability to integrate with modern software and hardware and to leverage parallel computing, which has left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premiere tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway, and preliminary results will be presented.
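A minimal sketch of the two conversion ingredients named above - a Numba-compiled kernel and HDF5 output - using an invented toy routing loop, not HSPF's actual algorithms:

```python
import h5py
import numpy as np
from numba import njit

@njit
def route_storage(inflow, k, dt):
    """Toy linear-reservoir time-stepping loop: the kind of kernel that is
    slow in pure Python but near-native speed once Numba JIT-compiles it."""
    storage = np.zeros(inflow.size)
    s = 0.0
    for t in range(inflow.size):
        s += dt * (inflow[t] - k * s)   # dS/dt = I - kS, explicit Euler
        storage[t] = s
    return storage

inflow = np.random.rand(10_000)
storage = route_storage(inflow, 0.1, 1.0)

# Store model input and output in HDF5 instead of legacy binary formats.
with h5py.File("model_run.h5", "w") as f:
    f.create_dataset("inflow", data=inflow)
    f.create_dataset("storage", data=storage)
```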
Biosignal PI, an Affordable Open-Source ECG and Respiration Measurement System
Abtahi, Farhad; Snäll, Jonatan; Aslamy, Benjamin; Abtahi, Shirin; Seoane, Fernando; Lindecrantz, Kaj
2015-01-01
Biomedical pilot projects, e.g., telemedicine, homecare, and animal and human trials, usually involve several physiological measurements. Technical development for these projects is time consuming and, in particular, costly. A versatile but affordable biosignal measurement platform can help to reduce time and risk while keeping the focus on the important goal and making efficient use of resources. In this work, an affordable and open-source platform for the development of physiological measurement systems is proposed. As a first step, an 8-12-lead electrocardiogram (ECG) and respiration monitoring system has been developed. Chips based on iCoupler technology have been used to achieve the electrical isolation required by IEC 60601 for patient safety. The results show the potential of this platform as a base for prototyping compact, affordable, and medically safe measurement systems. Further work involves both hardware and software development of additional modules. These modules may require new front-ends for other biosignals, or may simply collect data wirelessly, e.g., through Bluetooth, from devices such as blood pressure monitors, scales, bioimpedance spectrometers, or blood glucose meters. All design and development documents, files, and source code will be available for non-commercial use through the project website, BiosignalPI.org. PMID:25545268
Teaching communication with ethnic minority patients: ten recommendations.
Seeleman, Conny; Selleger, Veronica; Essink-Bot, Marie-Louise; Bonke, Benno
2011-01-01
Culturally competent communication is indispensable for medical practice in an ethnically diverse society. This article offers recommendations to teach such communication skills based on the experiences of members of a Dutch NMVO Special Interest Group on 'Diversity'. A questionnaire with three open-ended questions on recommendations for training in culturally competent communication was sent to all members (n = 35). Returned questionnaires (n = 23) were analysed qualitatively with a thematic coding framework based on educational themes emerging from the data. All students need to be educated in culturally competent communication. Teachers should stimulate awareness of personal biases and an open attitude. Teach the three core communication skills, listening, exploring and checking, and offer practice with a professional interpreter. Knowledge content should focus on mechanisms relevant to various ethnic groups. Offer students a variety of experiences in a safe environment. All involved should be aware that stereotyping is a pitfall. Training in communication skills for consultation with ethnic minority patients cannot be separated from teaching issues of awareness and knowledge. The shared views on the content of these communication trainings are in line with general patient-centred approaches. The development of proper training in this field demands specific efforts of those involved.
The Forgotten Women of Pre-Code: An Annotated Filmography and Bibliography
ERIC Educational Resources Information Center
Tang, Jennifer
2010-01-01
In recent years, "pre-code" films have been re-discovered and applauded by film scholars and feminists. The term refers to the period between 1929 and 1934 when many Hollywood studios openly disregarded the censorship restrictions of the Hays Code. Named after censorship czar William H. Hays, the Code forbade nudity, cursing, sexual innuendo,…
Code Analysis and Refactoring with Clang Tools, Version 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, Timothy M.
2016-12-23
Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.
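The examples in this package are C++ tools built on Clang's libraries; as a loose analogue in scripting form, the sketch below uses Clang's Python bindings (clang.cindex, assumed installed with a matching libclang) to list where each variable is referenced in a source file:

```python
import clang.cindex
from clang.cindex import CursorKind

index = clang.cindex.Index.create()
tu = index.parse("example.c", args=["-std=c11"])  # placeholder file name

# Walk the AST and record every reference to a declared entity.
uses = {}
for node in tu.cursor.walk_preorder():
    if node.kind == CursorKind.DECL_REF_EXPR and node.referenced is not None:
        uses.setdefault(node.referenced.spelling, []).append(node.location.line)

for name, lines in uses.items():
    print(f"{name} referenced on lines {sorted(lines)}")
```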
Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling
Lareo, Angel; Forlim, Caroline G.; Pinto, Reynaldo D.; Varona, Pablo; Rodriguez, Francisco de Borja
2016-01-01
Closed-loop activity-dependent stimulation is a powerful methodology to assess information processing in biological systems. In this context, the development of novel protocols, their implementation in bioinformatics toolboxes and their application to different description levels open up a wide range of possibilities in the study of biological systems. We developed a methodology for studying biological signals representing them as temporal sequences of binary events. A specific sequence of these events (code) is chosen to deliver a predefined stimulation in a closed-loop manner. The response to this code-driven stimulation can be used to characterize the system. This methodology was implemented in a real time toolbox and tested in the context of electric fish signaling. We show that while there are codes that evoke a response that cannot be distinguished from a control recording without stimulation, other codes evoke a characteristic distinct response. We also compare the code-driven response to open-loop stimulation. The discussed experiments validate the proposed methodology and the software toolbox. PMID:27766078
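A toy sketch of the core loop described above: binarize a signal into temporal events and fire a stimulus whenever a chosen binary code appears. All names, thresholds, and the event encoding are illustrative assumptions, not the published toolbox's API:

```python
import numpy as np

def binarize(signal, threshold):
    """Represent the signal as a temporal sequence of binary events:
    1 where the signal exceeds the threshold in a time bin, else 0."""
    return (signal > threshold).astype(int)

def code_driven_stimulation(events, code, stimulate):
    """Scan the event sequence; when the target code occurs, trigger the
    predefined stimulation (closed-loop, activity-dependent)."""
    n = len(code)
    for t in range(len(events) - n + 1):
        if np.array_equal(events[t:t + n], code):
            stimulate(t + n)  # deliver the stimulus right after the code

events = binarize(np.random.randn(1000), threshold=1.5)
code_driven_stimulation(events, code=np.array([1, 0, 1]),
                        stimulate=lambda t: print(f"stimulus at bin {t}"))
```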
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
TEA: A Code Calculating Thermochemical Equilibrium Abundances
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
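For reference, the White et al. formulation the abstract cites minimizes the dimensionless Gibbs function (our transcription of the standard method, not quoted from the paper):

\[ \frac{G(n)}{RT} \;=\; \sum_{i=1}^{m} n_i \left[ \frac{g_i^{\circ}(T)}{RT} + \ln P + \ln \frac{n_i}{N} \right], \qquad \text{subject to } \sum_{i=1}^{m} a_{ij}\, n_i = b_j, \]

where \(n_i\) are the molar abundances of the \(m\) species, \(N = \sum_i n_i\), \(a_{ij}\) is the number of atoms of element \(j\) in species \(i\), and \(b_j\) are the elemental abundances; the constraints are adjoined with Lagrange multipliers and the stationary point is found iteratively.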
ERIC Educational Resources Information Center
Arffman, Inga
2016-01-01
Open-ended (OE) items are widely used to gather data on student performance in international achievement studies. However, several factors may threaten validity when using such items. This study examined Finnish coders' opinions about threats to validity when coding responses to OE items in the PISA 2012 problem-solving test. A total of 6…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas; Hamilton, Steven; Slattery, Stuart
Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is production code with a substantial user base. Furthermore, Exnihilo is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated code base that is open-source, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.
Software Writing Skills for Your Research - Lessons Learned from Workshops in the Geosciences
NASA Astrophysics Data System (ADS)
Hammitzsch, Martin
2016-04-01
Findings presented in scientific papers are based on data and software. Once in a while they come along with data - but not commonly with software. However, the software used to gain findings plays a crucial role in the scientific work. Nevertheless, software is rarely seen as publishable. Thus researchers may not be able to reproduce the findings without the software, which is in conflict with the principle of reproducibility in the sciences. For both the writing of publishable software and the reproducibility issue, the quality of software is of utmost importance. For many programming scientists the treatment of source code, e.g. with code design, version control, documentation, and testing, is associated with additional work that is not covered in the primary research task. This includes the adoption of processes following the software development life cycle. However, the adoption of software engineering rules and best practices has to be recognized and accepted as part of the scientific performance. Most scientists have little incentive to improve code and do not publish code because software engineering habits are rarely practised by researchers or students. Software engineering skills are not passed on to successors the way paper-writing skills are. Thus it is often felt that the software or code produced is not publishable. The quality of software and its source code has a decisive influence on the quality of research results obtained and their traceability. So establishing best practices from software engineering to serve scientific needs is crucial for the success of scientific software. Even though scientists use existing software and code, i.e., from open source software repositories, only few contribute their code back into the repositories. So writing and opening code for Open Science means that subsequent users are able to run the code, e.g. by the provision of sufficient documentation, sample data sets, tests and comments, which in turn can be proven by adequate and qualified reviews. This assumes that scientists learn to write and release code and software as they learn to write and publish papers. Having this in mind, software could be valued and assessed as a contribution to science. But this requires the relevant skills that can be passed to colleagues and followers. Therefore, the GFZ German Research Centre for Geosciences performed three workshops in 2015 to address the passing of software writing skills to young scientists, the next generation of researchers in the Earth, planetary and space sciences. Experiences in running these workshops and the lessons learned will be summarized in this presentation. The workshops received support and funding from Software Carpentry, a volunteer organization whose goal is to make scientists more productive, and their work more reliable, by teaching them basic computing skills, and from FOSTER (Facilitate Open Science Training for European Research), a two-year, EU-funded (FP7) project whose goal is to produce a Europe-wide training programme that will help to incorporate Open Access approaches into existing research methodologies and to integrate Open Science principles and practice into the current research workflow by targeting young researchers and other stakeholders.
Computational Fluids Domain Reduction to a Simplified Fluid Network
2012-04-19
readily available read/write software library. Code components from the open source projects OpenFOAM and ParaView were explored for their adaptability to the project. Both ParaView and OpenFOAM read polyhedral mesh. OpenFOAM does not read results data. ParaView allows for user "filters
nRC: non-coding RNA Classifier based on structural features.
Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso
2017-01-01
Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation in many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms of regulative processes, together with the development of high-throughput technologies, has required the help of bioinformatics tools to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes. We obtained classification scores using the most common statistical measures; in particular, we reach accuracy and sensitivity scores of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.
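A minimal sketch of the classifier shape described above - a convolutional network over encoded secondary-structure features with 13 output classes. The layer sizes and feature encoding are placeholders, not nRC's actual architecture:

```python
import tensorflow as tf

NUM_CLASSES = 13          # ncRNA classes, as in the abstract
SEQ_LEN, FEAT = 500, 8    # placeholder: encoded secondary-structure features

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(SEQ_LEN, FEAT)),
    tf.keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```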
Lucyk, Kelsey; Tang, Karen; Quan, Hude
2017-11-22
Administrative health data are increasingly used for research and surveillance to inform decision-making because of their large sample sizes, geographic coverage, comprehensiveness, and possibility for longitudinal follow-up. Within Canadian provinces, individuals are assigned unique personal health numbers that allow for linkage of administrative health records in that jurisdiction. It is therefore necessary to ensure that these data are of high quality, and that chart information is accurately coded to meet this end. Our objective is to explore the potential barriers to high-quality data coding through qualitative inquiry into the roles and responsibilities of medical chart coders. We conducted semi-structured interviews with 28 medical chart coders from Alberta, Canada. We used thematic analysis and open-coded each transcript to understand the process of administrative health data generation and identify barriers to its quality. The process of generating administrative health data is highly complex and involves a diverse workforce. As such, there are multiple points in this process that introduce challenges for high-quality data. For coders, the main barriers to data quality occurred around chart documentation, variability in the interpretation of chart information, and high quota expectations. This study illustrates the complex nature of barriers to high-quality coding in the context of administrative data generation. The findings from this study may be of use to data users, researchers, and decision-makers who wish to better understand the limitations of their data or pursue interventions to improve data quality.
The openEHR Java reference implementation project.
Chen, Rong; Klein, Gunnar
2007-01-01
The openEHR foundation has developed an innovative design for interoperable and future-proof Electronic Health Record (EHR) systems based on a dual model approach with a stable reference information model complemented by archetypes for specific clinical purposes. A team from Sweden has implemented all the stable specifications in the Java programming language and donated the source code to the openEHR foundation. It was adopted as the openEHR Java Reference Implementation in March 2005 and released under open source licenses. This encourages early EHR implementation projects around the world and a number of groups have already started to use this code. The early Java implementation experience has also led to the publication of the openEHR Java Implementation Technology Specification. A number of design changes to the specifications and important minor corrections have been directly initiated by the implementation project over the last two years. The Java Implementation has been important for the validation and improvement of the openEHR design specifications and provides building blocks for future EHR systems.
Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.
2013-12-01
Models of modern hydrologic systems can be complex and involve a variety of operators of varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher-order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus, and a variety of numerical methods were compared to hand-coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high-quality code for solving existing and evolving computational science models.
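For reference, Richards' equation named above is, in its standard mixed form (our transcription),

\[ \frac{\partial \theta(\psi)}{\partial t} \;=\; \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right], \]

where \(\psi\) is the pressure head, \(\theta\) the volumetric water content, \(K\) the unsaturated hydraulic conductivity, and \(z\) the elevation; its nonlinear, degenerate character is what makes it a good stress test for the discontinuous Galerkin and time-integration machinery mentioned in the abstract.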
Subsurface Transport Over Multiple Phases Demonstration Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-01-05
The STOMP simulator is a suite of numerical simulators developed by Pacific Northwest National Laboratory for addressing problems involving coupled multifluid hydrologic, thermal, geochemical, and geomechanical processes in the subsurface. The simulator has been applied to problems concerning environmental remediation, environmental stewardship, carbon sequestration, conventional petroleum production, and the production of unconventional hydrocarbon fuels. The simulator is copyrighted by Battelle Memorial Institute, and is available outside of PNNL via use agreements. To promote the open exchange of scientific ideas the simulator is provided as source code. A demonstration version of the simulator has been developed, which will provide potential new users with an executable (not source code) implementation of the software royalty free. Demonstration versions will be offered via the STOMP website for all currently available operational modes of the simulator. The demonstration versions of the simulator will be configured with the direct banded linear system solver and have a limit of 1,000 active grid cells. This will provide potential new users with an opportunity to apply the code to simple problems, including many of the STOMP short course problems, without having to pay a license fee. Users will be required to register on the STOMP website prior to receiving an executable.
Turbulence Modeling: Progress and Future Outlook
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.; Huang, George P.
1996-01-01
Progress in the development of the hierarchy of turbulence models for Reynolds-averaged Navier-Stokes codes used in aerodynamic applications is reviewed. Steady progress is demonstrated, but transfer of the modeling technology has not kept pace with the development and demands of the computational fluid dynamics (CFD) tools. An examination of the process of model development leads to recommendations for a mid-course correction involving close coordination between modelers, CFD developers, and application engineers. In instances where the old process is changed and cooperation enhanced, timely transfer is realized. A turbulence modeling information database is proposed to refine the process and open it to greater participation among modeling and CFD practitioners.
Ribosome profiling reveals the what, when, where and how of protein synthesis.
Brar, Gloria A; Weissman, Jonathan S
2015-11-01
Ribosome profiling, which involves the deep sequencing of ribosome-protected mRNA fragments, is a powerful tool for globally monitoring translation in vivo. The method has facilitated discovery of the regulation of gene expression underlying diverse and complex biological processes, of important aspects of the mechanism of protein synthesis, and even of new proteins, by providing a systematic approach for experimental annotation of coding regions. Here, we introduce the methodology of ribosome profiling and discuss examples in which this approach has been a key factor in guiding biological discovery, including its prominent role in identifying thousands of novel translated short open reading frames and alternative translation products.
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow, and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g., due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
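A schematic of the adaptive switching logic described above; all method names and the criterion are invented stand-ins, since the real solver couples FEM systems, Lagrange-multiplier fault constraints, and friction laws:

```python
def hybrid_solve(model, t_end, dt_static, dt_dynamic):
    """Alternate between a cheap implicit quasi-static solver and an
    explicit dynamic solver: switch when a failure criterion on the fault
    is met, and switch back once slip has arrested (schematic only)."""
    t, state = 0.0, model.initial_state()
    dynamic = False
    while t < t_end:
        if dynamic:
            state = model.explicit_step(state, dt_dynamic)  # rupture phase
            t += dt_dynamic
            dynamic = not model.slip_arrested(state)
        else:
            state = model.implicit_step(state, dt_static)   # slow loading
            t += dt_static
            dynamic = model.failure_criterion_met(state)    # e.g. Coulomb
    return state
```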
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.
Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland
2011-04-08
In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.
X-ray backscatter radiography with lower open fraction coded masks
NASA Astrophysics Data System (ADS)
Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David
2017-09-01
Single-sided radiographic imaging would find great utility in medical, aerospace and security applications. While coded apertures can be used to form such an image from backscattered X-rays, they suffer from near-field limitations that introduce noise. Several theoretical studies have indicated that, for an extended source, the image's signal-to-noise ratio may be optimised by using a low open fraction (<0.5) mask. However, few experimental results have been published for such low open fraction patterns, and details of their formulation are often unavailable or ambiguous. In this paper we address this process for two types of low open fraction mask, the dilute URA and the Singer set array. For the dilute URA, the procedure for producing multiple 2D array patterns from given 1D binary sequences (Barker codes) is explained. Their point spread functions are calculated and their imaging properties are critically reviewed. These results are then compared to those from the Singer set, and experimental exposures are presented for both types of pattern; their prospects for near-field imaging are discussed.
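A small numerical sketch of the point-spread-function calculation mentioned above: correlate a binary mask with its decoding array. The generic balanced decoder below is our illustrative assumption; the URA and Singer-set constructions in the paper are more specific:

```python
import numpy as np
from scipy.signal import correlate2d

def psf(mask):
    """Point spread function of a coded aperture under correlation
    decoding: delta-like for a good pattern, noisy sidelobes otherwise."""
    decoder = 2.0 * mask - 1.0            # generic balanced decoding array
    return correlate2d(mask, decoder, mode="same", boundary="wrap")

rng = np.random.default_rng(1)
open_fraction = 0.3                        # low open fraction, as studied
mask = (rng.random((31, 31)) < open_fraction).astype(float)
print(psf(mask).round(1))
```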
PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems
Stefanini, Fabio; Neftci, Emre O.; Sheik, Sadique; Indiveri, Giacomo
2014-01-01
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS. PMID:25232314
Fellner, Lea; Simon, Svenja; Scherling, Christian; Witting, Michael; Schober, Steffen; Polte, Christine; Schmitt-Kopplin, Philippe; Keim, Daniel A; Scherer, Siegfried; Neuhaus, Klaus
2015-12-18
Gene duplication is believed to be the classical way to form novel genes, but overprinting may be an important alternative. Overprinting allows entirely novel proteins to evolve de novo, i.e., formerly non-coding open reading frames within functional genes become expressed. Only three cases have been described for Escherichia coli. Here, a fourth example is presented. RNA sequencing revealed an open reading frame weakly transcribed in cow dung, coding for 101 residues and embedded completely in the -2 reading frame of citC in enterohemorrhagic E. coli. This gene is designated novel overlapping gene, nog1. The promoter region fused to gfp exhibits specific activities, and 5' rapid amplification of cDNA ends indicated the transcriptional start 40 bp upstream of the start codon. Translation of nog1 was strand-specifically arrested by a nonsense mutation that is silent in citC. This Nog1 mutant showed a phenotype in competitive growth against wild type in the presence of MgCl2. Small differences in metabolite concentrations were also found. Bioinformatic analyses propose Nog1 to be inner membrane-bound and to possess at least one membrane-spanning domain. A phylogenetic analysis suggests that the orphan gene nog1 arose by overprinting after Escherichia/Shigella separated from the other γ-proteobacteria. Since nog1 is of recent origin, non-essential, short, weakly expressed and only marginally involved in E. coli's central metabolism, we propose that this gene is in an initial stage of evolution. While we present specific experimental evidence for the existence of a fourth overlapping gene in enterohemorrhagic E. coli, we believe that this may be an initial finding only and overlapping genes in bacteria may be more common than is currently assumed by microbiologists.
pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.
Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars
2014-01-01
pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence).
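A short usage sketch of the file-access and signal-processing layers described above, using standard pyOpenMS calls as we understand them; the file name is a placeholder:

```python
import pyopenms as oms

# Load raw spectra from an mzML file into an in-memory experiment.
exp = oms.MSExperiment()
oms.MzMLFile().load("sample.mzML", exp)

# Basic signal processing: Gaussian smoothing of every spectrum.
gauss = oms.GaussFilter()
gauss.filterExperiment(exp)

print(f"{exp.getNrSpectra()} spectra smoothed")
```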
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
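The OpenMP-style loop parallelism described above, shown here in Python with Numba's prange as a stand-alone analogy; DIRA itself was parallelised in C with OpenMP/OpenCL, and this toy kernel is our invention:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def smear(sino, out):
    # Each output row is independent, so the outer loop's iterations can
    # run on separate threads - the same pattern an OpenMP "parallel for"
    # expresses in C. The body is a toy stand-in for a reconstruction step.
    for i in prange(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = sino[:, (i + j) % sino.shape[1]].sum()

sino = np.random.rand(180, 256)
image = np.empty((256, 256))
smear(sino, image)
```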
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Lutes, Robert G.; Katipamula, Srinivas
This document is a user's guide for OpenEIS, a software package designed to provide standard methods for authoring, sharing, testing, using, and improving algorithms for operational building energy efficiency.
Regulatory variation: an emerging vantage point for cancer biology.
Li, Luolan; Lorzadeh, Alireza; Hirst, Martin
2014-01-01
Transcriptional regulation involves complex and interdependent interactions of noncoding and coding regions of the genome with proteins that interact with and modify them. Genetic variation/mutation in coding and noncoding regions of the genome can drive aberrant transcription and disease. In spite of accounting for nearly 98% of the genome, comparatively little is known about the contribution of noncoding DNA elements to disease. Genome-wide association studies of complex human diseases including cancer have revealed enrichment for variants in the noncoding genome. A striking finding of recent cancer genome re-sequencing efforts has been the previously underappreciated frequency of mutations in epigenetic modifiers across a wide range of cancer types. Taken together these results point to the importance of dysregulation of transcriptional regulatory control in the genesis of cancer. Powered by recent technological advancements in functional genomic profiling, exploration of normal and transformed regulatory networks will provide novel insight into the initiation and progression of cancer and open new windows to future prognostic and diagnostic tools.
IB2d: a Python and MATLAB implementation of the immersed boundary method.
Battista, Nicholas A; Strickland, W Christopher; Miller, Laura A
2017-03-29
The development of fluid-structure interaction (FSI) software involves trade-offs between ease of use, generality, performance, and cost. Typically there are large learning curves when using low-level software to model the interaction of an elastic structure immersed in a uniform-density fluid. Many existing codes are not publicly available, and the commercial software that exists usually requires expensive licenses and may not be as robust or allow the necessary flexibility that in-house codes can provide. We present an open source immersed boundary software package, IB2d, with full implementations in both MATLAB and Python, which is capable of running a vast range of biomechanics models and is accessible to scientists who have experience in high-level programming environments. IB2d contains multiple options for constructing material properties of the fiber structure, as well as the advection-diffusion of a chemical gradient, muscle mechanics models, and artificial forcing to drive boundaries with a preferred motion.
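For reference, the immersed boundary formulation implemented by such codes couples an incompressible fluid to a Lagrangian structure (standard Peskin form; our transcription):

\[ \rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \mu \Delta \mathbf{u} + \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0, \]
\[ \mathbf{f}(\mathbf{x},t) = \int \mathbf{F}(s,t)\, \delta\big(\mathbf{x} - \mathbf{X}(s,t)\big)\, ds, \qquad \frac{\partial \mathbf{X}}{\partial t}(s,t) = \mathbf{u}\big(\mathbf{X}(s,t), t\big), \]

where the elastic force density \(\mathbf{F}\) is spread from the structure points \(\mathbf{X}(s,t)\) onto the fluid grid through a smoothed delta function, and the structure is advected with the local fluid velocity.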
AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8
2015-08-15
We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino-matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.
Understanding Lustre Internals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Feiyi; Oral, H Sarp; Shipman, Galen M
2009-04-01
Lustre was initiated and funded, almost a decade ago, by the U.S. Department of Energy (DoE) Office of Science and National Nuclear Security Administration laboratories to address the need for an open source, highly scalable, high-performance parallel filesystem on present and future supercomputing platforms. Throughout the last decade, it was deployed over numerous medium-to-large-scale supercomputing platforms and clusters, and it performed and met the expectations of the Lustre user community. As it stands at the time of writing this document, according to the Top500 list, 15 of the top 30 supercomputers in the world use the Lustre filesystem. This report aims to present a streamlined overview of how Lustre works internally at a reasonable level of detail, including the relevant data structures, APIs, protocols and algorithms involved, for the Lustre version 1.6 source code base. More importantly, it tries to explain how various components interconnect with each other and function as a system. Portions of this report are based on discussions with Oak Ridge National Laboratory Lustre Center of Excellence team members, and portions of it are based on our own understanding of how the code works. We, as the author team, bear all responsibility for any errors and omissions in this document. We can only hope it helps current and future Lustre users and Lustre code developers as much as it helped us understand the Lustre source code and its internal workings.
Position coding effects in a 2D scenario: the case of musical notation.
Perea, Manuel; García-Chamorro, Cristina; Centelles, Arnau; Jiménez, María
2013-07-01
How does the cognitive system encode the location of objects in a visual scene? In the past decade, this question has attracted much attention in the field of visual-word recognition (e.g., "jugde" is perceptually very close to "judge"). Letter transposition effects have been explained in terms of perceptual uncertainty or shared "open bigrams". In the present study, we focus on note position coding in music reading (i.e., a 2D scenario). The usual way to display music is the staff (i.e., a set of 5 horizontal lines and their resultant 4 spaces). When reading musical notation, it is critical to identify not only each note (temporal duration), but also its pitch (y-axis) and its temporal sequence (x-axis). To examine note position coding, we employed a same-different task in which two briefly and consecutively presented staves contained four notes. The experiment was conducted with experts (musicians) and non-experts (non-musicians). For the "different" trials, the critical conditions involved staves in which two internal notes were switched vertically, horizontally, or fully transposed, as well as the appropriate control conditions. Results revealed that note position coding was only approximate at the early stages of processing and that this encoding process was modulated by expertise. We examine the implications of these findings for models of object position encoding.
Simulation of hydrostatic water level measuring system for pressure vessels with the ATHLET-code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampel, R.; Vandreier, B.; Kaestner, W.
1996-11-01
The static and dynamic behavior of measuring systems determines the value indicated by the measuring systems in relation to the true operating conditions. This paper demonstrates the necessity of including the behavior of measuring systems in accident analysis with the thermohydraulic code ATHLET (developed by GRS Germany), using the example of hydrostatic water level measurement for horizontal steam generators at NPPs (VVER). A comparison vessel for the level measuring system with high sensitivity and a limited range of measurement (narrow-range level measuring system) was modelled using ATHLET components, and the function of the module was verified. Good correspondence (maximum deviation 3%) between the measured narrow-range water level and that calculated by the module was obtained in a post-calculation of a measured operational transient at a VVER NPP. The research carried out was sponsored by the Federal Ministry for Research and Technology within the project "Basic research of process and system behaviour of NPP, control technique for accident management" (project number 150 0855/7) and the project RS 978. The research work appertains to the theoretical and experimental work of the institute "Institut fuer Prozeßtechnik, Prozeßautomatisierung und Meßtechnik (IPM)" on accident analysis and accident management.
Temporal motifs reveal collaboration patterns in online task-oriented networks
NASA Astrophysics Data System (ADS)
Xuan, Qi; Fang, Huiting; Fu, Chenbo; Filkov, Vladimir
2015-05-01
Real networks feature layers of interactions and complexity. In them, different types of nodes can interact with each other via a variety of events. Examples of this complexity are task-oriented social networks (TOSNs), where teams of people share tasks towards creating a quality artifact, such as academic research papers or software development in commercial or open source environments. Accomplishing those tasks involves both work, e.g., writing the papers or code, and communication, to discuss and coordinate. Taking into account the different types of activities and how they alternate over time can result in a much more precise understanding of TOSN behaviors and outcomes. That calls for modeling techniques that can accommodate both node and link heterogeneity as well as temporal change. In this paper, we report on methodology for finding temporal motifs in TOSNs, limited to a system of two people and an artifact. We apply the methods to publicly available data of TOSNs from 31 Open Source Software projects. We find that these temporal motifs are enriched in the observed data. When applied to software development outcomes, temporal motifs reveal a distinct dependency between collaboration and communication in the code writing process. Moreover, we show that models based on temporal motifs can be used to more precisely relate both individual developer centrality and team cohesion to programmer productivity than models based on aggregated TOSNs.
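As a loose illustration of the counting idea (not the authors' algorithm), the sketch below tallies small temporal motifs in a time-ordered event log for a two-person-plus-artifact system; the event log, the window length, and the consecutive-events simplification are all invented for the example.

```python
from collections import Counter

def temporal_motifs(events, k=3, window=3600):
    """Count k-event temporal motifs in a time-sorted event log.

    events: list of (timestamp, actor, kind) tuples, where kind is e.g.
    "work" (a commit or edit) or "comm" (a message). A motif is the
    sequence of (role, kind) pairs over k consecutive events that all
    fall within `window` seconds, with actors relabeled by order of
    appearance so that motifs are actor-agnostic.
    """
    motifs = Counter()
    for i in range(len(events) - k + 1):
        chunk = events[i:i + k]
        if chunk[-1][0] - chunk[0][0] > window:
            continue
        roles, key = {}, []
        for _, actor, kind in chunk:
            key.append((roles.setdefault(actor, len(roles)), kind))
        motifs[tuple(key)] += 1
    return motifs

log = [(0, "alice", "comm"), (30, "bob", "comm"),
       (60, "alice", "work"), (90, "alice", "work")]
print(temporal_motifs(log, k=3, window=120).most_common())
```

Comparing such counts against a time-shuffled null model is what "enriched" means above: motifs occurring more often than chance indicate genuine alternation patterns between communication and work.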
Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions
NASA Astrophysics Data System (ADS)
Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.
2017-12-01
Ice sheet and glacier model outputs require inputs from uncertainly known initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above flotation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore the sensitivities of the Greenland Ice Sheet's total and regional volumes to initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as described here are now accessible to the modeling community, based on the latest version of SICOPOLIS, which has been adapted for OpenAD to generate correct and efficient adjoint code.
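For readers unfamiliar with the adjoint approach, its cost advantage can be sketched in generic notation (not SICOPOLIS-specific): for a model constraint F(u, p) = 0 relating the state u to inputs p, and an objective J(u, p), a single adjoint solve yields the gradient with respect to all inputs at once.

```latex
\left(\frac{\partial F}{\partial u}\right)^{\!\top}\!\lambda
  = -\left(\frac{\partial J}{\partial u}\right)^{\!\top},
\qquad
\frac{dJ}{dp} = \frac{\partial J}{\partial p} + \lambda^{\top}\,\frac{\partial F}{\partial p}.
```

One adjoint solve thus replaces one perturbed forward run per input component, which is why the adjoint delivers sensitivities to spatially distributed fields (thickness, precipitation, sliding, geothermal flux) at lower cost.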
Simulation of partially coherent light propagation using parallel computing devices
NASA Astrophysics Data System (ADS)
Magalhães, Tiago C.; Rebordão, José M.
2017-08-01
Light acquires or loses coherence as it propagates, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e., using the CPU only. To test the computation time of each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g., 32^4, 64^4), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
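For orientation, a minimal NumPy sketch of the Gaussian Schell-model cross-spectral density used as a test source; the grid size and width parameters are illustrative assumptions, and the actual code performs the propagation step on OpenCL devices via PyOpenCL rather than in plain NumPy.

```python
import numpy as np

# Toy 1D Gaussian Schell-model source: W(x1, x2) = sqrt(S(x1) S(x2)) mu(x1 - x2).
# sigma_s (intensity width) and delta (coherence length) are assumed values.
n, half_width = 256, 2e-3                       # grid points, half-aperture [m]
x = np.linspace(-half_width, half_width, n)
sigma_s, delta = 5e-4, 2e-4

S = np.exp(-x**2 / (2 * sigma_s**2))            # spectral density S(x)
x1, x2 = np.meshgrid(x, x, indexing="ij")
mu = np.exp(-(x1 - x2)**2 / (2 * delta**2))     # spectral degree of coherence
W = np.sqrt(np.outer(S, S)) * mu                # cross-spectral density matrix

# delta -> 0 recovers the incoherent limit; delta -> infinity, full coherence.
print(W.shape, W.max())
```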
Open science encompasses many concepts, but most agree that for science to be truly open four things must be true. First, all components of the scientific project must be freely available including manuscripts, code, and data. Second, others must be able to repeat your work and ...
The GenABEL Project for statistical genomics.
Karssen, Lennart C; van Duijn, Cornelia M; Aulchenko, Yurii S
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of the software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination.
MyMolDB: a micromolecular database solution with open source and free components.
Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan
2011-10-01
Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed source applications. The open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is mainly implemented in the scripting language Python, with a web-based interface for compound management and searching. Almost all searches are in essence done in pure SQL on the database, exploiting the high performance of the database engine. Impressive search speed has thus been achieved on large data sets, because no external CPU-consuming languages are involved in the key search procedure. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
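To make the "pure SQL" point concrete, here is a minimal sketch using Python's sqlite3 module; the fingerprints, table layout, and threshold are invented for illustration and do not reflect MyMolDB's actual schema or database engine. A similarity function is registered with the database and then invoked directly from SQL, so the hot loop runs inside the engine rather than in surrounding application code.

```python
import sqlite3

def tanimoto(fp1: int, fp2: int) -> float:
    """Tanimoto similarity of two bit-vector fingerprints stored as integers."""
    a, b = bin(fp1).count("1"), bin(fp2).count("1")
    c = bin(fp1 & fp2).count("1")
    return c / (a + b - c) if (a + b - c) else 0.0

conn = sqlite3.connect(":memory:")
conn.create_function("tanimoto", 2, tanimoto)
conn.execute("CREATE TABLE mols (name TEXT, fp INTEGER)")
conn.executemany("INSERT INTO mols VALUES (?, ?)",
                 [("benzene", 0b101100), ("toluene", 0b101110),
                  ("water", 0b000011)])

query_fp = 0b101100  # hypothetical query fingerprint
for name, sim in conn.execute(
        "SELECT name, tanimoto(fp, ?) AS sim FROM mols "
        "WHERE tanimoto(fp, ?) >= 0.6 ORDER BY sim DESC",
        (query_fp, query_fp)):
    print(name, round(sim, 3))
```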
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code for modeling wave energy converter (WEC) performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power-take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
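For reference, the Cummins impulse-response formulation cited above is conventionally written as below (6-DOF matrix form; the notation is the standard textbook one and is not quoted from the WEC-Sim documentation):

```latex
(M + A_\infty)\,\ddot{\mathbf{x}}(t)
  + \int_0^t K(t-\tau)\,\dot{\mathbf{x}}(\tau)\,d\tau
  + C\,\mathbf{x}(t)
  = \mathbf{F}_{\mathrm{exc}}(t) + \mathbf{F}_{\mathrm{PTO}}(t),
% M: mass matrix, A_inf: infinite-frequency added mass, K: radiation
% impulse-response kernel, C: hydrostatic stiffness, F_exc: wave excitation,
% F_PTO: power-take-off force.
```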
Involving Practicing Scientists in K-12 Science Teacher Professional Development
NASA Astrophysics Data System (ADS)
Bertram, K. B.
2011-12-01
The Science Teacher Education Program (STEP) offered a unique framework for creating professional development courses focused on Arctic research from 2006-2009. Under the STEP framework, science, technology, engineering, and math (STEM) training was delivered by teams of practicing Arctic researchers in partnership with master teachers with 20+ years experience teaching STEM content in K-12 classrooms. Courses based on the framework were offered to educators across Alaska. STEP offered in-person summer-intensive institutes and follow-on audio-conferenced field-test courses during the academic year, supplemented by online scientist mentorship for teachers. During STEP courses, teams of scientists offered in-depth STEM content instruction at the graduate level for teachers of all grade levels. STEP graduate-level training culminated in the translation of information and data learned from Arctic scientists into standard-aligned lessons designed for immediate use in K-12 classrooms. This presentation will focus on research that explored the question: To what degree was scientist involvement beneficial to teacher training and to what degree was STEP scientist involvement beneficial to scientist instructors? Data sources reveal consistently high levels of ongoing (4 year) scientist and teacher participation; high STEM content learning outcomes for teachers; high STEM content learning outcomes for students; high ratings of STEP courses by scientists and teachers; and a discussion of the reasons scientists indicate they benefited from STEP involvement. Analyses of open-ended comments by teachers and scientists support and clarify these findings. A grounded theory approach was used to analyze teacher and scientist qualitative feedback. Comments were coded and patterns analyzed in three databases. The vast majority of teacher open-ended comments indicate that STEP involvement improved K-12 STEM classroom instruction, and the vast majority of scientist open-ended comments focus on the benefits scientists received from networking with K-12 teachers. The classroom lessons resulting from STEP have been so popular among teachers, the Alaska Department of Education and Early Development recently contracted with the PI to create a website that will make the STEP database open to teachers across Alaska. When the Alaska Department of Education and Early Development launched the new website in August 2011, the name of the STEP program was changed to the Alaska K-12 Science Curricular Initiative (AKSCI). The STEP courses serving as the foundation to the new AKSCI site are located under the "History" tab of the new website.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
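As a rough sketch of what a functional-expansion transfer accomplishes (a least-squares stand-in, not OpenMC's tally implementation; all values are hypothetical): the axial pin power is condensed to a few Legendre coefficients, and the continuous reconstruction is then evaluated wherever the finite element mesh needs it.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical tallied axial pin powers at normalized positions z in [-1, 1].
z_tally = np.linspace(-1.0, 1.0, 50)
p_tally = 1.0 + 0.3 * z_tally - 0.5 * z_tally**2   # stand-in tally results

coeffs = L.legfit(z_tally, p_tally, deg=4)          # expansion coefficients

# Evaluate the smooth reconstruction at (unstructured) FE node locations.
z_mesh = np.sort(np.random.default_rng(0).uniform(-1.0, 1.0, 12))
p_mesh = L.legval(z_mesh, coeffs)
print(np.round(p_mesh, 4))
```

The appeal is that only a handful of coefficients cross the code boundary, independent of how fine either code's mesh is.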
The HYPE Open Source Community
NASA Astrophysics Data System (ADS)
Strömbäck, L.; Pers, C.; Isberg, K.; Nyström, K.; Arheimer, B.
2013-12-01
The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model. It uses well-known hydrological and nutrient transport concepts and can be applied for both small- and large-scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided into up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes), considering turn-over and transformation on the way towards the sea. HYPE has been successfully used in many hydrological applications at SMHI. For Europe, we currently have three different models: the S-HYPE model for Sweden, the BALT-HYPE model for the Baltic Sea, and the E-HYPE model for the whole of Europe. These models simulate hydrological conditions and nutrients for their respective areas and are used for characterization, forecasts, and scenario analyses. Model data can be downloaded from hypeweb.smhi.se. In addition, we provide models for the Arctic region, the Arab (Middle East and Northern Africa) region, India, the Niger River basin, and the La Plata Basin. This demonstrates the applicability of the HYPE model for large-scale modeling in different regions of the world. An important goal of our work is to make our data and tools available as open data and services. To this end we created the HYPE Open Source Community (OSC), which makes the source code of HYPE available to anyone interested in further development of HYPE. The HYPE OSC (hype.sourceforge.net) is an open source initiative under the GNU Lesser General Public License taken by SMHI to strengthen international collaboration in hydrological modeling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code are delivered frequently. HYPE OSC is open to everyone interested in hydrology, hydrological modeling and code development - e.g. scientists, authorities, and consultancies. By joining the HYPE OSC you get access to a state-of-the-art operational hydrological model. The HYPE source code is designed to efficiently handle large-scale modeling for forecast, hindcast and climate applications. The code is under constant development to improve the hydrological processes, efficiency and readability. In the beginning of 2013 we released a version with new and better modularization based on hydrological processes. This will make the code easier to understand and further develop for a new user. An important challenge in this process is to produce code that is easy for anyone to understand and work with, but still maintains the properties that make the code efficient enough for large-scale applications. Input from the HYPE Open Source Community is an important source of future improvements to the HYPE model. Therefore, by joining the community you become an active part of the development, get access to the latest features and can influence future versions of the model.
Chaparro, Cristian; Gayraud, Thomas; de Souza, Rogerio Fernandes; Domingues, Douglas Silva; Akaffou, Sélastique; Laforga Vanzela, Andre Luis; de Kochko, Alexandre; Rigoreau, Michel; Crouzillat, Dominique; Hamon, Serge; Hamon, Perla; Guyot, Romain
2015-01-01
A novel structure of nonautonomous long terminal repeat (LTR) retrotransposons, called terminal repeat with GAG domain (TR-GAG), has been described in plants, in monocotyledonous, dicotyledonous, and basal angiosperm genomes. TR-GAGs are relatively short elements (<4 kb) showing the typical features of LTR retrotransposons. However, they carry only one open reading frame, coding for the GAG precursor protein involved, for instance, in transposition and in the assembly and packaging of the element into the virus-like particle. GAG precursors show similarities with both Copia and Gypsy GAG proteins, suggesting evolutionary relationships of TR-GAG elements with both families. Despite the lack of the enzymatic machinery required for their mobility, strong evidence suggests that TR-GAGs are still active. TR-GAGs represent ubiquitous nonautonomous structures that could be involved in the molecular diversity of plant genomes. PMID:25573958
Numerical Predictions of Mode Reflections in an Open Circular Duct: Comparison with Theory
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Hixon, Ray
2015-01-01
The NASA Broadband Aeroacoustic Stator Simulation code was used to compute the acoustic field for higher-order modes in a circular duct geometry. To test the accuracy of the results computed by the code, the duct was terminated by an open end with either an infinite flange or no flange. Both open end conditions have a theoretical solution that was used for comparison with the computed results. Excellent agreement in the reflection matrix values was achieved after suitable refinement of the grid at the open end. The study also revealed issues with the level of the mode amplitude introduced into the acoustic field from the source boundary and the amount of reflection that occurred at the source boundary when a general nonreflecting boundary condition was applied.
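As a small aside on the physics being exercised, the cut-on frequencies of higher-order modes in a hard-walled circular duct follow from the zeros of the Bessel-function derivatives; the sketch below lists a few, with assumed values for the sound speed and duct radius (the (0,0) plane-wave mode propagates at all frequencies).

```python
import numpy as np
from scipy.special import jnp_zeros

c, a = 343.0, 0.1   # sound speed [m/s] and duct radius [m]; assumed values
for m in range(3):                                      # azimuthal order
    for n, jp in enumerate(jnp_zeros(m, 2), start=1):   # radial order
        f_c = c * jp / (2.0 * np.pi * a)  # cut-off frequency of mode (m, n)
        print(f"mode ({m},{n}): f_c = {f_c:7.1f} Hz")
```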
Factors Affecting Christian Parents' School Choice Decision Processes: A Grounded Theory Study
ERIC Educational Resources Information Center
Prichard, Tami G.; Swezey, James A.
2016-01-01
This study identifies factors affecting the decision processes for school choice by Christian parents. Grounded theory design incorporated interview transcripts, field notes, and a reflective journal to analyze themes. Comparative analysis, including open, axial, and selective coding, was used to reduce the coded statements to five code families:…
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts on the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
CACTI: Free, Open-Source Software for the Sequential Coding of Behavioral Interactions
Glynn, Lisa H.; Hallgren, Kevin A.; Houck, Jon M.; Moyers, Theresa B.
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software, in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs, as well as the good performance achievable on a large number of processors. The second part of the document gives references for the parameters and the graphic user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.
76 FR 55677 - Federal Open Market Committee; Domestic Policy Directive of August 9, 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-08
FEDERAL RESERVE SYSTEM. Federal Open Market Committee; Domestic Policy Directive of August 9, 2011 ...), there is set forth below the domestic policy directive issued by the Federal Open Market Committee at ... English, Secretary, Federal Open Market Committee. [FR Doc. 2011-22896 Filed 9-7-11; 8:45 am] BILLING CODE ...
78 FR 13673 - Federal Open Market Committee; Domestic Policy Directive of January 29-30, 2013
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-28
FEDERAL RESERVE SYSTEM. Federal Open Market Committee; Domestic Policy Directive of January 29-30 ... 271), there is set forth below the domestic policy directive issued by the Federal Open Market ... , Federal Open Market Committee. [FR Doc. 2013-04693 Filed 2-27-13; 8:45 am] BILLING CODE 6210-01-P ...
Description of Panel Method Code ANTARES
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; George, Mike (Technical Monitor)
2000-01-01
The panel method code ANTARES was developed to compute wall interference corrections in a rectangular wind tunnel. The code uses point doublets to represent blockage effects and line doublets to represent lifting effects of a wind tunnel model. Subsonic compressibility effects are modeled by applying the Prandtl-Glauert transformation. The closed wall, open jet, or perforated wall boundary condition may be assigned to a wall panel centroid. The tunnel walls can be represented by using up to 8000 panels. The accuracy of ANTARES was assessed by comparing solutions for the closed wall and open jet boundary conditions with corresponding Method of Images solutions. Fourier transform solutions of a two-dimensional wind tunnel flow field were used to check the application of the perforated wall boundary condition. Studies showed that the accuracy of ANTARES can be improved by increasing the total number of wall panels in the circumferential direction. It was also shown that the accuracy decreases with increasing free-stream Mach number of the wind tunnel flow field.
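For reference, the Prandtl-Glauert transformation mentioned above is the standard subsonic compressibility correction (textbook form, not quoted from the ANTARES documentation): the incompressible problem is solved on suitably stretched coordinates and the result is scaled as

```latex
\beta = \sqrt{1 - M_\infty^2}, \qquad C_p = \frac{C_{p,\mathrm{inc}}}{\beta},
```

which is consistent with the observation that accuracy degrades as the free-stream Mach number grows and β approaches zero.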
NASA Astrophysics Data System (ADS)
Kwon, N.; Gentle, J.; Pierce, S. A.
2015-12-01
Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a real problem in the 21st-century scientific process, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be made openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase for the Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a Java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful for practical applications as a teaching tool and case study for groundwater management, as well as for informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded OntoSoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers. Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance for the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and a developer guide available via an open repository, assuring the source for the application is accessible with version control and potential for new branch developments. Finally, metadata about the software has been completed within the OntoSoft portal to provide descriptive curation, make GWDSS searchable, and complete documentation of the scientific software lifecycle.
Code of Federal Regulations, 2011 CFR
2011-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) SPECIAL LAWS AND RULES Segregation and Opening of... regulatory provisions in title 43 of the Code of Federal Regulations dealing with the segregation and opening...
Time Resolved Phonon Spectroscopy, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goett, Johnny; Zhu, Brian
The TRPS code was developed for the project "Time Resolved Phonon Spectroscopy". The routines contained in this piece of software were specially created to model phonon generation and tracking within materials that interact with ionizing radiation, and are particularly applicable to the modeling of cryogenic radiation detectors for dark matter and neutrino research. These routines were created to link seamlessly with the open source Geant4 framework for the modeling of radiation transport in matter, with the explicit intent of open sourcing them for eventual integration into that code base.
Bruni, Rebecca A; Laupacis, Andreas; Levinson, Wendy; Martin, Douglas K
2007-11-16
As no health system can afford to provide all possible services and treatments for the people it serves, each system must set priorities. Priority setting decision makers are increasingly involving the public in policy making. This study focuses on public engagement in a key priority setting context that plagues every health system around the world: wait list management. The purpose of this study is to describe and evaluate priority setting for the Ontario Wait Time Strategy, with special attention to public engagement. This study was conducted at the Ontario Wait Time Strategy in Ontario, Canada, which is part of a Federal-Territorial-Provincial initiative to improve access and reduce wait times in five areas: cancer, cardiac, sight restoration, joint replacements, and diagnostic imaging. There were two sources of data: (1) over 25 documents (e.g. strategic planning reports, public updates), and (2) 28 one-on-one interviews with informants (e.g. OWTS participants, MOHLTC representatives, clinicians, patient advocates). Analysis used a modified thematic technique in three phases: open coding, axial coding, and evaluation. The Ontario Wait Time Strategy partially meets the four conditions of 'accountability for reasonableness'. The public was not directly involved in the priority setting activities of the Ontario Wait Time Strategy. Study participants identified both benefits (supporting the initiative, experts of the lived experience, a publicly funded system, and sustainability of the healthcare system) and concerns (personal biases, lack of interest in being involved, time constraints, and level of technicality) regarding public involvement in the Ontario Wait Time Strategy. Additionally, the participants identified concern for the consequences (sustainability, cannibalism, and a class system) resulting from the Ontario Wait Times Strategy. We described and evaluated a wait time management initiative (the Ontario Wait Time Strategy) with special attention to public engagement, and provide a concrete plan to operationalize a strategy for improving public involvement in this, and other, wait time initiatives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
The performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis of design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite-element-based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present the SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires minimum code intrusion and algorithm alteration for upgrading a legacy solver with GPU computing capability at very little extra programming effort, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention caused by the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
Massange-Sánchez, Julio A.; Palmeros-Suárez, Paola A.; Espitia-Rangel, Eduardo; Rodríguez-Arévalo, Isaac; Sánchez-Segura, Lino; Martínez-Gallardo, Norma A.; Alatorre-Cobos, Fulgencio; Tiessen, Axel; Délano-Frier, John P.
2016-01-01
Two grain amaranth transcription factor (TF) genes were overexpressed in Arabidopsis plants. The first, coding for a group VII ethylene response factor TF (i.e., AhERF-VII) conferred tolerance to water-deficit stress (WS) in transgenic Arabidopsis without affecting vegetative or reproductive growth. A significantly lower water-loss rate in detached leaves coupled to a reduced stomatal opening in leaves of plants subjected to WS was associated with this trait. WS tolerance was also associated with an increased antioxidant enzyme activity and the accumulation of putative stress-related secondary metabolites. However, microarray and GO data did not indicate an obvious correlation between WS tolerance, stomatal closure, and abscisic acid (ABA)-related signaling. This scenario suggested that stomatal closure during WS in these plants involved ABA-independent mechanisms, possibly involving reactive oxygen species (ROS). WS tolerance may have also involved other protective processes, such as those employed for methyl glyoxal detoxification. The second, coding for a class A and cluster I DNA binding with one finger TF (i.e., AhDof-AI) provided salt-stress (SS) tolerance with no evident fitness penalties. The lack of an obvious development-related phenotype contrasted with microarray and GO data showing an enrichment of categories and genes related to developmental processes, particularly flowering. SS tolerance also correlated with increased superoxide dismutase activity but not with augmented stomatal closure. Additionally, microarray and GO data indicated that, contrary to AhERF-VII, SS tolerance conferred by AhDof-AI in Arabidopsis involved ABA-dependent and ABA-independent stress amelioration mechanisms. PMID:27749893
HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization
NASA Astrophysics Data System (ADS)
LAVIE, Baptiste
2015-12-01
Atmospheric retrieval is a growing, new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between the parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once it is ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam's razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code, we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).
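To illustrate why the Bayesian evidence implements Occam's razor, here is a toy 1D quadrature (not HELIOS-R's nested-sampling machinery; the data and priors are invented): a needlessly wide prior dilutes the likelihood and lowers the evidence even though the best-fit likelihood is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 20)          # synthetic observations

def log_like(mu, sigma=1.0):
    """Gaussian log-likelihood of the data for mean mu."""
    return (-0.5 * np.sum((data - mu) ** 2) / sigma**2
            - data.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def evidence(prior_halfwidth):
    """Z = integral of L(mu) * prior(mu), with mu ~ U(-w, w)."""
    mus = np.linspace(-prior_halfwidth, prior_halfwidth, 4001)
    like = np.exp([log_like(m) for m in mus])
    return np.trapz(like, mus) / (2.0 * prior_halfwidth)

print(evidence(1.0), evidence(100.0))    # the wide prior pays an Occam penalty
```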
Eyler, Amy A; Hipp, J Aaron; Lokuta, Julie
2015-01-01
Ciclovía, or Open Streets initiatives, are events where streets are opened for physical activity and closed to motorized traffic. Although the initiatives are gaining popularity in the United States, little is known about planning and implementing them. The goals of this paper are to explore the development and implementation of Open Streets initiatives and to make recommendations for increasing the capacity of organizers to enhance initiative success. Phenomenology with qualitative analysis of structured interviews was used. The study setting comprised urban and suburban communities in the United States, and study participants were organizers of Open Streets initiatives in U.S. cities. Using a list of 47 events held in 2011, 27 lead organizers were interviewed by telephone about planning, implementation, and lessons learned. The interviews were digitally recorded and transcribed. A phenomenologic approach was used: an initial coding tool was developed after reviewing a sample of transcripts, and constant comparative coding methodology was applied. Themes and subthemes were generated from codes. The most common reasons for initiating events were to highlight or improve health and transportation. Most initiatives aimed to reach the general population, but some targeted families, children, or specific neighborhoods. Getting people to understand the concept of Open Streets was an important challenge. Other challenges included lack of funding and personnel, and complex logistics. These initiatives democratize public space for citizens while promoting physical activity, social connectedness, and other broad agendas. There are opportunities for the research community to contribute to the expansion and sustainability of Open Streets, particularly in evaluation and dissemination.
Comparison of two LES codes for wind turbine wake studies
NASA Astrophysics Data System (ADS)
Sarlak, H.; Pierella, F.; Mikkelsen, R.; Sørensen, J. N.
2014-06-01
For the third time, a blind test comparison was conducted in Norway in 2013, comparing numerical predictions of the rotor Cp and Ct and of the wake profiles with experimental results. As the only large eddy simulation (LES) study among the participants, the results of the Technical University of Denmark (DTU), using their in-house CFD solver EllipSys3D, proved the most reliable among the models at capturing the wake profiles and turbulence intensities downstream of the turbine. It was therefore suggested at the workshop that other LES codes be investigated to compare their performance with EllipSys3D. The aim of this paper is to compare two CFD solvers, DTU's in-house code EllipSys3D and the open-source toolbox OpenFOAM, for a set of actuator-line-based LES computations. Two types of simulations are performed: the wake behind a single rotor and the wake behind a cluster of three inline rotors. Results are compared in terms of velocity deficit, turbulence kinetic energy, and eddy viscosity. It is seen that both codes predict similar near-wake flow structures, with the exception of OpenFOAM's simulations without the subgrid-scale model. The differences begin to increase with increasing distance from the upstream rotor. From the single-rotor simulations, EllipSys3D is found to predict a slower wake recovery in the case of uniform laminar flow. From the 3-rotor computations, it is seen that the difference between the codes is smaller, as the disturbance created by the downstream rotors causes breakdown of the wake structures and more homogeneous flow structures. It is finally observed that the OpenFOAM computations are more sensitive to the SGS model.
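For context, both solvers represent the rotor blades with actuator lines: the aerodynamic forces computed from tabulated airfoil data at points along each blade are smeared onto the CFD grid with a regularization kernel. The standard Gaussian form (following Sørensen and Shen's actuator-line formulation; the smearing width ε is a tunable parameter) is

```latex
\mathbf{f}_\varepsilon(\mathbf{x}) =
  \sum_i \mathbf{F}_i \,\eta_\varepsilon\!\left(\lVert \mathbf{x}-\mathbf{x}_i \rVert\right),
\qquad
\eta_\varepsilon(r) = \frac{1}{\varepsilon^3 \pi^{3/2}}\,
  \exp\!\left[-(r/\varepsilon)^2\right].
```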
ERIC Educational Resources Information Center
Uehara, Suwako; Noriega, Edgar Josafat Martinez
2016-01-01
The availability of user-friendly coding software is increasing, yet teachers might hesitate to use this technology to develop for educational needs. This paper discusses studies related to technology for educational uses and introduces an evaluation application being developed. Through questionnaires by student users and open-ended discussion by…
A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.
van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum
2015-09-01
QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
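Schematically, regularization-based blind deblurring of this kind minimizes an energy in which the known QR patterns act as anchors (a generic TV-regularized formulation; the paper's exact functional may differ):

```latex
\min_{u,\,k}\;\tfrac{1}{2}\,\lVert k \ast u - f \rVert_2^2
  + \lambda\,\mathrm{TV}(u) + \mu\,\lVert k \rVert_2^2
\quad\text{subject to}\quad u = u_{\mathrm{QR}}
  \ \text{on the known finder patterns},
```

where f is the observed noisy, blurry bar code, u the latent sharp image, and k the unknown blur kernel; the known patterns strongly constrain k, which is what makes the blind problem tractable here.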
TACOM LCMC IB and DMSMS Mitigation
2011-09-26
[Slide table residue; recoverable content: CAGE code counts by source status (closed/opened): CONUS: 3, OCONUS: 0, total: 3; single or no CAGE code.]
Use Computer-Aided Tools to Parallelize Large CFD Applications
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Yan, J.
2000-01-01
Porting applications to high performance parallel computers is always a challenging task. It is time consuming and costly. With rapid progress in hardware architectures and the increasing complexity of real applications in recent years, the problem has become even more severe. Today, scalability and high performance mostly involve handwritten parallel programs using message-passing libraries (e.g. MPI). However, this process is very difficult and often error-prone. The recent reemergence of shared memory parallel (SMP) architectures, such as the cache coherent Non-Uniform Memory Access (ccNUMA) architecture used in the SGI Origin 2000, shows good prospects for scaling beyond hundreds of processors. Programming on an SMP is simplified by working in a globally accessible address space. The user can supply compiler directives, such as OpenMP, to parallelize the code. As an industry standard for portable implementation of parallel programs for SMPs, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran, C and C++ to express shared memory parallelism. It promises an incremental path for parallel conversion of existing software, as well as scalability and performance for a complete rewrite or an entirely new development. Perhaps the main disadvantage of programming with directives is that inserted directives may not necessarily enhance performance. In the worst cases, they can create erroneous results. While vendors have provided tools to perform error-checking and profiling, automation in directive insertion is very limited and often fails on large programs, primarily due to the lack of a thorough enough data dependence analysis. To overcome this deficiency, we have developed a toolkit, CAPO, to automatically insert OpenMP directives in Fortran programs and apply certain degrees of optimization. CAPO is aimed at taking advantage of the detailed interprocedural dependence analysis provided by CAPTools, developed by the University of Greenwich, to reduce potential errors made by users. Earlier tests on the NAS Benchmarks and ARC3D demonstrated good success of this tool. In this study, we have applied CAPO to parallelize three large applications in the area of computational fluid dynamics (CFD): OVERFLOW, TLNS3D and INS3D. These codes are widely used for solving Navier-Stokes equations with complicated boundary conditions and turbulence models in multiple zones. Each comprises from 50K to 100K lines of FORTRAN77. As an example, CAPO took 77 hours to complete the data dependence analysis of OVERFLOW on a workstation (SGI, 175 MHz, R10K processor). A fair amount of effort was spent on correcting false dependencies due to lack of necessary knowledge during the analysis. Even so, CAPO provides an easy way for the user to interact with the parallelization process. The OpenMP version was generated within a day after the analysis was completed. Due to the sequential algorithms involved, code sections in TLNS3D and INS3D needed to be restructured by hand to produce more efficient parallel codes. An included figure shows preliminary test results of the generated OVERFLOW with several test cases in a single zone. The MPI data points for the small test case were taken from a hand-coded MPI version. As we can see, CAPO's version achieved an 18-fold speedup on 32 nodes of the SGI O2K. For the small test case, it outperformed the MPI version. These results are very encouraging, but further work is needed.
For example, although CAPO attempts to place directives on the outermost parallel loops in an interprocedural framework, it does not insert directives based on the best manual strategy. In particular, it lacks support for parallelization at the multi-zone level. Future work will emphasize the development of methodology to work at the multi-zone level and with a hybrid approach. Development of tools to perform more complicated code transformations is also needed.
Interim Open Source Software (OSS) Policy
This interim Policy establishes a framework to implement the requirements of the Office of Management and Budget's (OMB) Federal Source Code Policy to achieve efficiency, transparency and innovation through reusable and open source software.
Low-cost embedded systems for democratizing ocean sensor technology in the coastal zone
NASA Astrophysics Data System (ADS)
Glazer, B. T.; Lio, H. I.
2017-12-01
Environmental sciences suffer from undersampling. Enabling sustained and unattended data collection in the coastal zone typically involves expensive instrumentation and infrastructure deployed as cabled observatories or moorings with little flexibility in deployment location following initial installation. High costs of commercially-available or custom instruments have limited the number of sensor sites that can be targeted by academic researchers, and have also limited engagement with the public. We have developed a novel, low-cost, open-source sensor and software platform to enable wireless data transfer of biogeochemical sensors in the coastal zone. The platform is centered upon widely available, low-cost, single board computers and microcontrollers. We have used a blend of on-hand research-grade sensors and low-cost open-source electronics that can be assembled by tech-savvy non-engineers. Robust, open-source code that remains customizable for specific miniNode configurations can match a specific site's measurement needs, depending on the scientific research priorities. We have demonstrated prototype capabilities and versatility through lab testing and field deployments of multiple sensor nodes with multiple sensor inputs, all of which are streaming near-real-time data from Kaneohe Bay over wireless RF links to a shore-based base station.
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise ratios as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
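As background, the predictor-plus-quantized-residual structure discussed above can be sketched in a few lines of C. This is a generic predictive-coding illustration only; the coefficients, step size, and signal below are made-up values, not the paper's optimized ones:

```c
#include <stdio.h>
#include <math.h>

/* Predict each sample from the previous P reconstructed samples,
 * quantize the residual uniformly, and reconstruct as a decoder would.
 * Minimal sketch of the classic LPC structure; nothing here is the
 * paper's algorithm. */
#define P 2
#define N 8

int main(void) {
    double s[N] = {0.0, 0.59, 0.95, 0.95, 0.59, 0.0, -0.59, -0.95};
    double a[P] = {1.6, -0.9};     /* hypothetical predictor coefficients */
    double step = 0.05;            /* hypothetical quantizer step size    */
    double shat[N] = {s[0], s[1]}; /* decoder state, seeded with history  */

    for (int n = P; n < N; n++) {
        double pred = 0.0;
        for (int k = 0; k < P; k++) pred += a[k] * shat[n - 1 - k];
        double e  = s[n] - pred;             /* prediction residual       */
        double eq = step * round(e / step);  /* quantized residual        */
        shat[n] = pred + eq;                 /* decoder reconstruction    */
        printf("n=%d s=% .3f shat=% .3f\n", n, s[n], shat[n]);
    }
    return 0;
}
```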
RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations
NASA Astrophysics Data System (ADS)
Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy
RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and offer improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.
Reinhardt, Josephine A.; Wanjiru, Betty M.; Brant, Alicia T.; Saelao, Perot; Begun, David J.; Jones, Corbin D.
2013-01-01
How non-coding DNA gives rise to new protein-coding genes (de novo genes) is not well understood. Recent work has revealed the origins and functions of a few de novo genes, but common principles governing the evolution or biological roles of these genes are unknown. To better define these principles, we performed a parallel analysis of the evolution and function of six putatively protein-coding de novo genes described in Drosophila melanogaster. Reconstruction of the transcriptional history of de novo genes shows that two de novo genes emerged from novel long non-coding RNAs that arose at least 5 MY prior to evolution of an open reading frame. In contrast, four other de novo genes evolved a translated open reading frame and transcription within the same evolutionary interval suggesting that nascent open reading frames (proto-ORFs), while not required, can contribute to the emergence of a new de novo gene. However, none of the genes arose from proto-ORFs that existed long before expression evolved. Sequence and structural evolution of de novo genes was rapid compared to nearby genes and the structural complexity of de novo genes steadily increases over evolutionary time. Despite the fact that these genes are transcribed at a higher level in males than females, and are most strongly expressed in testes, RNAi experiments show that most of these genes are essential in both sexes during metamorphosis. This lethality suggests that protein coding de novo genes in Drosophila quickly become functionally important. PMID:24146629
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils during sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
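As an illustration of the hybrid pattern the abstract describes (MPI ranks across nodes, OpenMP threads sharing the loop within each rank), here is a minimal C sketch; it is a generic reduction, not ONETEP code:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Minimal hybrid MPI+OpenMP pattern: MPI distributes work across
 * ranks, OpenMP threads share the loop within each rank. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;
    double local = 0.0;

    /* OpenMP parallelism inside the MPI rank */
    #pragma omp parallel for reduction(+:local)
    for (int i = rank; i < n; i += size)
        local += 1.0 / (1.0 + (double)i);

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f\n", total);
    MPI_Finalize();
    return 0;
}
```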
Ohno, S
1984-01-01
Three outstanding properties uniquely qualify repeats of base oligomers as the primordial coding sequences of all polypeptide chains. First, when compared with randomly generated base sequences in general, they are more likely to have long open reading frames. Second, periodical polypeptide chains specified by such repeats are more likely to assume either alpha-helical or beta-sheet secondary structures than are polypeptide chains of random sequence. Third, provided that the number of bases in the oligomeric unit is not a multiple of 3, these internally repetitious coding sequences are impervious to randomly sustained base substitutions, deletions, and insertions. This is because the recurring periodicity of their polypeptide chains is given by three consecutive copies of the oligomeric unit translated in three different reading frames. Accordingly, when one reading frame is open, the other two are automatically open as well, all three being capable of coding for polypeptide chains of identical periodicity. Under this circumstance, a frame shift due to the deletion or insertion of a number of bases that is not a multiple of 3 fails to alter the down-stream amino acid sequence, and even a base change causing premature chain-termination can silence only one of the three potential coding units. Newly arisen coding sequences in modern organisms are oligomeric repeats, and most of the older genes retain various vestiges of their original internal repetitions. Some of the genes (e.g., oncogenes) have even inherited the property of being impervious to randomly sustained base changes.
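The reading-frame argument can be checked mechanically: for a repeat whose unit length is not a multiple of 3, all three frames yield codon streams with the same periodicity, merely phase-shifted. A small C demonstration (the unit "ACGT" is an arbitrary example, not from the paper):

```c
#include <stdio.h>
#include <string.h>

/* A repeat of a 4-base unit (4 is not a multiple of 3) read in all
 * three frames yields codon sequences with the same 4-codon period,
 * just phase-shifted: three copies of the unit span exactly 4 codons. */
int main(void) {
    const char *unit = "ACGT";
    char seq[64] = "";
    for (int i = 0; i < 12; i++) strcat(seq, unit);  /* 48 bases */

    for (int frame = 0; frame < 3; frame++) {
        printf("frame %d: ", frame);
        for (int i = frame; i + 3 <= (int)strlen(seq); i += 3)
            printf("%.3s ", seq + i);   /* print successive codons */
        printf("\n");
    }
    return 0;
}
```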
On the existence of binary simplex codes. [using combinatorial construction
NASA Technical Reports Server (NTRS)
Taylor, H.
1977-01-01
Using a simple combinatorial construction, the existence of a binary simplex code with m codewords is proved for all m ≥ 1. The problem of the shortest possible length is left open.
Stępień, Ewa; Costa, Marina C; Kurc, Szczepan; Drożdż, Anna; Cortez-Dias, Nuno; Enguita, Francisco J
2018-06-07
Pervasive transcription of the human genome is responsible for the production of a myriad of non-coding RNA molecules (ncRNAs), some of them with regulatory functions. The pivotal role of ncRNAs in cardiovascular biology has been unveiled in the last decade, starting from the characterization of the involvement of micro-RNAs in cardiovascular development and function, and followed by the use of circulating ncRNAs as biomarkers of cardiovascular diseases. The human non-coding secretome is composed of several RNA species that circulate in body fluids and could be used as biomarkers for diagnosis and outcome prediction. In cardiovascular diseases, secreted ncRNAs have been described as biomarkers of several conditions including myocardial infarction, cardiac failure, and atrial fibrillation. Among circulating ncRNAs, micro-RNAs (miRNAs), long noncoding RNAs (lncRNAs) and circular RNAs (circRNAs) have been proposed as biomarkers in different cardiovascular diseases. In comparison with standard biomarkers, the biochemical nature of ncRNAs offers better stability and more flexible storage conditions for the samples, as well as increased sensitivity and specificity. In this review we describe the current trends and future prospects of the use of ncRNA secretome components as biomarkers of cardiovascular diseases, including the open questions related to their secretion mechanisms and regulatory actions.
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Giroux, Bernard
2017-03-01
Full Waveform Inversion (FWI) aims at recovering the elastic parameters of the Earth by matching recordings of the ground motion with the direct solution of the wave equation. Modeling the wave propagation for realistic scenarios is computationally intensive, which limits the applicability of FWI. The current hardware evolution brings increasing parallel computing power that can speed up the computations in FWI. However, to take advantage of the diversity of parallel architectures presently available, new programming approaches are required. In this work, we explore the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available. We present a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain. The code computes the forward and adjoint wavefields using finite differences and outputs the gradient of the misfit function given by the adjoint state method. To demonstrate the code portability on different architectures, the performance of SeisCL is tested on three different devices: Intel CPUs, NVidia GPUs and the Intel Xeon Phi. Results show that the use of GPUs with OpenCL can speed up the computations by nearly two orders of magnitude over a single-threaded application on the CPU. Although OpenCL allows code portability, we show that some device-specific optimization is still required to get the best performance out of a specific architecture. Using OpenCL in conjunction with MPI allows the domain decomposition of large models on several devices located on different nodes of a cluster. For large enough models, the speedup of the domain decomposition varies quasi-linearly with the number of devices. Finally, we investigate two different approaches to compute the gradient by the adjoint state method and show the significant advantages of using OpenCL for FWI.
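For context, the adjoint-state gradient mentioned above takes the following standard discrete form; the notation is generic and assumed here, not taken from the SeisCL paper. With the forward problem and misfit

```latex
A(m)\,u = s, \qquad J(m) = \tfrac{1}{2}\,\lVert R\,u - d \rVert^{2},
```

a single adjoint solve gives the gradient:

```latex
A(m)^{\top}\lambda = R^{\top}\,(R\,u - d), \qquad
\nabla_{m} J = -\,\lambda^{\top}\,\frac{\partial A(m)}{\partial m}\,u .
```

Each gradient evaluation thus costs one forward and one adjoint wavefield simulation, which is why accelerating these two solves dominates the cost of FWI.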
Swan: A tool for porting CUDA programs to OpenCL
NASA Astrophysics Data System (ADS)
Harvey, M. J.; De Fabritiis, G.
2011-04-01
The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware-independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan", for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance.
Program summary
Program title: Swan
Catalogue identifier: AEIH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License version 2
No. of lines in distributed program, including test data, etc.: 17 736
No. of bytes in distributed program, including test data, etc.: 131 177
Distribution format: tar.gz
Programming language: C
Computer: PC
Operating system: Linux
RAM: 256 Mbytes
Classification: 6.5
External routines: NVIDIA CUDA, OpenCL
Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programmed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore x86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion.
Solution method: Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time.
Restrictions: No support for CUDA C++ features
Running time: Nominal
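To illustrate the flavor of the kernel translation Swan performs, consider a hypothetical SAXPY kernel (this example is not taken from Swan's distribution):

```c
/* CUDA original (hypothetical example of the kind of kernel Swan translates):
 *
 *   __global__ void saxpy(int n, float a, float *x, float *y) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       if (i < n) y[i] = a * x[i] + y[i];
 *   }
 *
 * OpenCL C equivalent produced by a source-to-source mapping: */
__kernel void saxpy(int n, float a, __global float *x, __global float *y) {
    int i = get_global_id(0);   /* replaces blockIdx/blockDim/threadIdx */
    if (i < n) y[i] = a * x[i] + y[i];
}
```

The kernel body maps almost one-to-one; the laborious part of a manual port is the host-side boilerplate (contexts, queues, buffers), which is exactly what a tool like Swan generates.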
2009-09-01
Keywords: boarding team, COTS, WLAN, smart antenna, OpenVPN application, wireless base station, OFDM, latency, point-to-point wireless link. […] The network frame at Layer 2 has already been secured by encryption at a higher level. OpenVPN is open source software that provides a VPN…
NASA Astrophysics Data System (ADS)
Huba, J. D.; Joyce, G.
2001-05-01
In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.
Open Rotor Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Envia, Edmane
2012-01-01
Owing to their inherent fuel efficiency, there is renewed interest in developing open rotor propulsion systems that are both efficient and quiet. The major contributor to the overall noise of an open rotor system is the propulsor noise, which is produced as a result of the interaction of the airstream with the counter-rotating blades. As such, robust aeroacoustic prediction methods are an essential ingredient in any approach to designing low-noise open rotor systems. To that end, an effort has been underway at NASA to assess current open rotor noise prediction tools and develop new capabilities. Under this effort, high-fidelity aerodynamic simulations of a benchmark open rotor blade set were carried out and used to make noise predictions via existing NASA open rotor noise prediction codes. The results have been compared with the aerodynamic and acoustic data that were acquired for this benchmark open rotor blade set. The emphasis of this paper is on providing a summary of recent results from a NASA Glenn effort to validate an in-house open noise prediction code called LINPROP which is based on a high-blade-count asymptotic approximation to the Ffowcs-Williams Hawkings Equation. The results suggest that while predicting the absolute levels may be difficult, the noise trends are reasonably well predicted by this approach.
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both by using more cores and by having more parallelism in each core, e.g. in GPUs and the Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
NASA Astrophysics Data System (ADS)
Konnik, Mikhail V.; Welsh, James
2012-09-01
Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing software code of a numerical simulator makes it difficult to continue to support the code itself. Inadequate documentation of astronomical software for adaptive optics simulators may complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions of what is implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, these are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the comments in MATLAB M-files into C-like ones, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
Bilingual Voicing: A Study of Code-Switching in the Reported Speech of Finnish Immigrants in Estonia
ERIC Educational Resources Information Center
Frick, Maria; Riionheimo, Helka
2013-01-01
Through a conversation analytic investigation of Finnish-Estonian bilingual (direct) reported speech (i.e., voicing) by Finns who live in Estonia, this study shows how code-switching is used as a double contextualization device. The code-switched voicings are shaped by the on-going interactional situation, serving its needs by opening up a context…
Cloudy's Journey from FORTRAN to C, Why and How
NASA Astrophysics Data System (ADS)
Ferland, G. J.
Cloudy is a large-scale plasma simulation code that is widely used across the astronomical community as an aid in the interpretation of spectroscopic data. The cover of the ADAS VI book featured predictions of the code. The FORTRAN 77 source code has always been freely available on the Internet, contributing to its widespread use. The coming of PCs and Linux has fundamentally changed the computing environment. Modern Fortran compilers (F90 and F95) are not freely available. A common-use code must be written in either FORTRAN 77 or C to be Open Source/GNU/Linux friendly. F77 has serious drawbacks - modern language constructs cannot be used, students do not have skills in this language, and it does not contribute to their future employability. It became clear that the code would have to be ported to C to have a viable future. I describe the approach I used to convert Cloudy from FORTRAN 77 with MILSPEC extensions to ANSI/ISO 89 C. Cloudy is now openly available as a C code, and will evolve to C++ as gcc and standard C++ mature. Cloudy looks to a bright future with a modern language.
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...
2017-03-20
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
Lattice QCD simulations using the OpenACC platform
NASA Astrophysics Data System (ADS)
Majumdar, Pushan
2016-10-01
In this article we explore the OpenACC platform for programming Graphics Processing Units (GPUs). The OpenACC platform offers a directive-based programming model for GPUs which avoids the detailed data-flow control and memory management necessary in a CUDA programming environment. In the OpenACC model, programs can be written in high-level languages with OpenMP-like directives. We present some examples of QCD simulation codes using OpenACC and discuss their performance on the Fermi and Kepler GPUs.
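A minimal example of the directive-based OpenACC model the abstract contrasts with CUDA (a generic vector update in C, not a kernel from the QCD codes):

```c
#include <stdio.h>

#define N (1 << 20)

/* A single OpenACC directive offloads the loop to the accelerator;
 * the data clauses replace the explicit cudaMemcpy calls CUDA needs. */
int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```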
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply OpenNCC to model a realistic combustor configuration, the Energy Efficient Engine (E3). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up), and achieve good agreement with the analytical/experimental data of the validation tests. In the steady-state E3 cold-flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference in the flow fields calculated by the two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and that it predicts a much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could show different flame-holding predictions and overall flame structures.
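For context, the defining idea of the AUSM family mentioned above is the splitting of the inviscid interface flux into a convective part, upwinded by the sign of the interface mass flux, and a pressure part. In the standard textbook form (the notation below is assumed here, not taken from the paper):

```latex
\mathbf{F}_{1/2} \;=\; \dot m_{1/2}\,\boldsymbol{\psi}_{L/R} \;+\; \mathbf{P}_{1/2},
\qquad
\boldsymbol{\psi} = \begin{pmatrix} 1 \\ u \\ H \end{pmatrix},
\qquad
\mathbf{P}_{1/2} = \begin{pmatrix} 0 \\ p_{1/2} \\ 0 \end{pmatrix},
```

where the convected vector ψ is taken from the left (L) or right (R) state according to the sign of the interface mass flux. The treatment of that mass flux and of the interface pressure is what distinguishes members of the family such as AUSM+-up.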
Teaching Robotics Software with the Open Hardware Mobile Manipulator
ERIC Educational Resources Information Center
Vona, M.; Shekar, N. H.
2013-01-01
The "open hardware mobile manipulator" (OHMM) is a new open platform with a unique combination of features for teaching robotics software and algorithms. On-board low- and high-level processors support real-time embedded programming and motor control, as well as higher-level coding with contemporary libraries. Full hardware designs and…
The GenABEL Project for statistical genomics
Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of the software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
Lizama, Natalia; Johnson, Claire E; Ghosh, Manonita; Garg, Neeraj; Emery, Jonathan D; Saunders, Christobel
2015-06-01
To investigate general practitioners' (GP) perceptions about communication when providing cancer care. A self-report survey, which included an open response section, was mailed to a random sample of 1969 eligible Australian GPs. Content analysis of open response comments pertaining to communication was undertaken in order to ascertain GPs' views about communication issues in the provision of cancer care. Of the 648 GPs who completed the survey, 68 (10%) included open response comments about interprofessional communication. Participants who commented on communication had a median age of 50 years and worked 33 h/week; 28% were male and 59% practiced in the metropolitan area. Comments pertaining to communication were coded using five non-mutually exclusive categories: being kept in the loop; continuity of care; relationships with specialists; positive communication experiences; and strategies for improving communication. GPs repeatedly noted the importance of receiving detailed and timely communication from specialists and hospitals, particularly in relation to patients' treatment regimes and follow-up care. Several GPs remarked that they were left out of "the information loop" and that patients were "lost" or "dumped" after referral. While many GPs are currently involved in some aspects of cancer management, detailed and timely communication between specialists and GPs is imperative to support shared care and ensure optimal patient outcomes. This research highlights the need for established channels of communication between specialist and primary care medicine to support greater involvement by GPs in cancer care. © 2015 Wiley Publishing Asia Pty Ltd.
Cross-cultural perspectives on research participation and informed consent.
Barata, Paula C; Gucciardi, Enza; Ahmad, Farah; Stewart, Donna E
2006-01-01
This study examined Portuguese Canadian and Caribbean Canadian immigrants' perceptions of health research and informed consent procedures. Six focus groups (three in each cultural group) involving 42 participants and two individual interviews were conducted. The focus groups began with a general question about health research. This was followed by three short role-plays between the moderator and the assistant. The role-plays involved a fictional health research study in which a patient is approached for recruitment, is read a consent form, and is asked to sign. The role-plays stopped at key moments at which time focus group participants were asked questions about their understanding and their perceptions. Focus group transcripts were coded in QSR NUDIST software using open coding and then compared across cultural groups. Six overriding themes emerged: two were common in both the Portuguese and Caribbean transcripts, one emphasized the importance of trust and mistrust, and the other highlighted the need and desire for more information about health research. However, these themes were expressed somewhat differently in the two groups. In addition, there were four overriding themes that were specific to only one cultural group. In the Portuguese groups, there was an overwhelming positive regard for the research process and an emphasis on verbal as opposed to written information. The Caribbean participants qualified their participation in research studies and repeatedly raised images of invasive research.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D to 10D) are also observed from a parametric study involving interturbine distances and the hub height of the turbines. Further insight about the eddies responsible for the power generation is provided by a scaling analysis of the two-dimensional premultiplied spectra of the MKE flux. The LES code is developed in a high-Reynolds-number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and to identify the length scales that contribute to the power. This information can be useful for the design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
Legal regime of human activities in outer space law
NASA Technical Reports Server (NTRS)
Golda, Carlo
1994-01-01
Current developments in space activities increasingly involve the presence of humans on board spacecraft and, in the near future, on the Moon, on Mars, on board Space Stations, etc. With respect to these challenges, the political and legal issues connected to the status of astronauts are largely unclear and require a new doctrinal attention. In the same way, many legal and political questions remain open in the structure of future space crews: the need for international standards in the definition and training of astronauts, etc.; but, first of all, an international uniform legal definition of astronauts. Moreover, the legal structure for human life and operations in outer space can be a new and relevant paradigm for the definition of similar rules in all the situations and environments in which humans are involved in extreme frontiers. The present article starts from an overview on the existing legal and political definitions of 'astronauts', moving to the search of a more useful definition. This is followed by an analysis of the concrete problems created by human space activities, and the legal and political responses to them (the need for a code of conduct; the structure of the crew and the existing rules in the US and ex-USSR; the new legal theories on the argument; the definition and structure of a code of conduct; the next legal problems in fields such as privacy law, communications law, business law, criminal law, etc.).
Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor
NASA Astrophysics Data System (ADS)
Hristov, Ivan; Goranov, Goran; Hristova, Radoslava
2018-02-01
We consider a standing-wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed sine-Gordon equations. We present an OpenMP realization which exploits both the thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the HybriLIT cluster, and a Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
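The two levels of parallelism named in the abstract combine naturally in C as threads over an outer loop and SIMD lanes over an inner loop. A generic stencil-style sketch, not the actual sine-Gordon kernel:

```c
#include <stdio.h>

#define N 1024

/* Threads split the outer loop; each thread's inner loop is
 * vectorized across SIMD lanes. */
static double u[N][N], lap[N][N];

int main(void) {
    #pragma omp parallel for
    for (int i = 1; i < N - 1; i++) {
        #pragma omp simd
        for (int j = 1; j < N - 1; j++)
            lap[i][j] = u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]
                      - 4.0 * u[i][j];
    }
    printf("lap[1][1] = %f\n", lap[1][1]);
    return 0;
}
```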
Open ISEmeter: An open hardware high-impedance interface for potentiometric detection.
Salvador, C; Mesa, M S; Durán, E; Alvarez, J L; Carbajo, J; Mozo, J D
2016-05-01
In this work, a new open hardware interface based on Arduino to read electromotive force (emf) from potentiometric detectors is presented. The interface has been fully designed with the open code philosophy, and all documentation will be accessible on the web. The paper describes a comprehensive project including the electronic design, the firmware loaded on the Arduino, and the Java-coded graphical user interface used to load data into a computer (PC or Mac) for processing. The prototype was tested by measuring the calibration curve of a detector. As the detection element, an active poly(vinyl chloride)-based membrane was used, doped with cetyltrimethylammonium dodecylsulphate (CTA(+)-DS(-)). The experimental emf measurements indicate Nernstian behaviour with respect to the CTA(+) content of the test solutions, as described in the literature, proving the validity of the developed prototype.
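For reference, the Nernstian behaviour mentioned above means the electrode potential varies linearly with the logarithm of the activity of the sensed ion. In the standard electrochemistry form (generic notation, not specific to this paper):

```latex
E \;=\; E^{0} \;+\; \frac{2.303\,R\,T}{z\,F}\,\log_{10} a ,
```

i.e. a slope of roughly 59.2/z mV per decade of activity at 25 °C, which is the calibration-curve slope such a prototype is expected to reproduce.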
An OpenMI Implementation of a Water Resources System using Simple Script Wrappers
NASA Astrophysics Data System (ADS)
Steward, D. R.; Aistrup, J. A.; Kulcsar, L.; Peterson, J. M.; Welch, S. M.; Andresen, D.; Bernard, E. A.; Staggenborg, S. A.; Bulatewicz, T.
2013-12-01
This team has developed an adaptation of the Open Modelling Interface (OpenMI) that utilizes Simple Script Wrappers. Code is made OpenMI compliant through organization within three modules that initialize, perform time steps, and finalize results. A configuration file is prepared that specifies the variables a model expects to receive as input and those it will make available as output. An example is presented for groundwater, economic, and agricultural production models in the High Plains Aquifer region of Kansas. Our models use the programming environments in Scilab and Matlab, along with legacy Fortran code, and our Simple Script Wrappers can also use Python. These models are collectively run within this interdisciplinary framework from initial conditions into the future. It will be shown that by applying constraints to one model, the impact of changes on the water resources system may be assessed.
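The three-module organization described above can be sketched in C as follows; all names are illustrative, not the project's actual API, and the toy water-balance update merely stands in for a real model:

```c
#include <stdio.h>

/* Hypothetical skeleton of the initialize / time-step / finalize
 * structure a wrapper expects; a coupling framework would call these
 * and exchange the declared input/output variables each step. */
static double storage;   /* model state exchanged via the framework */

void model_initialize(double s0) { storage = s0; }

double model_perform_time_step(double inflow) {
    storage += inflow - 0.1 * storage;   /* toy water-balance update */
    return storage;                      /* declared output variable */
}

void model_finalize(void) { printf("final storage: %f\n", storage); }

int main(void) {
    model_initialize(100.0);
    for (int t = 0; t < 10; t++)
        model_perform_time_step(1.0);    /* framework would pass
                                            coupled values here */
    model_finalize();
    return 0;
}
```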
Beta Testing of CFD Code for the Analysis of Combustion Systems
NASA Technical Reports Server (NTRS)
Yee, Emma; Wey, Thomas
2015-01-01
A preliminary version of OpenNCC was tested to assess its accuracy in generating steady-state temperature fields for combustion systems at atmospheric conditions using three-dimensional tetrahedral meshes. Meshes were generated from a CAD model of a single-element lean-direct injection combustor, and the latest version of OpenNCC was used to calculate combustor temperature fields. OpenNCC was shown to be capable of generating sustainable reacting flames using a tetrahedral mesh, and the subsequent results were compared to experimental results. While nonreacting flow results closely matched experimental results, a significant discrepancy was present between the code's reacting flow results and experimental results. When wide air circulation regions with high velocities were present in the model, this appeared to create inaccurately high temperature fields. Conversely, low recirculation velocities caused low temperature profiles. These observations will aid in future modification of OpenNCC reacting flow input parameters to improve the accuracy of calculated temperature fields.
An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, with Application to WASP-12b
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio; Loredo, Thomas J.; Bowman, M. Oliver; Foster, Andrew S. D.; Stemm, Madison M.; Lust, Nate B.
2015-01-01
Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 μm. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
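As a rough illustration of what a Bayesian phase-space explorer does, here is a generic random-walk Metropolis sampler in C; the toy Gaussian log-posterior stands in for the expensive radiative-transfer forward model, and nothing here is BART's actual algorithm:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Toy log-posterior; in a retrieval this would require a full
 * radiative-transfer forward-model evaluation. */
static double log_posterior(double theta) {
    return -0.5 * (theta - 1.0) * (theta - 1.0);
}

int main(void) {
    srand(42);
    double theta = 0.0, lp = log_posterior(theta);
    for (int i = 0; i < 10000; i++) {
        /* propose a random step in parameter space */
        double prop = theta + 0.5 * (2.0 * rand() / RAND_MAX - 1.0);
        double lp_prop = log_posterior(prop);
        /* accept with probability min(1, exp(lp_prop - lp)) */
        if (log((double)rand() / RAND_MAX) < lp_prop - lp) {
            theta = prop;
            lp = lp_prop;
        }
        if (i % 1000 == 0) printf("%d %f\n", i, theta);
    }
    return 0;
}
```

The chain of accepted samples approximates the posterior, from which parameter estimates and uncertainty intervals follow directly.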
NASA Astrophysics Data System (ADS)
Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.
2017-11-01
A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. Which effects occur depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces and other factors, which can be accounted for by a set of dimensionless criteria. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on a pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties in terms of the Reynolds number and the Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
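The two dimensionless criteria named above take their standard definitions for droplet impact, with D the droplet diameter, V its impact velocity, ρ and μ the liquid density and viscosity, and σ the surface tension (notation assumed here):

```latex
\mathrm{Re} = \frac{\rho\,V\,D}{\mu},
\qquad
\mathrm{We} = \frac{\rho\,V^{2}\,D}{\sigma} .
```

Re weighs inertia against viscosity and We weighs inertia against surface tension, so the (Re, We) plane is a natural coordinate system for the mode maps described in the abstract.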
Evaluation of software maintainability with openEHR - a comparison of architectures.
Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, James R
2014-11-01
To assess whether it is easier to maintain a clinical information system developed using openEHR model-driven development versus mainstream methods. A new open source application (GastrOS) has been developed following openEHR's multi-level modelling approach using .Net/C#, based on the same requirements as an existing clinically used application developed using Microsoft Visual Basic and an Access database. Almost all the domain knowledge was embedded into the software code and data model in the latter. The same domain knowledge has been expressed as a set of openEHR Archetypes in GastrOS. We then introduced eight real-world change requests that had accumulated during live clinical usage, and implemented these in both systems while measuring the time for various development tasks and the change in software size for each change request. Overall it took half the time to implement changes in GastrOS. However, it was the more difficult application to modify for one change request, suggesting the nature of change is also important. It was not possible to implement changes by modelling only. Comparison of relative measures of time and software size change within each application highlights how architectural differences affected maintainability across change requests. The use of openEHR model-driven development can result in better software maintainability. The degree to which openEHR affects software maintainability depends on the extent and nature of domain knowledge involved in changes. Although we used relative measures for time and software size, confounding factors could not be totally excluded as a controlled study design was not feasible. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
2013-06-26
flow code used (OpenFOAM) to include differential diffusion and cell-based stochastic RTE solvers. The models were validated by simulation of laminar… wavenumber selection is improved by about a factor of 10. (5) OpenFOAM Improvements for Laminar Flames: A laminar-diffusion combustion solver, taking into account the effects of differential diffusion, was developed within the open source CFD package OpenFOAM [18]. In addition, OpenFOAM was augmented to take…
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George
2017-09-01
This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random-walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory-latency bound on the MIC. This study suggests that, despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as they do not to GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.
Source Code Stylometry Improvements in Python
2017-12-14
person can be identified via their handwriting or an author identified by their style of prose, programmers can be identified by their code… that is to say, picking 1 author out of a known complete set. However, expanded open-world classification and multiauthor classification have also been…
NASA Technical Reports Server (NTRS)
Choo, Y. K.; Staiger, P. J.
1982-01-01
The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.
Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol
2016-01-03
This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. The code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seefeldt, Ben; Sondak, David; Hensinger, David M.
Drekar is an application code that solves partial differential equations for fluids that can be optionally coupled to electromagnetics. Drekar solves low-Mach compressible and incompressible computational fluid dynamics (CFD), compressible and incompressible resistive magnetohydrodynamics (MHD), and multiple-species plasmas interacting with electromagnetic fields. Drekar discretization technology includes continuous and discontinuous finite element formulations, stabilized finite element formulations, mixed-integration finite element bases (nodal, edge, face, volume) and an initial arbitrary Lagrangian-Eulerian (ALE) capability. Drekar contains the implementation of the discretized physics and leverages the open source Trilinos project for both parallel solver capabilities and general finite element discretization tools. The code will be released open source under a BSD license. The code is used for fundamental research into the simulation of fluids and plasmas in high performance computing environments.
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of CAPO (the CAPtools [Computer Aided Parallelization Toolkit] OpenMP parallelization support tool) to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and the definition of thread groups. We report results for several benchmark codes and one full application that have been parallelized using our system.
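Standard OpenMP can express the two-level nesting idea directly; the following is a generic C sketch, and the NanosCompiler group extensions the abstract mentions are not reproduced here:

```c
#include <stdio.h>
#include <omp.h>

/* Two-level OpenMP parallelism: an outer parallel region spawns
 * teams of threads, each of which runs an inner parallel loop. */
int main(void) {
    omp_set_max_active_levels(2);   /* enable nested parallelism */

    #pragma omp parallel num_threads(2)
    {
        int outer = omp_get_thread_num();

        #pragma omp parallel for num_threads(4)
        for (int i = 0; i < 8; i++)
            printf("outer %d inner %d i=%d\n",
                   outer, omp_get_thread_num(), i);
    }
    return 0;
}
```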
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Michael T.; Safdari, Masoud; Kress, Jessica E.
The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of organizations interested in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public GitHub system to anyone interested in multiphysics code coupling. Many of the basic documents explaining the use and architecture of IMPACT are also attached as appendices to this document, and online HTML documentation is available through the GitHub site. There are over 100 unit tests provided that run through the Illinois Rocstar Application Development (IRAD) lightweight testing infrastructure, which is also supplied along with IMPACT. The package as a whole provides an excellent base for developing high-quality multiphysics applications using modern software development practices. To facilitate understanding of how to utilize IMPACT effectively, two multiphysics systems have been developed and are available open source through GitHub. The simpler of the two systems, named ElmerFoamFSI in the repository, is a multiphysics, fluid-structure-interaction (FSI) coupling of the solid mechanics package Elmer with a fluid dynamics module from OpenFOAM. This coupling illustrates how to combine software packages that are unrelated by either author or architecture into a robust, parallel multiphysics system. A more complex multiphysics tool is the Illinois Rocstar Rocstar Multiphysics code, which was rebuilt during the project around IMPACT. Rocstar Multiphysics was already an HPC multiphysics tool, but now that it has been rearchitected around IMPACT, it can be readily expanded to capture new and different physics in the future. In fact, during this project, the Elmer and OpenFOAM tools were also coupled into Rocstar Multiphysics and demonstrated. The full Rocstar Multiphysics codebase is also available on GitHub, and licensed for any organization to use as they wish.
Finally, the new IMPACT product is already being used in several multiphysics code-coupling projects for the Air Force, NASA, and the Missile Defense Agency, and initial work on expanding the IMPACT-enabled Rocstar Multiphysics has begun in support of a commercial company. These initiatives promise to expand the interest in and reach of IMPACT and Rocstar Multiphysics, ultimately leading to the envisioned standardization and consortium of users that was one of the goals of this project.
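The coupling pattern IMPACT supports can be sketched schematically. Below is a minimal Python illustration of an explicit partitioned coupling loop between two toy solvers; the class names, exchanged quantities, and numbers are invented for illustration and do not represent the IMPACT API.

```python
# Minimal sketch of explicit partitioned multiphysics coupling
# (hypothetical stand-in for an IMPACT-style driver, not the IMPACT API).

class FluidStub:
    """Toy 'fluid': pressure relaxes toward a value set by the wall position."""
    def __init__(self):
        self.pressure = 1.0

    def advance(self, wall_position, dt):
        target = 1.0 + 0.5 * wall_position
        self.pressure += dt * (target - self.pressure)
        return self.pressure          # interface datum handed to the solid

class SolidStub:
    """Toy 'solid': a damped spring loaded by the fluid pressure."""
    def __init__(self):
        self.x, self.v = 0.0, 0.0

    def advance(self, pressure, dt):
        force = pressure - 1.0 - 2.0 * self.x - 0.1 * self.v
        self.v += dt * force
        self.x += dt * self.v
        return self.x                 # interface datum handed back to the fluid

fluid, solid = FluidStub(), SolidStub()
wall = 0.0
for step in range(1000):                  # staggered time loop
    p = fluid.advance(wall, dt=0.01)      # fluid step with current wall position
    wall = solid.advance(p, dt=0.01)      # solid step with current pressure
print(f"final pressure={fluid.pressure:.4f}, wall={wall:.4f}")
```

The point of an infrastructure like IMPACT is that the two stubs could be independent production codes in different languages, with the toolkit handling the interface data exchange and parallel orchestration.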
ERIC Educational Resources Information Center
O'Connor, Eileen A.
2015-01-01
Opening with the history, recent advances, and emerging ways to use avatar-based virtual reality, an instructor who has used virtual environments since 2007 shares how these environments bring more options to community building, teaching, and education. With the open-source movement, where the source code for virtual environments was made…
ERIC Educational Resources Information Center
Wen, Wen
2012-01-01
While open source software (OSS) emphasizes open access to the source code and avoids the use of formal appropriability mechanisms, there has been little understanding of how the existence and exercise of formal intellectual property rights (IPR) such as patents influence the direction of OSS innovation. This dissertation seeks to bridge this gap…
Open Code - Open Content - Open Law. Building a Digital Commons
1999-06-21
…keep porn away from kids. And while I’m all for defeating COPA or the CDA, or whatever “C” word they come up with the next time around, I am… completely baffled about the priorities. Sure, civil liberties will be compromised if COPA stands; sure, cyberspace will be different if porn is not available…
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
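The management formulation described above can be illustrated with a toy linear response-matrix model. The matrix and limits below are invented placeholders, and the sketch ignores the nonlinearities (unconfined conditions, head-dependent boundaries) that GWM-2000 handles by iterative relinearization.

```python
import numpy as np
from scipy.optimize import linprog

# Toy response-matrix formulation (illustrative values, not hatchery data):
# maximize total pumping q1..q3 subject to drawdown limits at 2 constraint sites.
R = np.array([[0.8, 0.3, 0.1],    # drawdown at site A per unit pumping rate
              [0.2, 0.6, 0.5]])   # drawdown at site B per unit pumping rate
s_max = np.array([2.0, 1.5])      # allowable drawdowns at the two sites
q_max = 3.0                       # per-well capacity

# linprog minimizes, so negate the objective to maximize total withdrawal
res = linprog(c=-np.ones(3), A_ub=R, b_ub=s_max, bounds=[(0.0, q_max)] * 3)
print("optimal pumping rates:", res.x, "total:", -res.fun)
```

Changing `s_max`, `q_max`, or the set of constraint sites changes the optimum, which is the non-uniqueness lesson the abstract draws.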
Artificial Intelligence, DNA Mimicry, and Human Health.
Stefano, George B; Kream, Richard M
2017-08-14
The molecular evolution of genomic DNA across diverse plant and animal phyla involved dynamic registrations of sequence modifications to maintain existential homeostasis in response to increasingly complex patterns of environmental stressors. As an essential corollary, driver effects of positive evolutionary pressure are hypothesized to effect concerted modifications of genomic DNA sequences to meet expanded platforms of regulatory controls for successful implementation of advanced physiological requirements. It is also clearly apparent that preservation of updated registries of advantageous modifications of genomic DNA sequences requires coordinate expansion of convergent cellular proofreading/error correction mechanisms that are encoded by reciprocally modified genomic DNA. Computational expansion of operationally defined DNA memory extends to coordinate modification of coding and previously under-emphasized noncoding regions that now appear to represent essential reservoirs of untapped genetic information amenable to evolutionarily driven recruitment into the realm of biologically active domains. Additionally, expansion of DNA memory potential via chemical modification and activation of noncoding sequences is targeted to vertical augmentation and integration of an expanded cadre of transcriptional and epigenetic regulatory factors affecting linear coding of protein amino acid sequences within open reading frames.
RNA-Seq Based Transcriptional Map of Bovine Respiratory Disease Pathogen “Histophilus somni 2336”
Kumar, Ranjit; Lawrence, Mark L.; Watt, James; Cooksey, Amanda M.; Burgess, Shane C.; Nanduri, Bindu
2012-01-01
Genome structural annotation, i.e., identification and demarcation of the boundaries for all the functional elements in a genome (e.g., genes, non-coding RNAs, proteins and regulatory elements), is a prerequisite for systems level analysis. Current genome annotation programs do not identify all of the functional elements of the genome, especially small non-coding RNAs (sRNAs). Whole genome transcriptome analysis is a complementary method to identify “novel” genes, small RNAs, regulatory regions, and operon structures, thus improving the structural annotation in bacteria. In particular, the identification of non-coding RNAs has revealed their widespread occurrence and functional importance in gene regulation, stress and virulence. However, very little is known about non-coding transcripts in Histophilus somni, one of the causative agents of Bovine Respiratory Disease (BRD) as well as bovine infertility, abortion, septicemia, arthritis, myocarditis, and thrombotic meningoencephalitis. In this study, we report a single nucleotide resolution transcriptome map of H. somni strain 2336 using the RNA-Seq method. The RNA-Seq based transcriptome map identified 94 sRNAs in the H. somni genome, of which 82 sRNAs were never predicted or reported in earlier studies. We also identified 38 novel potential protein coding open reading frames that were absent in the current genome annotation. The transcriptome map allowed the identification of 278 operon structures (730 genes in total) in the genome. When compared with the genome sequence of the non-virulent strain 129Pt, a disproportionate number of the sRNAs (∼30%) were located in genomic regions unique to strain 2336 (∼18% of the total genome). This observation suggests that a number of the newly identified sRNAs in strain 2336 may be involved in strain-specific adaptations. PMID:22276113
Rosenthal, Jennifer L; Okumura, Megumi J; Hernandez, Lenore; Li, Su-Ting T; Rehm, Roberta S
2016-01-01
Children with special health care needs often require health services that are only provided at subspecialty centers. Such children who present to nonspecialty hospitals might require a hospital-to-hospital transfer. When transitioning between medical settings, communication is an integral aspect that can affect the quality of patient care. The objectives of the study were to identify barriers and facilitators to effective interfacility pediatric transfer communication to general pediatric floors from the perspectives of referring and accepting physicians, and then develop a conceptual model for effective interfacility transfer communication. This was a single-center qualitative study using grounded theory methodology. Referring and accepting physicians of children with special health care needs were interviewed. Four researchers coded the data using ATLAS.ti (version 7, Scientific Software Development GMBH, Berlin, Germany), using a 2-step process of open coding, followed by focused coding until no new codes emerged. The research team reached consensus on the final major categories and subsequently developed a conceptual model. Eight referring and 9 accepting physicians were interviewed. Theoretical coding resulted in 3 major categories: streamlined transfer process, quality handoff and 2-way communication, and positive relationships between physicians across facilities. The conceptual model unites these categories and shows how these categories contribute to effective interfacility transfer communication. Proposed interventions involved standardizing the communication process and incorporating technology such as telemedicine during transfers. Communication is perceived to be an integral component of interfacility transfers. We recommend that transfer systems be re-engineered to make the process more streamlined, to improve the quality of the handoff and 2-way communication, and to facilitate positive relationships between physicians across facilities. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bilke, Lars; Watanabe, Norihiro; Naumov, Dmitri; Kolditz, Olaf
2016-04-01
A complex software project with high standards for code quality requires automated tools to help developers with repetitive and tedious tasks such as compilation on different platforms and configurations, unit testing as well as end-to-end tests, and generating distributable binaries and documentation. This is known as continuous integration (CI). A community-driven FOSS project within the Earth Sciences benefits even more from CI, as time and resources for software development are often limited; testing developed code on more than the developer's PC is a task that is often neglected and where CI can be the solution. We developed an integrated workflow based on GitHub, Travis, and Jenkins for the community project OpenGeoSys - a coupled multiphysics modeling and simulation package - allowing developers to concentrate on implementing new features in a tight feedback loop. Every interested developer/user can create a pull request containing source code modifications on the online collaboration platform GitHub. The modifications are checked (compilation, compiler warnings, memory leaks, undefined behavior, unit tests, end-to-end tests, analyzing differences in simulation run results between changes, etc.) by the CI system, which automatically responds to the pull request or by email, reporting success or failure in detail and requesting improvements to the modifications where necessary. Core team developers review the modifications and merge them into the main development line once they satisfy agreed standards. We aim for efficient data structures and algorithms, self-explaining code, comprehensive documentation, and high test code coverage. This workflow keeps entry barriers to getting involved in the project low and permits an agile development process concentrating on feature additions rather than software maintenance procedures.
NASA Technical Reports Server (NTRS)
Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2017-01-01
In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
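The Shared MPI idea, ranks on the same node reading and writing one allocation through MPI-3 shared-memory windows, can be sketched with mpi4py. This is a generic illustration of the paradigm, not VULCAN or the mini-apps from the study.

```python
# Run with e.g.: mpiexec -n 4 python shared_mpi_sketch.py
# Minimal MPI+Shared-MPI (MPI-3 shared memory) sketch using mpi4py.
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
# split off a communicator of ranks that share a memory node
node = world.Split_type(MPI.COMM_TYPE_SHARED)

n = 1000
itemsize = MPI.DOUBLE.Get_size()
# rank 0 on the node allocates; the others attach with zero-size requests
size = n * itemsize if node.rank == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=node)
buf, _ = win.Shared_query(0)            # locate rank 0's segment
a = np.ndarray(buffer=buf, dtype='d', shape=(n,))

# each rank fills its slice of the *same* array; no message passing needed
lo = node.rank * n // node.size
hi = (node.rank + 1) * n // node.size
a[lo:hi] = node.rank

node.Barrier()                          # synchronize before reading
if node.rank == 0:
    print("node-shared sum:", a.sum())
```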
Antman, Yair; Yaron, Lior; Langer, Tomi; Tur, Moshe; Levanon, Nadav; Zadok, Avi
2013-11-15
Dynamic Brillouin gratings (DBGs), inscribed by comodulating two writing pump waves with a perfect Golomb code, are demonstrated and characterized experimentally. Compared with pseudo-random bit sequence (PRBS) modulation of the pump waves, the Golomb code provides lower off-peak reflectivity due to the unique properties of its cyclic autocorrelation function. Golomb-coded DBGs allow the long variable delay of one-time probe waveforms with higher signal-to-noise ratios, and without averaging. As an example, the variable delay of return-to-zero, on-off keyed data at a 1 Gbit/s rate, by as much as 10 ns, is demonstrated successfully. The eye diagram of the reflected waveform remains open, whereas PRBS modulation of the pump waves results in a closed eye. The variable delay of data at 2.5 Gbit/s is reported as well, with a marginally open eye diagram. The experimental results are in good agreement with simulations.
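The property exploited here, low off-peak cyclic autocorrelation, is straightforward to check numerically. In the sketch below, a random ±1 sequence stands in for a PRBS, and a Zadoff-Chu sequence stands in for a unimodular code with perfect periodic autocorrelation; neither is the specific perfect Golomb code used in the experiment.

```python
import numpy as np

def cyclic_autocorr(c):
    """Periodic autocorrelation via FFT: R[k] = sum_n c[n] * conj(c[n-k mod N])."""
    C = np.fft.fft(c)
    return np.fft.ifft(C * np.conj(C))

N = 127
rng = np.random.default_rng(0)
prbs = rng.choice([-1.0, 1.0], size=N)        # stand-in for a PRBS
n = np.arange(N)
zc = np.exp(-1j * np.pi * n * (n + 1) / N)    # Zadoff-Chu, odd N, root 1

for name, code in [("PRBS-like", prbs), ("perfect (Zadoff-Chu)", zc)]:
    R = np.abs(cyclic_autocorr(code))
    print(f"{name}: peak={R[0]:.1f}, max off-peak={R[1:].max():.3f}")
```

The random sequence shows off-peak sidelobes of order sqrt(N), while the perfect code's off-peak values vanish to numerical precision, which is the mechanism behind the lower off-peak reflectivity reported above.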
AirShow 1.0 CFD Software Users' Guide
NASA Technical Reports Server (NTRS)
Mohler, Stanley R., Jr.
2005-01-01
AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.
NASA Astrophysics Data System (ADS)
Kempton, Eliza M.-R.; Lupu, Roxana; Owusu-Asare, Albert; Slough, Patrick; Cale, Bryson
2017-04-01
We present Exo-Transmit, a software package to calculate exoplanet transmission spectra for planets of varied composition. The code is designed to generate spectra of planets with a wide range of atmospheric compositions, temperatures, surface gravities, and sizes, and is therefore applicable to exoplanets ranging in mass and size from hot Jupiters down to rocky super-Earths. Spectra can be generated with or without clouds or hazes, with options to (1) include an optically thick cloud deck at a user-specified atmospheric pressure or (2) augment the nominal Rayleigh scattering by a user-specified factor. The Exo-Transmit code is written in C and is extremely easy to use. Typically the user will only need to edit parameters in a single user input file in order to run the code for a planet of their choosing. Exo-Transmit is available publicly on Github with open-source licensing at https://github.com/elizakempton/Exo_Transmit.
Accountability for Information Flow via Explicit Formal Proof
2009-10-01
…macrobenchmarks. The first (called OpenSSL in the table below) unpacks the OpenSSL source code, compiles it, and deletes it. The other (called Fuse x 5)… The penalty for PCFS as compared to Fuse/Null is approximately 10% for OpenSSL and 2.5% for Fuse. The difference arises because the OpenSSL benchmark depends… Macrobenchmark results (elapsed time): OpenSSL - PCFS 126, Fuse/Null 114, Ext3 94; Fuse x 5 - PCFS 79, Fuse/Null 77, Ext3 70. In summary, assuming a low rate of cache misses, the…
The case for open-source software in drug discovery.
DeLano, Warren L
2005-02-01
Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options.
NASA Astrophysics Data System (ADS)
Jenness, Tim; Robitaille, Thomas; Tollerud, Erik; Mumford, Stuart; Cruz, Kelle
2016-04-01
The second Python in Astronomy conference will be held from 21-25 March 2016 at the University of Washington eScience Institute in Seattle, WA, USA. Similarly to the 2015 meeting (which was held at the Lorentz Center), we are aiming to bring together researchers, Python developers, users, and educators. The conference will include presentations, tutorials, unconference sessions, and coding sprints. In addition to sharing information about state-of-the art Python Astronomy packages, the workshop will focus on improving interoperability between astronomical Python packages, providing training for new open-source contributors, and developing educational materials for Python in Astronomy. The meeting is therefore not only aimed at current developers, but also users and educators who are interested in being involved in these efforts.
A Comprehensive review on the open source hackable text editor-ATOM
NASA Astrophysics Data System (ADS)
Sumangali, K.; Borra, Lokesh; Suraj Mishra, Amol
2017-11-01
This document presents a comprehensive study of “Atom”, one of the best open-source code editors available, with many built-in features that support a multitude of programming environments and provide a more productive toolset for developers.
Lipinska, B; Rao, A S; Bolten, B M; Balakrishnan, R; Goldberg, E B
1989-01-01
We sequenced bacteriophage T4 genes 2 and 3 and the putative C-terminal portion of gene 50. They were found to have appropriate open reading frames directed counterclockwise on the T4 map. Mutations in genes 2 and 64 were shown to be in the same open reading frame, which we now call gene 2. This gene codes for a protein of 27,068 daltons. The open reading frame corresponding to gene 3 codes for a protein of 20,634 daltons. Appropriate bands on polyacrylamide gels were identified at 30 and 20 kilodaltons, respectively. We found that the product of the cloned gene 2 can protect T4 DNA double-stranded ends from exonuclease V action. PMID:2644202
Monitor Network Traffic with Packet Capture (pcap) on an Android Device
2015-09-01
…administrative privileges. Under current Android development requirements, an Android Graphical User Interface (GUI) application cannot directly… build an Android application to monitor network traffic using open source packet capture (pcap) libraries. Subject terms: ELIDe, Android, pcap. From the table of contents: Building Application with Native Codes; Calling Native Codes Using JNI; Calling Native Codes from an Android Application; Retrieve Live…
OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-10-01
Octgrav is a very fast tree-code which runs on massively parallel Graphics Processing Units (GPUs) with NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree construction and shows a performance improvement of more than a factor of 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
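The opening angle mentioned above is the standard Barnes-Hut multipole acceptance parameter. A minimal CPU-side sketch of the criterion follows (illustrative Python, not the CUDA tree-code; the two-cell "tree" is hard-coded for brevity):

```python
import numpy as np

class Cell:
    """Toy octree cell: size, center of mass, total mass, optional children."""
    def __init__(self, size, com, mass, children=()):
        self.size, self.com, self.mass = size, np.asarray(com, float), mass
        self.children = children

def force_on(pos, cell, theta=0.5, eps=1e-3):
    """Barnes-Hut walk: use a cell's monopole if size/distance < theta,
    otherwise open the cell and recurse into its children."""
    d = cell.com - pos
    r = np.linalg.norm(d) + eps
    if cell.size / r < theta or not cell.children:   # acceptance test
        return cell.mass * d / r**3                  # monopole approximation
    return sum((force_on(pos, c, theta, eps) for c in cell.children),
               np.zeros(3))

# two leaf cells under one root; smaller theta means more cells get opened
leaf1 = Cell(0.5, [1.0, 0.0, 0.0], 1.0)
leaf2 = Cell(0.5, [0.0, 1.0, 0.0], 2.0)
root = Cell(2.0, [1.0 / 3, 2.0 / 3, 0.0], 3.0, children=(leaf1, leaf2))
print(force_on(np.array([5.0, 5.0, 0.0]), root, theta=0.5))
```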
On transform coding tools under development for VP10
NASA Astrophysics Data System (ADS)
Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao
2016-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
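The kind of transform flexibility described can be sketched as a per-block search over separable row/column transform pairs. The snippet uses scipy's DCT-II and DST-IV as generic stand-ins for a codec's transform set and a crude energy-compaction score; it illustrates the idea, not the VP10 algorithm.

```python
import numpy as np
from scipy.fft import dct, dst
from itertools import product

def transform2d(block, row_t, col_t):
    """Apply a separable 2D transform: one 1D transform per axis."""
    return col_t(row_t(block, axis=1), axis=0)

# Stand-in transform set (VP10/AV1 mix DCT with an asymmetric DST, among others).
T = {"DCT-II": lambda x, axis: dct(x, type=2, axis=axis, norm="ortho"),
     "DST-IV": lambda x, axis: dst(x, type=4, axis=axis, norm="ortho")}

def best_transform(block, keep=8):
    """Pick the (row, col) transform pair that packs the most energy
    into the 'keep' largest coefficients (a toy compaction score)."""
    scores = {}
    for (rn, rt), (cn, ct) in product(T.items(), T.items()):
        coeff = np.abs(transform2d(block, rt, ct)).ravel()
        top = np.sort(coeff)[::-1][:keep]
        scores[(rn, cn)] = top.sum() / coeff.sum()
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(1)
residue = np.cumsum(rng.normal(size=(8, 8)), axis=1)  # toy directional residue
pair, scores = best_transform(residue)
print("chosen (row, col) transforms:", pair)
```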
Code of Federal Regulations, 2010 CFR
2010-10-01
…Disabled; and (5) other acquisitions not using full and open competition, if authorized by Subpart 6.2 or 6… table: the service (Federal Service Codes from the Federal Procurement Data System Product/Service Code)… military services overseas… (2)(i) automatic data processing (ADP) telecommunications and…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... Process To Develop Consumer Data Privacy Code of Conduct Concerning Mobile Application Transparency AGENCY... convene the first meeting of a privacy multistakeholder process concerning mobile application transparency... concerning mobile application transparency. Stakeholders will engage in an open, transparent, consensus...
A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids
NASA Astrophysics Data System (ADS)
Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar
2014-11-01
We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes, and introduce new challenges for mesh adaptivity due to the additional "memory" of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and a conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.
NASA Astrophysics Data System (ADS)
Karataş, F. Ö.; Bodner, G. M.; Unal, Suat
2016-01-01
A study was conducted on the views of the nature of engineering held by 114 first-year engineering majors; the study built on prior work on views of the nature of science held by students, their instructors, and the general public. Open-coding analysis of responses to a 12-item questionnaire suggested that the participants held tacit beliefs that engineering (1) involves problem solving; (2) is a form of applied science; (3) involves the design of artefacts or systems; (4) is subject to various constraints; and (5) requires teamwork. These beliefs, however, were often unsophisticated, and significant aspects of the field of engineering as described in the literature on engineering practices were missing from the student responses. The results of this study are important because students' beliefs have a strong influence on what they value in a classroom situation, what they attend to in class, and how they choose to study for a course.
Extrusion Process by Finite Volume Method Using OpenFoam Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matos Martins, Marcelo; Tonini Button, Sergio; Divo Bressan, Jose
Computational codes are important tools for solving engineering problems. The analysis of metal forming processes such as extrusion is no different, because computational codes allow the process to be analyzed at reduced cost. Traditionally, the Finite Element Method is used to solve solid mechanics problems; however, the Finite Volume Method (FVM) has been gaining ground in this field of application. This paper presents velocity field and friction coefficient variation results obtained by numerical simulation of an aluminum direct cold extrusion process, using the OpenFoam Software and the FVM.
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.
ERIC Educational Resources Information Center
Long, Ju
2009-01-01
Open Source Software (OSS) is a major force in today's Information Technology (IT) landscape. Companies are increasingly using OSS in mission-critical applications. The transparency of the OSS technology itself with openly available source codes makes it ideal for students to participate in the OSS project development. OSS can provide unique…
Open chromatin reveals the functional maize genome
USDA-ARS?s Scientific Manuscript database
Every cellular process mediated through nuclear DNA must contend with chromatin. As results from ENCODE show, open chromatin assays can efficiently integrate across diverse regulatory elements, revealing functional non-coding genome. In this study, we use a MNase hypersensitivity assay to discover o...
Effects of Taxi Regulatory Revision in Seattle, Washington
DOT National Transportation Integrated Search
1983-05-01
In May 1979 the City of Seattle enacted license code revisions affecting taxicabs. Entry is opened to both fleets and independents and there is no limit on total licenses or the number of licenses a single operator may obtain. Open rate setting repla...
Large Eddy Simulations using oodlesDST
2016-01-01
…Research Agency DST-Group-TR-3205. ABSTRACT: The oodlesDST code is based on OpenFOAM software and performs Large Eddy Simulations of… maritime platforms using a variety of simulation techniques. He is currently using OpenFOAM software to perform both Reynolds Averaged Navier-Stokes…
Using SPARK as a Solver for Modelica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Wetter, Michael; Haves, Philip
Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.
Simulations of 4D edge transport and dynamics using the TEMPEST gyro-kinetic code
NASA Astrophysics Data System (ADS)
Rognlien, T. D.; Cohen, B. I.; Cohen, R. H.; Dorr, M. R.; Hittinger, J. A. F.; Kerbel, G. D.; Nevins, W. M.; Xiong, Z.; Xu, X. Q.
2006-10-01
Simulation results are presented for tokamak edge plasmas with a focus on the 4D (2r,2v) option of the TEMPEST continuum gyro-kinetic code. A detailed description of a variety of kinetic simulations is reported, including neoclassical radial transport from Coulomb collisions, electric field generation, dynamic response to perturbations by geodesic acoustic modes, and parallel transport on open magnetic-field lines. Comparison is made between the characteristics of the plasma solutions on closed and open magnetic-field line regions separated by a magnetic separatrix, and simple physical models are used to qualitatively explain the differences observed in mean flow and electric-field generation. The status of extending the simulations to 5D turbulence will be summarized. The code structure used in this ongoing project is also briefly described, together with future plans.
Tools for open geospatial science
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Mitasova, H.
2017-12-01
Open science uses open source to deal with reproducibility challenges in data and computational sciences. However, just using open source software or making the code public does not make the research reproducible. Moreover, the scientists face the challenge of learning new unfamiliar tools and workflows. In this contribution, we will look at a graduate-level course syllabus covering several software tools which make validation and reuse by a wider professional community possible. For the novices in the open science arena, we will look at how scripting languages such as Python and Bash help us reproduce research (starting with our own work). Jupyter Notebook will be introduced as a code editor, data exploration tool, and a lab notebook. We will see how Git helps us not to get lost in revisions and how Docker is used to wrap all the parts together using a single text file so that figures for a scientific paper or a technical report can be generated with a single command. We will look at examples of software and publications in the geospatial domain which use these tools and principles. Scientific contributions to GRASS GIS, a powerful open source desktop GIS and geoprocessing backend, will serve as an example of why and how to publish new algorithms and tools as part of a bigger open source project.
Observations and Thermochemical Calculations for Hot-Jupiter Atmospheres
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver; Cubillos, Patricio; Stemm, Madison
2015-01-01
I present Spitzer eclipse observations for WASP-14b and WASP-43b, an open source tool for thermochemical equilibrium calculations, and components of an open source tool for atmospheric parameter retrieval from spectroscopic data. WASP-14b is a planet that receives high irradiation from its host star, yet, although theory does not predict it, the planet hosts a thermal inversion. The WASP-43b eclipses have signal-to-noise ratios of ~25, one of the largest among exoplanets. To assess these planets' atmospheric composition and thermal structure, we developed an open-source Bayesian Atmospheric Radiative Transfer (BART) code. My dissertation tasks included developing a Thermochemical Equilibrium Abundances (TEA) code, implementing the eclipse geometry calculation in BART's radiative transfer module, and generating parameterized pressure and temperature profiles so the radiative-transfer module can be driven by the statistical module. To initialize the radiative-transfer calculation in BART, TEA calculates the equilibrium abundances of gaseous molecular species at a given temperature and pressure. It uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA, written in Python, is modular, documented, and available to the community via the open-source development site GitHub.com. Support for this work was provided by NASA Headquarters under the NASA Earth and Space Science Fellowship Program, grant NNX12AL83H, by NASA through an award issued by JPL/Caltech, and through the Science Mission Directorate's Planetary Atmospheres Program, grant NNX12AI69G.
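The Gibbs minimization TEA performs can be sketched for a toy H/O system, substituting scipy's SLSQP solver for TEA's iterative Lagrangian scheme. The dimensionless free energies below are made-up placeholders, not real thermochemical data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy system: species H2, O2, H2O; elements H, O.
# g = mu0/RT, placeholder values (NOT real thermochemical data).
g = np.array([0.0, 0.0, -10.0])
A = np.array([[2, 0, 2],      # H atoms per molecule of each species
              [0, 2, 1]])     # O atoms per molecule of each species
b = np.array([2.0, 1.0])      # total elemental abundances (H, O)
P = 1.0                       # pressure in bar (ideal-gas mixture)

def gibbs(n):
    """Total dimensionless Gibbs free energy of an ideal-gas mixture."""
    ntot = n.sum()
    return np.sum(n * (g + np.log(np.maximum(n, 1e-12) * P / ntot)))

res = minimize(gibbs, x0=np.array([0.5, 0.3, 0.2]),
               method="SLSQP",
               bounds=[(1e-10, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
print("equilibrium mole numbers [H2, O2, H2O]:", res.x.round(4))
```

With the strongly favorable placeholder energy for H2O, the minimizer drives nearly all H and O into water, as expected from the element-balance constraints.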
NASA Astrophysics Data System (ADS)
Melton, R.; Thomas, J.
With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower cost to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled command-and-control software architecture that provides several unique capabilities for moving the current legacy SSA software paradigm to an open source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing renders the visualization of both ground and on-orbit assets simple and straightforward.
OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.
Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter
2011-10-01
The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community. Copyright © 2011 Elsevier Ltd. All rights reserved.
Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John
2017-03-15
Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor's activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or innovative and those who were not rated at all (P>.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high performance. Colleague open-text comments that signal respect, professionalism, and being interpersonal may be key indicators of doctor's performance. ©Chris Gibbons, Suzanne Richards, Jose Maria Valderas, John Campbell. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.03.2017.
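The train-then-ensemble pipeline the authors describe can be sketched with scikit-learn. The comments and labels below are fabricated placeholders for a single theme ("respected"), not the GMC-CQ corpus, and only two of the eight algorithm families are shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Fabricated stand-in comments labeled for one theme ("respected": 1/0).
comments = ["Widely respected by the whole team", "Colleagues seek her opinion",
            "Needs to improve time management", "Often late with paperwork",
            "A trusted and respected mentor", "Communication could be better",
            "Held in high regard on the ward", "Unremarkable documentation"] * 10
labels = [1, 1, 0, 0, 1, 0, 1, 0] * 10

# TF-IDF features feeding a soft-voting ensemble of two text classifiers.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                      ("nb", MultinomialNB())], voting="soft"),
)

# Recall mirrors the interrater-agreement measure emphasized in the paper.
scores = cross_val_score(ensemble, comments, labels, cv=10, scoring="recall")
print("10-fold recall: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```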
An open-source textbook for teaching climate-related risk analysis using the R computing environment
NASA Astrophysics Data System (ADS)
Applegate, P. J.; Keller, K.
2015-12-01
Greenhouse gas emissions lead to increased surface air temperatures and sea level rise. In turn, sea level rise increases the risks of flooding for people living near the world's coastlines. Our own research on assessing sea level rise-related risks emphasizes both Earth science and statistics. At the same time, the free, open-source computing environment R is growing in popularity among statisticians and scientists due to its flexibility and graphics capabilities, as well as its large library of existing functions. We have developed a set of laboratory exercises that introduce students to the Earth science and statistical concepts needed for assessing the risks presented by climate change, particularly sea-level rise. These exercises will be published as a free, open-source textbook on the Web. Each exercise begins with a description of the Earth science and/or statistical concepts that the exercise teaches, with references to key journal articles where appropriate. Next, students are asked to examine in detail a piece of existing R code, and the exercise text provides a clear explanation of how the code works. Finally, students are asked to modify the existing code to produce a well-defined outcome. We discuss our experiences in developing the exercises over two separate semesters at Penn State, plus using R Markdown to interweave explanatory text with sample code and figures in the textbook.
Geospace simulations on the Cell BE processor
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D.
2008-12-01
OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver, which models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the Sun. Like other global magnetosphere codes, OpenGGCM is limited in realism by computational constraints on grid resolution. We investigate porting the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George
We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented in silico experiment - simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton, modeled by 4 million mesoscopic particles - on a single shared memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new avenue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.
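For flavor, a generic short-range pair search on a membrane-like point cloud can be written with scipy's cKDTree, used here as a simple stand-in for OpenRBC's Voronoi-based, lattice-free cell list (which, unlike this sketch, adapts cell shape to local density and curvature and updates incrementally).

```python
import numpy as np
from scipy.spatial import cKDTree

# Points scattered on a thin spherical shell: extremely sparse in 3D,
# which is what makes uniform-lattice cell lists wasteful for membranes.
rng = np.random.default_rng(0)
v = rng.normal(size=(100_000, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)  # unit-sphere "membrane"

cutoff = 0.02                      # short-range interaction cutoff
tree = cKDTree(pts)
pairs = tree.query_pairs(r=cutoff)
print(f"{len(pairs)} interacting pairs within cutoff {cutoff}")
```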
The Discourse of Making Amends: A Grammar of Remedial Interchanges.
ERIC Educational Resources Information Center
Walton, Marsha D.
Narrative observations were made of remedial interchanges occurring among school children (K-4) in open classrooms. Transcripts of interchanges were typed move by move and coded according to a hierarchical coding scheme (remedy, defiance, no response, relief, ending, and ambiguous). The interchanges of the kindergarteners and first graders were…
Color Coding of Circuit Quantities in Introductory Circuit Analysis Instruction
ERIC Educational Resources Information Center
Reisslein, Jana; Johnson, Amy M.; Reisslein, Martin
2015-01-01
Learning the analysis of electrical circuits represented by circuit diagrams is often challenging for novice students. An open research question in electrical circuit analysis instruction is whether color coding of the mathematical symbols (variables) that denote electrical quantities can improve circuit analysis learning. The present study…
Van Hoeck, Arne; Horemans, Nele; Monsieurs, Pieter; Cao, Hieu Xuan; Vandenhove, Hildegarde; Blust, Ronny
2015-01-01
Freshwater duckweeds, comprising the smallest, fastest-growing and simplest macrophytes, have various applications in agriculture, phytoremediation and energy production. Lemna minor, the so-called common duckweed, is a model system for these aquatic plants in ecotoxicological bioassays, genetic transformation tools and industrial applications. Given its ecotoxicological relevance and high potential for biomass production, whole-genome information for this cosmopolitan duckweed is needed. The 472 Mbp assembly of the L. minor genome (2n = 40; estimated 481 Mbp; 98.1 %) contains 22,382 protein-coding genes and 61.5 % repetitive sequences. The repeat content explains 94.5 % of the genome size difference in comparison with the greater duckweed, Spirodela polyrhiza (2n = 40; 158 Mbp; 19,623 protein-coding genes; and 15.79 % repetitive sequences). Protein ortholog identification with OrthoMCL against proteins from other monocot plants suggests 1356 duckweed-specific groups (3367 proteins, 15.0 % of total L. minor proteins) and 795 Lemna-specific groups (2897 proteins, 12.9 % of total L. minor proteins). Interestingly, proteins involved in biosynthetic processes, in responses to various stimuli and in hydrolase activities are enriched in the Lemna proteome in comparison with the Spirodela proteome. The genome sequence and annotation of L. minor protein-coding genes provide new insights for biological understanding and for biomass production applications of Lemna species.
The effect of trampoline parks on presentations to the Christchurch Emergency Department.
Roffe, Lloyd; Pearson, Scott; Sharr, Johnathan; Ardagh, Michael
2018-01-19
To analyse trampoline-related injuries suffered after the opening of two new trampoline parks in Christchurch. Data was collected from three 90-day periods. All trampoline-related injuries were collected from electronic documentation and coding. Those injured after both arenas opened were contacted and a semi-structured interview performed. In the 90 days after both parks opened there were 602 claims for trampoline-related injuries with 106 hospital presentations (55% male). This was a significant increase (p<0.01) from one year earlier (333 claims, 37 hospital presentations) and the 90 days prior to their opening (201 claims, 15 hospital presentations). Most injuries affected an older group of children, aged between 10-14 years (26%, n=28), compared to the other two periods (p<0.01). There was also a greater proportion of lower-limb injuries (52%, n=55) compared to the other two periods (p<0.01). Thirty-six required hospital admission, with 29 operations and an average length of stay of 2.11 days. One trampoline park allowed two or more people to use the same trampoline at the same time, and had over twice as many presentations (33%, n=35) than the other trampoline park (14%, n=15). Christchurch saw a significant increase in trampoline-related injuries after the opening of two new parks. These injuries involved an older group of children, affected predominantly the lower limbs and were more severe than those reported from the use of domestic trampolines. Consistent with past research, the trampoline park allowing multiple users had a higher proportion of presentations and more injuries requiring operative intervention.
Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing
2013-09-01
…generation of the features from the key points. OpenCV uses Euclidean distance to match the key points and has the option to use Manhattan distance… feature vector includes polarity and intensity information. The final step is matching the key points. In OpenCV, Euclidean distance or Manhattan… the code below is one way, and OpenCV offers the function radiusMatch (a pair must have a distance less than a given maximum distance). OpenCV’s…
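In that spirit, a minimal, self-contained example of radius-based matching with OpenCV's Python bindings follows; the random float32 descriptors are placeholders for real SIFT/SURF descriptors, and the radius value is arbitrary.

```python
import numpy as np
import cv2

# Random float32 descriptors stand in for SIFT/SURF descriptors
# (Euclidean/L2 matching, as described above).
rng = np.random.default_rng(0)
query = rng.random((50, 128), dtype=np.float32)
train = rng.random((80, 128), dtype=np.float32)

bf = cv2.BFMatcher(cv2.NORM_L2)            # brute-force Euclidean matcher

# radiusMatch keeps only pairs closer than maxDistance, per the text above
matches = bf.radiusMatch(query, train, maxDistance=4.0)
kept = [(m[0].queryIdx, m[0].trainIdx, m[0].distance)
        for m in matches if m]             # some queries may have no match
print(f"{len(kept)} query descriptors matched within the radius")
```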
Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, Graphics Processing Units (GPUs), Digital Signal Processors (DSPs), and Field Programmable Gate Arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow behind a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. This approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time, as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound on the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.
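Measuring compute-kernel throughput, the metric used in the report, follows the usual OpenCL host pattern of enqueueing a kernel with profiling enabled and reading the event timestamps. Below is a generic sketch in Python with pyopencl; the trivial XOR kernel is a stand-in, not the report's AES kernel.

```python
import numpy as np
import pyopencl as cl

src = """
__kernel void xor_round(__global uint *data, const uint key) {
    int i = get_global_id(0);
    data[i] ^= key;               /* trivial stand-in for an AES round */
}
"""

ctx = cl.create_some_context()     # may prompt for a device interactively
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)
prg = cl.Program(ctx, src).build()

n = 1 << 22
host = np.arange(n, dtype=np.uint32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=host)

evt = prg.xor_round(queue, (n,), None, buf, np.uint32(0xDEADBEEF))
evt.wait()
seconds = (evt.profile.end - evt.profile.start) * 1e-9  # device timestamps, ns
print(f"kernel throughput: {host.nbytes / seconds / 1e9:.2f} GB/s")
```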
2014-01-01
Background Our randomized controlled trial (The BETTER Trial) found that training a clinician to become a Prevention Practitioner (PP) in family practices improved chronic disease prevention and screening (CDPS). PPs were trained on CDPS and provided prevention prescriptions tailored to participating patients. For this embedded qualitative study, we explored perceptions of this new role to understand the PP intervention. Methods We used grounded theory methodology and purposefully sampled participants involved in any capacity with the BETTER Trial. Two physicians and one coordinator in each of two cities (Toronto, Ontario and Edmonton, Alberta) conducted eight individual semi-structured interviews and seven focus groups. We used an interview guide and documented research activities through an audit trail, journals, field notes and memos. We analyzed the data using the constant comparative method throughout open coding followed by theoretical coding. Results A framework and process involving external and internal practice facilitation using the new role of PP was thought to impact CDPS. The PP facilitated CDPS through on-going relationships with patients and practice team members. Key components included: 1) approaching CDPS in a comprehensive manner, 2) an individualized and personalized approach at multiple levels, 3) integrated continuity that included linking the patients and practices to CPDS resources, and 4) adaptability to different practices and settings. Conclusions The BETTER framework and key components are described as impacting CDPS through a process that involved a new role, the PP. The introduction of a novel role of a clinician within the primary care practice with skills in CDPS could appropriately address gaps in prevention and screening. PMID:24720686
Taxi Regulatory Revision in Seattle, Washington : Background and Implementation
DOT National Transportation Integrated Search
1980-01-01
In May 1979 the City of Seattle enacted license code revisions affecting taxicabs. Entry is opened to both fleets and independents and there is no limit on total licenses or the number of licenses a single operator may obtain. Open rate setting repla...
76 FR 65727 - Sunshine Act Meeting Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-24
... be open to the public. The rest of the meeting will be closed to the public. MATTERS TO BE CONSIDERED: Portion Open to the Public: (1) Oral Argument in The North Carolina Board of Dental Examiners, Docket 9343...:45 am] BILLING CODE 6750-01-M ...
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Open ISEmeter: An open hardware high-impedance interface for potentiometric detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvador, C.; Carbajo, J.; Mozo, J. D., E-mail: jdaniel.mozo@diq.uhu.es
In this work, a new open hardware interface based on Arduino to read electromotive force (emf) from potentiometric detectors is presented. The interface has been fully designed with the open-code philosophy and all documentation will be accessible on the web. The paper describes a comprehensive project including the electronic design, the firmware loaded on the Arduino, and the Java-coded graphical user interface to load data into a computer (PC or Mac) for processing. The prototype was tested by measuring the calibration curve of a detector. As detection element, an active poly(vinyl chloride)-based membrane was used, doped with cetyltrimethylammonium dodecylsulphate (CTA⁺-DS⁻). The experimental measures of emf indicate Nernstian behaviour with the CTA⁺ content of test solutions, as described in the literature, proving the validity of the developed prototype. A comparative analysis of performance was made by using the same chemical detector but changing the measurement instrumentation.
On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Mahmoud, Gamal M.
Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer sciences, for example, laser systems, control (or chaos suppression), secure communications, and information science. Dynamical basic properties, chaos (hyperchaos) synchronization, chaos control, and generating hyperchaotic behavior of these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned laser and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are stated as examples. Finally there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems, which need further investigations. Some of these open problems are given.
NASA Technical Reports Server (NTRS)
Wendel, Deirdre E.; Reiff, Patricia H.; Goldstein, Melvyn L.
2010-01-01
We simulate a northward IMF cusp reconnection event at the magnetopause using the OpenGGCM resistive MHD code. The ACE input data, solar wind parameters, and dipole tilt belong to a 2002 reconnection event observed by IMAGE and Cluster. Based on a fully three-dimensional skeleton of separators, nulls, and parallel electric fields, we show that magnetic draping, convection, and ionospheric field-line tying play a role in producing a series of locally reconnecting nulls with flux ropes. The flux ropes form in the cusp along the global separator line of symmetry. In 2D projection, the flux ropes have the appearance of a tearing mode with a series of 'x's' and 'o's', but bearing a kind of 'guide field' that exists only within the magnetopause. The reconnecting field lines in the string of ropes involve the IMF and both open and closed Earth magnetic field lines. The simulated magnetic geometry reproduces the findings of a superposed epoch impact parameter study derived from the Cluster magnetometer data for the same event. The observed geometry has repercussions for spacecraft observations of cusp reconnection and for the boundary conditions imposed on reconnection simulations.
GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac
2017-03-01
The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require many more arithmetic operations and introduce stronger data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2014-05-01
Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or more generally whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g., the 3×3 local neighbourhood needed to implement an averaging image filter can be accessed simply from within a loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and separately to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP. However, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
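To make the abstraction concrete, here is what the 3×3 averaging filter looks like when hand-written in plain C++; all of the index bookkeeping and boundary handling below is what a neighbourhood construct such as MIST's hides behind a single whole-array operation (a sketch of the underlying computation, not MIST syntax):

```cpp
// Hand-coded 3x3 averaging filter: explicit loops, explicit border clipping.
#include <vector>

using Image = std::vector<std::vector<double>>;

Image average3x3(const Image& in) {
    const int rows = static_cast<int>(in.size());
    const int cols = static_cast<int>(in[0].size());
    Image out(rows, std::vector<double>(cols, 0.0));

    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            double sum = 0.0;
            int count = 0;
            // Visit the 3x3 neighbourhood, clipping at the image border.
            for (int dr = -1; dr <= 1; ++dr) {
                for (int dc = -1; dc <= 1; ++dc) {
                    const int rr = r + dr, cc = c + dc;
                    if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) {
                        sum += in[rr][cc];
                        ++count;
                    }
                }
            }
            out[r][c] = sum / count;
        }
    }
    return out;
}
```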
FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna
2016-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu
2015-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Computer Simulation of the VASIMR Engine
NASA Technical Reports Server (NTRS)
Garrison, David
2005-01-01
The goal of this project is to develop a magneto-hydrodynamic (MHD) computer code for simulation of the VASIMR engine. This code is designed to be easy to modify and use. We achieve this using the Cactus framework, a system originally developed for research in numerical relativity. Since its release, Cactus has become an extremely powerful and flexible open source framework. The development of the code will be done in stages, starting with a basic fluid dynamic simulation and working towards a more complex MHD code. Once developed, this code can be used by students and researchers in order to further test and improve the VASIMR engine.
OpenMP 4.5 Validation and Verification Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pophale, Swaroop S; Bernholdt, David E; Hernandez, Oscar R
2017-12-15
OpenMP, a directive-based programming API, introduces directives for accelerator devices that programmers are starting to use more frequently in production codes. To make sure OpenMP directives work correctly across architectures, it is critical to have a mechanism that tests an implementation's conformance to the OpenMP standard. This testing process can uncover ambiguities in the OpenMP specification, which helps compiler developers and users make better use of the standard. We fill this gap through our validation and verification test suite, which focuses on the offload directives available in OpenMP 4.5.
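As an illustration of the kind of check such a suite contains, here is a hedged sketch of a conformance test for the OpenMP 4.5 target offload directives; the kernel and the pass/fail reporting are illustrative, not taken from the actual suite:

```cpp
// Minimal offload conformance check (illustrative): run a saxpy on the
// target device and verify the result on the host.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 2.0f), y(n, 1.0f);
    const float a = 3.0f;
    float* xp = x.data();
    float* yp = y.data();

    // Offload the loop; map clauses control host/device data movement.
    #pragma omp target teams distribute parallel for \
        map(to: xp[0:n]) map(tofrom: yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];

    // Host-side verification: every element must equal 3*2 + 1 = 7.
    int errors = 0;
    for (int i = 0; i < n; ++i)
        if (yp[i] != 7.0f) ++errors;

    if (errors == 0) std::printf("PASS\n");
    else             std::printf("FAIL (%d errors)\n", errors);
    return errors != 0;
}
```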
ERIC Educational Resources Information Center
Behr, Dorothée
2015-01-01
Open-ended probing questions in cross-cultural surveys help uncover equivalence problems in cross-cultural survey research. For languages that a project team does not understand, probe answers need to be translated into a common project language. This article presents a case study on translating open-ended, that is, narrative answers. It describes…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Andrew; Haves, Philip; Jegi, Subhash
This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.
ERIC Educational Resources Information Center
Simpson, James Daniel
2014-01-01
Free, libre, and open source software (FLOSS) is software that is collaboratively developed. FLOSS provides end-users with the source code and the freedom to adapt or modify a piece of software to fit their needs (Deek & McHugh, 2008; Stallman, 2010). FLOSS has a 30 year history that dates to the open hacker community at the Massachusetts…
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time according to a code, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the noise effect on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed searching for the optimal code decreases with the presented method, and that the restored image shows better subjective quality and superior objective evaluation values.
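A hedged sketch of one common selection criterion follows: it favors a shutter code whose DFT magnitude spectrum has a large minimum and low variance, so that no frequency band is lost in deconvolution. The weighting constant is illustrative, and the paper's noise-aware criterion refines this basic idea:

```cpp
// Scoring a candidate binary shutter code for coded exposure (illustrative).
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

double codeScore(const std::vector<int>& code) {
    const size_t n = code.size();
    std::vector<double> mag(n);

    // Naive DFT of the 0/1 shutter sequence.
    for (size_t f = 0; f < n; ++f) {
        std::complex<double> acc(0.0, 0.0);
        for (size_t t = 0; t < n; ++t) {
            const double ang = -2.0 * M_PI * double(f) * double(t) / double(n);
            acc += double(code[t]) *
                   std::complex<double>(std::cos(ang), std::sin(ang));
        }
        mag[f] = std::abs(acc);
    }

    // Minimum and variance of the magnitude spectrum (skip DC at f = 0).
    double minMag = mag[1], mean = 0.0;
    for (size_t f = 1; f < n; ++f) {
        minMag = std::min(minMag, mag[f]);
        mean += mag[f];
    }
    mean /= double(n - 1);
    double var = 0.0;
    for (size_t f = 1; f < n; ++f) var += (mag[f] - mean) * (mag[f] - mean);
    var /= double(n - 1);

    // Higher is better: large worst-case magnitude, flat spectrum.
    return minMag - 0.1 * var;
}
```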
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in a large code size limit is of , where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
NASA Technical Reports Server (NTRS)
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs written with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds some of the commercial tools.
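A before/after illustration of the kind of code such a tool emits; the directive below is hand-written to show the private/reduction/shared classification, not actual CAPTools output:

```cpp
// Sketch of automatic OpenMP directive insertion (illustrative). A tool like
// CAPTools analyses data dependences in the serial loop and emits a directive
// classifying each variable as private, reduction or shared.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> a(n, 0.5), b(n, 2.0);
    double sum = 0.0;

    // Generated directive: sum is a reduction variable; a and b are shared;
    // i and tmp are private to each thread.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        double tmp = a[i] * b[i];  // private by virtue of its loop scope
        sum += tmp;
    }

    std::printf("sum = %f\n", sum);  // expected: 1000000 * 1.0
    return 0;
}
```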
Free and Open Source Software for Geospatial in the field of planetary science
NASA Astrophysics Data System (ADS)
Frigeri, A.
2012-12-01
Information technology applied to geospatial analyses has spread quickly in the last ten years. The availability of open data and data from collaborative mapping projects has increased interest in tools, procedures and methods to handle spatially-related information. Free and Open Source Software projects devoted to geospatial data handling are gaining success, as the use of interoperable formats and protocols allows the user to choose what pipeline of tools and libraries is needed to solve a particular task, adapting the software scene to the specific problem. In particular, the Free and Open Source model of development mimics the scientific method very well, and researchers should be naturally encouraged to take part in the development process of these software projects, as this represents a very agile way to interact among several institutions. When it comes to planetary sciences, geospatial Free and Open Source Software is gaining a key role in projects that commonly involve different subjects in an international scenario. Very popular software suites for processing scientific mission data (for example, ISIS) and for navigation/planning (SPICE) are being distributed along with their source code, and the interaction between user and developer is often very strict, creating a continuum between these two figures. A widely used library for handling geospatial data (GDAL) has started to support planetary data from the Planetary Data System, and recent contributions have enabled support for other popular data formats used in planetary science, such as VICAR. The use of Geographic Information Systems in planetary science is now widespread, and Free and Open Source GIS, open GIS formats and network protocols allow existing tools and methods developed to solve Earth-based problems to be extended to the study of solar system bodies. A day in the working life of a researcher using Free and Open Source Software for geospatial work will be presented, as well as the benefits, and solutions to possible drawbacks, of the effort required to use, support and contribute to these projects.
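As a concrete illustration of GDAL-based planetary data access, the sketch below opens a raster with GDAL's C++ API; the file name is hypothetical, and GDAL dispatches to the PDS or VICAR driver based on the file contents, so planetary data reads like any other raster:

```cpp
// Opening a planetary raster with GDAL (illustrative; file name hypothetical).
#include <gdal_priv.h>
#include <cstdio>

int main() {
    GDALAllRegister();  // register all built-in drivers, PDS/VICAR included

    GDALDataset* ds = static_cast<GDALDataset*>(
        GDALOpen("mola_topography.img", GA_ReadOnly));
    if (ds == nullptr) {
        std::fprintf(stderr, "could not open dataset\n");
        return 1;
    }

    std::printf("size: %d x %d, bands: %d, driver: %s\n",
                ds->GetRasterXSize(), ds->GetRasterYSize(),
                ds->GetRasterCount(),
                ds->GetDriver()->GetDescription());
    GDALClose(ds);
    return 0;
}
```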
Oil and gas field code master list 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This is the thirteenth annual edition of the Energy Information Administration's (EIA) Oil and Gas Field Code Master List. It reflects data collected through October 1994 and provides standardized field name spellings and codes for all identified oil and/or gas fields in the United States. The master field name spellings and codes are to be used by respondents when filing the following Department of Energy (DOE) forms: Form EIA-23, "Annual Survey of Domestic Oil and Gas Reserves," filed by oil and gas well operators (field codes are required from larger operators only); Forms FERC 8 and EIA-191, "Underground Gas Storage Report," filed by natural gas producers and distributors who operate underground natural gas storage facilities. Other Federal and State government agencies, as well as industry, use the EIA Oil and Gas Field Code Master List as the standard for field identification. A machine-readable version of the Oil and Gas Field Code Master List is available from the National Technical Information Service, 5285 Port Royal Road, Springfield, Virginia 22161, (703) 487-4650. In order for the Master List to be useful, it must be accurate and remain current. To accomplish this, EIA constantly reviews and revises this list. The EIA welcomes all comments, corrections, and additions to the Master List. All such information should be given to the EIA Field Code Coordinator at (214) 953-1858. EIA gratefully acknowledges the assistance provided by numerous State organizations and trade associations in verifying the existence of fields and their official nomenclature.
The 2017 Bioinformatics Open Source Conference (BOSC)
Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Munoz-Torres, Monica; Tzovaras, Bastian Greshake; Wiencko, Heather
2017-01-01
The Bioinformatics Open Source Conference (BOSC) is a meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. The 18th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2017) took place in Prague, Czech Republic in July 2017. The conference brought together nearly 250 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, open and reproducible science, and this year’s theme, open data. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community, called the OBF Codefest. PMID:29118973
The 2017 Bioinformatics Open Source Conference (BOSC).
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Munoz-Torres, Monica; Tzovaras, Bastian Greshake; Wiencko, Heather
2017-01-01
The Bioinformatics Open Source Conference (BOSC) is a meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. The 18th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2017) took place in Prague, Czech Republic in July 2017. The conference brought together nearly 250 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, open and reproducible science, and this year's theme, open data. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community, called the OBF Codefest.
NASA Astrophysics Data System (ADS)
Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin
2017-01-01
High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.
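A minimal sketch of the load-balancing half of the idea, assuming an estimated encoding cost per CTU column is available; the cost model and two-tile restriction are illustrative simplifications, not the paper's exact ATB algorithm:

```cpp
// Choose a vertical tile boundary that best equalizes the estimated
// encoding cost of the two resulting tiles.
#include <cmath>
#include <numeric>
#include <vector>

int chooseBoundary(const std::vector<double>& columnCost) {
    const double total =
        std::accumulate(columnCost.begin(), columnCost.end(), 0.0);
    double left = 0.0, bestDiff = total;
    int bestSplit = 1;
    // Try every split point; keep the one minimizing |left - right| cost.
    for (int s = 1; s < static_cast<int>(columnCost.size()); ++s) {
        left += columnCost[s - 1];
        const double diff = std::fabs(left - (total - left));
        if (diff < bestDiff) { bestDiff = diff; bestSplit = s; }
    }
    return bestSplit;  // boundary position in CTU columns
}
```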
Flores, Janet E; Montgomery, Susanne; Lee, Jerry W
2005-09-01
To evaluate parent involvement in a Southern California teen pregnancy prevention community partnership project. Researchers expected to find parent and family-related participation barriers similar to those described in the family support literature, which they could address with program modifications. Three phases of qualitative evaluation occurred: key informant interviews and focus groups with youth and parents; focus groups with service providers; and key informant interviews with service providers, their supervisor, and the collaborative coordinator. Theory-based, open-ended question guides directed the interviews and focus groups, and transcriptions were coded and themed using grounded theory methods. Parents and youth sought ways to improve connections and communication with each other, and parents welcomed parenting education from the project. Unexpectedly, the major obstacles to parent participation identified in this project were largely organizational, and included the assignment of parent involvement tasks to agencies lacking capacities to work effectively with parents, inadequate administrative support for staff, and the absence of an effective system for communicating concerns and resolving conflicts among collaborative partners. Youth serving agencies may not be the best partners to implement effective parent involvement or family support interventions. Collaborative leadership must identify appropriate partners, engender their cooperation, and support their staff to further the overall goals of the collaborative.
Studying Functions of All Yeast Genes Simultaneously
NASA Technical Reports Server (NTRS)
Stolc, Viktor; Eason, Robert G.; Pourmand, Nader; Herman, Zelek S.; Davis, Ronald W.; Anthony, Kevin; Jejelowo, Olufisayo
2006-01-01
A method of studying the functions of all the genes of a given species of microorganism simultaneously has been developed in experiments on Saccharomyces cerevisiae (commonly known as baker's or brewer's yeast). It is already known that many yeast genes perform functions similar to those of corresponding human genes; therefore, by facilitating understanding of yeast genes, the method may ultimately also contribute to the knowledge needed to treat some diseases in humans. Because of the complexity of the method and the highly specialized nature of the underlying knowledge, it is possible to give only a brief and sketchy summary here. The method involves the use of unique synthetic deoxyribonucleic acid (DNA) sequences that are denoted as DNA bar codes because of their utility as molecular labels. The method also involves the disruption of gene functions through deletion of genes. Saccharomyces cerevisiae is a particularly powerful experimental system in that multiple deletion strains easily can be pooled for parallel growth assays. Individual deletion strains recently have been created for 5,918 open reading frames, representing nearly all of the estimated 6,000 genetic loci of Saccharomyces cerevisiae. Tagging of each deletion strain with one or two unique 20-nucleotide sequences enables identification of genes affected by specific growth conditions, without prior knowledge of gene functions. Hybridization of bar-code DNA to oligonucleotide arrays can be used to measure the growth rate of each strain over several cell-division generations. The growth rate thus measured serves as an index of the fitness of the strain.
Espino, Jeremy U; Wagner, M; Szczepaniak, C; Tsui, F C; Su, H; Olszewski, R; Liu, Z; Chapman, W; Zeng, X; Ma, L; Lu, Z; Dara, J
2004-09-24
Computer-based outbreak and disease surveillance requires high-quality software that is well-supported and affordable. Developing software in an open-source framework, which entails free distribution and use of software and continuous, community-based software development, can produce software with such characteristics, and can do so rapidly. The objective of the Real-Time Outbreak and Disease Surveillance (RODS) Open Source Project is to accelerate the deployment of computer-based outbreak and disease surveillance systems by writing software and catalyzing the formation of a community of users, developers, consultants, and scientists who support its use. The University of Pittsburgh seeded the Open Source Project by releasing the RODS software under the GNU General Public License. An infrastructure was created, consisting of a website, mailing lists for developers and users, designated software developers, and shared code-development tools. These resources are intended to encourage growth of the Open Source Project community. Progress is measured by assessing website usage, number of software downloads, number of inquiries, number of system deployments, and number of new features or modules added to the code base. During September--November 2003, users generated 5,370 page views of the project website, 59 software downloads, 20 inquiries, one new deployment, and addition of four features. Thus far, health departments and companies have been more interested in using the software as is than in customizing or developing new features. The RODS laboratory anticipates that after initial installation has been completed, health departments and companies will begin to customize the software and contribute their enhancements to the public code base.
77 FR 12077 - Meeting of the Judicial Conference Advisory Committee on Rules of Evidence
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Evidence. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Evidence will hold a one- day meeting. The meeting will be open to public observation but not participation. DATES: October...; 8:45 am] BILLING CODE 2210-55-P ...
Open Mess Management Career Ladder AFS 742X0 and CEM Code 74200.
1980-12-01
I. OPEN MESS MANAGERS (SPC049, N=187) II. FOOD/BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076, N=92) a. Bar and Operations Managers (GRP085...said they will or probably will reenlist. II. FOOD/BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076). This cluster of 92 respondents (23...operation of open mess food and beverage functions. The majority of these airmen identify themselves as Assistant Managers of open mess facilities and are
2014-09-15
solver, OpenFOAM version 2.1.‡ In particular, the incompressible laminar flow equations (Eq. 6-8) were solved in conjunction with the pressure-implicit...central differencing and upwinding schemes, respectively. Since the OpenFOAM code is inherently transient, steady-state conditions were obtained...collaborative effort between Kitware and Los Alamos National Laboratory. ‡ OpenFOAM is a free, open-source computational fluid dynamics software developed
Integration of OpenMC methods into MAMMOTH and Serpent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie; DeHart, Mark; Tumulak, Aaron
OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
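For context, here is a hedged sketch of the general FET idea: Legendre expansion coefficients are estimated from Monte Carlo samples and used to reconstruct a spatial shape for multiphysics coupling. This illustrates the technique, not OpenMC's internal implementation:

```cpp
// Legendre functional expansion tally (FET) sketch. Positions x in [a, b]
// are mapped to s in [-1, 1]; coefficient n is (2n+1)/2 times the sample
// mean of P_n(s).
#include <cmath>   // std::legendre (C++17)
#include <vector>

std::vector<double> fetCoefficients(const std::vector<double>& x,
                                    double a, double b, int order) {
    std::vector<double> coeff(order + 1, 0.0);
    for (double xi : x) {
        const double s = 2.0 * (xi - a) / (b - a) - 1.0;  // map to [-1, 1]
        for (int n = 0; n <= order; ++n)
            coeff[n] += std::legendre(n, s);
    }
    for (int n = 0; n <= order; ++n)
        coeff[n] *= (2.0 * n + 1.0) / 2.0 / double(x.size());
    return coeff;
}

// Evaluate the reconstructed distribution at position x in [a, b].
double fetEvaluate(const std::vector<double>& coeff,
                   double x, double a, double b) {
    const double s = 2.0 * (x - a) / (b - a) - 1.0;
    double f = 0.0;
    for (int n = 0; n < static_cast<int>(coeff.size()); ++n)
        f += coeff[n] * std::legendre(n, s);
    return f * 2.0 / (b - a);  // Jacobian of the [-1,1] -> [a,b] mapping
}
```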
Mapping among Number Words, Numerals, and Nonsymbolic Quantities in Preschoolers
ERIC Educational Resources Information Center
Hurst, Michelle; Anderson, Ursula; Cordes, Sara
2017-01-01
In mathematically literate societies, numerical information is represented in 3 distinct codes: a verbal code (i.e., number words); a digital, symbolic code (e.g., Arabic numerals); and an analogical code (i.e., quantities; Dehaene, 1992). To communicate effectively using these numerical codes, our understanding of number must involve an…
A Unified View of Global Instability of Compressible Flow over Open Cavities
2006-03-28
in terms of the number of steps realized by the DNS code per second (S/sec) as the number of processors (np) increases. For this comparison the “new...computations). It may clearly be seen that both solutions performed comparably well at low numbers of processors; however, as np increased, the Myrinet...has subsequently been designed, hard-coded and validated at nu modelling. Design characteristics of the code have been a) high-accuracy, b
Cracking the Code: Synchronizing Policy and Practice for Performance-Based Learning
ERIC Educational Resources Information Center
Patrick, Susan; Sturgis, Chris
2011-01-01
Performance-based learning is one of the keys to cracking open the assumptions that undergird the current educational codes, structures, and practices. By finally moving beyond the traditions of a time-based system, greater customized educational services can flourish, preparing more and more students for college and careers. This proposed policy…
Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results
NASA Astrophysics Data System (ADS)
Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.
2017-05-01
The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been validated against experimental data such as the MEXICO experiment, but comparisons between codes have often been made only on a very broad scale. Therefore this study attempts, first, a verification by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and second, a validation by comparing both against experimental results from the MEXICO and NEW MEXICO experiments.
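For readers new to the technique, the sketch below shows the standard Gaussian regularization kernel with which actuator line implementations smear blade forces onto the CFD grid; the data types are illustrative, and both codes apply variations of this projection:

```cpp
// Regularized body-force projection used by actuator line methods:
// eta_eps(d) = exp(-(d/eps)^2) / (eps^3 * pi^(3/2)).
#include <cmath>

struct Vec3 { double x, y, z; };

// Force density contributed at grid point `p` by actuator point `q`
// carrying force `F` (force per unit length times segment length).
Vec3 smearedForce(const Vec3& p, const Vec3& q, const Vec3& F, double eps) {
    const double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
    const double d2 = dx * dx + dy * dy + dz * dz;
    const double eta = std::exp(-d2 / (eps * eps)) /
                       (eps * eps * eps * std::pow(M_PI, 1.5));
    return {F.x * eta, F.y * eta, F.z * eta};
}
```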
NASA Technical Reports Server (NTRS)
Lawson, Gary; Poteat, Michael; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2016-01-01
In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with the Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23X was measured for MPI+SMPI, but only 10X was measured for MPI+OpenMP.
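A minimal hedged sketch of the MPI+OpenMP hybrid pattern the mini-apps explore (not VULCAN code): OpenMP threads share memory within each rank, and the ranks combine their results over MPI.

```cpp
// Hybrid MPI+OpenMP sketch: each rank sums part of a domain with an OpenMP
// parallel loop, then ranks combine partial sums with MPI_Reduce.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    // FUNNELED: only the main thread of each rank makes MPI calls.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 10000000;
    double local = 0.0;

    // Shared-memory parallelism inside the rank.
    #pragma omp parallel for reduction(+ : local)
    for (long i = rank; i < n; i += size)   // crude cyclic decomposition
        local += 1.0 / double(n);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("sum = %f (expect 1.0)\n", global);

    MPI_Finalize();
    return 0;
}
```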
General-Purpose Serial Interface For Remote Control
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Gupton, Lawrence E.
1990-01-01
Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
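A hedged sketch of the byte-coded dispatch described above; the command byte values and switch assignments are hypothetical stand-ins for the codes stored in the controller's ROM:

```cpp
// Byte-coded command dispatch (illustrative). Each accepted byte selects an
// output port and bit (6 ports x 8 bits = 48 switches) and whether to open
// or close that switch.
#include <cstdint>
#include <cstdio>

struct Command {
    std::uint8_t code;   // byte received from the host over RS-232
    int port;            // output port 0..5
    int bit;             // bit 0..7 within the port
    bool close;          // true = close switch, false = open it
};

// Stand-in for the ROM code table (values hypothetical).
static const Command kTable[] = {
    {0x10, 0, 0, true},   // e.g. pan left on
    {0x11, 0, 0, false},  // pan left off
    {0x22, 3, 5, true},   // e.g. zoom in on
};

void dispatch(std::uint8_t byte, std::uint8_t ports[6]) {
    for (const Command& c : kTable) {
        if (c.code == byte) {
            if (c.close) ports[c.port] |=  (1u << c.bit);
            else         ports[c.port] &= ~(1u << c.bit);
            std::printf("port %d bit %d -> %s\n", c.port, c.bit,
                        c.close ? "closed" : "open");
            return;
        }
    }
    // Unrecognized bytes are ignored, as with an unmatched ROM lookup.
}
```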
Yu, Xuefei; Lin, Liangzhuo; Shen, Jie; Chen, Zhi; Jian, Jun; Li, Bin; Xin, Sherman Xuegang
2018-01-01
The mean amplitude of glycemic excursions (MAGE) is an essential index for glycemic variability assessment and a key reference for blood glucose control in the clinic. However, the traditional "ruler and pencil" manual method for calculating MAGE is time-consuming and prone to error due to the huge data size, making the development of a robust computer-aided program an urgent requirement. Although several software products are available instead of manual calculation, poor agreement among them has been reported. Therefore, more studies are required in this field. In this paper, we developed a mathematical algorithm based on integer nonlinear programming. Following the proposed mathematical method, an open-code computer program named MAGECAA v1.0 was developed and validated. The results of the statistical analysis indicated that the developed program was robust compared to the manual method. The agreement between the developed program and currently available popular software is satisfactory, indicating that concern about disagreement among different software products is unnecessary. The open-code programmable algorithm is an extra resource for peers interested in related methodological studies in the future.
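For orientation, here is a sketch of the textbook MAGE definition: the average of the excursions between turning points that exceed one standard deviation of the profile. The paper's integer nonlinear programming formulation is more robust than this simplified version, which glosses over the direction convention of the classic manual method:

```cpp
// Simplified MAGE sketch (illustrative, not the paper's algorithm).
#include <cmath>
#include <vector>

double mage(const std::vector<double>& g) {
    const int n = static_cast<int>(g.size());
    if (n < 3) return 0.0;

    // Standard deviation of the full glucose profile.
    double mean = 0.0;
    for (double v : g) mean += v;
    mean /= n;
    double sd = 0.0;
    for (double v : g) sd += (v - mean) * (v - mean);
    sd = std::sqrt(sd / n);

    // Turning points (local maxima/minima) of the glucose trace.
    std::vector<double> turns{g.front()};
    for (int i = 1; i + 1 < n; ++i)
        if ((g[i] - g[i - 1]) * (g[i + 1] - g[i]) < 0.0) turns.push_back(g[i]);
    turns.push_back(g.back());

    // Average the absolute excursions between successive turning points
    // that exceed one standard deviation.
    double sum = 0.0;
    int count = 0;
    for (size_t i = 1; i < turns.size(); ++i) {
        const double exc = std::fabs(turns[i] - turns[i - 1]);
        if (exc > sd) { sum += exc; ++count; }
    }
    return count ? sum / count : 0.0;
}
```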
Computer-Aided Parallelizer and Optimizer
NASA Technical Reports Server (NTRS)
Jin, Haoqiang
2011-01-01
The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David
2017-09-01
This paper compares the coding efficiency performance on 360 videos of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360 video is especially challenging content, in that one codes at full resolution globally but typically views locally (in a viewport), which magnifies errors. These are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant-quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant-quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source x265 HEVC codec. Objective and visual evidence is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
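As background, here is a minimal Huffman construction in C++ (the symbol frequencies are illustrative); comma-free codes and block codes with shift indicators are built differently, so this shows only the Huffman member of the candidate set:

```cpp
// Build prefix-free Huffman codewords from symbol frequencies.
#include <cstdio>
#include <memory>
#include <queue>
#include <string>
#include <vector>

struct Node {
    long freq;
    char sym;                        // valid only for leaves
    std::unique_ptr<Node> left, right;
};

struct Cmp {
    bool operator()(const Node* a, const Node* b) const {
        return a->freq > b->freq;    // min-heap on frequency
    }
};

void emit(const Node* n, const std::string& prefix) {
    if (!n->left) { std::printf("%c -> %s\n", n->sym, prefix.c_str()); return; }
    emit(n->left.get(), prefix + "0");
    emit(n->right.get(), prefix + "1");
}

int main() {
    const std::vector<std::pair<char, long>> freq = {
        {'a', 45}, {'b', 13}, {'c', 12}, {'d', 16}, {'e', 9}, {'f', 5}};

    std::priority_queue<Node*, std::vector<Node*>, Cmp> pq;
    for (const auto& [s, f] : freq) pq.push(new Node{f, s, nullptr, nullptr});

    // Repeatedly merge the two least-frequent subtrees.
    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{a->freq + b->freq, 0,
                         std::unique_ptr<Node>(a), std::unique_ptr<Node>(b)});
    }
    emit(pq.top(), "");
    delete pq.top();  // children are freed recursively via unique_ptr
    return 0;
}
```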
Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community
NASA Astrophysics Data System (ADS)
Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.
2016-12-01
The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less about building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection, that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each to bring their own unique challenges.
Cloudy - simulating the non-equilibrium microphysics of gas and dust, and its observed spectrum
NASA Astrophysics Data System (ADS)
Ferland, Gary J.
2014-01-01
Cloudy is an open-source plasma/spectral simulation code, last described in the open-access journal Revista Mexicana (Ferland et al. 2013, 2013RMxAA..49..137F). The project goal is a complete simulation of the microphysics of gas and dust over the full range of density, temperature, and ionization that we encounter in astrophysics, together with a prediction of the observed spectrum. Cloudy is one of the more widely used theory codes in astrophysics with roughly 200 papers citing its documentation each year. It is developed by graduate students, postdocs, and an international network of collaborators. Cloudy is freely available on the web at trac.nublado.org, the user community can post questions on http://groups.yahoo.com/neo/groups/cloudy_simulations/info, and summer schools are organized to learn more about Cloudy and its use (http://cloud9.pa.uky.edu/~gary/cloudy/CloudySummerSchool/). The code's widespread use is possible because of extensive automatic testing. It is exercised over its full range of applicability whenever the source is changed. Changes in predicted quantities are automatically detected along with any newly introduced problems. The code is designed to be autonomous and self-aware. It generates a report at the end of a calculation that summarizes any problems encountered along with suggestions of potentially incorrect boundary conditions. This self-monitoring is a core feature since the code is now often used to generate large MPI grids of simulations, making it impossible for a user to verify each calculation by hand. I will describe some challenges in developing a large physics code, with its many interconnected physical processes, many at the frontier of research in atomic or molecular physics, all in an open environment.
All-Optical Fibre Networks For Coal Mines
NASA Astrophysics Data System (ADS)
Zientkiewicz, Jacek K.
1987-09-01
The topic of this paper is a fiber-optic integrated network (FOIN) suited to the most hostile environments existing in coal mines. The use of optical fibres for transmission of mine instrumentation data offers the prospect of improved safety and immunity to electromagnetic interference (EMI). The feasibility of optically powered sensors has opened up new opportunities for research into optical signal processing architectures. This article discusses a new fibre-optic sensor network involving a time domain multiplexing (TDM) scheme and optical signal processing techniques. The pros and cons of different FOIN topologies with respect to coal mine applications are considered. The emphasis has been placed on a recently developed all-optical fibre network using spread spectrum code division multiple access (CDMA) techniques. The all-optical networks have applications in explosive environments where electrical isolation is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.
1996-12-31
Previously, nuclear utilities have been considered "deep pockets" for university research; however, in the current cost-cutting competitive environment, most utilities have drastically reduced or eliminated research. Any collaboration with universities requires that any research effort have a focused objective, short-term duration, and tangible payback. Furthermore, the research must concentrate on solving operating problems, rather than on long-term general concerns. Although practical studies may seem mundane, untheoretical, and uninteresting for most academics, such pragmatic topics can provide interesting research for students and helpful results for the utilities. This paper provides examples of the author's research funded by utilities. Each project has a specific objective involving a particular utility need or computer code analysis tool.
Empowered citizen 'health hackers' who are not waiting.
Omer, Timothy
2016-08-17
Due to easier access to information, the availability of low-cost technologies and the involvement of well-educated, passionate patients, a group of citizen 'Health Hackers', who are building their own medical systems to help them overcome the unmet needs of their conditions, is emerging. This has recently been the case in the type 1 diabetes community, under the movement #WeAreNotWaiting, with innovative use of current medical devices hacked to access data and open-source code producing solutions ranging from remote monitoring of diabetic children to producing an Artificial Pancreas System to automate the management and monitoring of a patient's condition. Timothy Omer is working with the community to utilise the technology already in his pocket to build a mobile- and smartwatch-based Artificial Pancreas System.
78 FR 41928 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-12
... Room, Washington, DC. STATUS: The first portion of the meeting will be in Open Session and the remainder of the meeting will be in Closed Session. MATTERS TO BE CONSIDERED: Open Session 1. Briefing on.... [FR Doc. 2013-16798 Filed 7-10-13; 11:15 am] BILLING CODE 6730-01-P ...
78 FR 14141 - Sunshine Act Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... Commission will hold an Open Meeting on Wednesday, March 6, 2013 at 10:00 a.m., in the Auditorium, Room L-002. The subject matter of the Open Meeting will be: The Commission will consider whether to propose... Filed 2-28-13; 11:15 am] BILLING CODE 8011-01-P ...
76 FR 5411 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
... Commission will hold an Open Meeting on February 2, 2011 at 10 a.m., in the Auditorium, Room L-002. The subject matter of the Open Meeting will be: The Commission will consider whether to propose rules and a... Filed 1-27-11; 11:15 am] BILLING CODE 8011-01-P ...
77 FR 12078 - Meeting of the Judicial Conference Advisory Committee on Rules of Appellate Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Rules of Appellate Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Appellate Procedure will hold a two-day meeting. The meeting will be open to public observation...-12; 8:45 am] BILLING CODE 2210-55-P ...
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2015-04-01
PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks < number of processors) or tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc, Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science Volume 5759, 185-195
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen
2015-01-01
Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Analysis of Phenix end-of-life natural convection test with the MARS-LMR code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, H. Y.; Ha, K. S.; Lee, K. L.
The end-of-life test of the Phenix reactor performed by the CEA provided an opportunity to obtain reliable and valuable test data for the validation and verification of an SFR system analysis code. KAERI joined this international program for the analysis of the Phenix end-of-life natural circulation test, coordinated by the IAEA, in 2008. The main objectives of this study were to evaluate the capability of the existing SFR system analysis code MARS-LMR and to identify any limitations of the code. The analysis was performed in three stages: pre-test analysis, blind post-test analysis, and final post-test analysis. In the pre-test analysis, the design conditions provided by the CEA were used to obtain a prediction of the test. The blind post-test analysis was based on the test conditions measured during the tests, but the test results were not provided by the CEA. The final post-test analysis was performed to predict the test results as accurately as possible by improving the previous modeling of the test. Based on the pre-test analysis and blind test analysis, the modeling of heat structures in the hot pool and cold pool, steel structures in the core, heat loss from the roof and vessel, and the flow path at the core outlet was reinforced in the final analysis. The results of the final post-test analysis can be characterized in three phases. In the early phase, MARS-LMR simulated the heat-up process correctly due to the enhanced heat structure modeling. In the mid phase, before the opening of the SG casing, the code reproduced the decrease of core outlet temperature successfully. Finally, in the later phase, the increase of heat removal caused by the opening of the SG casing was well predicted with the MARS-LMR code. (authors)
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
NASA Astrophysics Data System (ADS)
Varseev, E.
2017-11-01
The present work is dedicated to the verification of the numerical model in a standard solver of the open-source CFD code OpenFOAM for two-phase flow simulation, and to the determination of so-called "baseline" model parameters. An investigation of the heterogeneous coolant flow parameters that lead to an abnormal friction increase in channels with two-phase adiabatic "water-gas" flows at low void fractions is presented.
An Evolving Worldview: Making Open Source Easy
NASA Technical Reports Server (NTRS)
Rice, Zachary
2017-01-01
NASA Worldview is an interactive interface for browsing full-resolution, global satellite imagery. Worldview supports an open data policy so that academia, private industry and the general public can use NASA's satellite data to address Earth science related issues. Worldview was open sourced in 2014. By shifting to an open source approach, the Worldview application has evolved to better serve end-users. Project developers are able to have discussions with end-users and community developers to understand issues and develop new features. New developers are able to track upcoming features, collaborate on them and make their own contributions. Getting new developers to contribute to the project has been one of the most important and difficult aspects of open sourcing Worldview. We have focused on making the installation of Worldview simple, to reduce the initial learning curve and make contributing code easy. One way we have addressed this is through a simplified setup process: our setup documentation includes a set of prerequisites and a set of straightforward commands to clone, configure, install and run. This presentation will emphasize our efforts to simplify and standardize Worldview's open source code so more people are able to contribute. The more people who contribute, the better the application will become over time.
NASA Astrophysics Data System (ADS)
Zaghi, S.
2014-07-01
OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard-compliant Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also targeted maximum computational efficiency; the code is designed to run on shared-memory multi-core workstations and on distributed-memory clusters of shared-memory nodes (supercomputers), with parallelization based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: the documentation has been carefully taken into account; it is built upon comprehensive comments placed directly into the source files (no external documentation files are needed), which are parsed by the doxygen free software to produce high-quality html and latex documentation pages; the distributed versioning system git has been adopted to facilitate collaborative maintenance and improvement of the code. Copyright: OFF is free software that anyone can use, copy, distribute, study, change and improve under the GNU Public License version 3. The present paper is a manifesto of the OFF code, presenting the currently implemented features and ongoing developments. This work is focused on the computational techniques adopted, and a detailed description of the main API characteristics is reported. OFF's capabilities are demonstrated by means of one- and two-dimensional examples and a three-dimensional real application.
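As a language-neutral illustration of the paper's design idea — each fluid dynamics "entity" bundles its conservative state with its geometry so that numerical libraries can be built on such objects — here is a minimal Python sketch. OFF itself is Fortran 2003, and none of the names below are OFF's API.

```python
# A minimal sketch of the object-oriented finite-volume idea: the
# conservative state and the cell geometry travel together in one object.
from dataclasses import dataclass

@dataclass
class ConservativeState:
    rho: float      # density
    rho_u: float    # momentum density (1D)
    rho_E: float    # total energy density

@dataclass
class FiniteVolume:
    state: ConservativeState
    volume: float   # cell volume (geometry bundled with the state)

    def update(self, net_flux: ConservativeState, dt: float) -> None:
        # Explicit finite-volume update: U_new = U - dt/V * (net flux).
        self.state.rho   -= dt / self.volume * net_flux.rho
        self.state.rho_u -= dt / self.volume * net_flux.rho_u
        self.state.rho_E -= dt / self.volume * net_flux.rho_E

cell = FiniteVolume(ConservativeState(1.0, 0.0, 2.5), volume=0.1)
cell.update(ConservativeState(0.0, 0.01, 0.0), dt=1e-3)
```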
An open science approach to modeling and visualizing ...
It is expected that cyanobacteria blooms will increase in frequency, duration, and severity as inputs of nutrients increase and the impacts of climate change are realized. Partly in response to this, federal, state, and local entities have ramped up efforts to better understand blooms, which has resulted in new life for old datasets, new monitoring programs, and novel uses for non-traditional sources of data. To fully benefit from these datasets, it is also imperative that the full body of work including data, code, and manuscripts be openly available (i.e., open science). This presentation will provide several examples of our work which occurs at the intersection of open science and research on cyanobacteria blooms in lakes and ponds. In particular we will discuss 1) why open science is particularly important for environmental human health issues; 2) the lakemorpho and elevatr R packages and how we use those to model lake morphometry; 3) Shiny server applications to visualize data collected as part of the Cyanobacteria Monitoring Collaborative; and 4) distribution of our research and models via open access publications and as R packages on GitHub. Modelling and visualizing information on cyanobacteria blooms is important as it provides estimates of the extent of potential problems associated with these blooms. Furthermore, conducting this work in the open allows others to access our code, data, and results. In turn, this allows for a greater impact because the…
NASA Technical Reports Server (NTRS)
Hartle, M.; McKnight, R. L.
2000-01-01
This manual is a combination of a user manual, theory manual, and programmer manual. The reader is assumed to have some previous exposure to the finite element method. This manual is written with the idea that the CSTEM (Coupled Structural Thermal Electromagnetic-Computer Code) user needs to have a basic understanding of what the code is actually doing in order to properly use the code. For that reason, the underlying theory and methods used in the code are described to a basic level of detail. The manual gives an overview of the CSTEM code: how the code came into existence, a basic description of what the code does, and the order in which it happens (a flowchart). Appendices provide a listing and very brief description of every file used by the CSTEM code, including the type of file it is, what routine regularly accesses the file, and what routine opens the file, as well as special features included in CSTEM.
Detecting well-being via computerized content analysis of brief diary entries.
Tov, William; Ng, Kok Leong; Lin, Han; Qiu, Lin
2013-12-01
Two studies evaluated the correspondence between self-reported well-being and codings of emotion and life content by the Linguistic Inquiry and Word Count (LIWC; Pennebaker, Booth, & Francis, 2011). Open-ended diary responses were collected from 206 participants daily for 3 weeks (Study 1) and from 139 participants twice a week for 8 weeks (Study 2). LIWC negative emotion consistently correlated with self-reported negative emotion. LIWC positive emotion correlated with self-reported positive emotion in Study 1 but not in Study 2. No correlations were observed with global life satisfaction. Using a co-occurrence coding method to combine LIWC emotion codings with life-content codings, we estimated the frequency of positive and negative events in 6 life domains (family, friends, academics, health, leisure, and money). Domain-specific event frequencies predicted self-reported satisfaction in all domains in Study 1 but not consistently in Study 2. We suggest that the correspondence between LIWC codings and self-reported well-being is affected by the number of writing samples collected per day as well as the target period (e.g., past day vs. past week) assessed by the self-report measure. Extensions and possible implications for the analyses of similar types of open-ended data (e.g., social media messages) are discussed.
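A toy Python sketch of the co-occurrence coding idea described above may help: a sentence counts as a positive event in a life domain when a positive-emotion word and a domain word appear in the same sentence. The word lists are illustrative stand-ins, not the LIWC dictionaries.

```python
# Toy co-occurrence coding: count sentences in which a positive-emotion
# word and a domain cue word appear together.
import re

POSITIVE = {"happy", "great", "fun"}
DOMAINS = {"friends": {"friend", "friends"}, "academics": {"exam", "class"}}

def positive_events(entry: str) -> dict:
    counts = {d: 0 for d in DOMAINS}
    for sentence in re.split(r"[.!?]+", entry.lower()):
        words = set(re.findall(r"[a-z']+", sentence))
        if words & POSITIVE:
            for domain, cues in DOMAINS.items():
                if words & cues:
                    counts[domain] += 1
    return counts

print(positive_events("Had fun with friends. The exam went badly."))
# {'friends': 1, 'academics': 0}
```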
The World in a Tomato: Revisiting the Use of "Codes" in Freire's Problem-Posing Education.
ERIC Educational Resources Information Center
Barndt, Deborah
1998-01-01
Gives examples of the use of Freire's notion of codes or generative themes in problem-posing literacy education. Describes how these applications expand Freire's conceptions by involving students in code production, including multicultural perspectives, and rethinking codes as representations. (SK)
The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot.
Kitson, Philip J; Glatzel, Stefan; Cronin, Leroy
2016-01-01
An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic 'programs' which can run on similar low cost, user-constructed robotic platforms towards an 'open-source' regime in the area of chemical synthesis.
Programmable multi-node quantum network design and simulation
NASA Astrophysics Data System (ADS)
Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.
2016-05-01
Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
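As a concrete anchor for the superdense-coding application mentioned above, here is a small NumPy simulation of the protocol itself, independent of the OpenFlow control plane the paper studies: Alice encodes two classical bits with one local gate on her half of a shared Bell pair, and Bob recovers both bits with a Bell-basis measurement.

```python
import numpy as np

# Two-qubit basis ordering |q0 q1>; q0 is Alice's half of the Bell pair.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],      # control q0, target q1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+>

def encode(bits):
    # Alice encodes two classical bits with one local gate on her qubit.
    gate = {"00": I2, "01": X, "10": Z, "11": Z @ X}[bits]
    return np.kron(gate, I2) @ bell

def decode(state):
    # Bob's Bell measurement: CNOT, then H on q0, then read both qubits.
    out = np.kron(H, I2) @ (CNOT @ state)
    k = int(np.argmax(np.abs(out)))          # state is now a basis vector
    return format(k, "02b")

for bits in ("00", "01", "10", "11"):
    assert decode(encode(bits)) == bits      # 2 bits carried by 1 qubit
```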
Defect Detection in Superconducting Radiofrequency Cavity Surface Using C++ and OpenCV
NASA Astrophysics Data System (ADS)
Oswald, Samantha; Thomas Jefferson National Accelerator Facility Collaboration
2014-03-01
Thomas Jefferson National Accelerator Facility (TJNAF) uses superconducting radiofrequency (SRF) cavities to accelerate an electron beam. If these cavities have a small particle or defect, it can degrade the performance of the cavity. The problem at hand is inspecting the cavity for defects, little bubbles of niobium on the surface of the cavity. Thousands of pictures have to be taken of a single cavity and then examined to find any defects. A C++ program using Open Source Computer Vision (OpenCV) was constructed to reduce the number of hours spent searching through the images and to find all the defects. The SRF group is now able to use the code to identify defects in ongoing tests of SRF cavities. Real-time detection is the next step: instead of taking pictures for later inspection, the camera will detect defects as it views the cavity.
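The TJNAF tool is C++; as a sketch of the kind of pipeline involved, the following Python/OpenCV fragment thresholds a cavity image and reports blob-like regions as candidate defects. The file name and parameters are placeholders, not those of the TJNAF program.

```python
# Illustrative defect-candidate detection pipeline (OpenCV 4.x API).
import cv2

def find_defects(path, min_area=10):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's threshold separates bright bumps from the cavity surface.
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs larger than the noise floor.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

for x, y, w, h in find_defects("cavity_image_0001.png"):  # placeholder path
    print(f"candidate defect at ({x}, {y}), size {w}x{h}")
```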
Support for Systematic Code Reviews with the SCRUB Tool
NASA Technical Reports Server (NTRS)
Holzmann, Gerald J.
2010-01-01
SCRUB is a code review tool that supports both large, team-based software development efforts (e.g., for mission software) as well as individual tasks. The tool was developed at JPL to support a new, streamlined code review process that combines human-generated review reports with program-generated review reports from a customizable range of state-of-the-art source code analyzers. The leading commercial tools include Codesonar, Coverity, and Klocwork, each of which can achieve a reasonably low rate of false positives in the warnings that they generate. The time required to analyze code with these tools can vary greatly. In each case, however, the tools produce results that would be difficult to realize with human code inspections alone. There is little overlap in the results produced by the different analyzers, and each analyzer used generally increases the effectiveness of the overall effort. The SCRUB tool allows all reports to be accessed through a single, uniform interface (see figure) that facilitates browsing code and reports. Improvements over existing software include significant simplification and the leveraging of a range of commercial, static source code analyzers in a single, uniform framework. The tool runs as a small stand-alone application, avoiding the security problems related to tools based on Web browsers. A developer or reviewer, for instance, must have already obtained access rights to a code base before that code can be browsed and reviewed with the SCRUB tool. The tool cannot open any files or folders to which the user does not already have access. This means that the tool does not need to enforce or administer any additional security policies. The analysis results presented through the SCRUB tool's user interface are always computed off-line, given that, especially for larger projects, this computation can take longer than appropriate for interactive tool use. The recommended code review process that is supported by the SCRUB tool consists of three phases: Code Review, Developer Response, and Closeout Resolution. In the Code Review phase, all tool-based analysis reports are generated, and specific comments from expert code reviewers are entered into the SCRUB tool. In the second phase, Developer Response, the developer is asked to respond to each comment and tool report that was produced, either agreeing or disagreeing to provide a fix that addresses the issue that was raised. In the third phase, Closeout Resolution, all disagreements are discussed in a meeting of all parties involved, and a resolution is made for all disagreements. The first two phases generally take one week each, and the third phase is concluded in a single closeout meeting.
Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A
2017-01-01
Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.
CDAC Student Report: Summary of LLNL Internship
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herriman, Jane E.
Multiple objectives motivated me to apply for an internship at LLNL: I wanted to experience the work environment at a national lab, to learn about research and job opportunities at LLNL in particular, and to gain greater experience with code development, particularly within the realm of high performance computing (HPC). This summer I was selected to participate in LLNL's Computational Chemistry and Material Science Summer Institute (CCMS). CCMS is a 10 week program hosted by the Quantum Simulations group leader, Dr. Eric Schwegler. CCMS connects graduate students to mentors at LLNL involved in similar research and provides weekly seminars on a broad array of topics from within chemistry and materials science. Dr. Xavier Andrade and Dr. Erik Draeger served as my co-mentors over the summer, and Dr. Andrade continues to mentor me now that CCMS has concluded. Dr. Andrade is a member of the Quantum Simulations group within the Physical and Life Sciences at LLNL, and Dr. Draeger leads the HPC group within the Center for Applied Scientific Computing (CASC). The two have worked together to develop Qb@ll, an open-source first principles molecular dynamics code that was the platform for my summer research project.
A qualitative analysis of aspects of treatment that adolescents with anorexia identify as helpful.
Zaitsoff, Shannon; Pullmer, Rachelle; Menna, Rosanne; Geller, Josie
2016-04-30
This study aimed to identify aspects of treatment that adolescents with anorexia nervosa (AN) believe are helpful or unhelpful. Adolescent females receiving treatment for AN or subthreshold AN (n=21) were prompted during semi-structured interviews to generate responses to open-ended questions on what they felt would be most helpful or unhelpful in treating adolescents with eating disorders. Eight codes were developed and the two most frequently endorsed categories were (1) Alliance, where the therapist demonstrates clinical expertise and also expresses interest in the patient (n=21, 100.0%), and (2) Client Involvement in treatment (n=16, 76.2%). These top two categories were shared by participants with AN versus subthreshold AN and participants with high versus low readiness to change their dietary restriction behaviours. Development of the coding scheme and sample participant responses will be discussed. The integration of identified factors into empirically supported treatments for adolescent AN, such as Family-based Treatment, will be considered. This study provides initial information regarding aspects of treatment that adolescents identify as most helpful or unhelpful in their treatment.
Evaluation of Force Transfer Around Openings - Experimental and Analytical Studies
Borjen Yeh; Tom Skaggs; Frank Lam; Minghao Li; Douglas Rammer; James Wacker
2011-01-01
Wood structural panel (WSP) sheathed shear walls and diaphragms are the primary lateral-load-resisting elements in wood-frame construction. The historical performance of light-frame structures in North America is very good due, in part, to model building codes that are designed to safeguard life safety. These model building codes have spawned continual improvement and...
Modelling Force Transfer Around Openings of Full-Scale Shear Walls
Tom Skaggs; Borjen Yeh; Frank Lam; Minghao Li; Doug Rammer; James Wacker
2011-01-01
Wood structural panel (WSP) sheathed shear walls and diaphragms are the primary lateral-load-resisting elements in wood-frame construction. The historical performance of light-frame structures in North America has been very good due, in part, to model building codes that are designed to preserve life safety. These model building codes have spawned continual improvement...
Spotlight on Speech Codes 2012: The State of Free Speech on Our Nation's Campuses
ERIC Educational Resources Information Center
Foundation for Individual Rights in Education (NJ1), 2012
2012-01-01
The U.S. Supreme Court has called America's colleges and universities "vital centers for the Nation's intellectual life," but the reality today is that many of these institutions severely restrict free speech and open debate. Speech codes--policies prohibiting student and faculty speech that would, outside the bounds of campus, be…
Secret Codes: The Hidden Curriculum of Semantic Web Technologies
ERIC Educational Resources Information Center
Edwards, Richard; Carmichael, Patrick
2012-01-01
There is a long tradition in education of examination of the hidden curriculum, those elements which are implicit or tacit to the formal goals of education. This article draws upon that tradition to open up for investigation the hidden curriculum and assumptions about students and knowledge that are embedded in the coding undertaken to facilitate…
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open path FTIR data. Among the problems that currently affect FTIR open path data quality are: the inability to obtain a true I0, or background, spectrum; spectral interferences of atmospheric gases such as water vapor and carbon dioxide; and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
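A schematic illustration of the nonlinear-fitting idea (not NONLIN's actual spectral model) is sketched below: absorbance is fit with a Lorentzian line whose width parameter could carry the temperature and partial-pressure dependence discussed above, something a linear CLS fit against fixed reference spectra cannot absorb. All values are synthetic.

```python
# Schematic nonlinear spectral fit with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def absorbance(nu, conc, nu0, gamma):
    # Lorentzian line shape; gamma is where a pressure-broadening
    # correction would enter in a more realistic model.
    return conc * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

rng = np.random.default_rng(0)
nu = np.linspace(990.0, 1010.0, 400)            # wavenumber grid (1/cm)
truth = absorbance(nu, conc=2.0, nu0=1000.0, gamma=1.5)
measured = truth + rng.normal(0.0, 0.01, nu.size)

popt, pcov = curve_fit(absorbance, nu, measured, p0=[1.0, 999.0, 1.0])
print("fitted concentration, center, width:", popt)
```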
Marine Corps Recruits: A Historical Look at Accessions and Bootcamp Performance
2010-09-01
without the promise of an occupational field. Their code is PN, and these are called open contracts. In this slide, we graph the percentage of open ...means that occupational specialties such as infantry, armor, field artillery, and short-range air defense artillery are closed to women...enlistment contracts began for small numbers of Marines. While the 3-, 4-, and 6-year initial enlistment contracts were open to the same group of MOSs
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
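A toy Python fragment can illustrate the meta-programming idea (BOAST's real interface differs; the template and dialect names below are invented for this example): one kernel body is rendered into both CUDA and OpenCL source from a shared description.

```python
# Toy run-time kernel generation for two GPU dialects from one template.
TEMPLATE = """{qualifier} void axpy({gmem} float *y, {gmem} const float *x,
                                    float a, int n) {{
    int i = {index};
    if (i < n) y[i] += a * x[i];
}}"""

DIALECTS = {
    "cuda": {"qualifier": "__global__", "gmem": "",
             "index": "blockIdx.x * blockDim.x + threadIdx.x"},
    "opencl": {"qualifier": "__kernel", "gmem": "__global",
               "index": "get_global_id(0)"},
}

def render(dialect: str) -> str:
    # The same computational kernel, specialized per target language.
    return TEMPLATE.format(**DIALECTS[dialect])

print(render("cuda"))
print(render("opencl"))
```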
Ojeda-May, Pedro; Nam, Kwangho
2017-08-08
The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model, in which both task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version outperformed existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach, which displayed better scalability for larger numbers of CPU cores (up to 64 CPUs in the tested systems).
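To illustrate what an SCF convergence accelerator does in principle, here is a minimal NumPy sketch of density-matrix mixing on a toy Fock operator. Real codes, including the CHARMM implementation described above, use more elaborate density/Fock extrapolations (e.g. DIIS); every quantity below is illustrative.

```python
import numpy as np

H = np.diag([-2.0, -1.0, 0.0])            # toy one-electron Hamiltonian
G = 0.3                                    # toy mean-field coupling

def build_fock(D):
    return H + G * D                       # Fock depends on the density

def density_from_fock(F, n_occ=1):
    w, v = np.linalg.eigh(F)
    occ = v[:, :n_occ]                     # occupy the lowest orbital(s)
    return occ @ occ.T

def scf(alpha=0.5, tol=1e-10, max_iter=500):
    D = np.zeros_like(H)
    for it in range(max_iter):
        D_new = density_from_fock(build_fock(D))
        # The "accelerator": mix (extrapolate) old and new densities
        # instead of taking the newest one outright, damping oscillations.
        D_mix = alpha * D_new + (1 - alpha) * D
        if np.linalg.norm(D_mix - D) < tol:
            return D_mix, it
        D = D_mix
    raise RuntimeError("SCF did not converge")

D, iters = scf()
print("converged in", iters, "iterations")
```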
When study participants are vulnerable: getting and keeping the right team.
Hill, Nikki L; Mogle, Jacqueline; Wion, Rachel; Kolanowski, Ann M; Fick, Donna; Behrens, Liza; Muhall, Paula; McDowell, Jane
2017-09-19
Research assistants (RAs) are critical members of all research teams. When a study involves vulnerable populations, it is particularly important to have the right team members. To describe the motivations, personal characteristics and team characteristics that promoted the job satisfaction of RAs who worked on two multi-year, randomised clinical trials involving older adults with dementia. A survey was conducted with 41 community members who worked as RAs for up to five years. Measures included demographics, work engagement, personality and characteristics of effective teams, as well as open-ended questions about respondents' experiences of the study. Quantitative analyses and coding of open-ended responses were used to summarise results. Almost all the RAs surveyed joined the team because of previous experiences of interacting with cognitively impaired older people. The RA respondents scored higher in 'dedication to work', 'extraversion', 'agreeableness' and 'conscientiousness' than average. An important aspect of their job satisfaction was team culture, including positive interpersonal interaction and the development of supportive team relationships. A positive work culture provides RAs with an opportunity to work with a study population that they are personally driven to help, and promotes motivation and satisfaction in team members. Results from this study can guide the recruitment, screening and retention of team members for studies that include vulnerable populations.
What Do Learners and Pedagogical Agents Discuss When Given Opportunities for Open-Ended Dialogue?
ERIC Educational Resources Information Center
Veletsianos, George; Russell, Gregory S.
2013-01-01
Researchers claim that pedagogical agents engender opportunities for social learning in digital environments. Prior literature, however, has not thoroughly examined the discourse between agents and learners. To address this gap, we analyzed a data corpus of interactions between agents and learners using open coding methods. Analysis revealed that:…
XSEOS: An Open Software for Chemical Engineering Thermodynamics
ERIC Educational Resources Information Center
Castier, Marcelo
2008-01-01
An Excel add-in--XSEOS--that implements several excess Gibbs free energy models and equations of state has been developed for educational use. Several traditional and modern thermodynamic models are available in the package with a user-friendly interface. XSEOS has open code, is freely available, and should be useful for instructors and students…
Turkish Pre-Service Social Studies Teachers' Perceptions of "Good" Citizenship
ERIC Educational Resources Information Center
Yesilbursa, Cemil Cahit
2015-01-01
The current study explores Turkish pre-service social studies teachers' perceptions of "good" citizenship. The participants were 580 pre-service social studies teachers from 6 different universities in Turkey. The data were collected through an interview form having one open-ended question and analyzed according to an open coding procedure…
75 FR 35104 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-21
... Announcement: 75 FR 34183, June 16, 2010. STATUS: Open Meeting. PLACE: 100 F Street, NE., Washington, DC. DATE...: Cancellation of Meeting. The Open Meeting scheduled for Friday, June 18, 2010 at 10 a.m. has been cancelled..., 2010. Florence E. Harmon, Deputy Secretary. [FR Doc. 2010-15011 Filed 6-17-10; 11:15 am] BILLING CODE P ...
78 FR 73862 - Notice of Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-09
... Floor Hearing Room, Washington, DC. STATUS: A portion of the meeting will be held in open session; the remainder will be held in closed session. MATTERS TO BE CONSIDERED: Open Session 1. Briefing on U.S.--China..., Secretary. [FR Doc. 2013-29373 Filed 12-5-13; 11:15 am] BILLING CODE 6730-01-P ...
76 FR 42143 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-18
... Announcement: [76 FR 41534, July 14, 2011]. STATUS: Open Meeting. PLACE: 100 F Street, NE., Washington, DC...: Cancellation of Meeting. The Open Meeting scheduled for Thursday, July 14, 2011 at 10 a.m. has been cancelled..., 2011. Cathy H. Ahn, Deputy Secretary. [FR Doc. 2011-18051 Filed 7-14-11; 11:15 am] BILLING CODE 8011-01...
76 FR 76457 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-07
... Announcement: [76 FR 74835, December 1, 2011]. STATUS: Open Meeting. PLACE: 100 F Street, NE., Washington, DC...: Cancellation of Meeting. The Open Meeting scheduled for Tuesday, December 6, 2011 at 10 a.m. has been cancelled... 2, 2011. Elizabeth M. Murphy, Secretary. [FR Doc. 2011-31484 Filed 12-5-11; 11:15 am] BILLING CODE...
77 FR 64514 - Sunshine Act Meeting; Open Commission Meeting; Wednesday, October 17, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
.../Video coverage of the meeting will be broadcast live with open captioning over the Internet from the FCC... format and alternative media, including large print/ type; digital disk; and audio and video tape. Best.... 2012-26060 Filed 10-18-12; 4:15 pm] BILLING CODE 6712-01-P ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez del Rio, M.; Cerrina, F.
1996-10-01
In the article "Comments on the use of asymmetric monochromators for x-ray diffraction on a synchrotron source," by Colin Nave, Ana Gonzalez, Graham Clark, Sean McSweeney, Stewart Cummings, and Michael Hart, Rev. Sci. Instrum. 66, 2174 (1995), paragraph II, the authors' unfamiliarity with our modeling codes leads them to claim that our approach to treating bent, asymmetrically cut crystals in ray tracing calculations is incorrect. Since SHADOW is a widely used code, it is important to correct any misunderstandings, and we give here arguments to demonstrate that our approach is perfectly valid, and that the arguments used by the authors to criticize our method are based on an unwarranted conclusion extracted from one of our previous articles. We show that SHADOW, when properly run, treats the cases raised exactly. Indeed, their arguments provide a nice benchmark test for verifying the accuracy of SHADOW. © 1996 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure contain a collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the length of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
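A minimal Python sketch of the two sampling methods named above: inverse-transform sampling for the negative-exponential cave lengths, and acceptance-rejection for a non-standard length distribution. The target densities are placeholders, not the Chengmenshan fits.

```python
import math, random

def sample_cave_length(lam):
    # Inverse transform for an exponential distribution:
    # F(x) = 1 - exp(-lam*x)  =>  x = -ln(1 - u)/lam for u ~ U(0,1).
    return -math.log(1.0 - random.random()) / lam

def sample_carbonatite_length(pdf, x_max, pdf_max):
    # Acceptance-rejection for a non-standard density on [0, x_max]
    # with a known bound pdf(x) <= pdf_max.
    while True:
        x = random.uniform(0.0, x_max)
        if random.random() * pdf_max <= pdf(x):
            return x

# Example: a placeholder triangular density on [0, 10] with max 0.2.
tri = lambda x: (10.0 - x) / 50.0
caves = [sample_cave_length(lam=0.2) for _ in range(5)]
rocks = [sample_carbonatite_length(tri, 10.0, 0.2) for _ in range(5)]
print(caves, rocks)
```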
Rey-Martinez, Jorge; Pérez-Fernández, Nicolás
2016-12-01
The proposed validation goal of 0.9 for the intra-class correlation coefficient was reached with the results of this study. With the obtained results we consider the developed software (RombergLab) to be validated balance assessment software; its reliability depends on the technical specifications of the force platform used. The aim was to develop and validate posturography software and to share its source code in open source terms. Prospective non-randomized validation study: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open source software with a low-cost force platform. The intra-class correlation index of the sway area obtained from the center of pressure variations in both devices for the six conditions was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman graphic concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
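As a sketch of the agreement statistics used in this kind of validation, the following Python fragment computes a Bland-Altman bias and limits of agreement between two platforms' sway areas; the arrays are illustrative placeholders, not study data.

```python
import numpy as np

reference = np.array([3.1, 4.0, 2.2, 5.5, 3.8])   # approved platform
romberglab = np.array([3.0, 4.2, 2.1, 5.3, 4.0])  # open source software

diff = romberglab - reference
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # limits of agreement

print(f"bias = {bias:.3f}, limits of agreement = "
      f"[{bias - loa:.3f}, {bias + loa:.3f}]")
```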
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (Open Multiprocessing/Message Passing Interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm make the code a possible effective tool for second generation high throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.
The 2016 Bioinformatics Open Source Conference (BOSC).
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
2016-01-01
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.
Evidence for an ergot alkaloid gene cluster in Claviceps purpurea.
Tudzynski, P; Hölter, K; Correia, T; Arntz, C; Grammel, N; Keller, U
1999-02-01
A gene (cpd1) coding for the dimethylallyltryptophan synthase (DMATS) that catalyzes the first specific step in the biosynthesis of ergot alkaloids, was cloned from a strain of Claviceps purpurea that produces alkaloids in axenic culture. The derived gene product (CPD1) shows only 70% similarity to the corresponding gene previously isolated from Claviceps strain ATCC 26245, which is likely to be an isolate of C. fusiformis. Therefore, the related cpd1 most probably represents the first C. purpurea gene coding for an enzymatic step of the alkaloid biosynthetic pathway to be cloned. Analysis of the 3'-flanking region of cpd1 revealed a second, closely linked ergot alkaloid biosynthetic gene named cpps1, which codes for a 356-kDa polypeptide showing significant similarity to fungal modular peptide synthetases. The protein contains three amino acid-activating modules, and in the second module a sequence is found which matches that of an internal peptide (17 amino acids in length) obtained from a tryptic digest of lysergyl peptide synthetase 1 (LPS1) of C. purpurea, thus confirming that cpps1 encodes LPS1. LPS1 activates the three amino acids of the peptide portion of ergot peptide alkaloids during D-lysergyl peptide assembly. Chromosome walking revealed the presence of additional genes upstream of cpd1 which are probably also involved in ergot alkaloid biosynthesis: cpox1 probably codes for an FAD-dependent oxidoreductase (which could represent the chanoclavine cyclase), and a second putative oxidoreductase gene, cpox2, is closely linked to it in inverse orientation. RT-PCR experiments confirm that all four genes are expressed under conditions of peptide alkaloid biosynthesis. These results strongly suggest that at least some genes of ergot alkaloid biosynthesis in C. purpurea are clustered, opening the way for a detailed molecular genetic analysis of the pathway.
NASA Technical Reports Server (NTRS)
Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele
2004-01-01
In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.
pvsR: An Open Source Interface to Big Data on the American Political Sphere.
Matter, Ulrich; Stutzer, Alois
2015-01-01
Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS' data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS' new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases.
Critical fiber length technique for composite manufacturing processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivley, G.N.; Vandiver, T.L.; Dougherty, N.S.
1996-12-31
An improved injection technique for composite structures has been cooperatively developed by the U.S. Army Missile Command (MICOM) and Rockwell International (RI). This process simultaneously injects chopped fiberglass fibers and an epoxy resin matrix into a mold. Four injection techniques: (1) "Little Willie" RTM system, (2) Pressure Vat system, (3) Pressure Vat system with vacuum assistance, and (4) Injection gun system, were investigated for use with a 304.8 mm x 304.8 mm x 5.08 mm (12 in x 12 in x 0.2 in) flat plaque mold. The driving factors in the process optimization included: fiber length, fiber weight, matrix viscosity, injection pressure, flow rate, and tool design. At fiber weights higher than 30 percent, the injection gun appears to have advantages over the other systems investigated. Results of an experimental investigation are reviewed in this paper. The investigation of injection techniques is the initial part of the research involved in a developing process, the "Critical Fiber Length Technique". This process will use the data collected in the injection experiment along with mechanical properties derived from coupon test data to be incorporated into a composite material design code. The "Critical Fiber Length Technique" is part of a Cooperative Research and Development Agreement (CRADA) established in 1994 between MICOM and RI.
The MIMIC Code Repository: enabling reproducibility in critical care research.
Johnson, Alistair Ew; Stone, David J; Celi, Leo A; Pollard, Tom J
2018-01-01
Lack of reproducibility in medical studies is a barrier to the generation of a robust knowledge base to support clinical decision-making. In this paper we outline the Medical Information Mart for Intensive Care (MIMIC) Code Repository, a centralized code base for generating reproducible studies on an openly available critical care dataset. Code is provided to load the data into a relational structure, create extractions of the data, and reproduce entire analysis plans including research studies. Concepts extracted include severity of illness scores, comorbid status, administrative definitions of sepsis, physiologic criteria for sepsis, organ failure scores, treatment administration, and more. Executable documents are used for tutorials and reproduce published studies end-to-end, providing a template for future researchers to replicate. The repository's issue tracker enables community discussion about the data and concepts, allowing users to collaboratively improve the resource. The centralized repository provides a platform for users of the data to interact directly with the data generators, facilitating greater understanding of the data. It also provides a location for the community to collaborate on necessary concepts for research progress and share them with a larger audience. Consistent application of the same code for underlying concepts is a key step in ensuring that research studies on the MIMIC database are comparable and reproducible. By providing open source code alongside the freely accessible MIMIC-III database, we enable end-to-end reproducible analysis of electronic health records.
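As a flavor of the kind of reproducible extraction the repository standardizes, here is a small pandas sketch against the MIMIC-III ADMISSIONS table (column names as documented for MIMIC-III; the file path is a placeholder, and this is not code from the repository itself).

```python
import pandas as pd

# Placeholder path; MIMIC-III access requires credentialed approval.
admissions = pd.read_csv("mimic-iii/ADMISSIONS.csv",
                         parse_dates=["ADMITTIME", "DISCHTIME"])

# Length of stay per admission, in days.
admissions["LOS_DAYS"] = (
    admissions["DISCHTIME"] - admissions["ADMITTIME"]
).dt.total_seconds() / 86400.0

# Aggregate per patient: admission count, any in-hospital death, mean LOS.
summary = admissions.groupby("SUBJECT_ID").agg(
    n_admissions=("HADM_ID", "count"),
    any_hospital_death=("HOSPITAL_EXPIRE_FLAG", "max"),
    mean_los_days=("LOS_DAYS", "mean"),
)
print(summary.head())
```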
Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2014-09-01
We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes.
Catalogue identifier: AETA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Lesser General Public License, version 3
No. of lines in distributed program, including test data, etc.: 225864
No. of bytes in distributed program, including test data, etc.: 46861049
Distribution format: tar.gz
Programming language: Python, CUDA C, OpenCL
Computer: Any with an OpenCL or CUDA-compliant GPU
Operating system: No limits (tested on Linux and Mac OS X)
RAM: Hundreds of megabytes to tens of gigabytes for typical cases
Classification: 12, 6.5
External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy
Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows.
Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs.
Restrictions: The lattice Boltzmann method works for low Mach number flows only.
Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process.
Additional comments: The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead an html file giving details of how the program can be obtained is sent.
Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
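For readers unfamiliar with the numerical core that Sailfish generates for the GPU, here is a compact CPU-side NumPy sketch of one D2Q9 lattice-BGK collision-and-streaming step with periodic boundaries; it illustrates the numerics only and is not Sailfish code.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    eu = np.einsum("qd,dxy->qxy", E, u)                # e_i . u
    uu = np.einsum("dxy,dxy->xy", u, u)                # |u|^2
    return W[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*uu)

def step(f, tau=0.6):
    rho = f.sum(axis=0)                                # density
    u = np.einsum("qd,qxy->dxy", E, f) / rho           # velocity
    f += -(f - equilibrium(rho, u)) / tau              # BGK collision
    for q, (ex, ey) in enumerate(E):                   # periodic streaming
        f[q] = np.roll(np.roll(f[q], ex, axis=0), ey, axis=1)
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((2, 32, 32)))
for _ in range(10):
    f = step(f)
```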
3D-PDR: Three-dimensional photodissociation region code
NASA Astrophysics Data System (ADS)
Bisbas, T. G.; Bell, T. A.; Viti, S.; Yates, J.; Barlow, M. J.
2018-03-01
3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the Sundials package (written in C) to solve the set of ordinary differential equations and it is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALpix ray-tracing scheme (ascl:1107.018), 3D-PDR solves a three-dimensional escape probability routine and evaluates the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.
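A minimal sketch of the escape-probability ingredient may help: one common 1D form is beta(tau) = (1 - exp(-tau))/tau, and HEALPix ray directions of the kind the code traces can be generated with the healpy package (3D-PDR itself is Fortran with its own HEALPix implementation; healpy here is only for illustration).

```python
import numpy as np
import healpy as hp

def escape_probability(tau):
    # beta(tau) = (1 - exp(-tau)) / tau, with a series value near tau = 0
    # to avoid the 0/0 limit (beta -> 1 as tau -> 0).
    tau = np.asarray(tau, dtype=float)
    safe = np.where(tau < 1e-6, 1.0, tau)
    return np.where(tau < 1e-6, 1.0 - tau / 2.0, -np.expm1(-safe) / safe)

nside = 2                                   # 12 * nside**2 = 48 rays
npix = hp.nside2npix(nside)
directions = np.array(hp.pix2vec(nside, np.arange(npix))).T

print(escape_probability([0.0, 1.0, 10.0]))
print(directions.shape)                     # (48, 3) unit vectors
```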
Chow, Sze Loon; Loh, Siew Yim; Su, Tin Tin
2015-06-01
Return to work (RTW) can be a challenging occupational health (OH) issue among previously-employed colorectal cancer survivors. This study aimed to explore the various perceived barriers and facilitators encountered during the RTW process in cancer survivorship, from the perception of healthcare professionals (HCP). Face-to-face semistructured interviews were carried out with twelve HCP (government and private sectors) from various disciplines. Data collected were transcribed verbatim and data management was aided by NVivo software 8.0. A new theory from contextual data was generated using open coding, axial coding and selective coding. The HCP shared numerous barriers and facilitators associated with RTW, under four categories. The key barriers were disturbing side effects and psychological barriers (personal factor), compensation (financial factor), poor ability to multitask (work-related factor), and long paid medical leave policies, employers' lackadaisical attitudes, and lack of knowledge and awareness of RTW (environmental factor). Key facilitators identified were the desire to resume working life and to contribute to society (personal factor), financial pressure and maintaining organizational health insurance (financial factor), a less physically demanding job (work-related factor), and a supportive workplace and strict organizational policy on medical leave (environmental factor). While not all HCP were trained in RTW, they all agreed that RTW is important for survivors and the workplace. Occupational health doctors have a direct role in helping survivors RTW. Early intervention on RTW during survivorship should involve occupational health doctors and employers, targeting the modifiable factors (environmental and work-related) to improve RTW after cancer.
Bari, Attia; Khan, Rehan Ahmed; Jabeen, Uzma; Rathore, Ahsan Waheed
2017-01-01
Objective: To analyze communication skills of pediatric postgraduate residents in the clinical encounter by using video recordings. Methods: This qualitative exploratory research was conducted through video recording at The Children’s Hospital Lahore, Pakistan. Residents who had attended the mandatory communication skills workshop offered by CPSP were included. Clinical encounters were video-recorded by a trained audiovisual person while the resident interacted with the patient. Data were analyzed by thematic analysis. Results: Open coding initially yielded 36 codes, which were condensed to 17 subthemes through axial and selective coding; from these, four main themes emerged: (1) Courteous and polite attitude, (2) Marginal nonverbal communication skills, (3) Power game/Ignoring child participation and (4) Patient as medical object/Instrumental behaviour. All residents treated the patient as a medical object to reach the right diagnosis and ignored them as a human being. Doctors played a dominant role, and residents displayed only marginal nonverbal communication skills, with a lack of social touch and of appropriate eye contact owing to documenting notes. A brief non-medical interaction for rapport building at the beginning of the encounter was missing, and there was a lack of child involvement. Conclusion: Paediatric postgraduate residents were polite while communicating with parents and children but lacked good nonverbal communication skills. Communication in our study was mostly one-way, showing doctors’ instrumental behaviour and ignoring child participation. PMID:29492050
cncRNAs: Bi-functional RNAs with protein coding and non-coding functions
Kumari, Pooja; Sampath, Karuna
2015-01-01
For many decades, the major function of mRNA was thought to be to provide protein-coding information embedded in the genome. The advent of high-throughput sequencing has led to the discovery of pervasive transcription of eukaryotic genomes and opened the world of RNA-mediated gene regulation. Many regulatory RNAs have been found to be incapable of protein coding and are hence termed non-coding RNAs (ncRNAs). However, studies in recent years have shown that several previously annotated non-coding RNAs have the potential to encode proteins, and conversely, some coding RNAs have regulatory functions independent of the protein they encode. Such bi-functional RNAs, with both protein coding and non-coding functions, which we term ‘cncRNAs’, have emerged as new players in cellular systems. Here, we describe the functions of some cncRNAs identified from bacteria to humans. Because the functions of many RNAs across genomes remain unclear, we propose that RNAs be classified as coding, non-coding or both only after careful analysis of their functions. PMID:26498036
Protecting quantum memories using coherent parity check codes
NASA Astrophysics Data System (ADS)
Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv
2018-07-01
Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2]] code.
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software package for simulating and visualizing topological quantum error correcting codes.
NASA Astrophysics Data System (ADS)
Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.
2018-01-01
The genetic code is degenerate and it is assumed that redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents the Genetic Code Analysis Toolkit (GCAT), which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and other properties. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: Open source. Homepage: http://www.gcat.bio/
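As an illustration of one of the codon-set tests mentioned above, here is a minimal Python sketch of a comma-freeness check (GCAT itself is Java-based; this follows the standard definition, under which no codon of the set may appear in a frame-shifted reading of any concatenation of two set codons).

```python
# A set of codons is comma-free if, for every pair x, y in the set, neither
# of the two shifted triplets inside the concatenation xy is itself a codon
# of the set.
def is_comma_free(codons):
    codons = set(codons)
    for x in codons:
        for y in codons:
            pair = x + y
            # readings shifted by 1 and 2 nucleotides inside xy
            if pair[1:4] in codons or pair[2:5] in codons:
                return False
    return True

print(is_comma_free({"ACG", "TCG"}))  # True: no shifted overlap is a codon
print(is_comma_free({"AAA"}))         # False: "AAAAAA" contains "AAA" shifted
```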
The next-generation ESL continuum gyrokinetic edge code
NASA Astrophysics Data System (ADS)
Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.
2009-05-01
The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.
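To illustrate the role a flux limiter plays in such a scheme, the sketch below applies a classical minmod-limited MUSCL update to 1D advection of a step profile. Minmod is a simpler stand-in for the Colella-Sekora limiter cited above, used here purely for illustration of how limiting suppresses oscillations at a discontinuity.

```python
import numpy as np

def minmod(a, b):
    # slope limiter: zero at extrema, smallest-magnitude slope elsewhere
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

nx, dt, dx, v = 100, 0.4, 1.0, 1.0
u = np.where(np.arange(nx) < 20, 1.0, 0.0)          # step profile
for _ in range(50):
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slope
    face = u + 0.5 * (1 - v * dt / dx) * s             # MUSCL face value
    u -= v * dt / dx * (face - np.roll(face, 1))       # conservative update
print(u.round(2)[30:50])   # sharp, monotone front near cell 40
```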
ERIC Educational Resources Information Center
DeLyser, Dydia; Potter, Amy E.
2013-01-01
This article describes experiential-learning approaches to conveying the work and rewards involved in qualitative research. Seminar students interviewed one another, transcribed or took notes on those interviews, shared those materials to create a set of empirical materials for coding, developed coding schemes, and coded the materials using those…
The Social Interactive Coding System (SICS): An On-Line, Clinically Relevant Descriptive Tool.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1990-01-01
The Social Interactive Coding System (SICS) assesses the continuous verbal interactions of preschool children as a function of play areas, addressees, script codes, and play levels. This paper describes the 26 subjects and the setting involved in SICS development, coding definitions and procedures, training procedures, reliability, sample…
Roadway contributing factors in traffic crashes.
DOT National Transportation Integrated Search
2014-09-01
This project involved an evaluation of the codes which relate to roadway contributing : factors. This included a review of relevant codes used in other states. Crashes with related : codes were summarized and analyzed. A sample of crash sites was ins...
ERIC Educational Resources Information Center
VanBiervliet, Alan
A project to develop and evaluate a bar code reader system as a self-directed information and instructional aid for handicapped nonreaders is described. The bar code technology involves passing a light sensitive pen or laser over a printed code with bars which correspond to coded numbers. A system would consist of a portable device which could…
OPAL: An Open-Source MPI-IO Library over Cray XT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Weikuan; Vetter, Jeffrey S; Canon, Richard Shane
Parallel IO over Cray XT is supported by a vendor-supplied MPI-IO package. This package contains a proprietary ADIO implementation built on top of the sysio library. While it is reasonable to maintain a stable code base for application scientists' convenience, it is also very important for system developers and researchers to analyze and assess the effectiveness of parallel IO software and, accordingly, tune and optimize the MPI-IO implementation. A proprietary parallel IO code base precludes such flexibility. On the other hand, a generic UFS-based MPI-IO implementation is typically used on many Linux-based platforms. We have developed an open-source MPI-IO package over Lustre, referred to as OPAL (OPportunistic and Adaptive MPI-IO Library over Lustre). OPAL provides a single source-code base for MPI-IO over Lustre on Cray XT and Linux platforms. Compared to the Cray implementation, OPAL provides a number of useful features, including arbitrary specification of striping patterns and Lustre-stripe-aligned file domain partitioning. This paper presents performance comparisons between OPAL and Cray's proprietary implementation. Our evaluation demonstrates that OPAL achieves performance comparable to the Cray implementation. We also exemplify the benefits of an open-source package in revealing the underpinnings of parallel IO performance.
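A hedged sketch of the MPI-IO pattern discussed above, using mpi4py: each rank writes its own contiguous, stripe-aligned file domain with a single collective call. The 1 MiB "stripe" size and file name are assumptions for illustration; OPAL's actual striping control is richer.

```python
# Run under an MPI launcher, e.g.: mpirun -n 4 python write_aligned.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
stripe = 1 << 20                              # assumed stripe size in bytes
data = np.full(stripe // 8, comm.rank, dtype=np.float64)  # 1 MiB of doubles

fh = MPI.File.Open(comm, "out.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(comm.rank * stripe, data)     # stripe-aligned collective write
fh.Close()
```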
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagler, Robert; Moeller, Paul
Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5-compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model-view-controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp's capabilities. Warp is open source and is part of the Berkeley Lab Accelerator Simulation Toolkit.
Montemurro, Genevieve R; Raine, Kim D; Nykiforuk, Candace I J; Mayan, Maria
2014-09-01
Community capacity-building is a central element of health promotion. While capacity-building features, domains and relationships to program sustainability have been well examined, information on the process of capacity-building as experienced by practitioners is needed. This study examined this process as experienced by coordinators working within a community-based chronic disease prevention project implemented in four communities in Alberta (Canada) from 2005-2010, using a case study approach with a mixed-method design. Data collection involved semi-structured interviews, a focus group and program documents tracking coordinator activity. Qualitative analysis followed the constant comparative method using open, axial and selective coding. Quantitative data were analyzed for frequency of major activity distribution. The capacity-building process involves distinct stages of networking, information exchange, partnering, prioritizing, planning/implementing and supporting/sustaining. Stages are incremental though not always linear. Contextual factors exert a great influence on the process. Implications for research, practice and policy are discussed. © The Author (2013). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Constructing the spectral web of rotating plasmas
NASA Astrophysics Data System (ADS)
Goedbloed, Hans
2012-10-01
Rotating plasmas are ubiquitous in nature. The theory of MHD stability of such plasmas, initiated a long time ago, has suffered severely from the widespread misunderstanding that it necessarily involves non-self-adjoint operators. It has been shown (J.P. Goedbloed, PPCF 16, 074001, 2011; Goedbloed, Keppens and Poedts, Advanced Magnetohydrodynamics, Cambridge, 2010) that, on the contrary, spectral theory of moving plasmas can be constructed entirely on the basis of energy conservation and self-adjointness of the occurring operators. The spectral web is a further development along this line. It involves the construction of a network of curves in the complex omega-plane associated with the complex complementary energy, which is the energy needed to maintain harmonic time dependence in an open system. Vanishing of that energy, at the intersections of the mentioned curves, yields the eigenvalues of the closed system. This makes it possible to consider the enormous diversity of MHD instabilities of rotating tokamaks, accretion disks about compact objects, and jets emitted from those objects from a single viewpoint. This will be illustrated with results obtained with a new spectral code (ROC).
Dai, Xinbin; Zhuang, Zhaohong; Torres-Jerez, Ivone; Nogales, Joaquina
2017-01-01
Growing evidence indicates that small, secreted peptides (SSPs) play critical roles in legume growth and development, yet the annotation of SSP-coding genes is far from complete. Systematic reannotation of the Medicago truncatula genome identified 1,970 homologs of established SSP gene families and an additional 2,455 genes that are potentially novel SSPs, previously unreported in the literature. The expression patterns of known and putative SSP genes based on 144 RNA sequencing data sets covering various stages of macronutrient deficiencies and symbiotic interactions with rhizobia and mycorrhiza were investigated. Focusing on those known or suspected to act via receptor-mediated signaling, 240 nutrient-responsive and 365 nodulation-responsive Signaling-SSPs were identified, greatly expanding the number of SSP gene families potentially involved in acclimation to nutrient deficiencies and nodulation. Synthetic peptide applications were shown to alter root growth and nodulation phenotypes, revealing additional regulators of legume nutrient acquisition. Our results constitute a powerful resource enabling further investigations of specific SSP functions via peptide treatment and reverse genetics. PMID:29030416
High-order moments of spin-orbit energy in a multielectron configuration
NASA Astrophysics Data System (ADS)
Na, Xieyu; Poirier, M.
2016-07-01
In order to analyze the energy-level distribution in complex ions such as those found in warm dense plasmas, this paper provides values for high-order moments of the spin-orbit energy in a multielectron configuration. Using second-quantization results and standard angular algebra or fully analytical expressions, explicit values are given for moments up to 10th order for the spin-orbit energy. Two analytical methods are proposed, using the uncoupled or coupled orbital and spin angular momenta. The case of multiple open subshells is considered with the help of cumulants. The proposed expressions for spin-orbit energy moments are compared to numerical computations from Cowan's code and agree with them. The convergence of the Gram-Charlier expansion involving these spin-orbit moments is analyzed. While a spectrum with infinitely thin components cannot be adequately represented by such an expansion, a suitable convolution procedure ensures the convergence of the Gram-Charlier series provided high-order terms are accounted for. A corrected analytical formula for the third-order moment involving both spin-orbit and electron-electron interactions turns out to be in fair agreement with Cowan's numerical computations.
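The Gram-Charlier reconstruction discussed above can be sketched numerically as follows: a standardized distribution is approximated by a Gaussian multiplied by a Hermite-polynomial series whose coefficients come from the sampled moments. The sample distribution and truncation order are illustrative assumptions; as the abstract notes, convergence of such an expansion requires care.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite He_n
from math import factorial, sqrt, pi

def gram_charlier(x, samples, order=10):
    """Approximate the pdf of standardized `samples` from its first moments."""
    z = (samples - samples.mean()) / samples.std()
    # c_n = E[He_n(Z)] / n!  (c_0 = 1, c_1 = c_2 = 0 after standardization)
    coeffs = [np.mean(hermeval(z, [0]*n + [1])) / factorial(n)
              for n in range(order + 1)]
    gauss = np.exp(-x**2 / 2) / sqrt(2 * pi)
    return gauss * hermeval(x, coeffs)

rng = np.random.default_rng(0)
x = np.linspace(-4, 4, 9)
print(gram_charlier(x, rng.gamma(4.0, size=100_000)))  # skewed test case
```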
SiC JFET Transistor Circuit Model for Extreme Temperature Range
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.
2008-01-01
A technique for simulating extreme-temperature operation of integrated circuits that incorporate silicon carbide (SiC) junction field-effect transistors (JFETs) has been developed. The technique involves modification of NGSPICE, which is an open-source version of the popular Simulation Program with Integrated Circuit Emphasis (SPICE) general-purpose analog-integrated-circuit-simulating software. NGSPICE in its unmodified form is used for simulating and designing circuits made from silicon-based transistors that operate at or near room temperature. Two rapid modifications of NGSPICE source code enable SiC JFETs to be simulated to 500 °C using the well-known Level 1 model for silicon metal oxide semiconductor field-effect transistors (MOSFETs). First, the default value of the MOSFET surface potential must be changed. In the unmodified source code, this parameter has a value of 0.6, which corresponds to slightly more than half the bandgap of silicon. In NGSPICE modified to simulate SiC JFETs, this parameter is changed to a value of 1.6, corresponding to slightly more than half the bandgap of SiC. The second modification consists of changing the temperature dependence of MOSFET transconductance and saturation parameters. The unmodified NGSPICE source code implements a T^(-1.5) temperature dependence for these parameters. In order to mimic the temperature behavior of experimental SiC JFETs, a T^(-1.3) temperature dependence must be implemented in the NGSPICE source code. Following these two simple modifications, the Level 1 MOSFET model of the NGSPICE circuit simulation program reasonably approximates the measured high-temperature behavior of experimental SiC JFETs properly operated with zero or reverse bias applied to the gate terminal. Modification of additional silicon parameters in the NGSPICE source code was not necessary to model experimental SiC JFET current-voltage performance across the entire temperature range from 25 to 500 °C.
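The second modification lends itself to a one-line formula; the sketch below evaluates the T^(-1.3) scaling described above across temperature, next to silicon's default T^(-1.5). The nominal parameter value and reference temperature are assumed for illustration.

```python
# SPICE-style temperature scaling of a transconductance parameter:
#   KP(T) = KP(TNOM) * (T / TNOM) ** exponent
def kp_at_temperature(kp_nominal, t_celsius, t_nom_celsius=27.0, exponent=-1.3):
    t, t_nom = t_celsius + 273.15, t_nom_celsius + 273.15  # to kelvin
    return kp_nominal * (t / t_nom) ** exponent

for t in (25, 250, 500):
    # assumed KP0 = 20 uA/V^2; compare SiC-like -1.3 with silicon's -1.5
    print(t, kp_at_temperature(2.0e-5, t),
          kp_at_temperature(2.0e-5, t, exponent=-1.5))
```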
Practice patterns of academic general thoracic and adult cardiac surgeons.
Ingram, Michael T; Wisner, David H; Cooke, David T
2014-10-01
We hypothesized that academic adult cardiac surgeons (CSs) and general thoracic surgeons (GTSs) would have distinct practice patterns involving not just case-mix but also time devoted to outpatient care, involvement in critical care, and work relative value unit (wRVU) generation for the procedures they perform. We queried the University Health System Consortium-Association of American Medical Colleges Faculty Practice Solution Center database for fiscal years 2007-2008, 2008-2009, and 2009-2010 for the frequency of inpatient and outpatient Current Procedural Terminology (CPT) coding and wRVU data of academic GTSs and CSs. The Faculty Practice Solution Center database is a compilation of productivity and payer data from 86 academic institutions. The greatest wRVU-generating CPT codes for CSs were, in order, coronary artery bypass grafting, aortic valve replacement, and mitral valve replacement. In contrast, open lobectomy, video-assisted thoracic surgery (VATS) wedge, and VATS lobectomy were greatest for GTSs. The 10 greatest wRVU-generating procedures for CSs generated more wRVUs than those for GTSs (P<.001). Although CSs generated significantly more hospital inpatient evaluation and management (E & M) wRVUs than did GTSs (P<.001), only 2.5% of the total wRVUs generated by CSs were from E & M codes versus 18.8% for GTSs. Critical care codes were 1.5% of total E & M billing for both CSs and GTSs. Academic CSs and GTSs have distinct practice patterns. CSs receive greater reimbursement for services because of the greater wRVUs of the procedures performed compared with GTSs, and E & M coding is a more important wRVU generator for GTSs. The results of our study could guide academic CS and GTS practice structure and time prioritization. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Ivanov, Anisoara; Neacsu, Andrei
2011-01-01
This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and further more to…
ChromaStarPy: A Stellar Atmosphere and Spectrum Modeling and Visualization Lab in Python
NASA Astrophysics Data System (ADS)
Short, C. Ian; Bayer, Jason H. T.; Burns, Lindsey M.
2018-02-01
We announce ChromaStarPy (CSPy), an integrated general stellar atmospheric modeling and spectrum synthesis code written entirely in Python 3. ChromaStarPy is a direct port of the ChromaStarServer (CSServ) Java modeling code described in earlier papers in this series, and many of the associated JavaScript (JS) post-processing procedures have been ported and incorporated into CSPy so that students have access to ready-made data products. A Python integrated development environment (IDE) allows a student in a more advanced course to experiment with the code and to graphically visualize intermediate and final results, ad hoc, as they are running it. CSPy allows students and researchers to compare modeled to observed spectra in the same IDE in which they are processing observational data, while having complete control over the stellar parameters affecting the synthetic spectra. We also take the opportunity to describe improvements that have been made to the related codes, ChromaStar (CS), CSServ, and ChromaStarDB (CSDB), that, where relevant, have also been incorporated into CSPy. The application may be found at the home page of the OpenStars project: http://www.ap.smu.ca/OpenStars/.
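A hedged sketch of the compare-model-to-observation workflow described above: a synthetic spectrum is resampled onto an observed wavelength grid and a simple chi-square is formed. All arrays are placeholders standing in for CSPy data products, not its actual API.

```python
import numpy as np

wl_obs = np.linspace(400.0, 700.0, 50)                     # nm, observed grid
flux_obs = 1.0 - 0.5 * np.exp(-0.5 * ((wl_obs - 656.3) / 0.5) ** 2)
wl_mod = np.linspace(390.0, 710.0, 400)                    # model grid
flux_mod = 1.0 - 0.48 * np.exp(-0.5 * ((wl_mod - 656.3) / 0.6) ** 2)

resampled = np.interp(wl_obs, wl_mod, flux_mod)            # model on obs grid
chi2 = np.sum((flux_obs - resampled) ** 2 / 1e-4)          # assumed variance
print(round(chi2, 2))
```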
Relationships between palaeogeography and opal occurrence in Australia: A data-mining approach
NASA Astrophysics Data System (ADS)
Landgrebe, T. C. W.; Merdith, A.; Dutkiewicz, A.; Müller, R. D.
2013-07-01
Age-coded multi-layered geological datasets are becoming increasingly prevalent with the surge in open-access geodata, yet there are few methodologies for extracting geological information and knowledge from these data. We present a novel methodology, based on the open-source GPlates software in which age-coded digital palaeogeographic maps are used to “data-mine” spatio-temporal patterns related to the occurrence of Australian opal. Our aim is to test the concept that only a particular sequence of depositional/erosional environments may lead to conditions suitable for the formation of gem quality sedimentary opal. Time-varying geographic environment properties are extracted from a digital palaeogeographic dataset of the eastern Australian Great Artesian Basin (GAB) at 1036 opal localities. We obtain a total of 52 independent ordinal sequences sampling 19 time slices from the Early Cretaceous to the present-day. We find that 95% of the known opal deposits are tied to only 27 sequences all comprising fluvial and shallow marine depositional sequences followed by a prolonged phase of erosion. We then map the total area of the GAB that matches these 27 opal-specific sequences, resulting in an opal-prospective region of only about 10% of the total area of the basin. The key patterns underlying this association involve only a small number of key environmental transitions. We demonstrate that these key associations are generally absent at arbitrary locations in the basin. This new methodology allows for the simplification of a complex time-varying geological dataset into a single map view, enabling straightforward application for opal exploration and for future co-assessment with other datasets/geological criteria. This approach may help unravel the poorly understood opal formation process using an empirical spatio-temporal data-mining methodology and readily available datasets to aid hypothesis testing.
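The sequence-matching step described above can be sketched as follows: each locality is reduced to an ordinal sequence of environment codes through time, and grid cells are screened against the set of opal-prospective sequences. The codes and patterns here are invented placeholders, not the study's actual 27 sequences.

```python
FLUVIAL, SHALLOW_MARINE, EROSION = "F", "M", "E"

# assumed prospective patterns: deposition followed by prolonged erosion
opal_sequences = {("F", "M", "E", "E"), ("M", "F", "E", "E")}

def is_prospective(env_history):
    """True if a cell's through-time environment codes match a known pattern."""
    return tuple(env_history) in opal_sequences

print(is_prospective([FLUVIAL, SHALLOW_MARINE, EROSION, EROSION]))   # True
print(is_prospective([EROSION, FLUVIAL, SHALLOW_MARINE, EROSION]))   # False
```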
Beller, Elaine; Clark, Justin; Tsafnat, Guy; Adams, Clive; Diehl, Heinz; Lund, Hans; Ouzzani, Mourad; Thayer, Kristina; Thomas, James; Turner, Tari; Xia, Jun; Robinson, Karen; Glasziou, Paul
2018-05-19
Systematic reviews (SR) are vital to health care, but have become complicated and time-consuming due to the rapid expansion of evidence to be synthesised. Fortunately, many tasks of systematic reviews have the potential to be automated or may be assisted by automation. Recent advances in natural language processing, text mining and machine learning have produced new algorithms that can accurately mimic human endeavour in systematic review activity, faster and more cheaply. Automation tools need to be able to work together, to exchange data and results. Therefore, we initiated the International Collaboration for the Automation of Systematic Reviews (ICASR) to put all the parts of automation of systematic review production together. The first meeting was held in Vienna in October 2015. We established a set of principles to enable tools to be developed and integrated into toolkits. This paper sets out the principles devised at that meeting, which cover the need for improvement in efficiency of SR tasks, automation across the spectrum of SR tasks, continuous improvement, adherence to high quality standards, flexibility of use and combining of components, the need for collaboration and varied skills, the desire for open source, shared code and evaluation, and a requirement for replicability through rigorous and open evaluation. Automation has great potential to improve the speed of systematic reviews. Considerable work is already being done on many of the steps involved in a review. The 'Vienna Principles' set out in this paper aim to guide a more coordinated effort, which will allow the integration of work by separate teams and build on the experience, code and evaluations done by the many teams working across the globe.
NASA Astrophysics Data System (ADS)
Bastrakova, I.; Car, N.
2017-12-01
Geoscience Australia (GA) is recognised and respected as the national repository and steward of multiple nationally significant data collections, providing geoscience information, services and capability to the Australian Government, industry and stakeholders. Internally, this brings the challenge of managing a large volume (11 PB) of diverse and highly complex data distributed across a significant number of catalogues, applications, portals, virtual laboratories, and direct downloads from multiple locations. Externally, GA is facing constant change in government regulations (e.g. open data and archival laws), growing stakeholder demands for high-quality and near real-time delivery of data and products, and rapid technological advances enabling dynamic data access. The traditional approach of citing static data and products cannot satisfy increasing demands for the results of scientific workflows, and items within those workflows, to be open, discoverable, trusted and reproducible. Thus, citation of data, products, codes and applications is being implemented through provenance records. This approach involves capturing the provenance of many GA processes according to a standardised data model and storing it, as well as metadata for the elements it references, in a searchable set of systems. This gives GA the ability to cite workflows unambiguously, as well as each item within each workflow, including inputs, outputs and many other registered components. Dynamic objects can therefore be referenced flexibly in relation to their generation process - a dataset's metadata indicates where to obtain its provenance - meaning the relevant facts of its dynamism need not be crammed into a single citation object with a single set of attributes. This allows simple citations, similar to traditional static document citations such as references in journals, to be used for complex dynamic data and other objects such as software code.
Conversion of a Temporary Tent with Steel Frame into a Permanent Warehouse
NASA Astrophysics Data System (ADS)
Georgescu, Mircea; Ungureanu, Viorel; Grecea, Daniel; Petran, Ioan
2017-10-01
The paper deals with the problem of a functional conversion (involving both architectural and structural issues) in the case of an industrial building. As is well known, temporary tents, designed according to the European code EN 13782, represent a significant presence on the building market and a fast and practical solution in some situations. This is exactly the case considered in this paper, where the investor initially decided to erect on his platform a provisional shelter for agricultural machines and associated staff, built of a light steel structure covered by PVC roofing and cladding. This temporary tent was acquired from a specialized supplier as a series product. After using the tent for a number of years, the investor decided to convert the existing structure, from both the architectural and structural points of view, into a permanent structure designed accordingly. Important changes were thus imposed both on the architectural part (technological flows, openings, facades) and especially on the structural part, where this switch imposed a re-design to the codes for permanent structures (especially as far as climatic loadings are concerned). The required architectural change implied building a 70 cm high concrete plinth and replacing the temporary PVC membrane roofing and cladding with permanent 60 mm thick PUR sandwich panels. Together with a new system of openings, this has led to renewed facades of the building. As for the structural change, the required conversion imposed a thorough checking of the existing steel structure (very slender and typical of a tent) in view of transforming it into a permanent structure. The consolidation measures for the existing galvanized steel structure are described, together with the measures applied at infrastructure level in order to implement the required conversion.
PharmTeX: a LaTeX-Based Open-Source Platform for Automated Reporting Workflow.
Rasmussen, Christian Hove; Smith, Mike K; Ito, Kaori; Sundararajan, Vijayakumar; Magnusson, Mats O; Niclas Jonsson, E; Fostvedt, Luke; Burger, Paula; McFadyen, Lynn; Tensfeldt, Thomas G; Nicholas, Timothy
2018-03-16
Every year, the pharmaceutical industry generates a large number of scientific reports related to drug research, development, and regulatory submissions. Many of these reports are created using text processing tools such as Microsoft Word. Given the large number of figures, tables, references, and other elements, this is often a tedious task involving hours of copying and pasting and substantial efforts in quality control (QC). In the present article, we present the LaTeX-based open-source reporting platform, PharmTeX, a community-based effort to make reporting simple, reproducible, and user-friendly. The PharmTeX creators put substantial effort into simplifying the sometimes complex elements of LaTeX into user-friendly functions that rely on advanced LaTeX and Perl code running in the background. Using this setup makes LaTeX much more accessible for users with no prior LaTeX experience. A software collection was compiled for users not wanting to manually install the required software components. The PharmTeX templates allow for the inclusion of tables directly from mathematical software output, as well as figures in several formats. Code listings can be included directly from source. No previous experience and only a few hours of training are required to start writing reports using PharmTeX. PharmTeX significantly reduces the time required for creating a scientific report fully compliant with regulatory and industry expectations. QC is made much simpler, since there is a direct link between analysis output and report input. PharmTeX makes available to report authors the strengths of LaTeX document processing without the need for extensive training.
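A hedged sketch of the direct analysis-to-report link such a platform exploits: an analysis script writes a LaTeX table fragment that the report pulls in with \input{}, so QC reduces to checking one generated file. The file name, parameter names, and values are assumptions for illustration, not PharmTeX's actual templates.

```python
def write_latex_table(rows, path="params_table.tex"):
    """Write (name, value) pairs as a small LaTeX tabular fragment."""
    with open(path, "w") as f:
        f.write("\\begin{tabular}{lr}\n\\hline\nParameter & Estimate \\\\\n\\hline\n")
        for name, value in rows:
            f.write(f"{name} & {value:.3g} \\\\\n")
        f.write("\\hline\n\\end{tabular}\n")

# hypothetical pharmacometric estimates written straight from the analysis
write_latex_table([("CL (L/h)", 3.204), ("V (L)", 42.17)])
# In the report source: \input{params_table.tex} includes the numbers verbatim.
```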
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective computer-aided design tool for radiation transport code users in the nuclear world, in particular in the fields of core design and radiation analysis. (authors)
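As an illustration of the translation idea above, the sketch below dumps a toy mesh tally into the open legacy-VTK ASCII format readable by ParaView or VisIt. The grid and tally values are placeholders; the actual toolkit handles far richer geometry.

```python
import numpy as np

def write_vtk_structured_points(values, dims, path="tally.vtk"):
    """Write a scalar field on a regular grid as a legacy-VTK ASCII file."""
    nx, ny, nz = dims
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\ntally\nASCII\n")
        f.write("DATASET STRUCTURED_POINTS\n")
        f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
        f.write("ORIGIN 0 0 0\nSPACING 1 1 1\n")
        f.write(f"POINT_DATA {nx * ny * nz}\n")
        f.write("SCALARS flux float 1\nLOOKUP_TABLE default\n")
        # VTK expects x varying fastest, hence Fortran ravel order
        f.write("\n".join(str(v) for v in np.ravel(values, order="F")) + "\n")

write_vtk_structured_points(np.arange(8, dtype=float), (2, 2, 2))
```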
The Future of ECHO: Evaluating Open Source Possibilities
NASA Astrophysics Data System (ADS)
Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.
2012-12-01
NASA's Earth Observing System ClearingHOuse (ECHO) is a format-agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful data ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human-facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests from non-NASA entities for copies of ECHO that can be run locally against their own data holdings. ESDIS and the ECHO team have begun investigating various deployment and open-sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and open source models being investigated by the ECHO team, along with the impacts those models are expected to have on the project. We discuss: - Addressing complex deployment or setup issues for potential users - Models of vetting code contributions - Balancing external (public) user requests versus our primary partners - Preparing project code for public release, including navigating licensing issues related to leveraged libraries - Dealing with non-free project dependencies such as commercial databases - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc. - Ongoing support for the released code including increased testing demands, bug fixes, security fixes, and new features.
NASA Astrophysics Data System (ADS)
Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.
2014-12-01
OpenGeoSys (OGS) is a scientific open source code for the numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems in geoscience and hydrology, e.g. CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that can be addressed by environmental scientists and engineers, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves instruction and data cache performance by tightly bundling data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated/saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is established and analyzed. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code gives the same simulation results as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform was developed and used to render and display virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders running on the programmable GPU are used. By calling the C code, the scene simulation platform can adjust system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation fidelity.
LSSGalPy: Interactive Visualization of the Large-scale Environment Around Galaxies
NASA Astrophysics Data System (ADS)
Argudo-Fernández, M.; Duarte Puertas, S.; Ruiz, J. E.; Sabater, J.; Verley, S.; Bergond, G.
2017-05-01
New tools are needed to handle the growth of data in astrophysics delivered by recent and upcoming surveys. We aim to build open-source, light, flexible, and interactive software designed to visualize extensive three-dimensional (3D) tabular data. Entirely written in the Python language, we have developed interactive tools to browse and visualize the positions of galaxies in the universe and their positions with respect to its large-scale structures (LSS). Motivated by a previous study, we created two codes using Mollweide projection and wedge diagram visualizations, in which survey galaxies can be overplotted on the LSS of the universe. These are interactive representations where the visualizations can be controlled by widgets. We have released these open-source codes, which have been designed to be easily re-used and customized by the scientific community to fulfill their needs. The codes are adaptable to other kinds of 3D tabular data and are robust enough to handle several million objects.
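A minimal sketch of the Mollweide view described above, using matplotlib only; the random RA/Dec values stand in for a galaxy catalogue, and LSSGalPy's interactive widgets are omitted.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
ra = rng.uniform(-np.pi, np.pi, 2000)        # radians, wrapped to [-pi, pi]
dec = np.arcsin(rng.uniform(-1, 1, 2000))    # uniform on the sphere

ax = plt.subplot(projection="mollweide")     # matplotlib's built-in projection
ax.scatter(ra, dec, s=2, alpha=0.5)
ax.grid(True)
plt.savefig("mollweide.png", dpi=150)
```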
Novel inter and intra prediction tools under consideration for the emerging AV1 video codec
NASA Astrophysics Data System (ADS)
Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil
2017-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.
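A hedged illustration of combined inter-intra prediction of the kind described above: an intra-predicted block is blended with a motion-compensated inter block using a weight that decays away from the intra prediction edge. The weight shape is an assumption for illustration, not AV1's actual blending mask.

```python
import numpy as np

def combined_inter_intra(intra_block, inter_block):
    n = intra_block.shape[0]
    # weight intra more heavily near the top-left edge it was predicted from
    d = np.add.outer(np.arange(n), np.arange(n)) / (2 * (n - 1))
    w_intra = 1.0 - 0.75 * d          # decays from 1.0 at corner to 0.25
    return w_intra * intra_block + (1.0 - w_intra) * inter_block

intra = np.full((8, 8), 120.0)        # e.g. a DC intra prediction
inter = np.full((8, 8), 100.0)        # motion-compensated reference block
print(combined_inter_intra(intra, inter).round(1))
```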
Optical performances of the FM JEM-X masks
NASA Astrophysics Data System (ADS)
Reglero, V.; Rodrigo, J.; Velasco, T.; Gasent, J. L.; Chato, R.; Alamo, J.; Suso, J.; Blay, P.; Martínez, S.; Doñate, M.; Reina, M.; Sabau, D.; Ruiz-Urien, I.; Santos, I.; Zarauz, J.; Vázquez, J.
2001-09-01
The JEM-X Signal Multiplexing Systems are large HURA codes "written" in a pure tungsten plate 0.5 mm thick. 24,247 hexagonal pixels (25% open) are spread over a total area of 535 mm diameter. The tungsten plate is embedded in a mechanical structure formed by a Ti ring, a pretensioning system (Cu-Be) and an exoskeleton structure that provides the required stiffness. The JEM-X masks differ from the SPI and IBIS masks in the absence of a code support structure covering the mask assembly. Open pixels are fully transparent to X-rays. The scope of this paper is to report the optical performance of the FM JEM-X masks, defined by uncertainties in the pixel locations (centroids) and sizes arising from the manufacturing and assembly processes. The stability of the code elements under thermoelastic deformations is also discussed. As a general statement, the JEM-X mask optical properties are nearly one order of magnitude better than specified in 1994 during the ESA instrument selection.
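The imaging principle behind such coded masks can be illustrated with a toy 1D simulation: the detector records the source distribution convolved with the mask pattern, and correlating with a balanced (zero-mean) copy of the mask recovers the source. A random mask with a 25% open fraction stands in for the hexagonal HURA pattern; all sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1009
mask = (rng.random(n) < 0.25).astype(float)          # ~25% open elements
source = np.zeros(n); source[300] = 1.0; source[700] = 0.5

# detector = source circularly convolved with the mask (shadowgram)
detector = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))

# cross-correlate with the balanced mask to reconstruct the sky
balanced = mask - mask.mean()
image = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(balanced))))
print(np.sort(np.argsort(image)[-2:]))   # brightest pixels, expected near 300, 700
```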
Open source clustering software.
de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S
2004-06-12
We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
An open-source library for the numerical modeling of mass-transfer in solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Novaresio, Valerio; García-Camprubí, María; Izquierdo, Salvador; Asinari, Pietro; Fueyo, Norberto
2012-01-01
The generation of direct current electricity using solid oxide fuel cells (SOFCs) involves several interplaying transport phenomena. Their simulation is crucial for the design and optimization of reliable and competitive equipment, and for the eventual market deployment of this technology. An open-source library for the computational modeling of mass-transport phenomena in SOFCs is presented in this article. It includes several multicomponent mass-transport models (i.e. Fickian, Stefan-Maxwell and Dusty Gas Model), which can be applied both within porous media and in porosity-free domains, and several diffusivity models for gases. The library has been developed for use with OpenFOAM®, a widespread open-source code for fluid and continuum mechanics. The library can be used to model any fluid flow configuration involving multicomponent transport phenomena and it is validated in this paper against the analytical solution of one-dimensional test cases. In addition, it is applied to the simulation of a real SOFC and further validated using experimental data. Program summary Program title: multiSpeciesTransportModels Catalogue identifier: AEKB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 18 140 No. of bytes in distributed program, including test data, etc.: 64 285 Distribution format: tar.gz Programming language: C++ Computer: Any x86 (the instructions reported in the paper consider only the 64 bit case for the sake of simplicity) Operating system: Generic Linux (the instructions reported in the paper consider only the open-source Ubuntu distribution for the sake of simplicity) Classification: 12 External routines: OpenFOAM® (version 1.6-ext) (http://www.extend-project.de) Nature of problem: This software provides a library of models for the simulation of the steady-state mass and momentum transport in a multi-species gas mixture, possibly in a porous medium. The software is particularly designed to be used as the mass-transport library for the modeling of solid oxide fuel cells (SOFCs). When supplemented with other sub-models, such as thermal and charge-transport ones, it allows the prediction of the cell polarization curve and hence the cell performance. Solution method: The standard finite volume method (FVM) is used for solving all the conservation equations. The pressure-velocity coupling is solved using the SIMPLE algorithm (possibly adding a porous drag term if required). The mass transport can be calculated using different alternative models, namely the Fick, Maxwell-Stefan or dusty gas model. The code adopts a segregated method to solve the resulting linear system of equations. The different regions of the SOFC, namely gas channels, electrodes and electrolyte, are solved independently and coupled through boundary conditions. Restrictions: When extremely large species fluxes are considered, the current implementation of the Neumann and Robin boundary conditions does not avoid negative values of molar and/or mass fractions, which finally leads to numerical instability. However, this never happened in the documented runs. These boundary conditions could eventually be reformulated to become more robust. Running time: From seconds to hours depending on the mesh size and number of species.
For example, on a 64 bit machine with Intel Core Duo T8300 and 3 GBytes of RAM, the provided test run requires less than 1 second.
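For reference, the Stefan-Maxwell relations implemented by such a library take the standard textbook form (shown generically here; the library's exact discretization may differ):

\[
\nabla x_i \;=\; \sum_{j \neq i} \frac{x_i \mathbf{N}_j - x_j \mathbf{N}_i}{c_t \, D_{ij}}, \qquad i = 1, \dots, n,
\]

where \(x_i\) are mole fractions, \(\mathbf{N}_i\) molar fluxes, \(c_t\) the total molar concentration, and \(D_{ij}\) the binary diffusion coefficients; the Dusty Gas Model additionally subtracts a Knudsen term \(\mathbf{N}_i/(c_t D_{i,K}^{\mathrm{eff}})\) on the right-hand side to account for wall collisions in porous media.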
Born, Karen; Orkin, Aaron; VanderBurgh, David; Beardy, Jackson
2012-01-01
To understand how community members of a remote First Nations community respond to an emergency first aid education programme. A qualitative study involving focus groups and participant observation as part of a community-based participatory research project, which involved the development and implementation of a wilderness first aid course in collaboration with the community. Twenty community members participated in the course and agreed to be part of the research focus groups. Three community research partners validated and reviewed the data collected from this process. These data were coded and analysed using open coding. Community members responded to the course in ways related to their past experiences with injury and first aid, both as individuals and as members of the community. Feelings of confidence and self-efficacy related to access to care and treatment of injury surfaced during the course. Findings also highlighted how the context of the remote First Nations community influenced the delivery and development of course materials. Developing and delivering a first aid course in a remote community requires sensitivity towards the response of participants to the course, as well as the context in which it is being delivered. Employing collaborative approaches to teaching first aid can help address these unique needs. Though delivery of a first response training programme in a small remote community will probably not impact the morbidity and mortality associated with injury, it has the potential to strengthen community self-efficacy and confidence when responding to an emergency situation.
Kaufholdt, David; Broggini, Giovanni A.L.; Flachowsky, Henryk; Hänsch, Robert
2015-01-01
Upon pathogen attack, fruit trees such as apple (Malus spp.) and pear (Pyrus spp.) accumulate biphenyl and dibenzofuran phytoalexins, with aucuparin as a major biphenyl compound. 4-Hydroxylation of the biphenyl scaffold, formed by biphenyl synthase (BIS), is catalyzed by a cytochrome P450 (CYP). The biphenyl 4-hydroxylase (B4H) coding sequence of rowan (Sorbus aucuparia) was isolated and functionally expressed in yeast (Saccharomyces cerevisiae). SaB4H was named CYP736A107. No catalytic function of CYP736 was known previously. SaB4H exhibited absolute specificity for 3-hydroxy-5-methoxybiphenyl. In rowan cell cultures treated with elicitor from the scab fungus, transient increases in the SaB4H, SaBIS, and phenylalanine ammonia lyase transcript levels preceded phytoalexin accumulation. Transient expression of a carboxyl-terminal reporter gene construct directed SaB4H to the endoplasmic reticulum. A construct lacking the amino-terminal leader and transmembrane domain caused cytoplasmic localization. Functional B4H coding sequences were also isolated from two apple (Malus × domestica) cultivars. The MdB4Hs were named CYP736A163. When stems of cv Golden Delicious were infected with the fire blight bacterium, highest MdB4H transcript levels were observed in the transition zone. In a phylogenetic tree, the three B4Hs were closest to coniferaldehyde 5-hydroxylases involved in lignin biosynthesis, suggesting a common ancestor. Coniferaldehyde and related compounds were not converted by SaB4H. PMID:25862456
Meinecke, Annika L; Lehmann-Willenbrock, Nale; Kauffeld, Simone
2017-07-01
Despite a wealth of research on antecedents and outcomes of annual appraisal interviews, the ingredients that make for a successful communication process within the interview itself remain unclear. This study takes a communication approach to highlight leader-follower dynamics in annual appraisal interviews. We integrate relational leadership theory and recent findings on leader-follower interactions to argue (a) how supervisors' task- and relation-oriented statements can elicit employee involvement during the interview process and (b) how these communication patterns affect both supervisors' and employees' perceptions of the interview. Moreover, we explore (c) how supervisor behavior is contingent upon employee contributions to the appraisal interview. We audiotaped 48 actual annual appraisal interviews between supervisors and their employees. Adopting a multimethod approach, we used quantitative interaction coding (N = 32,791 behavioral events) as well as qualitative open-axial coding to explore communication patterns among supervisors and their employees. Lag sequential analysis revealed that supervisors' relation-oriented statements triggered active employee contributions and vice versa. These relation-activation patterns were linked to higher interview success ratings by both supervisors and employees. Moreover, our qualitative findings highlight employee disagreement as a crucial form of active employee contributions during appraisal interviews. We distinguish what employees disagreed about, how the disagreement was enacted, and how supervisors responded to it. Overall employee disagreement was negatively related to ratings of supervisor support. We discuss theoretical implications for performance appraisal and leadership theory and derive practical recommendations for promoting employee involvement during appraisal interviews. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Reactive Fluid Flow and Applications to Diagenesis, Mineral Deposits, and Crustal Rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rye, Danny M.; Bolton, Edward W.
2002-11-04
The objective is to initiate new modeling of coupled fluid flow and chemical reactions in geologic environments; new experimental and theoretical studies of water-rock reactions; and new collection and interpretation of stable isotopic and geochemical field data, at many spatial scales, for systems involving fluid flow and reaction in environments ranging from soils to metamorphic rocks. Theoretical modeling of coupled fluid flow and chemical reactions, involving kinetics, has been employed to understand the differences between equilibrium, steady-state, and non-steady-state behavior of the chemical evolution of open fluid-rock systems. The numerical codes developed in this project treat multi-component, finite-rate reactions combined with advective and dispersive transport in multiple dimensions. The codes incorporate heat, mass, and isotopic transfer in both porous and fractured media. Experimental work has obtained the kinetic rate laws of pertinent silicate-water reactions and the rates of Sr release during chemical weathering. Ab-initio quantum mechanical techniques have been applied to obtain the kinetics and mechanisms of silicate surface reactions and isotopic exchange between water and dissolved species. Geochemical field-based studies were carried out on the Wepawaug metamorphic schist, on the Irish base-metal sediment-hosted ore system, in the Dalradian metamorphic complex in Scotland, and on weathering in the Columbia River flood basalts. The geochemical and isotopic field data, and the experimental and theoretical rate data, were used as constraints on the numerical models and to determine the length and time scales relevant to each of the field areas.
A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Chan, William; Kwak, Dochan
2002-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and the impact of non-uniform inflows, and will eventually address system vibration and structural impact. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.
Time-Dependent Simulations of Turbopump Flows
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan
2001-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort will provide developers with information such as transient flow phenomena at start up, impact of non-uniform inflows, system vibration and impact on the structure. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Time-accuracy of the scheme has been evaluated with simple test cases. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 2000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.
Combinatorial Histone Acetylation Patterns Are Generated by Motif-Specific Reactions.
Blasi, Thomas; Feller, Christian; Feigelman, Justin; Hasenauer, Jan; Imhof, Axel; Theis, Fabian J; Becker, Peter B; Marr, Carsten
2016-01-27
Post-translational modifications (PTMs) are pivotal to cellular information processing, but how combinatorial PTM patterns ("motifs") are set remains elusive. We develop a computational framework, which we provide as open source code, to investigate the design principles generating the combinatorial acetylation patterns on histone H4 in Drosophila melanogaster. We find that models assuming purely unspecific or lysine site-specific acetylation rates were insufficient to explain the experimentally determined motif abundances. Rather, these abundances were best described by an ensemble of models with acetylation rates that were specific to motifs. The model ensemble converged upon four acetylation pathways; we validated three of these using independent data from a systematic enzyme depletion study. Our findings suggest that histone acetylation patterns originate through specific pathways involving motif-specific acetylation activity. Copyright © 2016 Elsevier Inc. All rights reserved.
Communicating about resuscitation: problems and prospects.
Ventres, W B
1993-01-01
The Patient Self-Determination Act of 1991 implicitly encourages physicians to discuss advance directives and no-code orders with their patients. The medical literature to date, however, has done little to place resuscitative decision making in the context of how physicians, patients, and families communicate with one another. This paper investigates how interactions between involved parties affect the process and outcome of this decision making. Participant observation and open-ended interviews were conducted with patients, their families, resident physicians, and family medicine faculty members. This report describes three social and cultural issues that commonly influence and shape the process of do-not-resuscitate decision making: judging competency and capacity, dealing with uncertainty, and recognizing attitudes toward death. Improved understanding of the communicative process can facilitate the establishment of meaningful, therapeutic alliances between physicians, patients, and families at an influential juncture in the family life cycle.
Recommendations for open data science.
Gymrek, Melissa; Farjoun, Yossi
2016-01-01
Life science research increasingly relies on large-scale computational analyses. However, the code and data used for these analyses are often lacking in publications. To maximize scientific impact, reproducibility, and reuse, it is crucial that these resources are made publicly available and are fully transparent. We provide recommendations for improving the openness of data-driven studies in life sciences.
77 FR 12077 - Meeting of the Judicial Conference Advisory Committee on Rules of Bankruptcy Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Rules of Bankruptcy Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Bankruptcy Procedure will hold a two-day meeting. The meeting will be open to public observation... Officer and Counsel. [FR Doc. 2012-4637 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55-P ...
77 FR 12078 - Meeting of the Judicial Conference Advisory Committee on Rules of Criminal Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... of Criminal Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Criminal Procedure will hold a two-day meeting. The meeting will be open to public observation but not... Deputy and Counsel. [FR Doc. 2012-4654 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55-P ...
78 FR 52785 - Meeting of the Judicial Conference Committee on Rules of Practice and Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-26
... Evidence. ACTION: Notice of Open Meeting. SUMMARY: The Advisory Committee on Rules of Evidence will hold a one- day meeting. The meeting will be open to public observation but not participation. DATES: October... Chief Rules Officer. [FR Doc. 2013-20669 Filed 8-23-13; 8:45 am] BILLING CODE 2210-55-P ...
77 FR 12077 - Meeting of the Judicial Conference Advisory Committee on Rules of Civil Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Civil Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Civil Procedure will hold a two-day meeting. The meeting will be open to public observation but not participation.... [FR Doc. 2012-4671 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55-P ...
77 FR 12077 - Meeting of the Judicial Conference Advisory Committee on Rules of Bankruptcy Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Rules of Bankruptcy Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Bankruptcy Procedure will hold a two-day meeting. The meeting will be open to public observation... Committee Deputy and Counsel. [FR Doc. 2012-4668 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55-P ...
Open Technology Development: Roadmap Plan
2006-04-01
RECOMMENDATION 1: APPROVE AND FUND AN OTD STRIKE TEAM ... Senior Leadership ... negotiated, rather than an innate property of the product. Software’s replicability also means it can be incorporated into other software systems without ... to leverage an open code development model, DoD would provide the market incentives to increase the agility and competitiveness of the industrial ...
Fac-Back-OPAC: An Open Source Interface to Your Library System
ERIC Educational Resources Information Center
Beccaria, Mike; Scott, Dan
2007-01-01
The new Fac-Back-OPAC (a faceted backup OPAC) is built on code that was originally developed by Casey Durfee in February 2007. It represents the convergence of two prominent trends in library tools: the decoupling of discovery tools from the traditional integrated library system (ILS) and the use of readily available open source components to…
Non-coding functions of alternative pre-mRNA splicing in development
Mockenhaupt, Stefan; Makeyev, Eugene V.
2015-01-01
A majority of messenger RNA precursors (pre-mRNAs) in the higher eukaryotes undergo alternative splicing to generate more than one mature product. By targeting the open reading frame region, this process increases the diversity of protein isoforms beyond the nominal coding capacity of the genome. However, alternative splicing also frequently controls output levels and spatiotemporal features of cellular and organismal gene expression programs. Here we discuss how these non-coding functions of alternative splicing contribute to development through regulation of mRNA stability, translational efficiency and cellular localization. PMID:26493705
The 2016 Bioinformatics Open Source Conference (BOSC)
Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
2016-01-01
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083
Death of a dogma: eukaryotic mRNAs can code for more than one protein
Mouilleron, Hélène; Delcourt, Vivian; Roucou, Xavier
2016-01-01
mRNAs carry the genetic information that is translated by ribosomes. The traditional view of a mature eukaryotic mRNA is a molecule with three main regions, the 5′ UTR, the protein coding open reading frame (ORF) or coding sequence (CDS), and the 3′ UTR. This concept assumes that ribosomes translate one ORF only, generally the longest one, and produce one protein. As a result, in the early days of genomics and bioinformatics, one CDS was associated with each protein-coding gene. This fundamental concept of a single CDS is being challenged by increasing experimental evidence indicating that annotated proteins are not the only proteins translated from mRNAs. In particular, mass spectrometry (MS)-based proteomics and ribosome profiling have detected productive translation of alternative open reading frames. In several cases, the alternative and annotated proteins interact. Thus, the expression of two or more proteins translated from the same mRNA may offer a mechanism to ensure the co-expression of proteins which have functional interactions. Translational mechanisms already described in eukaryotic cells indicate that the cellular machinery is able to translate different CDSs from a single viral or cellular mRNA. In addition to summarizing data showing that the protein coding potential of eukaryotic mRNAs has been underestimated, this review aims to challenge the single translated CDS dogma. PMID:26578573
Genetic code mutations: the breaking of a three billion year invariance.
Mat, Wai-Kin; Xue, Hong; Wong, J Tze-Fei
2010-08-20
The genetic code has been unchanging for some three billion years in its canonical ensemble of encoded amino acids, as indicated by the universal adoption of this ensemble by all known organisms. Code mutations beginning with the encoding of 4-fluoro-Trp by Bacillus subtilis, initially replacing and eventually displacing Trp from the ensemble, first revealed the intrinsic mutability of the code. This has since been confirmed by a spectrum of other experimental code alterations in both prokaryotes and eukaryotes. To shed light on the experimental conversion of a rigidly invariant code to a mutating code, the present study examined code mutations determining the propagation of Bacillus subtilis on Trp and 4-, 5- and 6-fluoro-tryptophans. The mutants were characterized with respect to cross-inhibitions between the different indole amino acids and the growth effects of withdrawing individual nutrients, which renders the corresponding biosynthetic pathways essential. The results suggested that oligogenic barriers, comprising sensitive proteins that malfunction with amino acid analogues, provide effective mechanisms for preserving the invariance of the code through immemorial time, and that mutations of these barriers open up the code to continuous change.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
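The parallelized modules themselves are not listed in the abstract. As a minimal sketch of the pattern it describes, assigning independent raster rows to OpenMP threads, the following C fragment uses a made-up per-cell kernel in place of the real interpolation, solar, and flow computations:

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int rows = 4096, cols = 4096;
        double *grid = malloc((size_t)rows * cols * sizeof *grid);
        if (!grid) return 1;

        /* each thread processes whole raster rows independently */
        #pragma omp parallel for schedule(static)
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                /* stand-in for a per-cell model evaluation (hypothetical) */
                grid[(size_t)r * cols + c] = 0.5 * r + 0.25 * c;

        printf("threads: %d, sample: %f\n", omp_get_max_threads(), grid[0]);
        free(grid);
        return 0;
    }

Compiled with an OpenMP flag (e.g. cc -fopenmp), the loop scales with the thread count as long as the per-cell work is independent, which is the property the paper's module analysis identifies before applying the directives.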
Early Experiences Writing Performance Portable OpenMP 4 Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Hernandez, Oscar R
In this paper, we evaluate the recently available directives in OpenMP 4 to parallelize a computational kernel using both the traditional shared memory approach and the newer accelerator targeting capabilities. In addition, we explore various transformations that attempt to increase application performance portability, and examine the expressiveness and performance implications of using these approaches. For example, we want to understand if the target map directives in OpenMP 4 improve data locality when mapped to a shared memory system, as opposed to the first touch policy approach in traditional OpenMP. To that end, we use recent Cray and Intel compilers to measure the performance variations of a simple application kernel when executed on the OLCF's Titan supercomputer with NVIDIA GPUs and the Beacon system with Intel Xeon Phi accelerators attached. To better understand these trade-offs, we compare our results from traditional OpenMP shared memory implementations to the newer accelerator programming model when it is used to target both the CPU and an attached heterogeneous device. We believe the results and lessons learned as presented in this paper will be useful to the larger user community by providing guidelines that can assist programmers in the development of performance portable code.
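As a minimal sketch of the two styles the paper compares, with a trivial vector-scale kernel invented for illustration (the real kernels and compiler behavior are in the paper, not here):

    #include <stdio.h>

    #define N 1000000
    static double a[N], b[N];

    int main(void) {
        for (int i = 0; i < N; i++) b[i] = (double)i;

        /* traditional shared-memory OpenMP: data placement via first touch */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) a[i] = 2.0 * b[i];

        /* OpenMP 4 accelerator style: map clauses make movement explicit;
           on a shared-memory host the maps may reduce to no-ops */
        #pragma omp target teams distribute parallel for map(to: b) map(from: a)
        for (int i = 0; i < N; i++) a[i] = 2.0 * b[i];

        printf("a[10] = %f\n", a[10]);
        return 0;
    }

The same loop body compiles under both models; what the paper measures is how each compiler and runtime treats the data movement implied by the map clauses.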
openPSTD: The open source pseudospectral time-domain method for acoustic propagation
NASA Astrophysics Data System (ADS)
Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis
2016-06-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage because it allows spatial sampling close to the Nyquist criterion, keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modeled as a composition of rectangular two-dimensional subdomains, initially restricting the implementation to orthogonal, two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to object-oriented programming best practices and leaves room for further computational parallelization. The software is built using the open source components Blender, Numpy, and Python, and has itself been published under an open source license. An option has been included to accelerate the calculations through a partial implementation of the code on the graphics processing unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
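The openPSTD sources are linked from the paper; the following self-contained C sketch (using FFTW, with an arbitrary grid size and test function) illustrates the core pseudospectral step, a spectral spatial derivative that stays exact even on a coarse grid:

    #include <complex.h>
    #include <fftw3.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const int n = 16;                       /* coarse grid, near Nyquist */
        fftw_complex *u = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *U = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_plan fwd = fftw_plan_dft_1d(n, u, U, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_plan bwd = fftw_plan_dft_1d(n, U, u, FFTW_BACKWARD, FFTW_ESTIMATE);

        for (int i = 0; i < n; i++) u[i] = sin(2.0 * M_PI * i / n);
        fftw_execute(fwd);
        for (int i = 0; i < n; i++) {
            int k = (i <= n / 2) ? i : i - n;   /* signed wavenumber */
            if (i == n / 2) k = 0;              /* drop the Nyquist mode */
            U[i] *= I * (double)k;              /* d/dx in Fourier space */
        }
        fftw_execute(bwd);
        for (int i = 0; i < n; i++)             /* compare with exact cos(x) */
            printf("%2d % .6f % .6f\n", i, creal(u[i]) / n,
                   cos(2.0 * M_PI * i / n));

        fftw_destroy_plan(fwd); fftw_destroy_plan(bwd);
        fftw_free(u); fftw_free(U);
        return 0;
    }

Built with -lfftw3 -lm, the printed derivative matches cos(x) to machine precision with only 16 points, which is why PSTD can afford such coarse sampling.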
Martin, Erika G; Law, Jennie; Ran, Weijia; Helbig, Natalie; Birkhead, Guthrie S
Government datasets are newly available on open data platforms that are publicly accessible, available in nonproprietary formats, free of charge, and with unlimited use and distribution rights. They provide opportunities for health research, but their quality and usability are unknown. This study aimed to describe available open health data, identify whether data are presented in a way that is aligned with best practices and usable for researchers, and examine differences across platforms. Two reviewers used a standard coding guide to systematically review a random sample of data offerings from three open health data platforms at the federal, New York State, and New York City levels: NYC OpenData (New York City, all offerings, n = 37), Health Data NY (New York State, 25% sample, n = 71), and HealthData.gov (US Department of Health and Human Services, 5% sample, n = 75). Data characteristics from the coding guide were aggregated into summary indices for intrinsic data quality, contextual data quality, adherence to the Dublin Core metadata standards, and the 5-star open data deployment scheme. One quarter of the offerings were structured datasets; other presentation styles included charts (14.7%), documents describing data (12.0%), maps (10.9%), and query tools (7.7%). Health Data NY had higher intrinsic data quality (P < .001), contextual data quality (P < .001), and Dublin Core metadata standards adherence (P < .001). All met basic "web availability" open data standards; fewer met higher standards of "hyperlinked to other data." Although all platforms need improvement, they already provide readily available data for health research. Sustained effort on improving open data websites and metadata is necessary for ensuring researchers use these data, thereby increasing their research value.
Weaving a knowledge network for Deep Carbon Science
NASA Astrophysics Data System (ADS)
Ma, Xiaogang; West, Patrick; Zednik, Stephan; Erickson, John; Eleish, Ahmed; Chen, Yu; Wang, Han; Zhong, Hao; Fox, Peter
2017-05-01
Geoscience researchers are increasingly dependent on informatics and the Web to conduct their research. Geoscience is one of the first domains to take the lead in initiatives such as open data, open code, open access, and open collections, which comprise key topics of Open Science in academia. The meaning of being open can be understood at two levels. The lower level is to make data, code, sample collections and publications, etc. freely accessible online and allow reuse, modification and sharing. The higher level is the annotation and connection between those resources to establish a network for collaborative scientific research. In the data science component of the Deep Carbon Observatory (DCO), we have leveraged state-of-the-art information technologies and existing online resources to deploy a web portal for the over 1000 researchers in the DCO community. An initial aim of the portal is to keep track of all research and outputs related to the DCO community. Further, we intend for the portal to establish a knowledge network, which supports various stages of an open scientific process within and beyond the DCO community. Annotation and linking are the key characteristics of the knowledge network. Not only are key assets, including DCO data and methods, published in an open and inter-linked fashion, but the people, organizations, groups, grants, projects, samples, field sites, instruments, software programs, activities, meetings, etc. are recorded and connected to each other through relationships based on well-defined, formal conceptual models. The network promotes collaboration among DCO participants, improves the openness and reproducibility of carbon-related research, facilitates attribution to resource contributors, and eventually stimulates new ideas and findings in deep carbon-related studies.
MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.
Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui
2015-12-12
Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To describe peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include: limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads respectively.
Gene expression regulation by upstream open reading frames and human disease.
Barbosa, Cristina; Peixeiro, Isabel; Romão, Luísa
2013-01-01
Upstream open reading frames (uORFs) are major gene expression regulatory elements. In many eukaryotic mRNAs, one or more uORFs precede the initiation codon of the main coding region. Indeed, several studies have revealed that almost half of human transcripts present uORFs. Very interesting examples have shown that these uORFs can impact gene expression of the downstream main ORF by triggering mRNA decay or by regulating translation. Also, evidence from recent genetic and bioinformatic studies implicates disturbed uORF-mediated translational control in the etiology of many human diseases, including malignancies, metabolic or neurologic disorders, and inherited syndromes. In this review, we will briefly present the mechanisms through which uORFs regulate gene expression and how they can impact on the organism's response to different cell stress conditions. Then, we will emphasize the importance of these structures by illustrating, with specific examples, how disturbed uORF-mediated translational control can be involved in the etiology of human diseases, giving special importance to genotype-phenotype correlations. Identifying and studying more cases of uORF-altering mutations will help us to understand and establish genotype-phenotype associations, leading to advancements in diagnosis, prognosis, and treatment of many human disorders.
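As a toy illustration of the uORF arrangement described above (the sequence, coordinates, and helper names are all invented; the transcript is spelled with DNA letters as in sequence databases), the following C sketch reports every ATG-to-stop frame that begins upstream of the annotated main ORF:

    #include <stdio.h>
    #include <string.h>

    static int is_stop(const char *c) {
        return !strncmp(c, "TAA", 3) || !strncmp(c, "TAG", 3) ||
               !strncmp(c, "TGA", 3);
    }

    int main(void) {
        const char *mrna = "GCAATGGCCTAACCATGAAATAGGGATGCACCCGGGTTT"; /* made up */
        const size_t main_start = 25;       /* assumed annotated CDS start */
        const size_t len = strlen(mrna);

        for (size_t i = 0; i + 3 <= main_start; i++) {
            if (strncmp(mrna + i, "ATG", 3)) continue;
            for (size_t j = i + 3; j + 3 <= len; j += 3)
                if (is_stop(mrna + j)) {
                    printf("uORF at %zu..%zu (%zu codons)\n",
                           i, j + 2, (j - i) / 3);
                    break;
                }
        }
        return 0;
    }

On this toy transcript the scan reports two uORFs (starting at positions 3 and 14) ahead of the main ORF at position 25; real uORF annotation additionally weighs Kozak context and reinitiation potential.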
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Iliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient, high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. This code is highly customizable and can be used efficiently for fast macro-analyses of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
Using CellML with OpenCMISS to Simulate Multi-Scale Physiology
Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.
2014-01-01
OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability these tools provide the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community developed information standards, such as FieldML, SED-ML, and BioSignalML. Plans for the integration of accelerator code (graphical processing unit and field programmable gate array) generated from CellML models is also discussed. PMID:25601911
Jaitly, Navdeep; Mayampurath, Anoop; Littlefield, Kyle; Adkins, Joshua N; Anderson, Gordon A; Smith, Richard D
2009-01-01
Background Data generated from liquid chromatography coupled to high-resolution mass spectrometry (LC-MS)-based studies of a biological sample can contain large amounts of biologically significant information in the form of proteins, peptides, and metabolites. Interpreting this data involves inferring the masses and abundances of biomolecules injected into the instrument. Because of the inherent complexity of mass spectral patterns produced by these biomolecules, the analysis is significantly enhanced by using visualization capabilities to inspect and confirm results. In this paper we describe Decon2LS, an open-source software package for automated processing and visualization of high-resolution MS data. Drawing extensively on algorithms developed over the last ten years for ICR2LS, Decon2LS packages the algorithms as a rich set of modular, reusable processing classes for performing diverse functions such as reading raw data, routine peak finding, theoretical isotope distribution modelling, and deisotoping. Because the source code is openly available, these functionalities can now be used to build derivative applications in a relatively fast manner. In addition, Decon2LS provides an extensive set of visualization tools, such as high performance chart controls. Results With a variety of options that include peak processing, deisotoping, isotope composition, etc., Decon2LS supports processing of multiple raw data formats. Deisotoping can be performed on an individual scan, an individual dataset, or on multiple datasets using batch processing. Other processing options include creating a two dimensional view of mass and liquid chromatography (LC) elution time features, generating spectrum files for tandem MS data, creating total intensity chromatograms, and visualizing theoretical peptide profiles. Application of Decon2LS to deisotope different datasets obtained across different instruments yielded a high number of features that can be used to identify and quantify peptides in the biological sample. Conclusion Decon2LS is an efficient software package for discovering and visualizing features in proteomics studies that require automated interpretation of mass spectra. Besides being easy to use, fast, and reliable, Decon2LS is also open-source, which allows developers in the proteomics and bioinformatics communities to reuse and refine the algorithms to meet individual needs. Decon2LS source code, installer, and tutorials may be downloaded free of charge at http://ncrr.pnl.gov/software/. PMID:19292916
Jaitly, Navdeep; Mayampurath, Anoop; Littlefield, Kyle; Adkins, Joshua N; Anderson, Gordon A; Smith, Richard D
2009-03-17
Data generated from liquid chromatography coupled to high-resolution mass spectrometry (LC-MS)-based studies of a biological sample can contain large amounts of biologically significant information in the form of proteins, peptides, and metabolites. Interpreting this data involves inferring the masses and abundances of biomolecules injected into the instrument. Because of the inherent complexity of mass spectral patterns produced by these biomolecules, the analysis is significantly enhanced by using visualization capabilities to inspect and confirm results. In this paper we describe Decon2LS, an open-source software package for automated processing and visualization of high-resolution MS data. Drawing extensively on algorithms developed over the last ten years for ICR2LS, Decon2LS packages the algorithms as a rich set of modular, reusable processing classes for performing diverse functions such as reading raw data, routine peak finding, theoretical isotope distribution modelling, and deisotoping. Because the source code is openly available, these functionalities can now be used to build derivative applications in a relatively fast manner. In addition, Decon2LS provides an extensive set of visualization tools, such as high performance chart controls. With a variety of options that include peak processing, deisotoping, isotope composition, etc., Decon2LS supports processing of multiple raw data formats. Deisotoping can be performed on an individual scan, an individual dataset, or on multiple datasets using batch processing. Other processing options include creating a two dimensional view of mass and liquid chromatography (LC) elution time features, generating spectrum files for tandem MS data, creating total intensity chromatograms, and visualizing theoretical peptide profiles. Application of Decon2LS to deisotope different datasets obtained across different instruments yielded a high number of features that can be used to identify and quantify peptides in the biological sample. Decon2LS is an efficient software package for discovering and visualizing features in proteomics studies that require automated interpretation of mass spectra. Besides being easy to use, fast, and reliable, Decon2LS is also open-source, which allows developers in the proteomics and bioinformatics communities to reuse and refine the algorithms to meet individual needs. Decon2LS source code, installer, and tutorials may be downloaded free of charge at http://ncrr.pnl.gov/software/.
SENR/NRPy+: Numerical relativity in singular curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.
2018-03-01
We report on a new open-source, user-friendly numerical relativity code package called SENR/NRPy+. Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy+ provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy+ into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time, in the context of moving puncture black hole evolutions, we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.
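Actual NRPy+ output is not reproduced here; as a hand-written illustration of the kind of fixed-order stencil such a generator emits (a standard fourth-order centered first derivative, applied to a test function):

    #include <math.h>
    #include <stdio.h>

    /* 4th-order centered first derivative: one stencil of the kind a
       generator like NRPy+ emits per tensor component (hand-written here) */
    static double d1_o4(const double *f, int i, double inv_dx) {
        return inv_dx * (-f[i + 2] + 8.0 * f[i + 1]
                         - 8.0 * f[i - 1] + f[i - 2]) / 12.0;
    }

    int main(void) {
        const int n = 64;
        const double dx = 2.0 * M_PI / n;
        double f[64];
        for (int i = 0; i < n; i++) f[i] = sin(i * dx);
        const int i = n / 3;                 /* interior sample point */
        printf("fd = %.8f  exact = %.8f\n",
               d1_o4(f, i, 1.0 / dx), cos(i * dx));
        return 0;
    }

Raising the finite difference order means swapping in a wider stencil with different coefficients, which is exactly the step the paper's convergence study automates through code generation.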
Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang
2016-10-01
The concept of the coding metasurface links physical metamaterial particles to digital codes, and hence makes it possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle, called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be achieved simply by adding two coding matrices modulo the number of coding states. This study demonstrates that scattering patterns calculated directly from the coding pattern using the Fourier transform agree excellently with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible, continuous control of arbitrary directions. This work opens a new route to studying metamaterials from a fully digital perspective, pointing to the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
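A minimal numerical sketch of the scattering-pattern shift principle (array size and codes invented; C with FFTW): with 2-bit coding, each element reflects with a phase equal to its code value times 90 degrees, the far-field pattern is the discrete Fourier transform of the reflection coefficients, and adding a gradient code modulo 4 shifts every beam by a fixed number of harmonics:

    #include <complex.h>
    #include <fftw3.h>
    #include <math.h>
    #include <stdio.h>

    #define N 32

    static void pattern(const int *code, const char *label) {
        fftw_complex in[N], out[N];
        fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
        for (int i = 0; i < N; i++)
            in[i] = cexp(I * (M_PI / 2.0) * code[i]);  /* 2-bit phase */
        fftw_execute(p);
        printf("%s beams at harmonics:", label);
        for (int k = 0; k < N; k++)
            if (cabs(out[k]) > N / 2.0) printf(" %d", k);
        printf("\n");
        fftw_destroy_plan(p);
    }

    int main(void) {
        int base[N], grad[N], mixed[N];
        for (int i = 0; i < N; i++) {
            base[i] = (i % 4 < 2) ? 0 : 2;       /* two symmetric beams */
            grad[i] = i % 4;                     /* phase gradient: tilt */
            mixed[i] = (base[i] + grad[i]) % 4;  /* code "addition"      */
        }
        pattern(base, "base ");
        pattern(grad, "grad ");
        pattern(mixed, "mixed");
        return 0;
    }

Here the base code scatters beams at harmonics 8 and 24; adding the gradient shifts them by 8, to 16 and (wrapping around) 0, the discrete analogue of steering beams to new directions.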
Channon, Sue; Bekkers, Marie-Jet; Sanders, Julia; Cannings-John, Rebecca; Robertson, Laura; Bennert, Kristina; Butler, Christopher; Hood, Kerenza; Robling, Michael
2016-01-01
Motivational Interviewing (MI) is a person-centred counselling approach to behaviour change which is increasingly being used in public health settings, either as a stand-alone approach or in combination with other structured programmes of health promotion. One example of this is the Family Nurse Partnership (FNP), a licensed, preventative programme for first time mothers under the age of 20, delivered by specialist family nurses who are additionally trained in MI. The Building Blocks trial was an individually randomised controlled trial comparing effectiveness of Family Nurse Partnership when added to usual care compared to usual care alone within 18 sites in England. The aim of this process evaluation component of the trial is to determine the extent to which Motivational Interviewing skills taught to Family Nurse Partnership nurses were used in their home visits with clients. Between July 2010 and November 2011, 92 audio-recordings of nurse-client consultations were collected during the 'pregnancy' and 'infancy' phases of the FNP programme. They were analysed using The Motivational Interviewing Treatment Integrity (MITI) coding system. A competent level of overall MI adherent practice according to the MITI criteria for 'global clinician ratings' was apparent in over 70% of the consultations. However, on specific behaviours and the MITI-derived practitioner competency variables, there was a large variation in the percentage of recordings in which "beginner proficiency" levels in MI (as defined by the MITI criteria) were achieved, ranging from 73.9% for the 'MI adherent behaviour' variable in the pregnancy phase to 6.7% for 'percentage of questions coded as open' in the infancy phase. The results suggest that it is possible to deliver a structured programme in an MI-consistent way. However, some of the behaviours regarded as key to MI practice, such as the percentage of questions coded as open, can be more difficult to achieve in such a context. This is an important consideration for those involved in designing effective structured interventions with an MI-informed approach and wanting to maintain fidelity to both MI and the structured programme. Current Controlled Trials ISRCTN23019866, registered 20/4/2009.
A portable approach for PIC on emerging architectures
NASA Astrophysics Data System (ADS)
Decyk, Viktor
2016-03-01
A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that three distinct programming paradigms are needed. They are: low-level vector (SIMD) processing, middle-level shared-memory parallel programming, and high-level distributed-memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran2003 also supports interoperability with C so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
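A toy C sketch of the three levels in one loop (not PICKSC code; the loop body is a stand-in for a particle push): MPI across nodes, OpenMP threads within a node, and SIMD lanes within a core:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1048576

    int main(int argc, char **argv) {
        int provided, rank, nprocs;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        static double x[N];
        double local = 0.0, total = 0.0;

        /* middle level: threads; low level: vector lanes */
        #pragma omp parallel for simd reduction(+:local)
        for (int i = 0; i < N; i++) {
            x[i] = (double)(i % 7);        /* stand-in for a particle push */
            local += x[i];
        }

        /* high level: distributed-memory reduction across ranks */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("%d ranks x %d threads, sum = %.0f\n",
                   nprocs, omp_get_max_threads(), total);
        MPI_Finalize();
        return 0;
    }

Swapping, say, the OpenMP directive for a CUDA kernel changes how the middle and low levels are expressed, but, as the abstract argues, not the algorithm itself.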
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modest, Michael
The effects of radiation in particle-laden flows were the object of the present research. The presence of particles increases optical thickness substantially, making the use of the “optically thin” approximation in most cases a very poor assumption. However, since radiation fluxes peak at intermediate optical thicknesses, overall radiative effects may not necessarily be stronger than in gas combustion. Also, the spectral behavior of particle radiation properties is much more benign, making spectral models simpler (and making the assumption of a gray radiator halfway acceptable, at least for fluidized beds when gas radiation is not large). On the other hand, particles scatter radiation, making the radiative transfer equation (RTE) much more difficult to solve. The research carried out in this project encompassed three general areas: (i) assessment of relevant radiation properties of particle clouds encountered in fluidized bed and pulverized coal combustors, (ii) development of proper spectral models for gas–particulate mixtures for various types of two-phase combustion flows, and (iii) development of a Radiative Transfer Equation (RTE) solution module for such applications. The resulting models were validated against artificial cases since open literature experimental data were not available. The final models are in modular form tailored toward maximum portability, and were incorporated into two research codes: (i) the open-source CFD code OpenFOAM, which we have extensively used in our previous work, and (ii) the open-source multi-phase flow code MFIX, which is maintained by NETL.
The HYPE Open Source Community
NASA Astrophysics Data System (ADS)
Strömbäck, Lena; Arheimer, Berit; Pers, Charlotta; Isberg, Kristina
2013-04-01
The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model (Lindström et al., 2010). It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided into up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. In Sweden, the model is used by water authorities to fulfil the Water Framework Directive and the Marine Strategy Framework Directive. It is used for characterization, forecasts, and scenario analyses. Model data can be downloaded for free from three different HYPE applications: Europe (www.smhi.se/e-hype), Baltic Sea basin (www.smhi.se/balt-hype), and Sweden (vattenweb.smhi.se). The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modelling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code will be delivered frequently. The main objective of the HYPE OSC is to provide public access to a state-of-the-art operational hydrological model and to encourage hydrologic expertise from different parts of the world to contribute to model improvement. HYPE OSC is open to everyone interested in hydrology, hydrological modelling and code development - e.g. scientists, authorities, and consultancies. The HYPE Open Source Community was initiated in November 2011 by a kick-off and workshop with 50 eager participants from twelve different countries. In the beginning of 2013, we will release a new version of the code featuring new and better modularization corresponding to hydrological processes, which will make the code easier to understand and develop further. During 2013 we also plan a new workshop and HYPE course for everyone interested in the community. Lindström, G., Pers, C.P., Rosberg, R., Strömqvist, J., Arheimer, B. 2010. Development and test of the HYPE (Hydrological Predictions for the Environment) model - A water quality model for different spatial scales. Hydrology Research 41(3-4):295-319.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology; the recent gains, however, stem from the emergence of multi-core high-performance computers, so parallel computing is key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages, and provides test applications that show their performance on a typical multi-core high-performance workstation.
Shared Memory Parallelization of an Implicit ADI-type CFD Code
NASA Technical Reports Server (NTRS)
Hauser, Th.; Huang, P. G.
1999-01-01
A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically targeting cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.
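A toy C fragment illustrating the paper's central point about memory access (array sizes and the ADI-like update are invented): the two sweeps do the same arithmetic, but only the first walks memory with unit stride:

    #include <omp.h>
    #include <stdio.h>

    #define NI 2048
    #define NJ 2048
    static double a[NI][NJ], b[NI][NJ];

    int main(void) {
        double t = omp_get_wtime();
        /* cache-friendly: j innermost gives stride-1 access (row-major C) */
        #pragma omp parallel for
        for (int i = 1; i < NI - 1; i++)
            for (int j = 0; j < NJ; j++)
                b[i][j] = 0.5 * (a[i - 1][j] + a[i + 1][j]);
        printf("stride-1 sweep: %.4f s\n", omp_get_wtime() - t);

        t = omp_get_wtime();
        /* cache-hostile: i innermost strides by NJ doubles per access,
           and threads writing adjacent columns invite false sharing */
        #pragma omp parallel for
        for (int j = 0; j < NJ; j++)
            for (int i = 1; i < NI - 1; i++)
                b[i][j] = 0.5 * (a[i - 1][j] + a[i + 1][j]);
        printf("strided sweep:  %.4f s\n", omp_get_wtime() - t);
        return 0;
    }

On a cache-based machine the first sweep is typically several times faster, which is the effect the paper exploits when restructuring the ADI solver.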
Trinity Phase 2 Open Science: CTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggirello, Kevin Patrick; Vogler, Tracy
CTH is an Eulerian hydrocode developed by Sandia National Laboratories (SNL) to solve a wide range of shock wave propagation and material deformation problems. Adaptive mesh refinement is also used to improve efficiency for problems with a wide range of spatial scales. The code has a history of running on a variety of computing platforms ranging from desktops to massively parallel distributed-data systems. For the Trinity Phase 2 Open Science campaign, CTH was used to study mesoscale simulations of the hypervelocity penetration of granular SiC powders. The simulations were compared to experimental data. A scaling study of CTH up to 8192 KNL nodes was also performed, and several improvements were made to the code to improve the scalability.
Testing New Programming Paradigms with NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.
2000-01-01
Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with increasing complexity of real applications. Technologies have been developing to aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are: HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage was applied to several benchmarks, notably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and the absence of an easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with that of their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195MHz) with MIPSpro-f77 compiler 7.2.1 for OpenMP and MPI codes and PGI pghpf-2.4.3 compiler with MPI interface for HPF programs.
Developing a Hypercard-UNIX Interface for Electronic Mail Transfer
1992-06-01
My thanks to Gregg for his support. Many of the comments for the MacTCP version are his. His code is set apart by borders.
on openStack
  put the ... HUES-Modem Version ...
*-*-*-* STACK SCRIPT *-*-*-*
on openStack
  put the seconds into card fld theTime of card interface
  hide menubar
  global ...
  ... "Login" ...
  hide fld receiving
  put empty into cd fld msgname of card theMessage
end openStack
on closeStack
  global logoutme
  put empty into card fld text of ...
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, S R; Bihari, B L; Salari, K
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
Anomalous Upwelling in Nan Wan: July 2008
2009-12-01
... State University (OSU) tidal forcing drives the tidal currents. A global weather forecast model (Navy Operational Global Atmospheric Prediction ...) ... The system derives its open ocean boundary conditions from NRL global NCOM (Navy Coastal Ocean Model) (Rhodes et al. 2002) that operates daily.
A transonic wind tunnel wall interference prediction code
NASA Technical Reports Server (NTRS)
Phillips, Pamela S.; Waggoner, Edgar G.
1988-01-01
A small disturbance transonic wall interference prediction code has been developed that is capable of modeling solid, open, perforated, and slotted walls as well as slotted and solid walls with viscous effects. This code was developed by modifying the outer boundary conditions of an existing aerodynamic wing-body-pod-pylon-winglet analysis code. The boundary conditions are presented in the form of equations which simulate the flow at the wall, as well as finite difference approximations to the equations. Comparisons are presented at transonic flow conditions between computational results and experimental data for a wing alone in a solid wall wind tunnel and wing-body configurations in both slotted and solid wind tunnels.
MILC Code Performance on High End CPU and GPU Supercomputer Clusters
NASA Astrophysics Data System (ADS)
DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug
2018-03-01
With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, deeper memory hierarchies, and greater programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and the gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
Aeroacoustic Simulation of a Nose Landing Gear in an Open Jet Facility Using FUN3D
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.; Carlson, Jan-Renee
2012-01-01
Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions compare favorably with the measured data. Unsteady flowfield data obtained from the FUN3D code are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the sound pressure levels at microphones placed in the far field. Significant improvement in predicted noise levels is obtained when the flowfield data from the open-jet UFAFF simulations are used as compared to the case using flowfield data from the closed-wall BART configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Richard A.; Brown, Joseph M.; Colby, Sean M.
ATLAS (Automatic Tool for Local Assembly Structures) is a comprehensive multiomics data analysis pipeline that is massively parallel and scalable. ATLAS contains a modular analysis pipeline for assembly, annotation, quantification, and genome binning of metagenomics and metatranscriptomics data, and a framework for reference metaproteomic database construction. ATLAS transforms raw sequence data into functional and taxonomic data at the microbial population level and provides genome-centric resolution through genome binning. ATLAS provides robust taxonomy based on majority voting of protein-coding open reading frames rolled up at the contig level using modified lowest common ancestor (LCA) analysis. ATLAS is user-friendly, easy to install through bioconda, maintained as open source on GitHub, and implemented in Snakemake for modular, customizable workflows.
NASA Astrophysics Data System (ADS)
Hasenkopf, C. A.
2017-12-01
Increasingly, open data, open-source projects are unearthing rich datasets and tools that were previously impossible for more traditional avenues to generate. These projects are possible, in part, because of the emergence of online collaborative and code-sharing tools, decreasing costs of cloud-based services to fetch, store, and serve data, and increasing interest of individuals in contributing their time and skills to 'open projects.' While such projects have generated palpable enthusiasm from many sectors, many of them face uncharted paths for sustainability, visibility, and acceptance. Our project, OpenAQ, is an example of an open-source, open data community that is currently forging its own uncharted path. OpenAQ is an open air quality data platform that aggregates and universally formats government and research-grade air quality data from 50 countries across the world. To date, we make available more than 76 million air quality (PM2.5, PM10, SO2, NO2, O3, CO, and black carbon) data points through an open Application Programming Interface (API) and a user-customizable download interface at https://openaq.org. The goal of the platform is to enable an ecosystem of users to advance air pollution efforts from science to policy to the private sector. The platform is also an open-source project (https://github.com/openaq) and has only been made possible through the coding and data contributions of individuals around the world. In our first two years of existence, we have seen requests for data to our API skyrocket to more than 6 million data points per month, and use cases ranging from ingesting our aggregated data into real-time wildfire models, to building open-source statistical packages (e.g., ropenaq and py-openaq) on top of the platform, to creating public-friendly apps and chatbots. We will share a whirlwind trip through our evolution and the many lessons learned so far related to platform structure, community engagement, organizational model type, and sustainability.
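A minimal sketch of pulling measurements from the platform in Python with requests; the endpoint, query parameters, and response fields follow the v1 API as commonly documented in this period and are assumptions to be checked against https://docs.openaq.org.

import requests

# Fetch a few recent PM2.5 measurements (endpoint and field names assumed from
# the v1 API of the period; verify against the current OpenAQ documentation).
resp = requests.get(
    "https://api.openaq.org/v1/measurements",
    params={"country": "IN", "parameter": "pm25", "limit": 5},
    timeout=30,
)
resp.raise_for_status()
for m in resp.json()["results"]:
    print(m["location"], m["date"]["utc"], m["value"], m["unit"])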
ERIC Educational Resources Information Center
Birel, Firat Kiyas
2016-01-01
Problem Statement: Dressing for school has been intensely disputed and has led to periodic changes in dress codes since the foundation of the Turkish republic. Practitioners have tried to introduce new school dress-code practices to redress former dress-code issues involving mandatory dress standards for both students…
A Guide for Recertification of Ground Based Pressure Vessels and Liquid Holding Tanks
1987-12-15
Boiler and Pressure Vessel Code, Section ... Requirements. 202: Calculate vessel MAWP using ASME Boiler and Pressure Vessel Code Section VIII, Division 1. 203: Assess vessel MAWP using ASME Boiler and Pressure Vessel Code Section ... Engineers (ASME) Boiler and Pressure Vessel Code (B&PV) Section VIII, Division 1, or other applicable standard. This activity involves the ...
GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Rosotti, G. P.; Booth, R. A.
2018-01-01
GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation, and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI, and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results, and discuss the planned future development. The code is freely available as an open-source project on the code-hosting website GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free-energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time-consuming due to the need to provide code for the analytical derivatives of all functions with respect to the atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail by implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
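As a toy illustration of approach (a), the sketch below defines a CV symbolically and emits C expressions for its derivatives with SymPy. A simple interatomic distance stands in for the paper's worked example (the local radius of curvature), so this is a sketch of the technique, not the article's code.

import sympy as sp

# Define a toy CV (distance between two atoms) symbolically.
x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2")
cv = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

# Differentiate with respect to each coordinate and generate C code that a
# PLUMED-style CV class could paste into its derivative calculation.
for coord in (x1, y1, z1, x2, y2, z2):
    dcv = sp.simplify(sp.diff(cv, coord))
    print(sp.ccode(dcv, assign_to=f"d_{coord}"))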
Spatial panel analyses of alcohol outlets and motor vehicle crashes in California: 1999–2008
Ponicki, William R.; Gruenewald, Paul J.; Remer, Lillian G.
2014-01-01
Although past research has linked alcohol outlet density to higher rates of drinking and many related social problems, there is conflicting evidence of density’s association with traffic crashes. An abundance of local alcohol outlets simultaneously encourages drinking and reduces driving distances required to obtain alcohol, leading to an indeterminate expected impact on alcohol-involved crash risk. This study separately investigates the effects of outlet density on (1) the risk of injury crashes relative to population and (2) the likelihood that any given crash is alcohol-involved, as indicated by police reports and single-vehicle nighttime status of crashes. Alcohol outlet density effects are estimated using Bayesian misalignment Poisson analyses of all California ZIP codes over the years 1999–2008. These misalignment models allow panel analysis of ZIP-code data despite frequent redefinition of postal-code boundaries, while also controlling for overdispersion and the effects of spatial autocorrelation. Because models control for overall retail density, estimated alcohol-outlet associations represent the extra effect of retail establishments selling alcohol. The results indicate a number of statistically well-supported associations between retail density and crash behavior, but the implied effects on crash risks are relatively small. Alcohol-serving restaurants have a greater impact on overall crash risks than on the likelihood that those crashes involve alcohol, whereas bars primarily affect the odds that crashes are alcohol-involved. Off-premise outlet density is negatively associated with risks of both crashes and alcohol involvement, while the presence of a tribal casino in a ZIP code is linked to higher odds of police-reported drinking involvement. Alcohol outlets in a given area are found to influence crash risks both locally and in adjacent ZIP codes, and significant spatial autocorrelation also suggests important relationships across geographical units. These results suggest that each type of alcohol outlet can have differing impacts on risks of crashing as well as the alcohol involvement of those crashes. PMID:23537623
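Schematically, and with notation assumed here rather than taken from the paper, a Poisson panel model of this kind has the log-link form

$$ y_{it} \sim \mathrm{Poisson}(\mu_{it}), \qquad \log \mu_{it} = \log P_{it} + \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + \phi_i + \varepsilon_{it}, $$

where $y_{it}$ is the crash count in ZIP code $i$ and year $t$, $P_{it}$ is the population exposure offset, $\mathbf{x}_{it}$ collects the alcohol-outlet and overall retail densities, $\phi_i$ is a spatially autocorrelated random effect, and $\varepsilon_{it}$ absorbs overdispersion.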
RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2012-11-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes for achieving better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) code system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2013-09-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes for achieving better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) code system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
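The interleaving idea itself is simple to sketch: bits are written row-wise into a block and read out column-wise, so an error burst (e.g., a brief PMD-induced outage between scrambler states) is scattered across many codewords instead of overwhelming one. The sketch below is a generic block interleaver illustrating the principle, not the authors' scheme.

import numpy as np

def interleave(bits, rows, cols):
    # Write row-wise, read column-wise; bits must have length rows*cols.
    return bits.reshape(rows, cols).T.reshape(-1)

def deinterleave(bits, rows, cols):
    # Invert the row-write/column-read permutation.
    return bits.reshape(cols, rows).T.reshape(-1)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 24)
tx = interleave(data, 4, 6)
assert np.array_equal(deinterleave(tx, 4, 6), data)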
1981-10-01
unique alphanumeric designation assigned by the performing organization or provided by the sponsoring organization in accordance with American ... for cataloging. (b) Identifiers and Open-Ended Terms: use identifiers for project names, code names, equipment designators, etc. Use open-ended ... spool. Note: these components are designed to function together or with the BASS alone, if internal control of job processing is not a requirement at a ...
ERIC Educational Resources Information Center
Williams van Rooij, Shahron
2010-01-01
This paper contrasts the arguments offered in the literature advocating the adoption of open source software (OSS)--software delivered with its source code--for teaching and learning applications, with the reality of limited enterprise-wide deployment of those applications in U.S. higher education. Drawing on the fields of organizational…
78 FR 52785 - Meeting of the Judicial Conference Committee on Rules of Practice and Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-26
... Appellate Procedure. ACTION: Notice of Open Meeting. SUMMARY: The Advisory Committee on Rules of Appellate Procedure will hold a two-day meeting. The meeting will be open to public observation but not participation..., Secretary and Chief Rules Officer. [FR Doc. 2013-20670 Filed 8-23-13; 8:45 am] BILLING CODE 2210-55-P ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Practice and Procedure. ACTION: Notice of open meeting. SUMMARY: The Committee on Rules of Practice and Procedure will hold a two-day meeting. The meeting will be open to public observation but not participation.... Robinson, Deputy Rules Officer and Counsel. [FR Doc. 2012-4650 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55...
78 FR 53159 - Meeting of the Judicial Conference Committee on Rules of Practice and Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-28
... Criminal Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Criminal Procedure will hold a one-day meeting. The meeting will be open to public observation but not participation.... Rose, Secretary and Chief Rules Officer . [FR Doc. 2013-20980 Filed 8-27-13; 8:45 am] BILLING CODE 2210...
77 FR 12077 - Meeting of the Judicial Conference Advisory Committee on Rules of Appellate Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Rules of Appellate Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Appellate Procedure will hold a two-day meeting. The meeting will be open to public observation.... Robinson, Deputy Rules Officer and Counsel. [FR Doc. 2012-4636 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55...
77 FR 12078 - Meeting of the Judicial Conference Advisory Committee on Rules of Evidence
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Evidence. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Evidence will hold a two- day meeting. The meeting will be open to public observation but not participation. DATES: April 3... Committee Deputy and Counsel. [FR Doc. 2012-4664 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55-P ...
78 FR 21977 - Meeting of the Judicial Conference Committee on Rules of Practice and Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-12
... Procedure. ACTION: Notice of Open Meeting. SUMMARY: The Committee on Rules of Practice and Procedure will hold a two-day meeting. The meeting will be open to public observation but not participation. DATE.... Rose, Rules Committee Secretary. [FR Doc. 2013-08535 Filed 4-11-13; 8:45 am] BILLING CODE 2210-55-P ...
77 FR 12077 - Meeting of The Judicial Conference Advisory Committee on Rules of Civil Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Civil Procedure. ACTION: Notice of open meeting. SUMMARY: The Advisory Committee on Rules of Civil Procedure will hold a two-day meeting. The meeting will be open to public observation but not participation.... Robinson, Deputy Rules Officer and Counsel. [FR Doc. 2012-4630 Filed 2-27-12; 8:45 am] BILLING CODE 2210-55...
Skoblikow, Nikolai E; Zimin, Andrei A
2016-05-01
The hypothesis of direct coding, assuming the direct contact of pairs of coding molecules with amino acid side chains in hollow unit cells (cellules) of a regular crystal-structure mineral is proposed. The coding nucleobase-containing molecules in each cellule (named "lithocodon") partially shield each other; the remaining free space determines the stereochemical character of the filling side chain. Apatite-group minerals are considered as the most preferable for this type of coding (named "lithocoding"). A scheme of the cellule with certain stereometric parameters, providing for the isomeric selection of contacting molecules is proposed. We modelled the filling of cellules with molecules involved in direct coding, with the possibility of coding by their single combination for a group of stereochemically similar amino acids. The regular ordered arrangement of cellules enables the polymerization of amino acids and nucleobase-containing molecules in the same direction (named "lithotranslation") preventing the shift of coding. A table of the presumed "LithoCode" (possible and optimal lithocodon assignments for abiogenically synthesized α-amino acids involved in lithocoding and lithotranslation) is proposed. The magmatic nature of the mineral, abiogenic synthesis of organic molecules and polymerization events are considered within the framework of the proposed "volcanic scenario".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salinger, Andrew; Phipps, Eric; Ostien, Jakob
2016-01-13
The Albany code is a general-purpose finite element code for solving partial differential equations (PDEs). Albany is a research code that demonstrates how a PDE code can be built by interfacing many of the open-source software libraries that are released under Sandia's Trilinos project. Part of the mission of Albany is to be a testbed for new Trilinos libraries, to refine their methods, usability, and interfaces. Albany includes hooks to optimization and uncertainty quantification algorithms, including those in Trilinos as well as those in the Dakota toolkit. Because of this, Albany is a desirable starting point for new code development efforts that wish to make heavy use of Trilinos. Albany is both a framework and the host for specific finite element applications. These applications have project names and can be controlled by configuration options when the code is compiled, but all are developed and released as part of the single Albany code base. These include the LCM, QCAD, FELIX, Aeras, and ATO applications.
An Evolving Worldview: Making Open Source Easy
NASA Astrophysics Data System (ADS)
Rice, Z.
2017-12-01
NASA Worldview is an interactive interface for browsing full-resolution, global satellite imagery. Worldview supports an open data policy so that academia, private industry, and the general public can use NASA's satellite data to address Earth-science-related issues. Worldview was open sourced in 2014. By shifting to an open-source approach, the Worldview application has evolved to better serve end users. Project developers are able to have discussions with end users and community developers to understand issues and develop new features. Community developers are able to track upcoming features, collaborate on them, and make their own contributions. Developers who discover issues are able to address them and submit a fix. This reduces the time it takes for a project developer to reproduce an issue or develop a new feature. Getting new developers to contribute to the project has been one of the most important and difficult aspects of open sourcing Worldview. After witnessing potential outside contributors struggle, we have focused on making the installation of Worldview simple, to reduce the initial learning curve and make contributing code easy. One way we have addressed this is through a simplified setup process: our setup documentation includes a set of prerequisites and a set of straightforward commands to clone, configure, install, and run. This presentation will emphasize our focus on simplifying and standardizing Worldview's open-source code so that more people are able to contribute. The more people who contribute, the better the application will become over time.
NASA Astrophysics Data System (ADS)
Schildhauer, M.; Jones, M. B.; Bolker, B.; Lenhardt, W. C.; Hampton, S. E.; Idaszak, R.; Rebich Hespanha, S.; Ahalt, S.; Christopherson, L.
2014-12-01
Continuing advances in computational capabilities, access to Big Data, and virtual collaboration technologies are creating exciting new opportunities for accomplishing Earth science research at finer resolutions, with much broader scope, using powerful modeling and analytical approaches that were unachievable just a few years ago. Yet there is a perceptible lag in the ability of the research community to capitalize on these new possibilities, owing to a lack of the relevant skill sets, especially for multi-disciplinary and integrative investigations that involve active collaboration. UC Santa Barbara's National Center for Ecological Analysis and Synthesis (NCEAS) and the University of North Carolina's Renaissance Computing Institute (RENCI) were recipients of NSF OCI S2I2 "Conceptualization awards," charged with helping define the needs of the research community for enabling science and education through "sustained software infrastructure." Over the course of our activities, a consistent request from Earth scientists was for "better training in software that enables more effective, reproducible research." This community-based feedback led to the creation of an "Open Science for Synthesis" Institute: an innovative, three-week, bi-coastal training program for early-career researchers. We provided a mix of lectures, hands-on exercises, and working-group experience on topics including data discovery and preservation; code creation, management, sharing, and versioning; scientific workflow documentation and reproducibility; statistical and machine modeling techniques; virtual collaboration mechanisms; and methods for communicating scientific results. All technologies and quantitative tools presented were suitable for advancing open, collaborative, and reproducible synthesis research. In this talk, we will report on the lessons learned from running this ambitious training program, which involved coordinating classrooms at two remote sites and included developing original synthesis research activities as part of the course. We also report on the feedback provided by participants as to the learning approaches and topical issues they found most engaging, and why.
OpenSQUID: A Flexible Open-Source Software Framework for the Control of SQUID Electronics
Jaeckel, Felix T.; Lafler, Randy J.; Boyd, S. T. P.
2013-02-06
Commercially available computer-controlled SQUID electronics are usually delivered with software providing a basic user interface for adjustment of SQUID tuning parameters, such as bias current, flux offset, and feedback-loop settings. However, in a research context it is often useful to be able to modify this code and/or to have full control over all of these parameters from researcher-written software. In the case of the STAR Cryoelectronics PCI/PFL family of SQUID control electronics, the supplied software contains modules for automatic tuning and noise characterization but does not provide an interface for user code. On the other hand, the Magnicon SQUIDViewer software package includes a public application programming interface (API) but lacks auto-tuning and noise-characterization features. To overcome these and other limitations, we are developing an "open-source" framework for controlling SQUID electronics that should provide maximal interoperability with user software, a unified user interface for electronics from different manufacturers, and a flexible platform for the rapid development of customized SQUID auto-tuning and other advanced features. We have completed a first implementation for the STAR Cryoelectronics hardware and have made the source code for this ongoing project available to the research community on SourceForge (http://opensquid.sourceforge.net) under the GNU public license.
Development of an expert based ICD-9-CM and ICD-10-CM map to AIS 2005 update 2008.
Loftis, Kathryn L; Price, Janet P; Gillich, Patrick J; Cookman, Kathy J; Brammer, Amy L; St Germain, Trish; Barnes, Jo; Graymire, Vickie; Nayduch, Donna A; Read-Allsopp, Christine; Baus, Katherine; Stanley, Patsye A; Brennan, Maureen
2016-09-01
This article describes how maps were developed from the clinical modifications of the 9th and 10th revisions of the International Classification of Diseases (ICD) to the Abbreviated Injury Scale 2005 Update 2008 (AIS08). The development of the mapping methodology is described, with discussion of the major assumptions used in the process to map ICD codes to AIS severities. There were many intricacies to developing the maps, because the 2 coding systems, ICD and AIS, were developed for different purposes and contain unique classification structures to meet these purposes. Experts in ICD and AIS analyzed the rules and coding guidelines of both injury coding schemes to develop rules for mapping ICD injury codes to the AIS08. This involved subject-matter expertise, detailed knowledge of anatomy, and an in-depth understanding of injury terms and definitions as applied in both taxonomies. The official ICD-9-CM and ICD-10-CM versions (injury sections) were mapped to the AIS08 codes and severities, following the rules outlined in each coding manual. The panel of experts was composed of coders certified in ICD and/or AIS from around the world. In the process of developing the map from ICD to AIS, the experts created rules to address issues with the differences in coding guidelines between the 2 schemas and assure a consistent approach to all codes. Over 19,000 ICD codes were analyzed and maps were generated for each code to AIS08 chapters, AIS08 severities, and Injury Severity Score (ISS) body regions. After completion of the maps, 14,101 (74%) of the eligible 19,012 injury-related ICD-9-CM and ICD-10-CM codes were assigned valid AIS08 severity scores between 1 and 6. The remaining 4,911 codes were assigned an AIS08 of 9 (unknown) or were determined to be nonmappable because the ICD description lacked sufficient qualifying information for determining severity according to AIS rules. There were also 15,214 (80%) ICD codes mapped to AIS08 chapter and ISS body region, which allow for ISS calculations for patient data sets. This mapping between ICD and AIS provides a comprehensive, expert-designed solution for analysts to bridge the data gap between the injury descriptions provided in hospital codes (ICD-9-CM, ICD-10-CM) and injury severity codes (AIS08). By applying consistent rules from both the ICD and AIS taxonomies, the expert panel created these definitive maps, which are the only ones endorsed by the Association for the Advancement of Automotive Medicine (AAAM). Initial validation upheld the quality of these maps for the estimation of AIS severity, but future work should include verification of these maps for MAIS and ISS estimations with large data sets. These ICD-AIS maps will support data analysis from databases with injury information classified in these 2 different systems and open new doors for the investigation of injury from traumatic events using large injury data sets.
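To make concrete what these maps enable, the sketch below computes an Injury Severity Score from a patient's ICD codes via an AIS severity/body-region lookup: ISS is the sum of squares of the highest AIS severities in the three most severely injured body regions, with any AIS 6 setting ISS to its maximum of 75. The tiny mapping table and its ICD codes are hypothetical, for illustration; real use would load the full expert-built map, and codes mapping to AIS 9 (unknown) would be excluded.

# Hypothetical (ICD code) -> (AIS severity, ISS body region) entries:
ICD_TO_AIS = {
    "S06.0X0A": (2, "head"),
    "S22.39XA": (3, "chest"),
    "S82.101A": (3, "extremity"),
    "S36.430A": (4, "abdomen"),
}

def iss(icd_codes):
    worst = {}  # highest AIS severity seen per ISS body region
    for code in icd_codes:
        sev, region = ICD_TO_AIS[code]
        if sev == 6:          # any AIS 6 sets ISS to its maximum of 75
            return 75
        worst[region] = max(worst.get(region, 0), sev)
    top3 = sorted(worst.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

print(iss(["S06.0X0A", "S22.39XA", "S82.101A", "S36.430A"]))  # 4^2 + 3^2 + 3^2 = 34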
Chaste: An Open Source C++ Library for Computational Physiology and Biology
Mirams, Gary R.; Arthurs, Christopher J.; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Corrias, Alberto; Davit, Yohan; Dunn, Sara-Jane; Fletcher, Alexander G.; Harvey, Daniel G.; Marsh, Megan E.; Osborne, James M.; Pathmanathan, Pras; Pitt-Francis, Joe; Southern, James; Zemzemi, Nejib; Gavaghan, David J.
2013-01-01
Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials. PMID:23516352