Science.gov

Sample records for actual source code

  1. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  2. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A design is developed for a CSI system, with equations derived for the limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
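
    The tilted-edge step described above follows a standard recipe: the line spread function (LSF) is the derivative of the edge spread function (ESF), and the MTF is the normalized magnitude of the LSF's Fourier transform. The Python sketch below is a generic illustration of that recipe, not the authors' McStas analysis code; all names are ours.

        import numpy as np

        # Generic sketch: differentiate the edge spread function (ESF) to get
        # the line spread function (LSF), then take the normalized magnitude
        # of its Fourier transform to obtain the MTF.
        def mtf_from_esf(esf: np.ndarray) -> np.ndarray:
            lsf = np.gradient(esf)          # LSF = d(ESF)/dx
            mtf = np.abs(np.fft.rfft(lsf))  # magnitude spectrum of the LSF
            return mtf / mtf[0]             # normalize to 1 at zero frequency

        # Example: a smooth synthetic step edge, 256 samples wide.
        x = np.arange(256)
        esf = 0.5 * (1 + np.tanh((x - 128) / 2.0))
        print(mtf_from_esf(esf)[:5])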

  3. Authorship Attribution of Source Code

    ERIC Educational Resources Information Center

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  4. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

    Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two-terminal transform codes for any given rate pair. The main idea is to apply source-splitting using orthogonal transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach, however, requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.

  5. Astrophysics Source Code Library Enhancements

    NASA Astrophysics Data System (ADS)

    Hanisch, R. J.; Allen, A.; Berriman, G. B.; DuPrie, K.; Mink, J.; Nemiroff, R. J.; Schmidt, J.; Shamir, L.; Shortridge, K.; Taylor, M.; Teuben, P. J.; Wallin, J.

    2015-09-01

    The Astrophysics Source Code Library (ASCL) is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This paper covers these recent changes to the ASCL.

  6. FORTRAN Static Source Code Analyzer

    NASA Technical Reports Server (NTRS)

    Merwarth, P.

    1982-01-01

    FORTRAN Static Source Code Analyzer program (SAP) automatically gathers and reports statistics on occurrences of statements and structures within FORTRAN program. Provisions are made for weighting each statistic, providing user with overall figure of complexity. Statistics, as well as figures of complexity, are gathered on module-by-module basis. Overall summed statistics are accumulated for complete input source file.
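
    The statistics-and-complexity idea is simple to illustrate. The following Python sketch is a hypothetical analogue of what SAP does, not the NTRS FORTRAN tool itself; the statement patterns and weights are invented for illustration.

        import re
        from collections import Counter

        # Hypothetical analogue of SAP: count occurrences of a few FORTRAN
        # statement types in a module and combine them into a weighted
        # "figure of complexity". Patterns and weights are illustrative only.
        STATEMENT_PATTERNS = {
            "IF": re.compile(r"^\s*IF\b", re.IGNORECASE),
            "DO": re.compile(r"^\s*DO\b", re.IGNORECASE),
            "GOTO": re.compile(r"^\s*GO\s*TO\b", re.IGNORECASE),
            "CALL": re.compile(r"^\s*CALL\b", re.IGNORECASE),
        }
        WEIGHTS = {"IF": 2.0, "DO": 1.5, "GOTO": 3.0, "CALL": 1.0}  # assumed

        def analyze(source: str):
            counts = Counter()
            for line in source.splitlines():
                for name, pattern in STATEMENT_PATTERNS.items():
                    if pattern.match(line):
                        counts[name] += 1
            complexity = sum(WEIGHTS[k] * v for k, v in counts.items())
            return counts, complexity

        module = "      DO 10 I = 1, N\n      IF (A(I) .GT. 0) B = B + A(I)\n10    CONTINUE\n"
        print(analyze(module))  # per-module counts and weighted complexity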

  7. Source-Code-Analyzing Program

    NASA Technical Reports Server (NTRS)

    Manteufel, Thomas; Jun, Linda

    1991-01-01

    FORTRAN Static Source Code Analyzer program, SAP, developed to gather statistics automatically on occurrences of statements and structures within FORTRAN program and provide for reporting of those statistics. Provisions made to weight each statistic and provide overall figure of complexity. Statistics, as well as figures of complexity, gathered on module-by-module basis. Overall summed statistics also accumulated for complete input source file. Written in FORTRAN IV.

  8. FORTRAN Static Source Code Analyzer

    NASA Technical Reports Server (NTRS)

    Merwarth, P.

    1984-01-01

    FORTRAN Static Source Code Analyzer program, SAP (DEC VAX version), automatically gathers statistics on occurrences of statements and structures within FORTRAN program and provides reports of those statistics. Provisions made for weighting each statistic and provide an overall figure of complexity.

  9. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
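
    The core mechanism is easy to see in miniature: compression computes the syndrome s = Hx (mod 2) of the source block x under the parity-check matrix H of an error-correcting code, and decompression returns the lowest-weight sequence with that syndrome. The Python sketch below uses the (7,4) Hamming code as a toy example of our own; recovery is exact only when the block contains at most one 1, which illustrates why sparse (low-entropy) sources are the intended regime.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code: column j is the
        # binary representation of j, so the syndrome of a single-1 pattern
        # directly encodes the position of that 1.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def compress(x):
            return H @ x % 2  # 7 source bits -> 3 compressed bits

        def decompress(s):
            # Coset leader: the syndrome, read as a binary number, gives the
            # position of the single 1 (0 means the all-zero block).
            x = np.zeros(7, dtype=int)
            pos = s[0] + 2 * s[1] + 4 * s[2]
            if pos:
                x[pos - 1] = 1
            return x

        x = np.array([0, 0, 0, 0, 1, 0, 0])  # sparse source block
        assert np.array_equal(decompress(compress(x)), x)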

  10. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes and continues to grow; in 2011, it added on average 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  11. The Astrophysics Source Code Library: An Update

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010; it now has over 300 codes and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) added on average 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL, examines its current state and benefits, describes the means of and requirements for including codes, and outlines future plans.

  12. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.

  13. Making your code citable with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feed for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  14. Source Code Plagiarism--A Student Perspective

    ERIC Educational Resources Information Center

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  15. Astrophysics Source Code Library: Incite to Cite!

    NASA Astrophysics Data System (ADS)

    DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallin, J. F.

    2014-05-01

    The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement, with presentations at various conferences, numerous blog posts, and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.

  16. Maximum a posteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
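
    The decoder idea, exploiting residual redundancy in the source coder output to correct channel errors, can be sketched generically. The following Python sketch is our own illustration, not the authors' design: it runs a Viterbi search for the MAP bit sequence under an assumed first-order Markov model of the source coder output observed through a binary symmetric channel; the transition matrix and crossover probability are invented.

        import numpy as np

        P_TRANS = np.array([[0.9, 0.1],   # assumed P(x_t = j | x_{t-1} = i)
                            [0.1, 0.9]])
        EPS = 0.2                          # assumed BSC crossover probability

        def map_decode(y):
            """Viterbi search for argmax_x log P(y | x) + log P(x)."""
            T, log_trans = len(y), np.log(P_TRANS)
            def log_lik(bit, obs):
                return np.log(1 - EPS) if bit == obs else np.log(EPS)
            delta = np.array([np.log(0.5) + log_lik(b, y[0]) for b in (0, 1)])
            back = np.zeros((T, 2), dtype=int)
            for t in range(1, T):
                new = np.empty(2)
                for j in (0, 1):
                    scores = delta + log_trans[:, j]
                    back[t, j] = int(np.argmax(scores))
                    new[j] = scores[back[t, j]] + log_lik(j, y[t])
                delta = new
            path = [int(np.argmax(delta))]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        # The isolated 1 is treated as a likely channel error and smoothed out.
        print(map_decode([0, 0, 1, 0, 0, 1, 1, 1]))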

  17. Astrophysics Source Code Library -- Now even better!

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Schmidt, Judy; Berriman, Bruce; DuPrie, Kimberly; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2015-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. Indexed by ADS, it now contains nearly 1,000 codes and with recent major changes, is better than ever! The resource has a new infrastructure that offers greater flexibility and functionality for users, including an easier submission process, better browsing, one-click author search, and an RSS feed for news. The new database structure is easier to maintain and offers new possibilities for collaboration. Come see what we've done!

  18. Source code management with version control software

    NASA Astrophysics Data System (ADS)

    Arraki, Kenza S.

    2016-01-01

    Developing and maintaining software is an important part of astronomy research. As time progresses, projects can move in unexpected directions or simply last longer than expected. Making changes to software can quickly lead to many different versions of the code, the need to recover a lost earlier version, and problems sharing updated code with others. Version control software makes it simple to update and collaboratively edit source code. This short talk highlights the version control systems svn, git, and hg for use with local and remote software repositories. In addition, I will touch on GitHub and BitBucket as excellent ways to share your code using an online interface.

  19. Using the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, P. J.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Wallin, J. F.

    2013-01-01

    The Astrophysics Source Code Library (ASCL) is a free on-line registry of source codes that are of interest to astrophysicists; with over 500 codes, it is the largest collection of scientist-written astrophysics programs in existence. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. An advisory committee formed in 2011 provides input and guides the development and expansion of the ASCL, and since January 2012, all accepted ASCL entries are indexed by ADS. Though software is increasingly important for the advancement of science in astrophysics, these methods are still often hidden from view or difficult to find. The ASCL (ascl.net/) seeks to improve the transparency and reproducibility of research by making these vital methods discoverable, and to provide recognition and incentive to those who write and release programs useful for astrophysics research. This poster provides a description of the ASCL, an update on recent additions, and the changes in the astrophysics community we are starting to see because of the ASCL.

  20. Multimedia Multicast Based on Multiterminal Source Coding

    NASA Astrophysics Data System (ADS)

    Aghagolzadeh, Ali; Nooshyar, Mahdi; Rabiee, Hamid R.; Mikaili, Elhameh

    Multimedia multicast with two servers based on multiterminal source coding has been studied in previous research. Because the underlying approach offers practical code design for more than two correlated sources in the IMTSC/CEO setup, this paper extends the framework of Slepian-Wolf coded quantization and presents a practical code design for IMTSC/CEO with more than two encoders. The multicast system based on the IMTSC/CEO is then applied to cases with three, four, and five servers. Since the underlying code design approach for the IMTSC/CEO problem can be applied to an arbitrary number of active encoders, the proposed MMBMSC method can easily be used with an arbitrary number of servers. Explicit expressions for the expected distortion with an arbitrary number of servers in the MMBMSC system are also presented. Experimental results with data, image, and video signals show the superiority of our proposed method over the conventional solutions and over the MMBMSC system with two servers.

  1. Magnified Neutron Radiography with Coded Sources

    NASA Astrophysics Data System (ADS)

    Bingham, P.; Santos-Villalobos, H.; Lavrik, N.; Gregor, J.; Bilheux, H.

    A coded source imaging (CSI) system has been developed and tested at the High Flux Isotope Reactor (HFIR) CG-1D beamline at Oak Ridge National Laboratory (ORNL). The goal of this system is to use magnification to improve the resolution of the imaging system beyond the detector resolution. For this system, coded masks have been manufactured at 10 μm resolution with 9 μm thick Gd patterned on Si wafers, a system-model-based iterative reconstruction code has been developed, and experiments have been performed at resolutions of 200 μm, 100 μm, 50 μm, 20 μm, and 10 μm with the object placed more than 5.5 m from the detector, giving magnifications of up to 25 times.

  2. Testing and Troubleshooting Automatically Generated Source Code

    NASA Technical Reports Server (NTRS)

    Henry, Joel

    1998-01-01

    Tools allowing engineers to model the real-time behavior of systems that control many types of NASA systems have become widespread. These tools automatically generate source code that is compiled, linked, then downloaded into computers controlling everything from wind tunnels to space flight systems. These tools save hundreds of hours of software development time and allow engineers with thorough application area knowledge but little software development experience to generate software to control the systems they use daily. These systems are verified and validated by simulating the real-time models, and by other techniques that focus on the model or the hardware. The automatically generated source code is typically not subjected to rigorous testing using conventional software testing techniques. Given the criticality and safety issues surrounding these systems, the application of conventional and new software testing and troubleshooting techniques to the automatically generated source code will improve the reliability of the resulting systems.

  3. Iterative Reconstruction of Coded Source Neutron Radiographs

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2012-01-01

    Use of a coded source facilitates high-resolution neutron imaging but requires that the radiographic data be deconvolved. In this paper, we compare direct deconvolution with two different iterative algorithms, namely, one based on direct deconvolution embedded in an MLE-like framework and one based on a geometric model of the neutron beam and a least squares formulation of the inverse imaging problem.

  4. Documentation generator application for VHDL source codes

    NASA Astrophysics Data System (ADS)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-06-01

    The UML, a complex system modeling and description technology, has recently been expanding its uses in the formalization of and algorithmic approach to systems such as multiprocessor photonic, optoelectronic, and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes the realization of an application for documenting VHDL source codes. A novel solution is presented, based on the Doxygen program, which is available under a free license with accessible source code. The supporting tools used for parser building were Bison and Flex. Practical results from the documentation generator are presented; the program was applied to exemplary VHDL codes. The documentation generator application is used in the design of large optoelectronic and electronic measurement and control systems. The paper is one of three parts describing the components of the documentation generator for photonic and electronic systems: the concept, the MatLab application, and the VHDL application. This is part three, which describes the VHDL application. VHDL is used for behavioral description of the optoelectronic system.

  5. Iterative Reconstruction of Coded Source Neutron Radiographs

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2013-01-01

    Use of a coded source facilitates high-resolution neutron imaging through magnification but requires that the radiographic data be deconvolved. A comparison of direct deconvolution with two different iterative algorithms has been performed. One iterative algorithm is based on a maximum likelihood estimation (MLE)-like framework and the second is based on a geometric model of the neutron beam within a least squares formulation of the inverse imaging problem. Simulated data for both uniform and Gaussian-shaped source distributions were used for testing to understand the impact of non-uniformities present in neutron beam distributions on the reconstructed images. Results indicate that the model-based reconstruction method matches the resolution of, and improves on the contrast of, convolution methods in the presence of non-uniform sources. Additionally, the model-based iterative algorithm provides direct calculation of quantitative transmission values, while the convolution-based methods must be normalized based on known values.

  6. Coded source imaging simulation with visible light

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Zou, Yubin; Zhang, Xueshuang; Lu, Yuanrong; Guo, Zhiyu

    2011-09-01

    A coded source can increase the neutron flux while maintaining a high L/D ratio, which may benefit a neutron imaging system with a low-yield neutron source. Visible light CSI experiments were carried out to test the physical design and reconstruction algorithm. We used a non-mosaic Modified Uniformly Redundant Array (MURA) mask to project the shadow of black/white samples on a screen. A cooled CCD camera was used to record the image on the screen. Different mask sizes and amplification factors were tested. The correlation, Wiener filter deconvolution, and Richardson-Lucy maximum likelihood iteration algorithms were employed to reconstruct the object image from the original projection. The results show that CSI can benefit low-flux neutron imaging with high background noise.
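
    Of the three reconstruction methods mentioned, Richardson-Lucy iteration is compact enough to sketch. The Python version below is a generic implementation of the algorithm, not the authors' code; it assumes the point spread function is normalized to unit sum.

        import numpy as np
        from scipy.signal import fftconvolve

        # Generic Richardson-Lucy deconvolution for a 2-D image. For coded
        # source imaging, `psf` would be the mask pattern as projected onto
        # the detector plane (assumed here to be normalized to unit sum).
        def richardson_lucy(image, psf, iterations=50):
            estimate = np.full(image.shape, 0.5)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(iterations):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / (blurred + 1e-12)  # guard against divide-by-zero
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate

    An equivalent routine ships in scikit-image as skimage.restoration.richardson_lucy.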

  7. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features (such as pointers, structures, and object orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  8. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.

  9. Actual use scene of Han-Character for proper name and coded character set

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tatsuo

    This article discusses two issues. The first is an overview of the standardization of Han characters in coded character sets, including the Universal Coded Character Set (ISO/IEC 10646), in relation to the Japanese government's language policy. The second is the difference and particularity of Han-character usage for proper names, and the difficulty of implementing such usage in ICT systems.

  10. Astronomy education and the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, Robert J.

    2016-01-01

    The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.

  11. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    The Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets is revisited, and it is shown that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
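
    For context, the Rice subcode with parameter k is the Golomb code with modulus m = 2^k: a non-negative integer n is encoded as a unary quotient followed by a k-bit binary remainder. Below is a minimal Python sketch of our own (assuming k >= 1), not code from the report.

        # Rice subcode (Golomb code with m = 2**k), illustration only.
        def rice_encode(n: int, k: int) -> str:
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")  # unary q, then k-bit r

        def rice_decode(bits: str, k: int) -> int:
            q = bits.index("0")  # unary part: q ones terminated by a zero
            return (q << k) | int(bits[q + 1:q + 1 + k], 2)

        assert rice_encode(9, 2) == "11001" and rice_decode("11001", 2) == 9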

  12. Transport code for radiocolloid migration: with an assessment of an actual low-level waste site

    SciTech Connect

    Travis, B.J.; Nuttall, H.E.

    1984-12-31

    Recently, there has been increased concern that radiocolloids may act as a rapid transport mechanism for the release of radionuclides from high-level waste repositories. The role of colloids is, however, controversial because the necessary data and assessment methodology have been limited. Evidence is accumulating which indicates that colloids are an important consideration in the geological disposal of nuclear waste. To quantitatively assess the role of colloids, the TRACR3D transport code has been enhanced by the addition of the population balance equations. This new version of the code can simulate the migration of colloids through combinations of porous/fractured, unsaturated, geologic media. The code was tested against the experimental laboratory column data of Avogadro et al. in order to compare the code results to both experimental data and an analytical solution. Next, a low-level radioactive waste site was investigated to explore whether colloid migration could account for the unusually rapid and long transport of plutonium and americium observed at the site. Both plutonium and americium migrated 30 meters through unsaturated volcanic tuff. The nature and modeling of radiocolloids are discussed along with site simulation results from the TRACR3D code. 20 references.

  13. A Construction of Lossy Source Code Using LDPC Matrices

    NASA Astrophysics Data System (ADS)

    Miyake, Shigeki; Muramatsu, Jun

    Research into applying LDPC code theory, which is used for channel coding, to source coding has received a lot of attention in several research fields such as distributed source coding. In this paper, a source coding problem with a fidelity criterion is considered. Matsunaga et al. and Martinian et al. constructed a lossy code under the conditions of a binary alphabet, a uniform distribution, and a Hamming measure of fidelity criterion. We extend their results and construct a lossy code under the extended conditions of a binary alphabet, a distribution that is not necessarily uniform, and a fidelity measure that is bounded and additive, and show that the code can achieve the optimal rate, i.e., the rate-distortion function. By applying a formula for the random walk on a lattice to the analysis of LDPC matrices on Zq, where q is a prime number, we show that results similar to those for the binary alphabet condition hold for Zq, the multiple alphabet condition.

  14. What do European veterinary codes of conduct actually say and mean? A case study approach.

    PubMed

    Magalhães-Sant'Ana, M; More, S J; Morton, D B; Osborne, M; Hanlon, A

    2015-06-20

    Codes of Professional Conduct (CPCs) are pivotal instruments of self-regulation, providing the standards to which veterinarians should, and sometimes must, comply. Despite their importance to the training and guidance of veterinary professionals, research is lacking on the scope and emphasis of the requirements set out in veterinary CPCs. This paper provides the first systematic investigation of veterinary CPCs. It relies on a case study approach, combining content and thematic analyses of five purposively selected European CPCs: Federation of Veterinarians of Europe (FVE), Denmark, Ireland, Portugal and the UK. Eight overarching themes were identified, including 'definitions and framing concepts', 'duties to animals', 'duties to clients', 'duties to other professionals', 'duties to competent authorities', 'duties to society', 'professionalism' and 'practice-related issues'. Some differences were observed, which may be indicative of different approaches to the regulation of the veterinary profession in Europe (which is reflected in having a 'code of ethics' or a 'code of conduct'), cultural differences on the status of animals in society, and regulatory bodies' proactivity in adapting to professional needs and to societal changes regarding the status of animals. These findings will contribute to an improved understanding of the roles of CPCs in regulating the veterinary profession in Europe. PMID:25861823

  15. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.

  16. Some techniques in universal source coding and coding for composite sources

    NASA Astrophysics Data System (ADS)

    Wallace, M. S.

    1981-12-01

    We consider three problems in source coding. First, we consider the composite source model. A composite source has a switch driven by a random process which selects one of a possible set of subsources. We derive some convergence results for estimation of the switching process, and use these to prove that the entropy of some composite sources may be computed. Some coding techniques for composite sources are also presented and their performance is bounded. Next, we construct a variable-length-to-fixed-length (VL-FL) universal code for a class of unifilar Markov sources. A VL-FL code maps strings of source outputs into fixed-length codewords. We show that the redundancy of the code converges to zero uniformly over the class of sources as the blocklength increases. The code is also universal with respect to the initial state of the source. We compare the performance of this code to FL-VL universal codes. We then consider universal coding for real-valued sources. We show that given some coding technique for a known source, we may construct a code for a class of sources. We show that this technique works for some classes of memoryless sources, and also for a compact subset of the class of k-th order Gaussian autoregressive sources.

  17. Data processing with microcode designed with source coding

    DOEpatents

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  18. The Astrophysics Source Code Library: http://www.ascl.net/

    NASA Astrophysics Data System (ADS)

    Nemiroff, R. J.; Wallin, J. F.

    1999-05-01

    Submissions are invited to the newly formed Astrophysics Source Code Library (ASCL). Original codes that have generated significant results for any paper published in a refereed astronomy or astrophysics journal are eligible for inclusion in the ASCL. All submissions and personalized correspondence will be handled electronically. The ASCL will not claim copyright on any of its archived codes, but will not archive codes without permission from the copyright owners. ASCL-archived source codes will be indexed on the World Wide Web and made freely available for non-commercial purposes. Many results reported in astrophysics are derived through the writing and implementation of source codes. Small or large, few source codes are ever made publicly available. Because of the effort involved in the creation of scientific codes and their impact in astrophysics, we have created a site which archives and distributes codes that were used in astrophysical publications. Goals in the creation of the ASCL include increasing the availability, falsifiability, and utility of source codes important to astrophysicists. The ASCL is an experimental concept in its formative year; its value will be assessed from author response and user feedback in one year's time.

  19. Merged Source Word Codes for Efficient, High-Speed Entropy Coding

    SciTech Connect

    Senecal, J; Joy, K; Duchaineau, M

    2002-12-05

    We present our work on fast entropy coders for binary messages utilizing only bit shifts and table lookups. To minimize code table size we limit our code lengths with a novel type of variable-to-variable (VV) length code created from source word merging. We refer to these codes as merged codes. With merged codes it is possible to achieve a desired level of efficiency by adjusting the number of bits read from the source at each step. The most efficient merged codes yield a coder with an inefficiency of 0.4%, relative to the Shannon entropy, in the worst case. On one of our test systems a current implementation of a coder using merged codes has a throughput of 35 Mbytes/sec.

  20. Coded source neutron imaging with a MURA mask

    NASA Astrophysics Data System (ADS)

    Zou, Y. B.; Schillinger, B.; Wang, S.; Zhang, X. S.; Guo, Z. Y.; Lu, Y. R.

    2011-09-01

    In coded source neutron imaging the single aperture commonly used in neutron radiography is replaced with a coded mask. Using a coded source can improve the neutron flux at the sample plane when a very high L/D ratio is needed. Coded source imaging is a possible way to reduce the exposure time needed to get a neutron image with a very high L/D ratio. A 17×17 modified uniformly redundant array coded source was tested in this work. There are 144 holes of 0.8 mm diameter on the coded source. The neutron flux from the coded source is as high as from a single 9.6 mm aperture, while its effective L/D is the same as in the case of a 0.8 mm aperture. The Richardson-Lucy maximum likelihood algorithm was used for image reconstruction. Compared to an in-line phase contrast neutron image taken with a 1 mm aperture, it takes much less time for the coded source to get an image of similar quality.

  1. Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments

    NASA Astrophysics Data System (ADS)

    Tsuji, Daisuke; Suyama, Kenji

    This paper presents a novel method for moving sound source localization and evaluates its performance in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. Using MUSIC requires computing the eigenvectors of the correlation matrix, which often incurs a high computational cost. For a moving source this becomes a crucial drawback, because the estimation must be repeated at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatio-temporal non-stationarity, the matrix has to be estimated from only a few observed samples, which degrades estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) algorithm is applied to sequentially estimate the eigenvectors spanning the subspace. PAST does not require an eigendecomposition, and therefore reduces the computational cost. Several experimental results in actual room environments demonstrate the superior performance of the proposed method.
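
    For orientation, the MUSIC computation that PAST is meant to accelerate can be sketched in a few lines. The Python sketch below assumes a narrowband uniform linear array, which may differ from the authors' microphone configuration; all names and parameters are ours.

        import numpy as np

        # Classical MUSIC pseudospectrum for an M-element uniform linear
        # array. X is an M x N matrix of complex snapshots; peaks of the
        # returned spectrum mark estimated source directions.
        def music_spectrum(X, n_sources, d_over_lambda=0.5,
                           angles=np.linspace(-90, 90, 361)):
            M = X.shape[0]
            R = X @ X.conj().T / X.shape[1]       # sample correlation matrix
            _, eigvecs = np.linalg.eigh(R)        # eigenvalues ascending
            En = eigvecs[:, :M - n_sources]       # noise subspace
            spectrum = []
            for theta in np.deg2rad(angles):
                a = np.exp(-2j * np.pi * d_over_lambda
                           * np.arange(M) * np.sin(theta))
                spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return angles, np.array(spectrum)

    PAST avoids recomputing the eigendecomposition at every observation time; the steering-vector scan itself is unchanged.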

  2. Source Term Code Package: a user's guide (Mod 1)

    SciTech Connect

    Gieseke, J.A.; Cybulskis, P.; Jordan, H.; Lee, K.W.; Schumacher, P.M.; Curtis, L.A.; Wooton, R.O.; Quayle, S.F.; Kogan, V.

    1986-07-01

    As part of a major reassessment of the release of radioactive materials to the environment (source terms) in severe reactor accidents, a group of state-of-the-art computer codes was utilized to perform extensive analyses. A major product of this source term reassessment effort was a demonstrated methodology for analyzing specific accident situations to provide source term predictions. The computer codes forming this methodology have been upgraded and modified for release and further use. This system of codes has been named the Source Term Code Package (STCP) and is the subject of this user's guide. The guide is intended to provide an understanding of the STCP structure and to facilitate STCP use. The STCP was prepared for operation on a CDC system but is written in FORTRAN-77 to permit transportability. In the current version (Mod 1) of the STCP, the various calculational elements fall into four major categories represented by the codes MARCH3, TRAP-MELT3, VANESA, and NAUA/SPARC/ICEDF. The MARCH3 code is a combination of the MARCH2, CORSOR-M, and CORCON-Mod 2 codes. The TRAP-MELT3 code is a combination of the TRAP-MELT2.0 and MERGE codes.

  3. Modelling RF sources using 2-D PIC codes

    SciTech Connect

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  4. MATLAB tensor classes for fast algorithm prototyping: source code.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  5. Statistical physics, optimization and source coding

    NASA Astrophysics Data System (ADS)

    Zecchina, Riccardo

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question "does there exist an assignment to the variables that satisfies all constraints?" may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms, the survey propagation (SP) algorithms, that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.

  6. The FORTRAN static source code analyzer program (SAP) system description

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.

    1982-01-01

    A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. The processing performed by SAP and the routines, COMMON blocks, and files used by SAP are described. The system generation procedure for SAP is also presented.

  7. Toward the Automated Generation of Components from Existing Source Code

    SciTech Connect

    Quinlan, D; Yi, Q; Kumfert, G; Epperly, T; Dahlgren, T; Schordan, M; White, B

    2004-12-02

    A major challenge to achieving widespread use of software component technology in scientific computing is an effective migration strategy for existing, or legacy, source code. This paper describes initial work and challenges in automating the identification and generation of components using the ROSE compiler infrastructure and the Babel language interoperability tool. Babel enables calling interfaces expressed in the Scientific Interface Definition Language (SIDL) to be implemented in, and called from, an arbitrary combination of supported languages. ROSE is used to build specialized source-to-source translators that (1) extract a SIDL interface specification from information implicit in existing C++ source code and (2) transform Babel's output to include dispatches to the legacy code.

  8. A preprocessor for FORTRAN source code produced by REDUCE

    NASA Astrophysics Data System (ADS)

    Kaneko, Toshiaki; Kawabata, Setsuya

    1989-09-01

    For estimating total cross sections and various spectra for complicated processes in high energy physics, the most time consuming part is numerical integration over the phase volume. When a FORTRAN source code for the integrand is produced by REDUCE, it is often not only too long but also insufficiently reduced to be optimized by a FORTRAN compiler. A program package called SPROC has been developed to convert FORTRAN source code to a more optimized form and to divide the code into subroutines whose lengths are short enough for FORTRAN compilers. It can also generate vectorizable code, which can achieve high efficiency on vector computers. The output is given in a form suitable for the numerical integration package BASES and its vector computer version VBASES. With this improvement, the CPU time for integration is shortened by a factor of about two on a scalar computer and by several times on a vector computer.

  9. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  10. Encoding of multi-alphabet sources by binary arithmetic coding

    NASA Astrophysics Data System (ADS)

    Guo, Muling; Oka, Takahumi; Kato, Shigeo; Kajiwara, Hiroshi; Kawamura, Naoto

    1998-12-01

    When encoding a multi-alphabet source, the symbol sequence can be encoded directly by a multi-alphabet arithmetic encoder, or it can first be converted into several binary sequences, with each binary sequence encoded by a binary arithmetic encoder such as the L-R arithmetic coder. Arithmetic coding, however, requires arithmetic operations for each symbol and is computationally heavy. In this paper, a binary representation method using a Huffman tree is introduced to reduce the number of arithmetic operations, and a new probability approximation for L-R arithmetic coding is further proposed to improve the coding efficiency when the probability of the LPS (Least Probable Symbol) is near 0.5. Simulation results show that our proposed scheme has high coding efficiency and can reduce the number of coding symbols.
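
    The binarization step, converting each multi-alphabet symbol into a binary string by walking a Huffman tree so that each bit can be fed to a binary arithmetic coder, can be sketched as follows. This Python sketch is our own illustration, not the authors' code; the symbol probabilities are invented.

        import heapq
        import itertools

        # Build Huffman codewords; each symbol's bit string is its path of
        # 0/1 decisions in the tree, and each bit would then be passed to a
        # binary arithmetic coder (not shown).
        def huffman_codes(probs):
            counter = itertools.count()  # tie-breaker for equal probabilities
            heap = [(p, next(counter), [s]) for s, p in probs.items()]
            heapq.heapify(heap)
            codes = {s: "" for s in probs}
            while len(heap) > 1:
                p0, _, group0 = heapq.heappop(heap)
                p1, _, group1 = heapq.heappop(heap)
                for s in group0:
                    codes[s] = "0" + codes[s]
                for s in group1:
                    codes[s] = "1" + codes[s]
                heapq.heappush(heap, (p0 + p1, next(counter), group0 + group1))
            return codes

        print(huffman_codes({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))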

  11. Actual Evapotranspiration using a two source energy balance model and gridded reference ET0

    NASA Astrophysics Data System (ADS)

    Geli, H. M.; Neale, C. M.; Verdin, J. P.; Senay, G. B.; Hobbins, M.

    2013-12-01

    In an ongoing effort to provide estimates of actual evapotranspiration (ETa) at spatial scales from local to regional, this study investigates the use of a gridded reference ET0 product that is under development. The study is conducted within the context of a USGS project aimed at providing a standardized framework for the remote sensing of ETa that can be followed in the implementation of the WaterSMART program. Most thermal remote sensing based models provide instantaneous estimates of latent heat flux, which can then be extrapolated to daily ETa; in many cases, extrapolation is achieved using the ETref method. At field scales, reference ET0 (daily and instantaneous values) is obtained from point-based, local-scale measurements. At regional scales, these local estimates of ET0 might not be appropriate to account for the corresponding spatial variability. This analysis compares ETa estimates based on a two source energy balance approach using point-based and gridded reference ET0 data. The two source energy balance model SEBS (Norman et al. 1995) is used to calculate surface energy fluxes and ETa. Data from the Palo Verde Irrigation District (PVID), CA are used in the analysis. The area extends over 500 km2 and is covered mostly with alfalfa, cotton, and vegetable crops. Ground-based hydrometeorological data, including reference ET0, are provided by a nearby weather station. A CONUS-wide gridded reference ET0 product being developed by NOAA using NLDAS phase-2 weather forcing is used. Both estimates, ETa_point and ETa_NLDAS, based on ground and gridded ET0 data, respectively, are compared to ground-based measurements. Preliminary results of the comparison will be presented to highlight the potential use of such gridded ET0 data in the remote sensing of ETa at regional scales. References Norman, J. M., W. P. Kustas, & K. S. Humes, 1995: A two-source approach for estimating soil and vegetation energy fluxes in
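
    The ETref extrapolation step mentioned above is compact enough to write down: the evaporative ratio at overpass time is assumed constant over the day, so daily ETa scales the daily reference ET0. The Python sketch below is our own illustration with invented inputs and standard unit conversions, not the study's procedure.

        # Extrapolate instantaneous latent heat flux (LE, W/m2) to daily ETa
        # (mm/day) via the reference-ET fraction, assumed constant over the day.
        LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (approximate)

        def daily_eta(le_inst_w_m2, et0_inst_mm_hr, et0_daily_mm):
            eta_inst_mm_hr = le_inst_w_m2 / LAMBDA * 3600.0  # W/m2 -> mm/hr
            return (eta_inst_mm_hr / et0_inst_mm_hr) * et0_daily_mm

        print(daily_eta(350.0, 0.7, 6.5))  # ~4.8 mm/day for these toy inputs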

  12. Using cryptology models for protecting PHP source code

    NASA Astrophysics Data System (ADS)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying, and modification is a big issue today. Existing solutions at the source code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source, proprietary PHP interpreter extension. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from applying standard cryptology models to analyze the strengths and weaknesses of the existing solutions, with script protection viewed as a secure communication channel in the cryptologic sense.

  13. Coded source neutron imaging at the PULSTAR reactor

    SciTech Connect

    Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William

    2011-01-01

    A neutron imaging facility is located on beam-tube No. 5 of the 1-MW PULSTAR reactor at North Carolina State University. An investigation of high resolution imaging using the coded source imaging technique has been initiated at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image data received by the detector, the corresponding decoding patterns are used. The optimized design of the coded mask is critical for the performance of this technique and will depend on the characteristics of the imaging beam. In this work, a 34 × 38 uniformly redundant array (URA) coded aperture system is studied for application at the PULSTAR reactor neutron imaging facility. The URA pattern was fabricated on a 500 μm gadolinium sheet. Simulations and experiments with a pinhole object have been conducted using the Gd URA and the optimized beam line.

  14. A Comparison of Source Code Plagiarism Detection Engines

    ERIC Educational Resources Information Center

    Lancaster, Thomas; Culwin, Fintan

    2004-01-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and…
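
    One family of techniques such engines deploy can be illustrated with token n-gram overlap. The Python sketch below is our own toy, not any particular engine: it is deliberately naive and is defeated by identifier renaming, which real engines counter with normalization passes.

        import re

        # Compare two submissions by the Jaccard overlap of their token
        # 4-grams. Tokens are identifiers or single punctuation characters.
        def ngrams(source: str, n: int = 4) -> set:
            tokens = re.findall(r"[A-Za-z_]\w*|\S", source)
            return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

        def similarity(a: str, b: str, n: int = 4) -> float:
            ga, gb = ngrams(a, n), ngrams(b, n)
            return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

        print(similarity("int s=0; for(i=0;i<n;i++) s+=a[i];",
                         "int t=0; for(j=0;j<n;j++) t+=b[j];"))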

  15. MATHEMATICAL MODEL OF ELECTROSTATIC PRECIPITATION (REVISION 3): SOURCE CODE

    EPA Science Inventory

    This tape contains the source code (FORTRAN) for Revision 3 of the Mathematical Model of Electrostatic Precipitation. Improvements found in Revision 3 of the model include a new method of calculating the solutions to the electric field equations, a dynamic method for calculating ...

  17. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is an increasingly necessary part of program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  18. Secondary neutron source modelling using MCNPX and ALEPH codes

    NASA Astrophysics Data System (ADS)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of a reactor requires the presence of neutron sources. It is mainly secondary neutrons from these sources that feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf-252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. In most reactors, both families of sources are insufficient to efficiently monitor the divergence of the second and following cycles. The secondary source cluster (SSC) fulfils this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb-124), produces in subsequent cycles a photo-neutron source through the gamma (from Sb-124)-neutron (on Be-9) reaction. This paper presents a model of the process between irradiation in cycle 1 and the cycle 2 results for the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with experimental results from two PWR 1450 MWe-N4 reactors. Good agreement is observed between these results and the simulations. The subcriticality of the reactors is about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are of the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC, depending on the geographic location of the sources, the main parameter for optimal monitoring of subcritical states.

  19. Codes for sound-source location in nontonotopic auditory cortex.

    PubMed

    Middlebrooks, J C; Xu, L; Eddins, A C; Green, D M

    1998-08-01

    We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360 degrees of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of alpha-chloralose-anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20 degrees intervals of azimuth throughout 360 degrees of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180 degrees of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. Information carried in the responses of single units permitted reasonable estimates of sound-source
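
    The decoding idea, inferring azimuth from responses distributed across a population, can be conveyed with a much simpler stand-in for the authors' artificial neural network: template matching on synthetic spike counts (Python/NumPy; the tuning curves, rates and counts below are invented for illustration).

      # Toy panoramic-code decoder: pick the azimuth whose mean-rate template
      # best matches the observed spike counts. Not the paper's network.
      import numpy as np

      rng = np.random.default_rng(0)
      azimuths = np.arange(0, 360, 20)       # 20-degree spacing, as in the study
      n_units, n_trials = 30, 8

      pref = rng.uniform(0, 360, n_units)    # invented preferred azimuths
      def mean_rates(az):                    # broad, panoramic tuning curves
          return 5 + 10 * (1 + np.cos(np.radians(az - pref))) / 2

      templates = np.stack([mean_rates(a) for a in azimuths])
      true_az = 140
      counts = rng.poisson(mean_rates(true_az), (n_trials, n_units)).mean(0)
      estimate = azimuths[np.argmin(((templates - counts) ** 2).sum(1))]
      print(f"true {true_az} deg -> estimated {estimate} deg")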

  20. Source-Code Instrumentation and Quantification of Events

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Havelund, Klaus; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Aspect Oriented Programming (AOP) is making quantified programmatic assertions over programs that otherwise are not annotated to receive these assertions. Varieties of AOP systems are characterized by which quantified assertions they allow, what they permit in the actions of the assertions (including how the actions interact with the base code), and what mechanisms they use to achieve the overall effect. Here, we argue that all quantification is over dynamic events, and describe our preliminary work in developing a system that maps dynamic events to transformations over source code. We discuss possible applications of this system, particularly with respect to debugging concurrent systems.
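
    The claim that quantification is over dynamic events can be made concrete with a minimal sketch (Python; not the authors' system): one kind of dynamic event, function entry and exit, is mapped onto a source-level transformation implemented as a decorator.

      # Minimal aspect-like instrumentation: 'advice' runs at call events.
      import functools
      import time

      def trace(func):
          """Weave timing/logging advice around every call to func."""
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              t0 = time.perf_counter()
              print(f"enter {func.__name__} args={args}")
              try:
                  return func(*args, **kwargs)
              finally:
                  dt = time.perf_counter() - t0
                  print(f"exit  {func.__name__} after {dt:.6f}s")
          return wrapper

      @trace
      def transfer(account, amount):
          return f"moved {amount} to {account}"

      transfer("savings", 100)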

  1. A Comparison of Computer Codes for the Propagation of Sonic Booms Through Realistic Atmospheres Utilizing Actual Acoustic Signatures

    NASA Technical Reports Server (NTRS)

    Chambers, James P.; Cleveland, Robin O.; Bass, David T.; Raspet, Richard; Blackstock, David T.; Hamilton, Mark F.

    1996-01-01

    A numerical exercise to compare computer codes for the propagation of sonic booms through the atmosphere is reported. For the initial portion of the comparison, artificial, yet realistic, waveforms were numerically propagated through identical atmospheres. In addition to this comparison, one of these codes has been used to make preliminary predictions of the boom generated from a recent SR-71 flight. For the initial comparison, ground waveforms are calculated using four different codes or algorithms: (1) weak shock theory, an analytical prediction, (2) SHOCKN, a mixed time and frequency domain code developed at the University of Mississippi, (3) ZEPHYRUS, another mixed time and frequency code developed at the University of Texas, and (4) THOR, a pure time domain code recently developed at the University of Texas. The codes are described and their differences noted.

  2. The Need for Vendor Source Code at NAS. Revised

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The Numerical Aerodynamic Simulation (NAS) Facility has a long-standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain a smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS needs to maintain support capabilities when vendors are not able; to diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel support is justified. This note provides an informal history of these activities at NAS, and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.

  3. Verification test calculations for the Source Term Code Package

    SciTech Connect

    Denning, R S; Wooton, R O; Alexander, C A; Curtis, L A; Cybulskis, P; Gieseke, J A; Jordan, H; Lee, K W; Nicolosi, S L

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs.

  4. SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Merwarth, P. D.

    1994-01-01

    The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.
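
    The weighted-statistics idea generalizes beyond FORTRAN; the sketch below (Python; the keyword classes and weights are invented placeholders, not SAP's actual keyword or weight files) counts statement types in a module and folds the counts into a single figure of complexity.

      # Sketch of SAP-style weighted statement statistics for one module.
      # Keyword classification and weights are illustrative placeholders.
      WEIGHTS = {"IF": 2.0, "GOTO": 4.0, "DO": 1.5, "ASSIGN": 1.0}

      def complexity(statements):
          counts = {}
          for stmt in statements:
              parts = stmt.split()
              keyword = parts[0].upper() if parts else "ASSIGN"
              key = keyword if keyword in WEIGHTS else "ASSIGN"
              counts[key] = counts.get(key, 0) + 1
          figure = sum(WEIGHTS[k] * n for k, n in counts.items())
          return counts, figure

      module = ["DO 10 I = 1, N", "IF (X .GT. 0) GOTO 20", "Y = Y + X"]
      print(complexity(module))   # ({'DO': 1, 'IF': 1, 'ASSIGN': 1}, 4.5)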

  5. SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (IBM VERSION)

    NASA Technical Reports Server (NTRS)

    Manteufel, R.

    1994-01-01

    The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.

  6. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints. PMID:19703801

  7. Documentation generator application for MatLab source codes

    NASA Astrophysics Data System (ADS)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-06-01

    The UML, a complex system modeling and description technology, has recently been expanding its use in the formalization and algorithmic description of systems such as multiprocessor photonic, optoelectronic and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes the realization of an application for documenting MatLab source codes. We present our own novel solution based on the Doxygen program, which is available under a free license with accessible source code. The supporting tools used for parser building were Bison and Flex. Practical results of the documentation generator are presented. The program was applied to exemplary MatLab codes. The documentation generator application is used for the design of large optoelectronic and electronic measurement and control systems. The paper consists of three parts which describe the following components of the documentation generator for photonic and electronic systems: concept, MatLab application and VHDL application. This is part two, which describes the MatLab application. MatLab is used for the description of the measured phenomena.

  8. Energy efficient wireless sensor networks using asymmetric distributed source coding

    NASA Astrophysics Data System (ADS)

    Rao, Abhishek; Kulkarni, Murlidhar

    2013-01-01

    Wireless Sensor Networks (WSNs) are networks of sensor nodes deployed over a geographical area to perform a specific task. WSNs pose many design challenges, and energy conservation is one such design issue. In the literature, a wide range of solutions addressing this issue have been proposed. Generally, WSNs are densely deployed, so nodes in close proximity are likely to sense the same data. Transmission of such non-aggregated data may lead to inefficient energy management. Hence, data fusion has to be performed at the nodes to combine the redundant information into a single data unit. Distributed source coding is an efficient approach to achieving this task. In this paper an attempt has been made at modeling such a system. Various energy-efficient codes were considered in the analysis, and system performance in terms of energy efficiency was evaluated.

  9. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.

  10. Users manual for doctext: Producing documentation from C source code

    SciTech Connect

    Gropp, W.

    1995-03-01

    One of the major problems that software library writers face, particularly in a research environment, is the generation of documentation. Producing good, professional-quality documentation is tedious and time consuming. Often, no documentation is produced. For many users, however, much of the need for documentation may be satisfied by a brief description of the purpose and use of the routines and their arguments. Even for more complete, hand-generated documentation, this information provides a convenient starting point. We describe here a tool that may be used to generate documentation about programs written in the C language. It uses a structured comment convention that preserves the original C source code and does not require any additional files. The markup language is designed to be an almost invisible structured comment in the C source code, retaining readability in the original source. Documentation in a form suitable for the Unix man program (nroff), LaTeX, and the World Wide Web can be produced.
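
    A toy extractor conveys the flavor of the approach (Python; the '/*@ ... @*/' comment convention below is invented for this sketch and is not doctext's actual markup):

      # Toy doctext-like extractor: pull structured comments out of C source
      # and emit a man-page-style stub. The markup shown is invented.
      import re

      C_SOURCE = """
      /*@ vecadd - add two length-n vectors elementwise @*/
      void vecadd(const double *a, const double *b, double *out, int n);
      """

      for doc in re.findall(r"/\*@(.*?)@\*/", C_SOURCE, flags=re.S):
          name, _, summary = doc.strip().partition(" - ")
          print(f".SH NAME\n{name} - {summary.strip()}")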

  11. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed in several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
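
    The master-source idea reduces to a small line filter; a minimal sketch (Python; the '*IF <tag>' / '*END' directive syntax is invented here and is not MAX's actual directive language):

      # Toy master-source preprocessor: keep lines guarded by active tags.
      def instantiate(master_lines, active_tags):
          out, keep = [], True
          for line in master_lines:
              if line.startswith("*IF "):
                  keep = line.split(None, 1)[1].strip() in active_tags
              elif line.startswith("*END"):
                  keep = True
              elif keep:
                  out.append(line)
          return out

      master = ["*IF DOUBLE", "      REAL*8 X", "*END",
                "*IF SINGLE", "      REAL*4 X", "*END",
                "      X = 0"]
      print("\n".join(instantiate(master, {"DOUBLE"})))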

  12. Joint source-channel coding with allpass filtering source shaping for image transmission over noisy channels

    NASA Astrophysics Data System (ADS)

    Cai, Jianfei; Chen, Chang W.

    2000-04-01

    In this paper, we propose a fixed-length robust joint source-channel coding (JSCC) scheme for image transmission over noisy channels. Three channel models are studied: binary symmetric channels (BSC) and additive white Gaussian noise (AWGN) channels for memoryless channels, and Gilbert-Elliott channels (GEC) for bursty channels. We derive an explicit operational rate-distortion (R-D) function, which represents an end-to-end error measure that includes errors due to both quantization and channel noise. In particular, we are able to incorporate the channel transition probability and channel bit error rate into the R-D function in the case of bursty channels. With the operational R-D function, bits are allocated not only among different subsources, but also between source coding and channel coding so that, under a fixed transmission rate, an optimum tradeoff between source coding accuracy and channel error protection can be achieved. This JSCC scheme is also integrated with allpass filtering source shaping to further improve robustness against channel errors. Experimental results show that the proposed scheme achieves not only high PSNR performance, but also excellent perceptual quality. Compared with state-of-the-art JSCC schemes, the proposed scheme outperforms most of them, especially when channel mismatch occurs.

  13. A Comparison of Source Code Plagiarism Detection Engines

    NASA Astrophysics Data System (ADS)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
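
    The core of the tokenising-plus-substring technique fits in a few lines (Python; a simple regex tokeniser and a quadratic dynamic program, adequate at assignment scale and shown only as a sketch of the paired structural metric):

      # Tokenise two submissions, then measure their longest common run of
      # tokens; identifiers and numbers are collapsed to defeat renaming.
      import re

      def tokenise(code):
          tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
          return ["ID" if t[0].isalpha() or t[0] == "_" else
                  "NUM" if t[0].isdigit() else t for t in tokens]

      def longest_common_run(a, b):
          best, prev = 0, [0] * (len(b) + 1)
          for x in a:
              cur = [0] * (len(b) + 1)
              for j, y in enumerate(b, 1):
                  if x == y:
                      cur[j] = prev[j - 1] + 1
                      best = max(best, cur[j])
              prev = cur
          return best

      s1 = tokenise("total = 0\nfor i in range(10): total += i")
      s2 = tokenise("sum_ = 0\nfor k in range(10): sum_ += k")
      print(longest_common_run(s1, s2))  # long common run despite renaming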

  14. Continuation of research into language concepts for the mission support environment: Source code

    NASA Technical Reports Server (NTRS)

    Barton, Timothy J.; Ratner, Jeremiah M.

    1991-01-01

    Research into language concepts for the Mission Control Center is presented, together with the source code. The file contains the routines which allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and places as much of the code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.

  15. SOURCES: a code for calculating (alpha,n), spontaneous fission, and delayed neutron sources and spectra.

    PubMed

    Wilson, W B; Perry, R T; Charlton, W S; Parish, T A; Shores, E F

    2005-01-01

    SOURCES is a computer code that determines neutron production rates and spectra from (alpha,n) reactions, spontaneous fission and delayed neutron emission owing to the decay of radionuclides in homogeneous media, two-region interface problems and three-region interface problems. The code is also capable of calculating the neutron production rates due to (alpha,n) reactions induced by a monoenergetic beam of alpha particles incident on a slab of target material. The (alpha,n) spectra are calculated using an assumed isotropic angular distribution in the centre-of-mass system with a library of 107 nuclide decay alpha-particle spectra, 24 sets of measured and/or evaluated (alpha,n) cross sections and product nuclide level branching fractions, and functional alpha particle stopping cross sections for Z < 106. Spontaneous fission sources and spectra are calculated with evaluated half-life, spontaneous fission branching and Watt spectrum parameters for 44 actinides. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron sources. It also provides an analysis of the contributions to that source by each nuclide in the problem. PMID:16381695

  16. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
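
    The optimal codes for such geometric distributions are the Golomb codes; the power-of-two special case (a Rice code) is simple enough to sketch (Python; the parameter choice is illustrative):

      # Rice code: the Golomb code with parameter m = 2**k. Optimal for
      # geometric sources when k is matched to the distribution.
      def rice_encode(n, k):
          q, r = n >> k, n & ((1 << k) - 1)
          return "1" * q + "0" + format(r, f"0{k}b")  # unary quotient, k-bit remainder

      def rice_decode(bits, k):
          q = bits.index("0")                         # length of the unary prefix
          return (q << k) | int(bits[q + 1:q + 1 + k], 2)

      for n in range(6):
          cw = rice_encode(n, 2)
          assert rice_decode(cw, 2) == n
          print(n, cw)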

  17. FLOWTRAN-TF v1.2 source code

    SciTech Connect

    Aleman, S.E.; Cooper, R.E.; Flach, G.P.; Hamm, L.L.; Lee, S.; Smith, F.G. III

    1993-02-01

    The FLOWTRAN-TF code development effort was initiated in early 1989 as a code to monitor production reactor cooling systems at the Savannah River Plant. This report is a documentation of the various codes that make up FLOWTRAN-TF.

  19. HELIOS: A new open-source radiative transfer code

    NASA Astrophysics Data System (ADS)

    Malik, Matej; Grosheintz, Luc; Lukas Grimm, Simon; Mendonça, João; Kitzmann, Daniel; Heng, Kevin

    2015-12-01

    I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS uses the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2], we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results on the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles & spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b & Corot-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], which is an open-source project aimed at providing community tools to model exoplanetary atmospheres. References: [1] Grimm & Heng 2015, arXiv:1503.03806. [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944. [3] e.g. Heng, Mendonca & Lee, 2014, ApJS, 215, 4. [4] exoclime.net

  20. An Open Source Embedding Code for the Condensed Phase

    NASA Astrophysics Data System (ADS)

    Genova, Alessandro; Ceresoli, Davide; Krishtal, Alisa; Andreussi, Oliviero; Distasio, Robert; Pavanello, Michele

    Work from our group as well as others has shown that for many systems such as molecular aggregates, liquids, and complex layered materials, subsystem Density-Functional Theory (DFT) is capable of immensely reducing the computational cost while providing a better and more intuitive insight into the underlying physics. We developed a massively parallel implementation of subsystem DFT for the condensed phase in the open-source Quantum ESPRESSO software package. In this talk, we will discuss how we: (1) implemented a flexible parallel framework aimed at optimal load balancing; (2) simplified the solution of the electronic structure problem by allowing a fragment-specific sampling of the first Brillouin zone; (3) achieved enormous speedups by solving the electronic structure of each fragment in a unit cell smaller than the supersystem simulation cell, effectively introducing a fragment-specific basis set, with no deterioration of the fully periodic simulation. As of March 14, 2016, the code has been released and is available to the public.

  1. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    PubMed

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. Comparisons between expectations for others and respondents' own behaviors yield complex results. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed. PMID:26897996

  2. Using National Drug Codes and Drug Knowledge Bases to Organize Prescription Records from Multiple Sources

    PubMed Central

    Simonaitis, Linas; McDonald, Clement J

    2009-01-01

    Purpose Pharmacy systems contain electronic prescription information needed for clinical care, decision support, performance measurements and research. The master files of most pharmacy systems include National Drug Codes (NDCs) as well as the local codes they use within their systems to identify the products they dispense. We sought to assess how well one could map the products dispensed by many pharmacies to clinically oriented codes via the mapping tables provided by Drug Knowledge Base (DKB) producers. Methods We obtained a large sample of prescription records from seven different sources. These records either carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in, or associated with, our sample of prescription records. Results Considering the total prescription volume, DKBs covered 93.0% to 99.8% of the product codes (15 comparisons) from three outpatient sources, and 77.4% to 97.0% (20 comparisons) from four inpatient sources. Among the inpatient sources, invented codes explained much of the noncoverage: from 36% to 94% in three of the four sources. Outpatient pharmacy sources invented codes rarely, in 0.11% to 0.21% of their total prescription volume; inpatient sources did so more commonly, in 1.7% to 7.4% of their prescription volume. The distribution of prescribed products is highly skewed: from 1.4% to 4.4% of codes account for 50% of the message volume, and from 10.7% to 34.5% of codes account for 90% of the volume. Conclusion DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of the product codes used by inpatient sources. PMID:19767382

  3. A review of the potential and actual sources of pollution to groundwater in selected karst areas in Slovenia

    NASA Astrophysics Data System (ADS)

    Kovačič, G.; Ravbar, N.

    2005-02-01

    Slovenian karst areas extend over 43% of the country; limestones and dolomites of the Mesozoic era prevail. In Slovenia karst groundwater contributes up to 50% of the total drinking water supply. The quality of water is very high, despite the fact that it is extremely vulnerable to pollution. The present article is a study and a review of the potential and actual sources of pollution to the groundwater in the selected karst aquifers (the Kras, Velika planina and Snežnik plateaus), which differ in their natural characteristics. Unlike the other selected plateaus, the Kras plateau is inhabited. There are several settlements in the area and the industrial, agricultural and traffic activities carried out that represent a serious threat to the quality of karst groundwater. The Velika planina and Snežnik plateaus do not have permanent residents, however there are some serious hazards to the quality of the karst springs arising from sports, tourist, construction and farming activities, as well as from the traffic related to them. Despite relatively favourable conditions for protection, many important karst aquifers and springs are improperly protected in Slovenia. The reason is the lack of knowledge about sustainable water management in karst regions and the confusion in drinking water protection policy.

  4. A first collision source method for ATTILA, an unstructured tetrahedral mesh discrete ordinates code

    SciTech Connect

    Wareing, T.A.; Morel, J.E.; Parsons, D.K.

    1998-12-01

    A semi-analytic first collision source method is developed for the transport code, ATTILA, a three-dimensional, unstructured tetrahedral mesh, discrete-ordinates code. This first collision source method is intended to mitigate ray effects due to point sources. The method is third-order accurate, which is the same order of accuracy as the linear-discontinuous spatial differencing scheme used in ATTILA. Numerical results are provided to demonstrate the accuracy and efficiency of the first collision source method.

  5. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    ERIC Educational Resources Information Center

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  6. Presenting an Alternative Source Code Plagiarism Detection Framework for Improving the Teaching and Learning of Programming

    ERIC Educational Resources Information Center

    Hattingh, Frederik; Buitendag, Albertus A. K.; van der Walt, Jacobus S.

    2013-01-01

    The transfer and teaching of programming and programming-related skills has become increasingly difficult at the undergraduate level over the past years. This is partially due to the number of programming languages available as well as access to readily available source code over the Web. Source code plagiarism is common practice amongst many…

  7. Joint-source-channel coding scheme for scalable video-coding-based digital video broadcasting, second generation satellite broadcasting system

    NASA Astrophysics Data System (ADS)

    Seo, Kwang-Deok; Chi, Won Sup; Lee, In Ki; Chang, Dae-Ig

    2010-10-01

    We propose a joint-source-channel coding (JSCC) scheme that can provide and sustain high-quality video service in spite of deteriorated transmission channel conditions in the second-generation digital video broadcasting (DVB-S2) satellite broadcasting service. In particular, by combining the layered characteristics of SVC (scalable video coding) video and the robust channel coding capability of the LDPC (low-density parity check) codes employed in DVB-S2, a new concept of JSCC for digital satellite broadcasting service is developed. Rain attenuation in high-frequency bands such as the Ka band is a major factor in lowering the link capacity of satellite broadcasting service. Therefore, it is necessary to devise a new technology that dynamically manages the rain attenuation by adopting a JSCC scheme that can apply variable code rates to both source and channel coding. For this purpose, we develop a JSCC scheme combining SVC and LDPC, and demonstrate its performance by extensive simulations in which SVC coded video is transmitted over various error-prone channels with AWGN (additive white Gaussian noise) patterns in the DVB-S2 broadcasting service.

  8. Multicode comparison of selected source-term computer codes

    SciTech Connect

    Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.

    1989-04-01

    This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.

  9. Joint source-channel coding: secured and progressive transmission of compressed medical images on the Internet.

    PubMed

    Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique

    2008-06-01

    The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes. PMID:18289830

  10. A Line Source Shielding Code for Personal Computers.

    1990-12-22

    Version 00 LINEDOSE computes the gamma-ray dose from a pipe source modeled as a line. The pipe is assumed to be iron and has a concrete shield of arbitrary thickness. The calculation is made for eight source energies between 0.1 and 3.5 MeV.
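
    The geometry behind such a calculation can be sketched by numerically integrating attenuated point-kernel contributions along the line (Python/NumPy; the source strength, attenuation coefficient and dimensions are invented, buildup is ignored, and this generic kernel is not LINEDOSE's actual algorithm):

      # Uncollided flux at a detector from a line source behind a slab,
      # summing point-kernel contributions along the line. Illustrative only.
      import numpy as np

      S_L = 1.0e6    # source strength, photons/s per cm of line (assumed)
      mu = 0.15      # slab attenuation coefficient, 1/cm (assumed)
      t = 30.0       # slab thickness, cm
      d = 100.0      # perpendicular distance, line to detector, cm

      x = np.linspace(-200.0, 200.0, 4001)   # positions along the line, cm
      r = np.hypot(x, d)                     # element-to-detector distance
      slant = t * r / d                      # slant path through the slab
      flux = np.trapz(S_L * np.exp(-mu * slant) / (4 * np.pi * r**2), x)
      print(f"uncollided flux ~ {flux:.3e} photons/cm^2/s")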

  11. Learning a multi-dimensional companding function for lossy source coding.

    PubMed

    Maeda, Shin-ichi; Ishii, Shin

    2009-09-01

    Although the importance of lossy source coding has been growing, the general and practical methodology for its design has not been completely resolved. The well-known vector quantization (VQ) can represent any fixed-length lossy source coding, but requires too much computation resource. Companding vector quantization (CVQ) can reduce the complexity of non-structured VQ by replacing vector quantization with a set of scalar quantizations and can represent a wide class of practically useful VQs. Although an analytical derivation of optimal CVQ is difficult except for very limited cases, optimization using data samples can be performed instead. Here we propose a CVQ optimization method, which includes bit allocation by a newly derived distortion formula as a generalization of Bennett's formula, and test its validity. We applied the method to transform coding and compared the performance of our CVQ with those of Karhunen-Loève transformation (KLT)-based coding and non-structured VQ. As a consequence, we found that our trained CVQ outperforms not only KLT-based coding but also non-structured VQ in the case of high bit-rate coding of linear mixtures of uniform sources. We also found that trained CVQ even outperformed KLT-based coding in the low bit-rate coding of a Gaussian source. To highlight the advantages of our approach, we also discuss the degradation of non-structured VQ and the limitations of theoretical analyses which are valid for high bit-rate coding. PMID:19556103
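
    The classical scalar special case conveys the structure: compress with a nonlinearity, quantize uniformly, expand (Python/NumPy; the standard mu-law compander is shown, not the learned multi-dimensional companding function proposed in the paper):

      # Scalar companding quantizer: y = F(x), uniform quantization of y,
      # then x_hat = F^{-1}(y_hat). Here F is the textbook mu-law curve.
      import numpy as np

      MU, LEVELS = 255.0, 16

      def compress(x):                       # x in [-1, 1]
          return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

      def expand(y):
          return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

      def quantize(x):
          step = 2.0 / LEVELS
          y = (np.floor(compress(x) / step) + 0.5) * step
          y = np.clip(y, -1 + step / 2, 1 - step / 2)  # midpoint reproduction
          return expand(y)

      x = np.clip(np.random.default_rng(0).laplace(scale=0.1, size=8), -1, 1)
      print(np.round(x, 3), np.round(quantize(x), 3), sep="\n")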

  12. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  13. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    PubMed Central

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616

  14. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    PubMed

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616
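
    A classical building block behind broadcast authentication schemes such as μTESLA is the one-way hash chain; a minimal sketch (Python; generic, and not one of the three schemes proposed in the paper):

      # One-way hash chain: commit to the chain head, then reveal earlier
      # keys; receivers authenticate a key by hashing forward to the commitment.
      import hashlib

      def H(b):
          return hashlib.sha256(b).digest()

      def make_chain(seed, n):
          keys = [seed]
          for _ in range(n):
              keys.append(H(keys[-1]))
          return keys                      # keys[n] is the public commitment

      chain = make_chain(b"secret-seed", 5)
      commitment = chain[-1]

      revealed = chain[3]                  # later disclosed by the sender
      assert H(H(revealed)) == commitment  # two hashes forward -> authentic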

  15. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    SciTech Connect

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a magnified coded source imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  16. Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    NASA Astrophysics Data System (ADS)

    Guionnet, Thomas; Guillemot, Christine

    2004-12-01

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision and reduced-complexity implementation of arithmetic coding, which amounts to approximating the distribution of the source. The approximation of the source distribution leads to the introduction of redundancy that can be exploited for robust decoding in the presence of transmission errors. Hence, this approximation controls both the trade-off between compression efficiency and complexity and, at the same time, the redundancy (excess rate) introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes allows extra redundancy to be added in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performance against Huffman and optimal arithmetic codes.

  17. Building guide : how to build Xyce from source code.

    SciTech Connect

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary "./configure" builds many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  18. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  19. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    PubMed

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems. PMID:27350484
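
    The evaluation loop can be sketched as follows (Python; a toy in-memory overlap scorer stands in for the full-text engines under test, and the three sample codes are an invented subset):

      # Find the smallest word combination from a code's text that makes
      # that code the top-scoring match under a toy full-text scorer.
      from itertools import combinations

      CODES = {                      # invented subset, not the ICD-10 catalogue
          "J45": "asthma",
          "J20.9": "acute bronchitis unspecified",
          "J40": "bronchitis not specified as acute or chronic",
      }

      def score(query_words, text):
          words = text.split()
          return sum(w in words for w in query_words) / len(words)

      def minimal_query(target):
          words = CODES[target].split()
          for k in range(1, len(words) + 1):
              for combo in combinations(words, k):
                  best = max(CODES, key=lambda c: score(combo, CODES[c]))
                  if best == target:
                      return combo
          return tuple(words)

      print(minimal_query("J20.9"))   # ('acute',) under this toy scorer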

  20. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    NASA Astrophysics Data System (ADS)

    Etienne, Zachariah B.; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L.

    2015-09-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores.

  1. Source coding with escort distributions and Rényi entropy bounds

    NASA Astrophysics Data System (ADS)

    Bercher, J.-F.

    2009-08-01

    We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort distributions. We propose a new family of length measures involving escort distributions and show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we find that the standard Shannon code lengths are optimal for the new generalized length measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.
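
    For reference, the standard objects recalled here are the escort distribution of order α and Campbell's exponentially weighted length, which is bounded below by the Rényi entropy (a textbook statement with D the code alphabet size, ℓ_i the codeword lengths, and t > 0 the weighting parameter; this is Campbell's classical bound, not the paper's new family of lengths):

      \[
      P_i = \frac{p_i^{\alpha}}{\sum_j p_j^{\alpha}}, \qquad
      L(t) = \frac{1}{t}\,\log_D\Big(\sum_i p_i\,D^{t\,\ell_i}\Big)
      \;\ge\; H_{\alpha}(p) = \frac{1}{1-\alpha}\,\log_D\Big(\sum_i p_i^{\alpha}\Big),
      \qquad \alpha = \frac{1}{1+t},
      \]

      with the bound approached by the lengths \ell_i = -\log_D P_i, i.e., Shannon lengths computed on the escort distribution P.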

  2. Joint source-channel coding for wireless object-based video communications utilizing data hiding.

    PubMed

    Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K

    2006-08-01

    In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection. PMID:16900673

  3. A "Genuine Relationship with the Actual": New Perspectives on Primary Sources, History and the Internet in the Classroom

    ERIC Educational Resources Information Center

    Eamon, Michael

    2006-01-01

    The pedagogic value of using archival holdings for the teaching of history has long been appreciated. Using primary sources in the teaching of history transcends the rote learning of facts and figures. It encourages critical thinking skills, introducing students to issues of context, selection and bias, to the nature of collective memory and to…

  4. SOURCES 4C : a code for calculating ([alpha],n), spontaneous fission, and delayed neutron sources and spectra.

    SciTech Connect

    Wilson, W. B.; Perry, R. T.; Shores, E. F.; Charlton, W. S.; Parish, Theodore A.; Estes, G. P.; Brown, T. H.; Arthur, Edward D. ,; Bozoian, Michael; England, T. R.; Madland, D. G.; Stewart, J. E.

    2002-01-01

    SOURCES 4C is a computer code that determines neutron production rates and spectra from ({alpha},n) reactions, spontaneous fission, and delayed neutron emission due to radionuclide decay. The code is capable of calculating ({alpha},n) source rates and spectra in four types of problems: homogeneous media (i.e., an intimate mixture of {alpha}-emitting source material and low-Z target material), two-region interface problems (i.e., a slab of {alpha}-emitting source material in contact with a slab of low-Z target material), three-region interface problems (i.e., a thin slab of low-Z target material sandwiched between {alpha}-emitting source material and low-Z target material), and ({alpha},n) reactions induced by a monoenergetic beam of {alpha}-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 44 actinides. The ({alpha},n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 107 nuclide decay {alpha}-particle spectra, 24 sets of measured and/or evaluated ({alpha},n) cross sections and product nuclide level branching fractions, and functional {alpha}-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code provides the magnitude and spectra, if desired, of the resultant neutron source in addition to an analysis of the contributions by each nuclide in the problem. LASTCALL, a graphical user interface, is included in the code package.
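
    For reference, the Watt spontaneous-fission spectrum used by codes of this family has the form N(E) proportional to exp(-E/a)*sinh(sqrt(b*E)). The Python sketch below samples it by rejection; the a and b values are commonly quoted 252Cf parameters and should be treated as illustrative rather than as the SOURCES library values.

      import numpy as np

      def watt_pdf(E, a=1.025, b=2.926):
          """Unnormalized Watt spectrum exp(-E/a)*sinh(sqrt(b*E)), E in MeV."""
          return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

      def sample_watt(n, a=1.025, b=2.926, emax=20.0, seed=0):
          """Rejection-sample n fission-neutron energies from the Watt spectrum."""
          rng = np.random.default_rng(seed)
          fmax = watt_pdf(np.linspace(1e-6, emax, 4000), a, b).max()
          out = []
          while len(out) < n:
              E = rng.uniform(0.0, emax, n)
              u = rng.uniform(0.0, fmax, n)
              out.extend(E[u < watt_pdf(E, a, b)])
          return np.asarray(out[:n])

      print(sample_watt(100000).mean())  # mean around 2 MeV for these parameters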

  5. Non-Uniform Contrast and Noise Correction for Coded Source Neutron Imaging

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R

    2012-01-01

    Since the first application of neutron radiography in the 1930s, the field of neutron radiography has matured enough to develop several applications. However, advances in the technology are far from concluded. In general, the resolution of scintillator-based detection systems is limited to the 10 μm range, and the relatively low neutron count rate of neutron sources compared to other illumination sources restricts time-resolved measurement. One path toward improved resolution is the use of magnification; however, to date neutron optics are inefficient, expensive, and difficult to develop. There is a clear demand for cost-effective scintillator-based neutron imaging systems that achieve resolutions of 1 μm or less. Such an imaging system would dramatically extend the application of neutron imaging. For such purposes a coded source imaging system is under development. The current challenge is to reduce artifacts in the reconstructed coded source images. Artifacts are generated by non-uniform illumination of the source, gamma rays, dark current at the imaging sensor, and system noise from the reconstruction kernel. In this paper, we describe how to pre-process the coded signal to reduce noise and non-uniform illumination, and how to reconstruct the coded signal with three reconstruction methods: correlation, maximum likelihood estimation, and the algebraic reconstruction technique. We illustrate our results with experimental examples.
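
    Of the three reconstruction methods, the correlation decoder is the simplest to sketch. The Python toy below uses a hypothetical random 50%-open mask, not the instrument's actual coded source: it recovers a point object by cross-correlating the recorded image with a balanced decoding pattern.

      import numpy as np
      from scipy.signal import fftconvolve

      def decode_correlation(recorded, decoding):
          """Correlation reconstruction: cross-correlate the coded image with a
          decoding pattern G chosen so that mask (*) G approximates a delta."""
          return fftconvolve(recorded, decoding[::-1, ::-1], mode="same")

      rng = np.random.default_rng(0)
      mask = (rng.random((64, 64)) < 0.5).astype(float)  # 50%-open toy aperture
      decoding = 2.0 * mask - 1.0                        # balanced decoder
      obj = np.zeros((64, 64)); obj[32, 32] = 1.0        # point object
      recorded = fftconvolve(obj, mask, mode="same")
      rec = decode_correlation(recorded, decoding)
      print(np.unravel_index(np.argmax(rec), rec.shape))  # peak at (32, 32)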

  6. Neutron imaging with coded sources: new challenges and the implementation of a simultaneous iterative reconstruction technique

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2013-01-01

    The limitations in neutron flux and resolution (L/D) of current neutron imaging systems can be addressed with a Coded Source Imaging system with magnification (xCSI). More precisely, the multiple sources in an xCSI system can exceed the flux of a single pinhole system by several orders of magnitude, while the small individual sources maintain a higher L/D. Moreover, designing for an xCSI system reduces noise from neutron scattering, because the object is placed away from the detector to achieve magnification. However, xCSI systems are adversely affected by correlated noise such as non-uniform illumination of the neutron source, incorrect sampling of the coded radiograph, misalignment of the coded masks, mask transparency, and the imperfection of the system Point Spread Function (PSF). We argue that a model-based reconstruction algorithm can overcome these problems and describe the implementation of a Simultaneous Iterative Reconstruction Technique (SIRT) algorithm for coded sources. Design pitfalls that preclude a satisfactory reconstruction are documented.
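
    The basic SIRT update the paper builds on can be written in a few lines. This dense-matrix Python sketch uses a random toy system and omits the coded-source PSF modeling that the actual implementation centers on.

      import numpy as np

      def sirt(A, b, n_iter=200, x0=None):
          """SIRT: x <- x + C A^T R (b - A x), with R and C the inverse row and
          column sums of the (nonnegative) system matrix A."""
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
          x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
          for _ in range(n_iter):
              x += C * (A.T @ (R * (b - A @ x)))
              x = np.clip(x, 0.0, None)  # keep intensities nonnegative
          return x

      rng = np.random.default_rng(0)
      A = rng.random((120, 64))          # toy projection matrix
      x_true = rng.random(64)
      x_rec = sirt(A, A @ x_true)
      print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))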

  7. Preliminary study of coded-source-based neutron imaging at the CPHS

    NASA Astrophysics Data System (ADS)

    Li, Yuanji; Huang, Zhifeng; Chen, Zhiqiang; Kang, Kejun; Xiao, Yongshun; Wang, Xuewu; Wei, Jie; Loong, C.-K.

    2011-09-01

    A cold neutron radiography/tomography instrument is under construction at the Compact Pulsed Hadron Source (CPHS) at Tsinghua University, China. The neutron flux is so low that an acceptable neutron radiographic image requires a long exposure time in the single-hole imaging mode. The coded-source-based imaging technique helps increase the utilization of the neutron flux, reducing the exposure time without loss of spatial resolution, and provides high signal-to-noise ratio (SNR) images. Here we report a preliminary study on the feasibility of the coded-source-based technique applied to cold neutron imaging with the low-brilliance neutron source at the CPHS. A proper coded aperture is designed to be used in the beamline in place of the single-hole aperture. Two image retrieval algorithms, the Wiener filter algorithm and the Richardson-Lucy algorithm, are evaluated using analytical and Monte Carlo simulations. The simulation results reveal that the coded source imaging technique is suitable for the CPHS to partially solve the problem of low neutron flux.
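
    Both retrieval algorithms evaluated here are standard, and minimal FFT-based versions are easy to sketch. The Python fragment below assumes a periodic, origin-centered PSF and a scalar noise parameter, which are simplifications of the paper's actual evaluation.

      import numpy as np
      from scipy.signal import fftconvolve

      def wiener(blurred, psf, k=1e-3):
          """Wiener deconvolution; psf must be the same shape as the image and
          centered at pixel (0, 0) (apply np.fft.ifftshift to a centered PSF)."""
          H = np.fft.fft2(psf)
          G = np.conj(H) / (np.abs(H) ** 2 + k)
          return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

      def richardson_lucy(blurred, psf, n_iter=50):
          """Richardson-Lucy deconvolution for a known, nonnegative PSF."""
          est = np.full_like(blurred, blurred.mean())
          psf_flip = psf[::-1, ::-1]
          for _ in range(n_iter):
              conv = np.maximum(fftconvolve(est, psf, mode="same"), 1e-12)
              est *= fftconvolve(blurred / conv, psf_flip, mode="same")
          return est

      psf = np.zeros((64, 64)); psf[31:34, 31:34] = 1.0 / 9.0  # toy 3x3 box PSF
      img = fftconvolve(np.random.default_rng(0).random((64, 64)), psf, mode="same")
      print(richardson_lucy(img, psf, n_iter=10).shape)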

  8. SOURCES 4A: A Code for Calculating (alpha,n), Spontaneous Fission, and Delayed Neutron Sources and Spectra

    SciTech Connect

    Madland, D.G.; Arthur, E.D.; Estes, G.P.; Stewart, J.E.; Bozoian, M.; Perry, R.T.; Parish, T.A.; Brown, T.H.; England, T.R.; Wilson, W.B.; Charlton, W.S.

    1999-09-01

    SOURCES 4A is a computer code that determines neutron production rates and spectra from ({alpha},n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides. The code is capable of calculating ({alpha},n) source rates and spectra in four types of problems: homogeneous media (i.e., a mixture of {alpha}-emitting source material and low-Z target material), two-region interface problems (i.e., a slab of {alpha}-emitting source material in contact with a slab of low-Z target material), three-region interface problems (i.e., a thin slab of low-Z target material sandwiched between {alpha}-emitting source material and low-Z target material), and ({alpha},n) reactions induced by a monoenergetic beam of {alpha}-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The ({alpha},n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay {alpha}-particle spectra, 24 sets of measured and/or evaluated ({alpha},n) cross sections and product nuclide level branching fractions, and functional {alpha}-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an analysis of the contributions to that source by each nuclide in the problem.

  9. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large-scale applications can be substantially automated, but the size and complexity of the applications can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared- and distributed-memory parallelization of the evaluation of rules, implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations, and dedicated clusters. While the Compass tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the same infrastructure but has not yet been developed into an equivalent final tool.

  10. 3-D localization of gamma ray sources with coded apertures for medical applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.

    2015-09-01

    Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel or pinhole collimators. Coded aperture imaging is a well-known method for gamma-ray source directional identification, applied mainly in astrophysics. The increase in efficiency due to the substitution of the collimators by coded masks renders the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniformly Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations, and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study for the spatial localization of two point sources using coded aperture masks of rank 7 and 19.
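
    The MURA construction itself is compact. The Python sketch below builds a rank-p MURA mask from quadratic residues and its matched decoding pattern, following the standard Gottesman-Fenimore prescription; the delta-like periodic cross-correlation is checked numerically, with the exact normalization treated as an assumption of this toy.

      import numpy as np

      def mura(p):
          """Rank-p MURA mask (p prime), built from quadratic residues mod p."""
          qr = {(x * x) % p for x in range(1, p)}
          C = np.array([1 if i in qr else -1 for i in range(p)])
          A = np.zeros((p, p), dtype=int)
          for i in range(1, p):
              A[i, 0] = 1
              for j in range(1, p):
                  A[i, j] = 1 if C[i] * C[j] == 1 else 0
          return A  # row i = 0 stays fully closed

      def decoder(A):
          """Matched decoding pattern: G = 2A - 1, except G[0, 0] = +1."""
          G = 2 * A - 1
          G[0, 0] = 1
          return G

      A = mura(7)
      G = decoder(A)
      corr = np.real(np.fft.ifft2(np.fft.fft2(A) * np.conj(np.fft.fft2(G))))
      print(np.round(corr, 3))  # expect a single dominant peak on a flat background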

  11. Detection and Location of Gamma-Ray Sources with a Modulating Coded Mask

    SciTech Connect

    Anderson, Dale N.; Stromswold, David C.; Wunschel, Sharon C.; Peurrung, Anthony J.; Hansen, Randy R.

    2006-01-31

    This paper presents methods of detecting and locating a concealed nuclear gamma-ray source with a coded aperture mask. Energetic gamma rays readily penetrate moderate amounts of shielding material and can be detected at distances of many meters. The detection of high-energy gamma-ray sources is vitally important to national security for several reasons, including nuclear materials smuggling interdiction, monitoring weapon components under treaties, and locating nuclear weapons and materials in the possession of terrorist organizations.

  12. Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code

    SciTech Connect

    Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi

    2005-05-24

    A code system for the Accelerator Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating accelerator-related effects such as changes of beam energy, beam diameter, void generation, and target level. Because the neutron source data derived from the database involve fitting and interpolation errors, this database approach may introduce some error into the dynamics calculations. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.

  13. Automated Source-Code-Based Testing of Object-Oriented Software

    NASA Astrophysics Data System (ADS)

    Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten

    2014-08-01

    With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source-code-based testing for C++.

  14. VizieR Online Data Catalog: Transiting planets search Matlab/Octave source code (Ofir+, 2014)

    NASA Astrophysics Data System (ADS)

    Ofir, A.

    2014-01-01

    The Matlab/Octave source code for Optimal BLS is made available here. Detailed descriptions of all inputs and outputs are given by comment lines in the file. Note: Octave does not currently support parallel for loops ("parfor"). Octave users therefore need to change the "parfor" command (line 217 of OptimalBLS.m) to "for". (7 data files).

  15. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model the performance of wave energy converters (WECs) in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing focused on device characterization and was completed in Fall 2015. Phase 2 focuses on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields and motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be compared with the measured data to validate the code.

  16. A coded-aperture technique allowing x-ray phase contrast imaging with conventional sources

    SciTech Connect

    Olivo, Alessandro; Speller, Robert

    2007-08-13

    Phase contrast imaging (PCI) solves the basic limitation of x-ray imaging, i.e., poor image contrast resulting from small absorption differences. Up to now, it has been mostly limited to synchrotron radiation facilities, due to the stringent requirements on the x-ray source and detectors, and only one technique has been shown to provide PCI images with conventional sources, but with limits in practical implementation. The authors propose a different approach, based on coded apertures, which provides high PCI signals with conventional sources and detectors and imposes practically no applicability limits. They expect this method to lay the basis for a widespread diffusion of PCI.

  17. Computer vision for detecting and quantifying gamma-ray sources in coded-aperture images

    SciTech Connect

    Schaich, P.C.; Clark, G.A.; Sengupta, S.K.; Ziock, K.P.

    1994-11-02

    The authors report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded-aperture gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates the lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which a Probabilistic Neural Network is used in the supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene are known.

  18. HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization

    NASA Astrophysics Data System (ADS)

    LAVIE, Baptiste

    2015-12-01

    Atmospheric retrieval is a growing, new approach to the characterization of exoplanet atmospheres. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam’s razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code, we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).

  19. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
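
    A toy version of the rate-adaptation rule is easy to state in code. The Python sketch below is purely illustrative; the function, thresholds, and scaling are hypothetical rather than the authors' actual parity assignment: parity grows for subblocks close to the RoI and for noisier channel estimates.

      def parity_bits(dist_to_roi, noise_est, base=8, max_extra=24):
          """Hypothetical rule: subblocks nearer the RoI, or sent over a noisier
          channel, get more parity bits; far, clean blocks get the base amount."""
          proximity = 1.0 / (1.0 + dist_to_roi)  # 1.0 inside the RoI
          noise = min(noise_est / 0.1, 1.0)      # saturating noise scale
          return base + round(proximity * noise * max_extra)

      print(parity_bits(dist_to_roi=0, noise_est=0.08))  # RoI block, noisy channel
      print(parity_bits(dist_to_roi=5, noise_est=0.01))  # far block, clean channel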

  1. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  2. The FORTRAN static source code analyzer program (SAP) user's guide, revision 1

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Eslinger, S.

    1982-01-01

    The FORTRAN Static Source Code Analyzer Program (SAP) User's Guide (Revision 1) is presented. SAP is a software tool designed to assist Software Engineering Laboratory (SEL) personnel in conducting studies of FORTRAN programs. SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. This document is a revision of the previous SAP user's guide, Computer Sciences Corporation document CSC/TM-78/6045. SAP Revision 1 is the result of program modifications to provide several new reports, additional complexity analysis, and recognition of all statements described in the FORTRAN 77 standard. This document provides instructions for operating SAP and contains information useful in interpreting SAP output.

  3. Joint source/channel coding for image transmission with JPEG2000 over memoryless channels.

    PubMed

    Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W

    2005-08-01

    The high compression efficiency and various features provided by JPEG2000 make it attractive for image transmission purposes. A novel joint source/channel coding scheme tailored for JPEG2000 is proposed in this paper to minimize the end-to-end image distortion within a given total transmission rate through memoryless channels. It provides unequal error protection by combining the forward error correction capability from channel codes and the error detection/localization functionality from JPEG2000 in an effective way. The proposed scheme generates quality scalable and error-resilient codestreams. It gives competitive performance with other existing schemes for JPEG2000 in the matched channel condition case and provides more graceful quality degradation for mismatched cases. Furthermore, both fixed-length source packets and fixed-length channel packets can be efficiently formed with the same algorithm. PMID:16121451

  4. Source-Search Sensitivity of a Large-Area, Coded-Aperture, Gamma-Ray Imager

    SciTech Connect

    Ziock, K P; Collins, J W; Craig, W W; Fabris, L; Lanza, R C; Gallagher, S; Horn, B P; Madden, N W; Smith, E; Woodring, M L

    2004-10-27

    We have recently completed a large-area, coded-aperture, gamma-ray imager for use in searching for radiation sources. The instrument was constructed to verify that weak point sources can be detected at considerable distances if one uses imaging to overcome fluctuations in the natural background. The instrument uses a rank-19, one-dimensional coded aperture to cast shadow patterns onto a 0.57 m{sup 2} NaI(Tl) detector composed of 57 individual cubes, each 10 cm on a side, arranged in a 19 x 3 array. The mask is composed of 4-cm-thick, 1-m-high, 10-cm-wide lead blocks. The instrument is mounted in the back of a small truck from which images are obtained as one drives through a region. Results of first measurements obtained with the system are presented.

  5. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    PubMed

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713

  6. The NASA Langley Research Center 0.3-meter transonic cryogenic tunnel microcomputer controller source code

    NASA Technical Reports Server (NTRS)

    Kilgore, W. Allen; Balakrishna, S.

    1991-01-01

    The 0.3 m Transonic Cryogenic Tunnel (TCT) microcomputer-based controller has been operating for several thousand hours in a safe and efficient manner. A complete listing of the source code for the tunnel controller and the tunnel simulator is provided, together with a listing of all the variables used in these programs. Several changes made to the controller are described; these changes improve the controller's ease of use and safety.

  7. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source-encoded image. Experimental results show that the proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered images. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes. PMID:25807568
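
    The key modeling step, treating detected tampering as an erasure, can be illustrated with the simplest possible erasure code. The Python toy below uses a single XOR parity block; the paper designs proper source and channel codes, so this only demonstrates the recovery principle.

      import numpy as np

      def add_parity(blocks):
          """Systematic XOR parity across k reference-bit blocks (recovers 1 erasure)."""
          parity = np.bitwise_xor.reduce(blocks, axis=0)
          return np.vstack([blocks, parity])

      def recover(coded, erased):
          """Rebuild the single erased block as the XOR of all surviving blocks."""
          survivors = [coded[i] for i in range(len(coded)) if i != erased]
          return np.bitwise_xor.reduce(np.array(survivors), axis=0)

      rng = np.random.default_rng(1)
      blocks = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)  # reference bits
      coded = add_parity(blocks)
      print(np.array_equal(recover(coded, erased=2), blocks[2]))  # True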

  8. Code System for Calculating Alpha, N; Spontaneous Fission; and Delayed Neutron Sources and Spectra.

    2002-07-18

    Version: 04 SOURCES4C is a code system that determines neutron production rates and spectra from (alpha,n) reactions, spontaneous fission, and delayed neutron emission due to radionuclide decay. In this release the three-region problem was modified to correct several bugs, and new documentation was added to the package. Details are available in the included LA-UR-02-1617 (2002) report. The code is capable of calculating (alpha,n) source rates and spectra in four types of problems: homogeneous media (i.e., an intimate mixture of alpha-emitting source material and low-Z target material), two-region interface problems (i.e., a slab of alpha-emitting source material in contact with a slab of low-Z target material), three-region interface problems (i.e., a thin slab of low-Z target material sandwiched between alpha-emitting source material and low-Z target material), and (alpha,n) reactions induced by a monoenergetic beam of alpha-particles incident on a slab of target material. The process of creating a SOURCES input file (tape1) is streamlined with the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) Version 1. Intended to supplement the SOURCES manual, LASTCALL is a simple graphical user interface designed to minimize common errors made during input. The optional application, LASTCALL, consists of a single dialog window launched from an executable (lastcall.exe) on Windows-based personal computers.

  9. OpenAD/F : a modular, open-source tool for automatic differentiation of Fortran codes.

    SciTech Connect

    Utke, J.; Naumann, U.; Fagan, M.; Tallent, N.; Strout, M.; Heimbach, P.; Hill, C.; Wunsch, C.; Mathematics and Computer Science; Rheinisch Westfalische Technische Hochschule Aachen; Rice Univ.; Colorado State Univ.; MIT

    2008-01-01

    The OpenAD/F tool allows the evaluation of derivatives of functions defined by a Fortran program. The derivative evaluation is performed by a Fortran code resulting from the analysis and transformation of the original program that defines the function of interest. OpenAD/F has been designed with a particular emphasis on modularity, flexibility, and the use of open source components. While the code transformation follows the basic principles of automatic differentiation, the tool implements new algorithmic approaches at various levels, for example, for basic block preaccumulation and call graph reversal. Unlike most other automatic differentiation tools, OpenAD/F uses components provided by the OpenAD framework, which supports a comparatively easy extension of the code transformations in a language-independent fashion. It uses code analysis results implemented in the OpenAnalysis component. The interface to the language-independent transformation engine is an XML-based format, specified through an XML schema. The implemented transformation algorithms allow efficient derivative computations utilizing locally optimized cross-country sequences of vertex, edge, and face elimination steps. Specifically, for the generation of adjoint codes, OpenAD/F supports various code reversal schemes with hierarchical checkpointing at the subroutine level. As an example from geophysical fluid dynamics a nonlinear time-dependent scalable, yet simple, barotropic ocean model is considered. OpenAD/F's reverse mode is applied to compute sensitivities of some of the model's transport properties with respect to gridded fields such as bottom topography as independent (control) variables.
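
    OpenAD/F works by source transformation of Fortran, which cannot be condensed here; the short operator-overloading Python sketch below illustrates only the underlying reverse-mode (adjoint) idea: record local partial derivatives on a graph during evaluation, then accumulate adjoints in reverse topological order.

      class Var:
          """Minimal reverse-mode AD node: a value plus (parent, partial) edges."""
          def __init__(self, value, parents=()):
              self.value, self.parents, self.grad = value, list(parents), 0.0
          def __add__(self, other):
              return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
          def __mul__(self, other):
              return Var(self.value * other.value,
                         [(self, other.value), (other, self.value)])

      def backprop(out):
          """Accumulate d(out)/d(node) into .grad, in reverse topological order."""
          order, seen = [], set()
          def visit(v):
              if id(v) not in seen:
                  seen.add(id(v))
                  for parent, _ in v.parents:
                      visit(parent)
                  order.append(v)
          visit(out)
          out.grad = 1.0
          for v in reversed(order):
              for parent, partial in v.parents:
                  parent.grad += partial * v.grad

      x, y = Var(3.0), Var(2.0)
      f = x * y + x          # f = x*y + x
      backprop(f)
      print(x.grad, y.grad)  # df/dx = y + 1 = 3.0, df/dy = x = 3.0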

  10. Interpreting observations of molecular outflow sources: the MHD shock code mhd_vode

    NASA Astrophysics Data System (ADS)

    Flower, D. R.; Pineau des Forêts, G.

    2015-06-01

    The planar MHD shock code mhd_vode has been developed in order to simulate both continuous (C) type shock waves and jump (J) type shock waves in the interstellar medium. The physical and chemical state of the gas in steady-state may also be computed and used as input to a shock wave model. The code is written principally in FORTRAN 90, although some routines remain in FORTRAN 77. The documented program and its input data are described and provided as supplementary material, and the results of exemplary test runs are presented. Our intention is to enable the interested user to run the code for any sensible parameter set and to comprehend the results. With applications to molecular outflow sources in mind, we have computed, and are making available as supplementary material, integrated atomic and molecular line intensities for grids of C- and J-type models; these computations are summarized in the Appendices. Appendix tables, a copy of the current version of the code, and of the two model grids are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A63

  11. Sources of financial pressure and up coding behavior in French public hospitals.

    PubMed

    Georgescu, Irène; Hartmann, Frank G H

    2013-05-01

    Drawing upon role theory and the literature concerning unintended consequences of financial pressure, this study investigates the effects of health care decision pressure from the hospital's administration and from the professional peer group on physicians' inclination to engage in up coding. We explore two kinds of up coding, information-related and action-related, and develop hypotheses that connect these kinds of data manipulation to the sources of pressure via the intermediate effect of role conflict. Qualitative data from initial interviews with physicians and subsequent questionnaire evidence from 578 physicians in 14 French hospitals suggest that the source of pressure is a relevant predictor of physicians' inclination to engage in data manipulation. We further find that this effect is partly explained by the extent to which these pressures create role conflict. Given the concern about up coding in treatment-based reimbursement systems worldwide, our analysis adds to understanding how the design of the hospital's management control system may enhance this undesired type of behavior. PMID:23477807

  12. User's Manual for the SOURCE1 and SOURCE2 Computer Codes: Models for Evaluating Low-Level Radioactive Waste Disposal Facility Source Terms (Version 2.0)

    SciTech Connect

    Icenhour, A.S.; Tharp, M.L.

    1996-08-01

    The SOURCE1 and SOURCE2 computer codes calculate source terms (i.e. radionuclide release rates) for performance assessments of low-level radioactive waste (LLW) disposal facilities. SOURCE1 is used to simulate radionuclide releases from tumulus-type facilities. SOURCE2 is used to simulate releases from silo-, well-, well-in-silo-, and trench-type disposal facilities. The SOURCE codes (a) simulate the degradation of engineered barriers and (b) provide an estimate of the source term for LLW disposal facilities. This manual summarizes the major changes that have been effected since the codes were originally developed.

  13. Probabilistic positional association of catalogs of astrophysical sources: the Aspects code

    NASA Astrophysics Data System (ADS)

    Fioc, Michel

    2014-06-01

    We describe a probabilistic method of cross-identifying astrophysical sources in two catalogs from their positions and positional uncertainties. The probability that an object is associated with a source from the other catalog, or that it has no counterpart, is derived under two exclusive assumptions: first, the classical case of several-to-one associations, and then the more realistic but more difficult problem of one-to-one associations. In either case, the likelihood of observing the objects in the two catalogs at their effective positions is computed and a maximum likelihood estimator of the fraction of sources with a counterpart - a quantity needed to compute the probabilities of association - is built. When the positional uncertainty in one or both catalogs is unknown, this method may be used to estimate its typical value and even to study its dependence on the size of objects. It may also be applied when the true centers of a source and of its counterpart at another wavelength do not coincide. To compute the likelihood and association probabilities under the different assumptions, we developed a Fortran 95 code called Aspects ([aspɛ], "Association positionnelle/probabiliste de catalogues de sources" in French); its source files are made freely available. To test Aspects, all-sky mock catalogs containing up to 105 objects were created, forcing either several-to-one or one-to-one associations. The analysis of these simulations confirms that, in both cases, the assumption with the highest likelihood is the right one and that estimators of unknown parameters built for the appropriate association model are reliable. Available at http://www2.iap.fr/users/fioc/Aspects/. The Aspects code is also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A8
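
    The flavor of such positional likelihoods can be conveyed with a single-pair toy. The Python sketch below compares a Gaussian positional match against a chance alignment for one candidate counterpart; it is a flat-sky, several-to-one caricature with hypothetical numbers, not the Aspects likelihood, which treats the several-to-one and one-to-one cases jointly.

      import numpy as np

      def association_probability(d, sigma1, sigma2, f, density):
          """Probability that a pair at separation d is a true association:
          fraction f of objects have counterparts (Gaussian positional errors);
          the alternative is a chance alignment with local surface density
          `density`. All quantities in consistent units (e.g. arcsec)."""
          s2 = sigma1 ** 2 + sigma2 ** 2
          match = f * np.exp(-d ** 2 / (2.0 * s2)) / (2.0 * np.pi * s2)
          chance = (1.0 - f) * density
          return match / (match + chance)

      print(association_probability(d=0.5, sigma1=0.3, sigma2=0.4,
                                    f=0.9, density=1e-3))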

  14. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  15. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
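
    In discrete form, the model amounts to adding the jet's mass flow and momentum to the governing equations in the cells containing the orifice. The fragment below is a hypothetical finite-volume bookkeeping sketch; the names and the uniform split across cells are assumptions for illustration, not OVERFLOW's implementation.

      import numpy as np

      def add_jet_source(res_mass, res_mom, cells, mdot, v_jet, vol):
          """Deposit a steady micro jet's mass flow (mdot) and momentum flux
          (mdot * v_jet) into the residuals of the orifice cells, per unit volume.
          res_mass: (n_cells,), res_mom: (n_cells, 3), vol: cell volumes."""
          share = mdot / len(cells)  # split the jet evenly across its cells
          for c in cells:
              res_mass[c] += share / vol[c]
              res_mom[c] += share * np.asarray(v_jet) / vol[c]
          return res_mass, res_mom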

  16. Documentation generator for VHDL and MatLab source codes for photonic and electronic systems

    NASA Astrophysics Data System (ADS)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-06-01

    The UML, which is a complex system modeling and description technology, has recently been expanding its uses in the field of formalization and algorithmic description of systems such as multiprocessor photonic, optoelectronic and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes a new concept of software dedicated to documenting source code written in VHDL and MatLab. The work starts with an analysis of available documentation generators for both programming languages, with an emphasis on open-source solutions. We then present our own solutions, which are based on the Doxygen program, available under a free license with its source code. Supporting tools for parser building, such as Bison and Flex, were used. The documentation generator application is used in the design of large optoelectronic and electronic measurement and control systems. The paper consists of three parts, which describe the following components of the documentation generator for photonic and electronic systems: the concept, the MatLab application, and the VHDL application. This is part one, which describes the system concept. Part two describes the MatLab application; MatLab is used for description of the measured phenomena. Part three describes the VHDL application; VHDL is used for behavioral description of the optoelectronic system. The proposed approach and applications document large, complex software configurations for large systems.

  17. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    NASA Astrophysics Data System (ADS)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-09-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimisation of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the SLACS lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.

  18. Source Listings for Computer Code SPIRALI Incompressible, Turbulent Spiral Grooved Cylindrical and Face Seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.; Shapiro, Wibur

    2005-01-01

    This is the source listing of the computer code SPIRALI which predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures.

  19. Compressed X-ray phase-contrast imaging using a coded source

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Xu, Ling; Nagarkar, Vivek; Gupta, Rajiv

    2014-12-01

    X-ray phase-contrast imaging (XPCI) holds great promise for medical X-ray imaging with high soft-tissue contrast. Obviating optical elements in the imaging chain, propagation-based XPCI (PB-XPCI) has definite advantages over other XPCI techniques in terms of cost, alignment and scalability. However, it imposes strict requirements on the spatial coherence of the source and the resolution of the detector. In this study, we demonstrate that using a coded X-ray source and sparsity-based reconstruction, we can significantly relax these requirements. Using numerical simulation, we assess the feasibility of our approach and study the effect of system parameters on the reconstructed image. The results are demonstrated with images obtained using a bench-top micro-focus XPCI system.

  20. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  1. Open Source Physics: Code and Curriculum Material for Teachers, Authors, and Developers

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2004-03-01

    The continued use of procedural languages in education is due in part to the lack of up-to-date curricular materials that combine science topics with an object-oriented programming framework. Although there are many resources for teaching computational physics, few are object-oriented. What is needed by the broader science education community is not another computational physics, numerical analysis, or Java programming book (although such books are essential for discipline-specific practitioners), but a synthesis of curriculum development, computational physics, computer science, and physics education that will be useful for scientists and students wishing to write their own simulations and develop their own curricular material. The Open Source Physics (OSP) project was established to meet this need. OSP is an NSF-funded curriculum development project that is developing and distributing a code library, programs, and examples of computer-based interactive curricular material. In this talk, we will describe this library, demonstrate its use, and report on its adoption by curriculum authors. The Open Source Physics code library, documentation, and sample curricular material can be downloaded from http://www.opensourcephysics.org/. Partial funding for this work was obtained through NSF grant DUE-0126439.

  2. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    SciTech Connect

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are used by these codes together with the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references.
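
    The matrix-operator idea is compact enough to sketch directly. The Python fragment below solves the decay-chain equations dN/dt = M N for a hypothetical three-member chain with a matrix exponential; the chain, decay constants, and initial inventory are illustrative and unrelated to PRIMUS's actual data formats.

      import numpy as np
      from scipy.linalg import expm

      # Hypothetical chain A -> B -> C (C stable); decay constants in 1/s.
      lam = np.array([1e-3, 5e-4])
      M = np.array([[-lam[0],     0.0, 0.0],
                    [ lam[0], -lam[1], 0.0],
                    [    0.0,  lam[1], 0.0]])
      N0 = np.array([1.0e20, 0.0, 0.0])  # initial atoms: parent only
      t = 3600.0                         # one hour
      print(expm(M * t) @ N0)            # atoms of A, B, C after time t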

  3. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398
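
    The core trick, lossy coding of the operator rather than the image, can be sketched in a few lines. The Python toy below expresses a hypothetical smooth space-varying 1-D blur in an orthonormal DCT basis, discards all but the largest transform-domain entries, and applies the operator as three cheap products; the kernel, basis choice, and kept fraction are illustrative stand-ins for the paper's wavelet-based design.

      import numpy as np
      from scipy.fft import dct

      def code_operator(T, W, keep=0.05):
          """Toy 'matrix source coding': represent T in basis W, then threshold,
          so y = T @ x is approximated by W.T @ (S @ (W @ x)) with S sparse
          (S is kept as a dense array here for simplicity)."""
          S = W @ T @ W.T
          thresh = np.quantile(np.abs(S), 1.0 - keep)
          S[np.abs(S) < thresh] = 0.0
          return S

      n = 256
      i = np.arange(n)
      sigma = 2.0 + 6.0 * i / n                 # slowly varying blur width
      T = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2.0 * sigma[:, None] ** 2))
      T /= T.sum(axis=1, keepdims=True)         # hypothetical space-varying blur
      W = dct(np.eye(n), axis=0, norm="ortho")  # orthonormal DCT-II basis
      S = code_operator(T, W)
      x = np.random.default_rng(1).normal(size=n)
      err = np.linalg.norm(T @ x - W.T @ (S @ (W @ x))) / np.linalg.norm(T @ x)
      print(err)  # should be small: smooth kernels compress well in this basis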

  4. Effectiveness Evaluation of Skin Covers against Intravascular Brachytherapy Sources Using VARSKIN3 Code

    PubMed Central

    Baghani, H R; Nazempour, A R; Aghamiri, S M R; Hosseini Daghigh, S M; Mowlavi, A A

    2013-01-01

    Background and Objective: The most common intravascular brachytherapy sources include 32P, 188Re, 106Rh and 90Sr/90Y. In this research, the skin absorbed dose for different covering materials used with these sources was evaluated, and the best covering material for protecting the skin and reducing the dose absorbed by radiation staff was identified and recommended. Method: Four materials, including polyethylene, cotton and two different kinds of plastic, were proposed as skin covers, and the skin absorbed dose at different depths for each material was calculated separately using the VARSKIN3 code. Results: The results suggested that for all sources, the skin absorbed dose was minimized when using polyethylene. With this material as the skin cover, the maximum and minimum doses at the skin surface were associated with 90Sr/90Y and 106Rh, respectively. Conclusion: Polyethylene was found to be the most effective cover for reducing skin dose and protecting the skin. Furthermore, the good agreement between the results of VARSKIN3 and other experimental measurements indicates that VARSKIN3 is a powerful tool for skin dose calculations when working with beta-emitter sources. Therefore, it can be utilized in dealing with the issue of radiation protection. PMID:25505758

  5. Model documentation: Electricity Market Module, Load and Demand-Side Management submodule. Volume 2, Model code listing

    SciTech Connect

    Not Available

    1994-04-07

    Volume II of the documentation contains the actual source code of the LDSM submodule, and the cross reference table of its variables. The code is divided into two parts. The first part contains the main part of the source code. The second part lists the INCLUDE files referenced inside the main part of the code.

  6. Multiple-source models for electron beams of a medical linear accelerator using BEAMDP computer code

    PubMed Central

    Jabbari, Nasrollah; Barati, Amir Hoshang; Rahmatnezhad, Leili

    2012-01-01

    Aim The aim of this work was to develop multiple-source models for electron beams of the NEPTUN 10PC medical linear accelerator using the BEAMDP computer code. Background One of the most accurate techniques of radiotherapy dose calculation is the Monte Carlo (MC) simulation of radiation transport, which requires detailed information of the beam in the form of a phase-space file. The computing time required to simulate the beam data and obtain phase-space files from a clinical accelerator is significant. Calculation of dose distributions using multiple-source models is an alternative to using phase-space data as direct input to the dose calculation system. Materials and methods Monte Carlo simulation of the accelerator head was performed, in which a record was kept of the particle phase space with the details of each particle's history. Multiple-source models were built from the phase-space files of the Monte Carlo simulations. These simplified beam models were used to generate Monte Carlo dose calculations and to compare those calculations with phase-space data for electron beams. Results Comparison of the measured and calculated dose distributions using the phase-space files and multiple-source models for three electron beam energies showed that the measured and calculated values match each other well throughout the curves. Conclusion It was found that dose distributions calculated using both the multiple-source models and the phase-space data agree within 1.3%, demonstrating that the models can be used for dosimetry research purposes and dose calculations in radiotherapy. PMID:24377026

  7. Coded apertures allow high-energy x-ray phase contrast imaging with laboratory sources

    NASA Astrophysics Data System (ADS)

    Ignatyev, K.; Munro, P. R. T.; Chana, D.; Speller, R. D.; Olivo, A.

    2011-07-01

    This work analyzes the performance of the coded-aperture based x-ray phase contrast imaging approach, showing that it can be used at high x-ray energies with acceptable exposure times. Due to limitations of the source used, we show images acquired at tube voltages of up to 100 kVp; however, there is no intrinsic reason why the method could not be extended to even higher energies. In particular, we show quantitative agreement between the contrast extracted from the experimental x-ray images and the theoretical one, determined by the behavior of the material's refractive index as a function of energy. This proves that all energies in the spectrum used contribute to the image formation, and also that there are no additional factors affecting image contrast as the x-ray energy is increased. We also discuss the method's flexibility by displaying and analyzing the first set of images obtained while varying the relative displacement between coded-aperture sets, which leads to image variations to some extent similar to those observed when changing the crystal angle in analyzer-based imaging. Finally, we discuss the method's possible advantages in terms of simplification of the set-up, scalability, reduced exposure times, and complete achromaticity. We believe this would be helpful in applications requiring the imaging of highly absorbing samples, e.g., material science and security inspection, and, by way of example, we demonstrate a possible application in the latter.

  8. An Open-Source, Pseudo-Spectral Convection Code for O(10^5) Cores

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.

    2014-12-01

    Spectral algorithms are a popular choice for modeling systems of turbulent, incompressible flow, due in part to their inherent numerical accuracy and also, as in the case of the sphere, geometrical considerations. These advantages must be weighed against the high cost of communication, however, as any time step taken by a spectral method will typically require multiple, global reorganizations (i.e. transposes) of the distributed flow fields and thermal variables. As more processors are employed in the solution of a particular problem, the total computation time decreases, but the number of inter-processor messages initiated increases. It is this property of spectral algorithms that ultimately limits their parallel scalability because, for any given problem size, there exists a sufficiently large process count such that the message initiation time overwhelms any gains in computation time. I will discuss the parallelization of a community-sourced spectral code that has been designed to mitigate this problem by minimizing the number of messages initiated within a single time step. The resulting algorithm possesses efficient strong scalability for problems both small (512^3 grid points, 16,000 cores) and large (2048^3 grid points, 130,000 cores). This code, named Rayleigh, has been designed with the study of planetary and stellar dynamos in mind, and can efficiently simulate anelastic MHD convection within both spherical and Cartesian geometries. Rayleigh is being developed through the Computational Infrastructure for Geodynamics (UC Davis), and will be made publicly available in winter of 2015.
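
    This communication-versus-computation trade-off can be made concrete with a toy strong-scaling model (all constants illustrative): per-step computation shrinks like 1/P while an all-to-all transpose initiates on the order of P messages per process, so total time is minimized at a finite core count:

        # T(P) = W/P + a*P, where W is the per-step work and a the message start-up cost.
        W, a = 1.0e4, 1.0e-3                      # illustrative units
        times = {P: W / P + a * P for P in (2 ** k for k in range(4, 20))}
        best = min(times, key=times.get)          # analytic optimum is P* = sqrt(W/a)
        print(best, round(times[best], 2))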

  9. Using the EGS4 computer code to determine radiation sources along beam lines at electron accelerators

    SciTech Connect

    Mao, S.; Liu, J.; Nelson, W.R.

    1992-01-01

    The EGS computer code, developed for the Monte Carlo simulation of the transport of electrons and photons, has been used since 1970 in the design of accelerators and detectors for high-energy physics. In this paper we present three examples demonstrating how the current version, EGS4, is used to determine energy-loss patterns and source terms along beam pipes (including flanges, collimators, etc.). This information is useful for further shielding and dosimetry studies. The calculated results from the analysis are in close agreement with the measured values. To facilitate this review, a new add-on package called SHOWGRAF is used to display shower trajectories for the three examples.

  10. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    NASA Astrophysics Data System (ADS)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the source code of RIES revealed a significant lack of security awareness among the programmers which, among other things, appears to have left RIES vulnerable to near-trivial attacks. Had it not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was studied more extensively for cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much sooner than right before an election.

  11. Precise hypocentral distribution in the source area of the 1964 Niigata earthquake based on an actual crustal model recorded by Ocean Bottom Cabled Seismometers in Japan Sea

    NASA Astrophysics Data System (ADS)

    Machida, Y.; Shimbo, T.; Shinohara, M.; Mochizuki, K.; Yamada, T.; Kanazawa, T.

    2012-12-01

    At the eastern margin of the Japan Sea, large earthquakes (e.g., the 1964 Niigata earthquake, the 1983 Japan Sea earthquake, the 2004 Chuetsu earthquake and the 2007 Chuetsu-oki earthquake) have occurred along the Niigata-Kobe Tectonic Zone (NKTZ). The NKTZ is recognized as a region of large strain rate along the Japan Sea coast and in the northern Chubu and Kinki districts. Among these events, the 2004 Chuetsu earthquake and the 2007 Chuetsu-oki earthquake were triggered by reactivation of pre-existing faults within ancient rift systems by stress loading through ductile creeping of the weak lower crust (Kato et al., 2008). Because the tectonic zone is thought to extend into the offshore region, it is difficult to understand the precise activity of the tectonic zone from land-based observations alone. In order to understand precise seismic activities in the NKTZ, especially offshore, we installed Ocean Bottom Cabled Seismometers (OBCSs) in the source region of the 1964 Niigata earthquake in 2010 (Shinohara et al., 2010). The OBCS system has a length of 25 km, and 4 OBCSs were deployed at 5 km intervals. The OBCSs have three accelerometers as seismic sensors. We estimated hypocenters using a location program that finds a maximum likelihood solution using a Bayesian approach (Hirata and Matsu'ura, 1987). We used a simple one-dimensional Vp structure, and we assumed a Vp/Vs ratio of 1.73. In general, seismic waves recorded by OBCSs arrive later than those estimated from the average structural model due to unconsolidated sediments just below the sea floor. Therefore, the delay of arrival times caused by the sedimentary layer should be taken into account in the location. In 2011, a seismic survey using an airgun and the OBCSs was carried out to obtain a seismic velocity model. We obtained station corrections of the P- and S-arrivals for each station using the differences between travel times estimated from the assumed model and the actual model. This procedure helps us to obtain precise hypocenter locations.
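
    The station-correction step can be sketched as follows (a minimal illustration, not the authors' software; the travel times are invented): each station's correction is the mean difference between travel times predicted by the surveyed velocity model and by the assumed 1-D model:

        import numpy as np

        # Travel times (s) for three calibration shots, per station.
        t_actual = {"OBCS1": np.array([4.82, 5.10, 6.03]),     # from the airgun-derived model
                    "OBCS2": np.array([4.95, 5.31, 6.20])}
        t_assumed = {"OBCS1": np.array([4.60, 4.88, 5.79]),    # from the 1-D Vp model
                     "OBCS2": np.array([4.71, 5.02, 5.95])}

        corrections = {sta: float(np.mean(t_actual[sta] - t_assumed[sta]))
                       for sta in t_actual}
        print(corrections)   # added to predicted arrival times during relocation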

  12. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    PubMed

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PMID:27074460
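
    For context, the quantities compared across these brachytherapy records are tied together by the standard TG-43U1 two-dimensional dose-rate equation (quoted here as general background, not from the record itself):

        \dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
        \qquad
        G_L(r,\theta) = \frac{\beta}{L\, r \sin\theta}

    where S_K is the air-kerma strength, \Lambda the dose-rate constant, G_L the line-source geometry function (with active length L and subtended angle \beta), g_L(r) the radial dose function, F(r,\theta) the 2D anisotropy function, and (r_0, \theta_0) = (1 cm, 90°) the reference point.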

  13. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).

  14. Source size and temporal coherence requirements of coded aperture type x-ray phase contrast imaging systems.

    PubMed

    Munro, Peter R T; Ignatyev, Konstantin; Speller, Robert D; Olivo, Alessandro

    2010-09-13

    There is currently much interest in developing X-ray Phase Contrast Imaging (XPCI) systems which employ laboratory sources in order to deploy the technique in real world applications. The challenge faced by nearly all XPCI techniques is that of efficiently utilising the x-ray flux emitted by an x-ray tube which is polychromatic and possesses only partial spatial coherence. Techniques have, however, been developed which overcome these limitations. Such a technique, known as coded aperture XPCI, has been under development in our laboratories in recent years for application principally in medical imaging and security screening. In this paper we derive limitations imposed upon source polychromaticity and spatial extent by the coded aperture system. We also show that although other grating XPCI techniques employ a different physical principle, they satisfy design constraints similar to those of the coded aperture XPCI. PMID:20940863

  15. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    PubMed

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes, generated by spreading coherent data pulses in time, can result from multiple reflections in the interferometers that superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of the complex amplitudes of the combined chips when the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes. PMID:17230236

  16. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    SciTech Connect

    Schwinkendorf, K.N.

    1996-04-15

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2.

  17. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 2; Scattering Plots

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This second volume of Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code provides the scattering plots referenced by Volume 1. There are 648 plots. Half are for the 8750 rpm "high speed" operating condition and the other half are for the 7031 rpm "mid speed" operating condition.

  18. Toward quantifying the source term for predicting global climatic effects of nuclear war: applications of urban fire codes

    SciTech Connect

    Reitter, T.A.; Kang, S.W.; Takata, A.N.

    1985-06-15

    Calculating urban-area fire development is critical to estimating global smoke production effects due to nuclear warfare. To improve calculations of fire starts and spread in urban areas, we performed a parameter-sensitivity analysis using three codes from IIT Research Institute. We applied improved versions of the codes to two urban areas: an infinite "uniform city" with only one type of building and the "San Jose urban area" as of the late 1960s. We varied parameters and compared affected fuel consumption and areas with a baseline case. The dominant parameters for the uniform city were wind speed, atmospheric visibility, frequency of secondary fire starts, building density, and window sizes. For San Jose (1968), they were wind speed, building densities, location of ground zero (GZ), height of burst (HOB), window sizes, and brand range. Because some results are very sensitive to actual fuel-distribution characteristics and the attack scenario, it is not possible to use a uniform city to represent actual urban areas. This was confirmed by a few calculations for the Detroit area as of the late 1960s. Many improvements are needed, such as inclusion of fire-induced winds and debris fires, before results can be extrapolated to the global scale.

  19. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    SciTech Connect

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code accommodates ground-level or elevated point, area, and windblown sources. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
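
    The core of such a model is compact. A minimal sketch of a straight-line Gaussian plume with ground reflection follows (the power-law dispersion coefficients are generic stand-ins for stability-class fits, not ANEMOS's parameterizations, and deposition, depletion, and decay are omitted):

        import numpy as np

        def plume_conc(Q, u, x, y, z, H):
            """Concentration at (x, y, z) from a point source of strength Q (units/s)."""
            sigma_y = 0.08 * x ** 0.92        # illustrative lateral dispersion fit
            sigma_z = 0.06 * x ** 0.91        # illustrative vertical dispersion fit
            lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
            vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
                        np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))  # image term = ground reflection
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level receptor 500 m downwind of a 30 m stack in a 3 m/s wind:
        print(plume_conc(Q=1.0, u=3.0, x=500.0, y=20.0, z=1.5, H=30.0))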

  20. Fan Noise Prediction System Development: Source/Radiation Field Coupling and Workstation Conversion for the Acoustic Radiation Code

    NASA Technical Reports Server (NTRS)

    Meyer, H. D.

    1993-01-01

    The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding the use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models, one that recognizes waves crossing the interface in both directions, has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.

  1. Dosimetry characterization of 32P intravascular brachytherapy source wires using Monte Carlo codes PENELOPE and GEANT4.

    PubMed

    Torres, Javier; Buades, Manuel J; Almansa, Julio F; Guerrero, Rafael; Lallena, Antonio M

    2004-02-01

    Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric parameters of the new 20 mm long catheter-based 32P beta source manufactured by the Guidant Corporation. The dose distribution along the transverse axis and the two-dimensional dose rate table have been calculated. Also, the dose rate at the reference point, the radial dose function, and the anisotropy function were evaluated according to the adapted TG-60 formalism for cylindrical sources. PENELOPE and GEANT4 codes were first verified against previous results corresponding to the old 27 mm Guidant 32P beta source. The dose rate at the reference point for the unsheathed 27 mm source in water was calculated to be 0.215 +/- 0.001 cGy s(-1) mCi(-1), for PENELOPE, and 0.2312 +/- 0.0008 cGy s(-1) mCi(-1), for GEANT4. For the unsheathed 20 mm source, these values were 0.2908 +/- 0.0009 cGy s(-1) mCi(-1) and 0.311 +/- 0.001 cGy s(-1) mCi(-1), respectively. Also, a comparison with the limited data available on this new source is shown. We found non-negligible differences between the results obtained with PENELOPE and GEANT4. PMID:15000615

  2. Production version of the extended NASA-Langley vortex lattice FORTRAN computer program. Volume 2: Source code

    NASA Technical Reports Server (NTRS)

    Herbert, H. E.; Lamar, J. E.

    1982-01-01

    The source code for the latest production version, MARK IV, of the NASA-Langley Vortex Lattice Computer Program is presented. All viable subcritical aerodynamic features of previous versions were retained. This version extends the previously documented program capabilities to four planforms, 400 panels, and enables the user to obtain vortex-flow aerodynamics on cambered planforms, flowfield properties off the configuration in attached flow, and planform longitudinal load distributions.

  3. General Purpose Kernel Integration Shielding Code System-Point and Extended Gamma-Ray Sources.

    1981-06-11

    PELSHIE3 calculates dose rates from gamma-emitting sources with different source geometries and shielding configurations. Eight source geometries are provided and are called by means of geometry index numbers. Gamma-emission characteristics for 134 isotopes, attenuation coefficients for 57 elements or shielding materials and Berger build-up parameters for 17 shielding materials can be obtained from a direct access data library by specifying only the appropriate library numbers. A different option allows these data to be read from cards. For extended sources, constant source strengths as well as exponential and Bessel function source strength distributions are allowed in most cases.

  4. Investigation of Coded Source Neutron Imaging at the North Carolina State University PULSTAR Reactor

    SciTech Connect

    Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William

    2010-10-01

    A neutron imaging facility is located on beam-tube #5 of the 1-MWth PULSTAR reactor at the North Carolina State University. An investigation has been initiated to explore the application of coded imaging techniques at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image recorded by the detector, corresponding decoding patterns are used. The optimized design of coded masks is critical for the performance of this technique and will depend on the characteristics of the imaging beam. In this work, Monte Carlo (MCNP) simulations were utilized to explore the needed modifications to the PULSTAR thermal neutron beam to support coded imaging techniques. In addition, an assessment of coded mask design has been performed. The simulations indicated that a 12 inch single crystal sapphire filter is suited for such an application at the PULSTAR beam in terms of maximizing flux with a good neutron-to-gamma ratio. Computational simulations demonstrate the feasibility of correlation reconstruction methods for neutron transmission imaging. A gadolinium aperture with a thickness of 500 μm was used to construct the mask using a 38 × 34 URA pattern. A test experiment using such a URA design has been conducted and the point spread function of the system has been measured.
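
    The correlation reconstruction can be illustrated in a few lines (a generic sketch: a random binary mask stands in for the 38 × 34 URA, whose quadratic-residue construction gives a much sharper correlation peak than a random pattern):

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(1)
        mask = (rng.random((38, 34)) < 0.5).astype(float)    # stand-in aperture pattern
        decoder = 2.0 * mask - 1.0                           # balanced decoding array

        obj = np.zeros((64, 64)); obj[30:34, 28:36] = 1.0    # toy extended neutron source
        detector = fftconvolve(obj, mask, mode="same")       # each hole casts a shifted copy
        recon = fftconvolve(detector, decoder[::-1, ::-1], mode="same")  # correlation decoding
        print(recon.max())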

  5. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    NASA Astrophysics Data System (ADS)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
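
    In outline, the byte-level approach reduces to a few steps (a schematic sketch; the n-gram length, profile size, and exact similarity measure in the paper may differ, and the sample programs are invented):

        from collections import Counter

        def profile(source_bytes, n=6, L=2000):
            """Top-L most frequent byte n-grams of a program (its simplified profile)."""
            grams = Counter(source_bytes[i:i + n]
                            for i in range(len(source_bytes) - n + 1))
            return {g for g, _ in grams.most_common(L)}

        def similarity(p1, p2):
            """Normalized size of the profile intersection."""
            return len(p1 & p2) / min(len(p1), len(p2))

        a1 = b"for (int i = 0; i < n; i++) { total += values[i]; }"
        b1 = b"int idx=0; while(idx<n){ total=total+values[idx]; idx++; }"
        disputed = b"for (int j = 0; j < n; j++) { total += values[j]; }"

        known = {"authorA": profile(a1), "authorB": profile(b1)}
        query = profile(disputed)
        print(max(known, key=lambda a: similarity(known[a], query)))  # -> authorA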

  6. Transparent ICD and DRG Coding Using Information Technology: Linking and Associating Information Sources with the eXtensible Markup Language

    PubMed Central

    Hoelzer, Simon; Schweiger, Ralf K.; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or “semantically associated” parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813

  7. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813
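
    As a schematic illustration of the document-oriented approach described in these two records (the element and attribute names below are hypothetical, not the authors' schema), a hierarchical classification fragment can be traversed with standard XML tooling:

        import xml.etree.ElementTree as ET

        # Hypothetical fragment: a chapter -> block -> category hierarchy with codes.
        doc = """<chapter code="IX" title="Diseases of the circulatory system">
                   <block code="I20-I25" title="Ischaemic heart diseases">
                     <category code="I21" title="Acute myocardial infarction"/>
                   </block>
                 </chapter>"""

        root = ET.fromstring(doc)
        for node in root.iter():                 # walk the hierarchy in document order
            print(node.tag, node.get("code"), node.get("title"))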

  8. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    USGS Publications Warehouse

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  9. The Impact of Causality on Information-Theoretic Source and Channel Coding Problems

    ERIC Educational Resources Information Center

    Palaiyanur, Harikrishna R.

    2011-01-01

    This thesis studies several problems in information theory where the notion of causality comes into play. Causality in information theory refers to the timing of when information is available to parties in a coding system. The first part of the thesis studies the error exponent (or reliability function) for several communication problems over…

  10. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  11. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
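
    By linearity, the two orderings described in these records (adjacent-delta of a cross-delta, or cross-delta of two adjacent-deltas) produce the same double-difference set. A minimal sketch, with invented sample values:

        import numpy as np

        band_a = np.array([10, 12, 15, 15, 14])     # e.g., one spectral band
        band_b = np.array([ 9, 11, 13, 14, 14])     # adjacent band (or time-shifted set)

        cross = band_a - band_b                     # cross-delta between the two sources
        dd = np.diff(cross, prepend=0)              # adjacent-delta -> double-difference set

        # Inverse post-decoding: recover band_a losslessly from band_b and dd.
        band_a_rec = np.cumsum(dd) + band_b
        assert np.array_equal(band_a_rec, band_a)   # dd is then entropy-coded for compression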

  12. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    NASA Astrophysics Data System (ADS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to any particular application field, its use could also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements.

  13. An ion-source model for first-order beam dynamic codes

    SciTech Connect

    Fink, C.L.; Curry, B.P.

    1993-08-01

    A model of a plasma ion source has been developed that approximates the system of Poisson and Boltzmann-Vlasov equations normally used to describe ion sources by an external electric field, a collective electric field due to the charge column, and the starting boundary conditions. The equations of this model can be used directly in the Lorentz force equation to calculate trajectories without iteration.
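
    A minimal sketch of how such a source model feeds a trajectory calculation (illustrative fields and constants, with a simple Euler-style push rather than the actual first-order beam-dynamics machinery):

        import numpy as np

        q_m = 9.58e7                  # charge-to-mass ratio (C/kg), e.g. a proton; illustrative

        def E_field(r):
            """External extraction field plus a linear space-charge term (stand-ins)."""
            E_ext = np.array([0.0, 0.0, 1.0e5])        # V/m along the beam axis
            E_sc = 2.0e4 * np.array([r[0], r[1], 0.0])  # defocusing from the charge column
            return E_ext + E_sc

        r = np.array([1e-3, 0.0, 0.0]); v = np.zeros(3); dt = 1e-9
        for _ in range(1000):         # no magnetic field, so F = qE in the Lorentz force
            v += q_m * E_field(r) * dt
            r += v * dt
        print(r, v)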

  14. BIOTC: An open-source CFD code for simulating biomass fast pyrolysis

    NASA Astrophysics Data System (ADS)

    Xiong, Qingang; Aramideh, Soroush; Passalacqua, Alberto; Kong, Song-Charng

    2014-06-01

    The BIOTC code is a computer program that combines a multi-fluid model for multiphase hydrodynamics and global chemical kinetics for chemical reactions to simulate fast pyrolysis of biomass at reactor scale. The object-oriented characteristic of BIOTC makes it easy for researchers to insert their own sub-models, while the user-friendly interface provides users a friendly environment as in commercial software. A laboratory-scale bubbling fluidized bed reactor for biomass fast pyrolysis was simulated using BIOTC to demonstrate its capability.

  15. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  16. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
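
    The stride rule of thumb from these two records is easy to demonstrate, although the measured ratio is machine-dependent, which is precisely the papers' point. In this sketch, mixing row-major and column-major layouts forces large-stride accesses in one operand:

        import numpy as np
        import timeit

        c_major = np.ones((4096, 4096), order="C")   # row-major layout
        f_major = np.ones((4096, 4096), order="F")   # column-major layout

        t_unit = timeit.timeit(lambda: c_major + c_major, number=10)     # unit-stride streams
        t_strided = timeit.timeit(lambda: c_major + f_major, number=10)  # mixed layouts
        print(t_unit, t_strided)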

  17. NONCODE 2016: an informative and valuable data source of long non-coding RNAs

    PubMed Central

    Zhao, Yi; Li, Hui; Fang, Shuangsang; Kang, Yue; Wu, Wei; Hao, Yajing; Li, Ziyang; Bu, Dechao; Sun, Ninghui; Zhang, Michael Q.; Chen, Runsheng

    2016-01-01

    NONCODE (http://www.bioinfo.org/noncode/) is an interactive database that aims to present the most complete collection and annotation of non-coding RNAs, especially long non-coding RNAs (lncRNAs). The recently reduced cost of RNA sequencing has produced an explosion of newly identified data. Revolutionary third-generation sequencing methods have also contributed to more accurate annotations. Accumulative experimental data also provides more comprehensive knowledge of lncRNA functions. In this update, NONCODE has added six new species, bringing the total to 16 species altogether. The lncRNAs in NONCODE have increased from 210,831 to 527,336. For human and mouse, the lncRNA numbers are 167,150 and 130,558, respectively. NONCODE 2016 has also introduced three important new features: (i) conservation annotation; (ii) the relationships between lncRNAs and diseases; and (iii) an interface to choose high-quality datasets through predicted scores, literature support and long-read sequencing method support. NONCODE is also accessible through http://www.noncode.org/. PMID:26586799

  18. Knowledge and potential impact of the WHO Global code of practice on the international recruitment of health personnel: Does it matter for source and destination country stakeholders?

    PubMed

    Bourgeault, Ivy Lynn; Labonté, Ronald; Packer, Corinne; Runnels, Vivien; Tomblin Murphy, Gail

    2016-01-01

    The WHO Global Code of Practice on the International Recruitment of Health Personnel was implemented in May 2010. The present commentary offers some insights into what is known about the Code five years on, as well as its potential impact, drawing from interviews with health care and policy stakeholders from a number of 'source' and 'destination' countries. PMID:27381004

  19. The Self Actualized Reader.

    ERIC Educational Resources Information Center

    Marino, Michael; Moylan, Mary Elizabeth

    A study examined the commonalities that "voracious" readers share, and how their experiences can guide parents, teachers, and librarians in assisting children to become self-actualized readers. Subjects, 25 adults ranging in age from 20 to 67 years, completed a questionnaire concerning their reading histories and habits. Respondents varied in…

  20. Photoplus: auxiliary information for printed images based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Samadani, Ramin; Mukherjee, Debargha

    2008-01-01

    A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.

  1. ACT-ARA: Code System for the Calculation of Changes in Radiological Source Terms with Time

    1988-02-01

    The program calculates the source term activity as a function of time for parent isotopes as well as daughters. Also, at each time, the "probable release" is produced. Finally, the program determines the time integrated probable release for each isotope over the time period of interest.

  2. Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Askri, Boubaker

    2015-10-01

    Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low energy bremsstrahlung photons with beryllium material. A benchmark test showed that a good agreement was achieved when comparing the emitted neutron flux spectra predicted by Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two stage Monte Carlo simulation. In the first stage, the distributions of the seven phase space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10^10 neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10^9 neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.

  3. Validation and verification of RELAP5 for Advanced Neutron Source accident analysis: Part I, comparisons to ANSDM and PRSDYN codes

    SciTech Connect

    Chen, N.C.J.; Ibn-Khayat, M.; March-Leuba, J.A.; Wendel, M.W.

    1993-12-01

    As part of verification and validation, the Advanced Neutron Source reactor RELAP5 system model was benchmarked by the Advanced Neutron Source dynamic model (ANSDM) and PRSDYN models. RELAP5 is a one-dimensional, two-phase transient code, developed by the Idaho National Engineering Laboratory for reactor safety analysis. Both the ANSDM and PRSDYN models use a simplified single-phase equation set to predict transient thermal-hydraulic performance. Brief descriptions of each of the codes, models, and model limitations were included. Even though comparisons were limited to single-phase conditions, a broad spectrum of accidents was benchmarked: a small loss-of-coolant-accident (LOCA), a large LOCA, a station blackout, and a reactivity insertion accident. The overall conclusion is that the three models yield similar results if the input parameters are the same. However, ANSDM does not capture pressure wave propagation through the coolant system. This difference is significant in very rapid pipe break events. Recommendations are provided for further model improvements.

  4. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes.

    PubMed

    Khajeh, Masoud; Safigholi, Habib

    2016-03-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike the radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. Second optimization was done on the selection of the anode shape based on the Monte Carlo in water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes including cylindrical, spherical, and conical were considered. Moreover, by Computational Fluid Dynamic (CFD) code the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  5. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike the radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. Second optimization was done on the selection of the anode shape based on the Monte Carlo in water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes including cylindrical, spherical, and conical were considered. Moreover, by Computational Fluid Dynamic (CFD) code the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  6. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 1; Analysis and Results

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This report provides a study of rotor and stator scattering using the SOURCE3D Rotor Wake/Stator Interaction Code. SOURCE3D is a quasi-three-dimensional computer program that uses three-dimensional acoustics and two-dimensional cascade load response theory to calculate rotor and stator modal reflection and transmission (scattering) coefficients. SOURCE3D is at the core of the TFaNS (Theoretical Fan Noise Design/Prediction System), developed for NASA, which provides complete fully coupled (inlet, rotor, stator, exit) noise solutions for turbofan engines. The reason for studying scattering is that we must first understand the behavior of the individual scattering coefficients provided by SOURCE3D, before eventually understanding the more complicated predictions from TFaNS. To study scattering, we have derived a large number of scattering curves for vane and blade rows. The curves are plots of output wave power divided by input wave power (in dB units) versus vane/blade ratio. Some of these plots are shown in this report. All of the plots are provided in a separate volume. To assist in understanding the plots, formulas have been derived for special vane/blade ratios for which wavefronts are either parallel or normal to rotor or stator chords. From the plots, we have found that, for the most part, there was strong transmission and weak reflection over most of the vane/blade ratio range for the stator. For the rotor, there was little transmission loss.

  7. FORTRAN codes to implement enhanced local wave number technique to determine the depth and location and shape of the causative source using magnetic anomaly

    NASA Astrophysics Data System (ADS)

    Agarwal, B. N. P.; Srivastava, Shalivahan

    2008-12-01

    The total field magnetic anomaly is analyzed to compute the depth, location, and geometry of the causative source using two FORTRAN source codes, viz., FRCON1D and ELW. No assumption about the nature of the source geometry, susceptibility contrast, etc. has been made. The source geometry is estimated by computing the structural index from the previously determined depth and location. A detailed procedure for using these codes is outlined through a theoretical anomaly. The suppression of high-frequency noise in the observed data is tackled by designing a box-car window with cosine termination. The termination criterion is based on the peak position of the derivative operator computed for a pre-assumed depth of a shallow source below which the target is situated. The applicability of these codes has been demonstrated by analyzing a total field aeromagnetic anomaly of the Matheson area of northern Ontario, Canada.

  8. VADER: A flexible, robust, open-source code for simulating viscous thin accretion disks

    NASA Astrophysics Data System (ADS)

    Krumholz, M. R.; Forbes, J. C.

    2015-06-01

    The evolution of thin axisymmetric viscous accretion disks is a classic problem in astrophysics. While models based on this simplified geometry provide only approximations to the true processes of instability-driven mass and angular momentum transport, their simplicity makes them invaluable tools for both semi-analytic modeling and simulations of long-term evolution where two- or three-dimensional calculations are too computationally costly. Despite the utility of these models, the only publicly-available frameworks for simulating them are rather specialized and non-general. Here we describe a highly flexible, general numerical method for simulating viscous thin disks with arbitrary rotation curves, viscosities, boundary conditions, grid spacings, equations of state, and rates of gain or loss of mass (e.g., through winds) and energy (e.g., through radiation). Our method is based on a conservative, finite-volume, second-order accurate discretization of the equations, which we solve using an unconditionally-stable implicit scheme. We implement Anderson acceleration to speed convergence of the scheme, and show that this leads to factor of ∼5 speed gains over non-accelerated methods in realistic problems, though the amount of speedup is highly problem-dependent. We have implemented our method in the new code Viscous Accretion Disk Evolution Resource (VADER), which is freely available for download from
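
    A compact, generic version of the Anderson acceleration step reads as follows (a sketch of the standard fixed-point form, not VADER's implementation):

        import numpy as np

        def anderson_fixed_point(g, x0, m=5, iters=100, tol=1e-12):
            """Accelerate x <- g(x) by mixing the last m residual differences."""
            X = [np.atleast_1d(np.asarray(x0, dtype=float))]
            F = [g(X[0])]
            for _ in range(iters):
                R = [f - x for x, f in zip(X, F)]          # residuals r_i = g(x_i) - x_i
                if len(X) > 1:
                    dR = np.stack([R[i + 1] - R[i] for i in range(len(R) - 1)], axis=1)
                    dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
                    gamma, *_ = np.linalg.lstsq(dR, R[-1], rcond=None)
                    x_new = F[-1] - dF @ gamma             # mixed (accelerated) update
                else:
                    x_new = F[-1]                          # plain Picard step to start
                if np.linalg.norm(x_new - X[-1]) < tol:
                    return x_new
                X.append(x_new); F.append(g(x_new))
                X, F = X[-(m + 1):], F[-(m + 1):]          # keep a window of m differences
            return X[-1]

        print(anderson_fixed_point(np.cos, 1.0))           # converges to ~0.7390851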

  9. Self characterization of a coded aperture array for neutron source imaging.

    PubMed

    Volegov, P L; Danly, C R; Fittinghoff, D N; Guler, N; Merrill, F E; Wilde, C H

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF. PMID:25554292

  10. Sound frequency-invariant neural coding of a frequency-dependent cue to sound source location.

    PubMed

    Jones, Heath G; Brown, Andrew D; Koka, Kanthaiah; Thornton, Jennifer L; Tollin, Daniel J

    2015-07-01

    The century-old duplex theory of sound localization posits that low- and high-frequency sounds are localized with two different acoustical cues, interaural time and level differences (ITDs and ILDs), respectively. While behavioral studies in humans and behavioral and neurophysiological studies in a variety of animal models have largely supported the duplex theory, behavioral sensitivity to ILD is curiously invariant across the audible spectrum. Here we demonstrate that auditory midbrain neurons in the chinchilla (Chinchilla lanigera) also encode ILDs in a frequency-invariant manner, efficiently representing the full range of acoustical ILDs experienced as a joint function of sound source frequency, azimuth, and distance. We further show, using Fisher information, that nominal "low-frequency" and "high-frequency" ILD-sensitive neural populations can discriminate ILD with similar acuity, yielding neural ILD discrimination thresholds for near-midline sources comparable to behavioral discrimination thresholds estimated for chinchillas. These findings thus suggest a revision to the duplex theory and reinforce ecological and efficiency principles that hold that neural systems have evolved to encode the spectrum of biologically relevant sensory signals to which they are naturally exposed. PMID:25972580

  11. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  12. Self characterization of a coded aperture array for neutron source imaging

    SciTech Connect

    Volegov, P. L.; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H.; Fittinghoff, D. N.

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  13. Open source development experience with a computational gas-solids flow code

    SciTech Connect

    Syamlal, M; O'Brien, T. J.; Benyahia, Sofiane; Gel, Aytekin; Pannala, Sreekanth

    2008-01-01

    A case study on the use of open source (OS) software development in chemical engineering research and education is presented here. The multiphase computational fluid dynamics software MFIX is the object of the case study. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow and the dissemination of information to other areas such as geotechnical and volcanology research are demonstrated. It is shown that the advantages of OS development methodology were realized: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; and the facilitation of peer review of the results of computational research.

  14. ISOLA a Fortran code and a Matlab GUI to perform multiple-point source inversion of seismic data

    NASA Astrophysics Data System (ADS)

    Sokos, Efthimios N.; Zahradnik, Jiri

    2008-08-01

    In this paper, a software package for multiple- or single-point source inversion is presented. The package consists of ISOLA-GUI, a user-friendly MATLAB-based interface, and the ISOLA Fortran code, which is the computational core of the application. The methodology used is similar to the iterative deconvolution technique often used in teleseismic studies, adjusted here for regional and local distances. The advantage of the software is the graphical interface, which provides the user with an easy-to-use environment, rich in graphics and data handling routines, while the speed of the Fortran code is retained. The software also allows results to be exported to popular packages, such as Generic Mapping Tools, which are in turn used for quality plots of the results. The modular design of ISOLA-GUI allows users to add supplementary routines at all stages of processing. An example of the method's ability to obtain quick insight into the complexity of an earthquake is presented, using records from a moderate-size event.

  15. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104). [PWR; BWR]

    SciTech Connect

    Kress, T. S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.

  16. Comparison of Orbiter PRCS Plume Flow Fields Using CFD and Modified Source Flow Codes

    NASA Technical Reports Server (NTRS)

    Rochelle, Wm. C.; Kinsey, Robin E.; Reid, Ethan A.; Stuart, Phillip C.; Lumpkin, Forrest E.

    1997-01-01

    The Space Shuttle Orbiter will use Reaction Control System (RCS) jets for docking with the planned International Space Station (ISS). During approach and backout maneuvers, plumes from these jets could cause high pressure, heating, and thermal loads on ISS components. The object of this paper is to present comparisons of RCS plume flow fields used to calculate these ISS environments. Because of the complexities of 3-D plumes with variable scarf-angle and multi-jet combinations, NASA/JSC developed a plume flow-field methodology for all of these Orbiter jets. The RCS Plume Model (RPM), which includes effects of scarfed nozzles and dual jets, was developed as a modified source-flow engineering tool to rapidly generate plume properties and impingement environments on ISS components. This paper presents flow-field properties from four PRCS jets: F3U low scarf-angle single jet, F3F high scarf-angle single jet, DTU zero scarf-angle dual jet, and F1F/F2F high scarf-angle dual jet. The RPM results compared well with plume flow fields using four CFD programs: General Aerodynamic Simulation Program (GASP), Cartesian (CART), Unified Solution Algorithm (USA), and Reacting and Multi-phase Program (RAMP). Good comparisons of predicted pressures are shown with STS 64 Shuttle Plume Impingement Flight Experiment (SPIFEX) data.

  17. An open-source, massively parallel code for non-LTE synthesis and inversion of spectral lines and Zeeman-induced Stokes profiles

    NASA Astrophysics Data System (ADS)

    Socas-Navarro, H.; de la Cruz Rodríguez, J.; Asensio Ramos, A.; Trujillo Bueno, J.; Ruiz Cobo, B.

    2015-05-01

    With the advent of a new generation of solar telescopes and instrumentation, interpreting chromospheric observations (in particular, spectropolarimetry) requires new, suitable diagnostic tools. This paper describes a new code, NICOLE, that has been designed for Stokes non-LTE radiative transfer, for synthesis and inversion of spectral lines and Zeeman-induced polarization profiles, spanning a wide range of atmospheric heights from the photosphere to the chromosphere. The code offers a number of unique features and capabilities and has been built from scratch with a powerful parallelization scheme that makes it suitable for application to massive datasets using large supercomputers. The source code is written entirely in Fortran 90/2003 and complies strictly with the ANSI standards to ensure maximum compatibility and portability. It is being publicly released, with the idea of facilitating future branching by other groups to augment its capabilities. The source code is currently hosted at the following repository: https://github.com/hsocasnavarro/NICOLE

  18. What Actually Happened.

    PubMed

    2016-04-01

    The medical team found the patient to lack medical decisionmaking capacity. However, the team felt that the patient was still able to respond appropriately to some situations. KS had displayed a consistent refusal of all medical treatments that made her uncomfortable or caused pain. During her sister's visits, the patient would be much more receptive to eating. A meeting was planned with the patient's sister in which the ethicist explained that the patient was not able to make her own decisions. The patient's sister agreed that she would honor the patient's wishes but would let the team make any decisions outside of what she knew about the patient's preferences. The patient's sister agreed and was willing to be at the patient's bedside as much as she could to encourage her eating. If the patient's condition worsened, it was discussed that the team honor the patient's wishes and not force a feeding tube on her. The patient's code status was also addressed, and KS's sister felt comfortable in communicating to the team that the patient would not want to be resuscitated if medical treatments would not be able to improve her current quality of life. A natural passing away would be most amenable to the patient. The patient was discharged to her nursing home with a physician order for life-sustaining treatment (POLST) form signed by the sister documenting a do-not-resuscitate code status with comfort-focused treatments. PMID:26957461

  19. Wind Farm Stabilization by using DFIG with Current Controlled Voltage Source Converters Taking Grid Codes into Consideration

    NASA Astrophysics Data System (ADS)

    Okedu, Kenneth Eloghene; Muyeen, S. M.; Takahashi, Rion; Tamura, Junji

    Recent wind farm grid codes require wind generators to ride through voltage sags, which means that normal power production should be re-initiated once the nominal grid voltage is recovered. However, a fixed-speed wind turbine generator system using an induction generator (IG) has a stability problem similar to the step-out phenomenon of a synchronous generator. On the other hand, a doubly fed induction generator (DFIG) can control its real and reactive powers independently while being operated in variable speed mode. This paper proposes a new control strategy using DFIGs for stabilizing a wind farm composed of DFIGs and IGs, without incorporating additional FACTS devices. A new current controlled voltage source converter (CC-VSC) scheme is proposed to control the converters of the DFIG, and the performance is verified by comparing the results with those of a voltage controlled voltage source converter (VC-VSC) scheme. Another salient feature of this study is the reduction of the number of proportional-integral (PI) controllers used in the rotor side converter without degrading dynamic and transient performance. Moreover, the DC-link protection scheme during grid faults can be omitted in the proposed scheme, which reduces the overall cost of the system. Extensive simulation analyses using PSCAD/EMTDC are carried out to clarify the effectiveness of the proposed CC-VSC based control scheme for DFIGs.

  20. Web-MCQ: a set of methods and freely available open source code for administering online multiple choice question assessments.

    PubMed

    Hewson, Claire

    2007-08-01

    E-learning approaches have received increasing attention in recent years. Accordingly, a number of tools have become available to assist the nonexpert computer user in constructing and managing virtual learning environments, and implementing computer-based and/or online procedures to support pedagogy. Both commercial and free packages are now available, with new developments emerging periodically. Commercial products have the advantage of being comprehensive and reliable, but tend to require substantial financial investment and are not always transparent to use. They may also restrict pedagogical choices due to their predetermined ranges of functionality. With these issues in mind, several authors have argued for the pedagogical benefits of developing freely available, open source e-learning resources, which can be shared and further developed within a community of educational practitioners. The present paper supports this objective by presenting a set of methods, along with supporting freely available, downloadable, open source programming code, to allow administration of online multiple choice question assessments to students. PMID:17958158

  1. MARE2DEM: an open-source code for anisotropic inversion of controlled-source electromagnetic and magnetotelluric data using parallel adaptive 2D finite elements (Invited)

    NASA Astrophysics Data System (ADS)

    Key, K.

    2013-12-01

    This work announces the public release of an open-source inversion code named MARE2DEM (Modeling with Adaptively Refined Elements for 2D Electromagnetics). Although initially designed for the rapid inversion of marine electromagnetic data, MARE2DEM now supports a wide variety of acquisition configurations for both offshore and onshore surveys that utilize electric and magnetic dipole transmitters or magnetotelluric plane waves. The model domain is flexibly parameterized using a grid of arbitrarily shaped polygonal regions, allowing complicated structures such as topography or seismically imaged horizons to be easily assimilated. MARE2DEM efficiently solves the forward problem in parallel by dividing the input data parameters into smaller subsets using a parallel data decomposition algorithm. The data subsets are then solved in parallel using an automatic adaptive finite element method that iteratively solves the forward problem on successively refined finite element meshes until a specified accuracy tolerance is met, thus freeing the end user from the burden of designing an accurate numerical modeling grid. Regularized non-linear inversion for isotropic or anisotropic conductivity is accomplished with a new implementation of Occam's method referred to as fast-Occam, which is able to minimize the objective function in far fewer forward evaluations than required by the original method. This presentation will review the theoretical considerations behind MARE2DEM and use a few recent offshore EM data sets to demonstrate its capabilities and to showcase the software interface tools that streamline model building and data inversion.
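    For context, a minimal sketch of one classical Occam iteration (in the spirit of the original method), where each candidate roughness multiplier mu costs a forward evaluation; this is the cost that a fast-Occam scheme reduces. The function names, the toy linear forward problem, and the mu grid are illustrative assumptions, not MARE2DEM's implementation:

    ```python
    import numpy as np

    def occam_step(m, forward, jacobian, d, W, R, mu_grid):
        """One classical Occam iteration: scan the roughness multiplier mu and
        keep the candidate model with the lowest data misfit."""
        J = jacobian(m)
        d_hat = d - forward(m) + J @ m            # linearized data vector
        best = None
        for mu in mu_grid:                        # each mu needs a forward call
            lhs = mu * (R.T @ R) + (W @ J).T @ (W @ J)
            rhs = (W @ J).T @ (W @ d_hat)
            m_try = np.linalg.solve(lhs, rhs)
            misfit = np.linalg.norm(W @ (d - forward(m_try)))
            if best is None or misfit < best[0]:
                best = (misfit, m_try, mu)
        return best

    # Toy linear problem: forward G @ m with a first-difference roughness operator
    G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
    misfit, m_new, mu = occam_step(
        m=np.zeros(2), forward=lambda m: G @ m, jacobian=lambda m: G,
        d=np.array([1.0, 0.8, 1.2]), W=np.eye(3), R=np.array([[-1.0, 1.0]]),
        mu_grid=np.logspace(-3, 2, 20))
    ```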

  2. What Actually Happened.

    PubMed

    2016-07-01

    An ethics consult was scheduled for the following day. Prior to the consult, Mr. Hope subsequently decompensated and was transferred to the local hospital. The ethics consultation service continued with the ethics consult to discuss the ethical concerns of the medical staff but in particular to create an open forum for the staff to process their moral distress over the care of this patient and to come to an agreed-on plan on how they would proceed should the resident code. The patient never returned to the long-term care setting. While in the emergency room, the patient took a turn for the worse and appeared to require intubation. The emergency room attending physician contacted the patient's family and discussed the imminent likelihood of the patient's demise and the potential harm caused to the patient by resuscitation and intubation, and the family agreed to switch to comfort measures, allowing the patient to pass peacefully. The family stated to the ER physician that they needed to feel as though they had done everything they could to keep their loved one alive and did not want any responsibility for his death. The staff at the long-term care setting still remember Mr. Hope in their daily work and talk about him often. PMID:27348844

  3. Modeling of a three-source perfusion and blood oxygenation sensor for transplant monitoring using multilayer Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ibey, Bennett L.; Lee, Seungjoon; Ericson, M. Nance; Wilson, Mark A.; Cote, Gerard L.

    2004-06-01

    A Multi-Layer Monte Carlo (MLMC) model was developed to predict the results of in vivo blood perfusion and oxygenation measurements of transplanted organs as measured by an indwelling optical sensor. A sensor has been developed which uses three-source excitation in the red and infrared ranges (660, 810, 940 nm). In vitro data were taken with this sensor by changing the oxygenation state of whole blood and passing it through a single-tube pump system wrapped in bovine liver tissue. The collected data showed that the red signal increased as blood oxygenation increased, while the infrared signal decreased. The center wavelength of 810 nanometers was shown to be largely insensitive to changes in blood oxygenation. A model was developed using MLMC code that sampled the wavelength range from 600-1000 nanometers every 6 nanometers. Using scattering and absorption data for blood and liver tissue within this wavelength range, a five-layer model was developed (tissue, clear tubing, blood, clear tubing, tissue). The theoretical data generated from this model were compared to the in vitro data and showed good correlation with changing blood oxygenation.
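    To illustrate the general idea of multilayer Monte Carlo photon transport, here is a toy one-dimensional random-walk sketch, far simpler than the model above; the layer thicknesses and optical coefficients are made-up placeholders, not the paper's values, and boundary crossings deliberately use only the local coefficients:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical layer stack: (thickness cm, mu_a, mu_s) in 1/cm
    layers = [(0.2, 0.3, 10.0),   # tissue
              (0.1, 0.01, 1.0),   # clear tubing
              (0.3, 2.0, 30.0),   # blood
              (0.1, 0.01, 1.0),   # clear tubing
              (0.2, 0.3, 10.0)]   # tissue
    depth_total = sum(d for d, _, _ in layers)

    def optical_props(z):
        """Absorption/scattering coefficients of the layer containing depth z."""
        top = 0.0
        for d, mu_a, mu_s in layers:
            if z <= top + d:
                return mu_a, mu_s
            top += d
        return layers[-1][1], layers[-1][2]

    def run_photon():
        """Follow one photon in 1-D; True if absorbed, False if it escapes."""
        z, mu_z = 0.0, 1.0                            # launch at surface, inward
        while True:
            mu_a, mu_s = optical_props(min(max(z, 0.0), depth_total))
            mu_t = mu_a + mu_s
            z += mu_z * rng.exponential(1.0 / mu_t)   # path to next interaction
            if z < 0.0 or z > depth_total:
                return False                          # escaped the stack
            if rng.random() < mu_a / mu_t:
                return True                           # absorbed
            mu_z = rng.uniform(-1.0, 1.0)             # isotropic new direction cosine

    absorbed = sum(run_photon() for _ in range(20000))
    print("absorbed fraction:", absorbed / 20000)
    ```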

  4. Beam simulation and radiation dose calculation at the Advanced Photon Source with shower, an Interface Program to the EGS4 code system

    SciTech Connect

    Emery, L.

    1995-07-01

    The interface program shower to the EGS4 Monte Carlo electromagnetic cascade shower simulation code system was written to facilitate the definition of complicated target and shielding geometries and to simplify the handling of input and output data. The geometry is defined by a series of namelist commands in an input file. The input and output beam data files follow the SDDS (self-describing data set) protocol, which makes the files compatible with other physics codes that follow the same protocol. For instance, one can use the results of the cascade shower simulation as the input data for an accelerator tracking code. The shower code has also been used to calculate the bremsstrahlung component of radiation doses for possible beam loss scenarios at the Advanced Photon Source (APS) at Argonne National Laboratory.

  5. Calculation of source terms for NUREG-1150

    SciTech Connect

    Breeding, R.J.; Williams, D.C.; Murfin, W.B.; Amos, C.N.; Helton, J.C.

    1987-10-01

    The source terms estimated for NUREG-1150 are generally based on the Source Term Code Package (STCP), but the actual source term calculations used in computing risk are performed by much smaller codes which are specific to each plant. This was done because the method of estimating the uncertainty in risk for NUREG-1150 requires hundreds of source term calculations for each accident sequence. This is clearly impossible with a large, detailed code like the STCP. The small plant-specific codes are based on simple algorithms and utilize adjustable parameters. The values of the parameters appearing in these codes are derived from the available STCP results. To determine the uncertainty in the estimation of the source terms, these parameters were varied as specified by an expert review group. This method was used to account for the uncertainties in the STCP results and the uncertainties in phenomena not considered by the STCP.

  6. The circulating transcriptome as a source of non-invasive cancer biomarkers: concepts and controversies of non-coding and coding RNA in body fluids

    PubMed Central

    Fernandez-Mercado, Marta; Manterola, Lorea; Larrea, Erika; Goicoechea, Ibai; Arestin, María; Armesto, María; Otaegui, David; Lawrie, Charles H

    2015-01-01

    The gold standard for cancer diagnosis remains the histological examination of affected tissue, obtained either by surgical excision or by radiologically guided biopsy. Such procedures, however, are expensive, not without risk to the patient, and require consistent evaluation by expert pathologists. Consequently, the search for non-invasive tools for the diagnosis and management of cancer has led to great interest in the field of circulating nucleic acids in plasma and serum. An additional benefit of blood-based testing is the ability to carry out screening and repeat sampling on patients undergoing therapy, or monitoring disease progression, allowing for the development of a personalized approach to cancer patient management. Despite having been discovered over 60 years ago, the clear clinical potential of circulating nucleic acids, with the notable exception of prenatal diagnostic testing, has yet to translate into the clinic. The recent discovery of non-coding (nc)RNA (in particular micro(mi)RNAs) in the blood has provided fresh impetus for the field. In this review, we discuss the potential of the circulating transcriptome (coding and ncRNA) as novel cancer biomarkers, the controversy surrounding their origin and biology, and most importantly the hurdles that remain to be overcome if they are really to become part of future clinical practice. PMID:26119132

  7. Validation of the MCNP-DSP Monte Carlo code for calculating source-driven noise parameters of subcritical systems

    SciTech Connect

    Valentine, T.E.; Mihalczo, J.T.

    1995-12-31

    This paper describes calculations performed to validate MCNP-DSP, the modified version of the MCNP code, with respect to: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency analysis parameters.

  8. Coded aperture imaging of fusion source in a plasma focus operated with pure D{sub 2} and a D{sub 2}-Kr gas admixture

    SciTech Connect

    Springham, S. V.; Talebitaher, A.; Shutler, P. M. E.; Rawat, R. S.; Lee, P.; Lee, S.

    2012-09-10

    The coded aperture imaging (CAI) technique has been used to investigate the spatial distribution of DD fusion in a 1.6 kJ plasma focus (PF) device operated alternately with pure deuterium or a deuterium-krypton admixture. The coded mask pattern is based on a Singer cyclic difference set with 25% open fraction and positioned close to 90° to the plasma focus axis, with CR-39 detectors used to register tracks of protons from the D(d, p)T reaction. Comparing the CAI proton images for pure D{sub 2} and D{sub 2}-Kr admixture operation reveals clear differences in size, density, and shape between the fusion sources for these two cases.
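    As a toy illustration of cyclic difference-set coded apertures, the sketch below encodes a 1-D source through a quadratic-residue difference set (a stand-in for the Singer set above; the mask length, open fraction, and source are arbitrary) and decodes by balanced circular correlation:

    ```python
    import numpy as np

    p = 11                                        # cyclic mask length (illustrative)
    qr = {(i * i) % p for i in range(1, p)}       # quadratic residues mod 11
    A = np.array([1.0 if i in qr else 0.0 for i in range(p)])   # 1 = open element

    # Hypothetical two-spot source distribution to be imaged
    src = np.zeros(p); src[2], src[6] = 1.0, 0.5

    # Encoding: each open element casts a cyclically shifted copy of the source
    detector = np.real(np.fft.ifft(np.fft.fft(src) * np.fft.fft(A)))

    # Balanced decoding array G = 2A - 1: for a (v, k, lambda) difference set the
    # mask/decoder cross-correlation is a delta function on a flat pedestal
    G = 2.0 * A - 1.0
    recon = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(G))))
    print(np.round(recon, 3))   # peaks at indices 2 and 6, flat elsewhere
    ```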

  9. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  10. Joint Source-Channel Coding Based on Cosine-Modulated Filter Banks for Erasure-Resilient Signal Transmission

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2005-12-01

    This paper examines erasure resilience of oversampled filter bank (OFB) codes, focusing on two families of codes based on cosine-modulated filter banks (CMFB). We first revisit OFBs in light of filter bank and frame theory. The analogy with channel codes is then shown. In particular, for paraunitary filter banks, we show that the signal reconstruction methods derived from the filter bank theory and from coding theory are equivalent, even in the presence of quantization noise. We further discuss frame properties of the considered OFB structures. Perfect reconstruction (PR) for the CMFB-based OFBs with erasures is proven for the case of erasure patterns for which PR depends only on the general structure of the code and not on the prototype filters. For some of these erasure patterns, the expression of the mean-square reconstruction error is also independent of the filter coefficients. It can be expressed in terms of the number of erasures, and of parameters such as the number of channels and the oversampling ratio. The various structures are compared by simulation for the example of an image transmission system.

  11. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (1) mode blockage, (2) liner insertion loss, (3) short ducts, and (4) mode reflection.

  12. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  13. Source coding with a permutation-based reversible memory-binding transform for data compression in categorical data domains.

    PubMed

    Talbot, B G; Talbot, L M

    1998-01-01

    A general purpose reversible memory-binding transform (MBT) is developed, which uses a permutation transform technique to bind memory information to a transformed signal alphabet. The algorithm performs well in conjunction with a Huffman coder for both ordered sources, such as pixel intensities, and categorical sources, such as vector quantized codebook indices. PMID:18276336
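    The MBT itself is the paper's contribution and is not reproduced here, but as a reminder of the back-end entropy coder it feeds, here is a minimal Huffman table builder; the example string is arbitrary:

    ```python
    import heapq
    from collections import Counter

    def huffman_table(data):
        """Minimal Huffman coder: returns a {symbol: bitstring} prefix code."""
        freq = Counter(data)
        if len(freq) == 1:                       # degenerate single-symbol source
            return {next(iter(freq)): "0"}
        # Heap entries are (weight, tiebreak, tree); a tree is a symbol or a pair
        heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        nxt = len(heap)
        while len(heap) > 1:                     # merge the two lightest nodes
            w1, _, t1 = heapq.heappop(heap)
            w2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, nxt, (t1, t2)))
            nxt += 1
        table = {}
        def walk(tree, prefix):                  # assign 0/1 along tree paths
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:
                table[tree] = prefix
        walk(heap[0][2], "")
        return table

    codes = huffman_table("abracadabra")
    encoded = "".join(codes[ch] for ch in "abracadabra")
    ```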

  14. 40 CFR 74.22 - Actual SO2 emissions rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    …6 for natural gas. For other fuels, the combustion source must specify the SO2 emissions factor. (c) … (CONTINUED) SULFUR DIOXIDE OPT-INS, Allowance Calculations for Combustion Sources, § 74.22 Actual SO2 emissions rate. (a) Data requirements. The designated representative of a combustion source shall submit …

  15. Dosimetric comparison between the microSelectron HDR 192Ir v2 source and the BEBIG 60Co source for HDR brachytherapy using the EGSnrc Monte Carlo transport code

    PubMed Central

    Islam, M. Anwarul; Akramuzzaman, M. M.; Zakaria, G. A.

    2012-01-01

    The manufacture of miniaturized high-activity 192Ir sources has become a market preference in modern brachytherapy. The smaller dimensions of the sources are flexible for smaller-diameter applicators and are also suitable for interstitial implants. Presently, miniaturized 60Co HDR sources have been made available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir sources. High-dose-rate brachytherapy sources with a longer half-life are a pragmatic and economical solution for developing countries. This study aims to compare the TG-43U1 dosimetric parameters of the new BEBIG 60Co HDR and new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using an EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and new BEBIG 60Co HDR sources. The air-kerma strengths per unit source activity, calculated in dry air, are 9.698×10-8 ± 0.55% U Bq-1 and 3.039×10-7 ± 0.41% U Bq-1 for the two sources, respectively. The calculated dose rate constants per unit air-kerma strength in water are 1.116±0.12% cGy h-1U-1 and 1.097±0.12% cGy h-1U-1, respectively. The values of the radial dose function for distances up to 1 cm and beyond 22 cm are higher for the BEBIG 60Co HDR source than for the other source. The anisotropy values increase sharply toward the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than that of the other source. The tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation between the two sources is noted at a radial distance of 5 cm, except for lung tissue. The true dose rates are calculated considering photon as well as electron transport, using appropriate cut

  16. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification, and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify, and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the coded aperture, Compton, and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as from a Global Positioning System (GPS) and an Inertial Navigation System (INS), must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  17. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code

    NASA Astrophysics Data System (ADS)

    Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.; Ippolito, N.

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around the single aperture, including part of the source and part of the acceleration (up to the extraction grid (EG) middle) regions, has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results have shown that the dimensions of the flat and chamfered parts and the slope of the latter in front of the source region maximize the product of the production rate and extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the yz plane has been reported.

  18. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code.

    PubMed

    Taccogna, F; Minelli, P; Cavenago, M; Veltri, P; Ippolito, N

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around the single aperture, including part of the source and part of the acceleration (up to the extraction grid (EG) middle) regions, has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results have shown that the dimensions of the flat and chamfered parts and the slope of the latter in front of the source region maximize the product of the production rate and extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the yz plane has been reported. PMID:26932027

  19. TIDY, a complete code for renumbering and editing FORTRAN source programs. User's manual for IBM 360/67

    NASA Technical Reports Server (NTRS)

    Barlow, A. V.; Vanderplaats, G. N.

    1973-01-01

    TIDY, a computer code which edits and renumbers FORTRAN decks that have become difficult to read because of many patches and revisions, is described. The old program is reorganized so that statement numbers are added sequentially, and extraneous FORTRAN statements are deleted. General instructions for using TIDY on the IBM 360/67 Tymeshare System and specific instructions for use on the NASA/AMES IBM 360/67 TSS system are included, as well as specific instructions on how to run TIDY in conversational and batch modes. TIDY may be adapted for use on other computers.

  20. FORTRAN code-evaluation system

    NASA Technical Reports Server (NTRS)

    Capps, J. D.; Kleir, R.

    1977-01-01

    The automated code evaluation system can be used to detect coding errors and unsound coding practices in any ANSI FORTRAN IV source code before they can cause execution-time malfunctions. The system concentrates on syntactically acceptable FORTRAN code features that are nevertheless likely to produce undesirable results.

  1. Linguistic Theory and Actual Language.

    ERIC Educational Resources Information Center

    Segerdahl, Par

    1995-01-01

    Examines Noam Chomsky's (1957) discussion of "grammaticalness" and the role of linguistics in the "correct" way of speaking and writing. It is argued that the concern of linguistics with the tools of grammar has resulted in confusion, with the tools becoming mixed up with the actual language, thereby becoming the central element in a metaphysical…

  2. The Fast Scattering Code (FSC): Validation Studies and Program Guidelines

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dunn, Mark H.

    2011-01-01

    The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.

  3. Dosimetric comparison of Monte Carlo codes (EGS4, MCNP, MCNPX) considering external and internal exposures of the Zubal phantom to electron and photon sources.

    PubMed

    Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M

    2005-01-01

    This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by 131I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed. PMID:16604715

  4. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  5. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    SciTech Connect

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-07-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k{sub eff} calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of Boltzmann entropy, highlighting the gain in the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
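    For illustration, a minimal sketch of the Shannon-entropy variant of this diagnostic, computed on hypothetical per-cycle fission-source tallies (the mesh size, sample counts, and fake transient are all made up); the entropy sequence should plateau once the source is stationary:

    ```python
    import numpy as np

    def source_entropy(counts):
        """Shannon entropy (bits) of a binned fission-source distribution."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return -(p * np.log2(p)).sum()

    # Hypothetical per-cycle source tallies on a flattened mesh: a fake initial
    # transient decays away and the entropy levels off once stationary.
    rng = np.random.default_rng(0)
    entropies = []
    for cycle in range(100):
        bias = np.zeros(64); bias[0] = 500.0 * np.exp(-cycle / 15.0)
        counts = rng.multinomial(10000, np.ones(64) / 64) + bias
        entropies.append(source_entropy(counts))
    # Inspect `entropies` for a plateau before accumulating k_eff statistics.
    ```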

  6. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    NASA Astrophysics Data System (ADS)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluids, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics “entity” (e.g. the conservative variables of a finite volume, its geometry, etc…) by a single object so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted to maximize computational efficiency; the code is designed to run on shared-memory multi-core workstations and on distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed); these comments are parsed by the doxygen free software, producing high-quality html and latex documentation pages; the distributed versioning system referred

  7. Research on Universal Combinatorial Coding

    PubMed Central

    Lu, Jun; Zhang, Zhuo; Mo, Juan

    2014-01-01

    The concept of universal combinatorial coding is proposed. Relations exist, to varying degrees, among many coding methods, which suggests that a universal coding method objectively exists and can serve as a bridge connecting many coding methods. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability statistics of the information source, and it has characteristics spanning the three branches of coding. The relationship between universal combinatorial coding and a variety of coding methods is analyzed, and several application technologies of this coding method are investigated. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. The multiple characteristics and applications of universal combinatorial coding are unique among existing coding methods. Universal combinatorial coding has both theoretical research and practical application value. PMID:24772019

  8. Research on universal combinatorial coding.

    PubMed

    Lu, Jun; Zhang, Zhuo; Mo, Juan

    2014-01-01

    The concept of universal combinatorial coding is proposed. Relations exist, to varying degrees, among many coding methods, which suggests that a universal coding method objectively exists and can serve as a bridge connecting many coding methods. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability statistics of the information source, and it has characteristics spanning the three branches of coding. The relationship between universal combinatorial coding and a variety of coding methods is analyzed, and several application technologies of this coding method are investigated. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. The multiple characteristics and applications of universal combinatorial coding are unique among existing coding methods. Universal combinatorial coding has both theoretical research and practical application value. PMID:24772019

  9. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D predecessor, TOMO2D, from which it inherits the methods for solving the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with the increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among the available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and the initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothed.

  10. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    NASA Astrophysics Data System (ADS)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency with which new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and The Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses to an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and on-going contact with the organizer, suggest strengths and weaknesses in this combination of components to assist participants in adopting new tools.

  11. XSOR codes users manual

    SciTech Connect

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  12. System and method for investigating sub-surface features of a rock formation with acoustic sources generating coded signals

    SciTech Connect

    Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S

    2014-12-30

    A system and a method for investigating rock formations include generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency, and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal including a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process from noise or from signals generated by a linear interaction process, or both.

  13. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if not designed for it. An example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise, and it may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation are also time-consuming when LIS or the model is not fully understood. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model, so the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming
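    For illustration only, here is a language-agnostic sketch of the wrapper pattern just described, written in Python rather than the FORTRAN 90 used by LIS; the class, the toy bucket model, and all field names are hypothetical stand-ins, not the LIS interface:

    ```python
    class ModelWrapper:
        """Hypothetical uniform interface a framework could call each time step."""
        def __init__(self, model, state):
            self.model = model          # the wrapped land-surface model
            self.state = state          # the model's prognostic variables

        def step(self, forcings, parameters):
            # 1. map framework fields onto the model's expected inputs
            inputs = {**forcings, **parameters, **self.state}
            # 2. advance the model one time step
            outputs = self.model(inputs)
            # 3. hand prognostic variables back for the next call
            self.state = {k: outputs[k] for k in self.state}
            return outputs

    def toy_bucket_model(inp):
        """Single-bucket soil moisture model used as a stand-in."""
        sm = inp["soil_moisture"] + inp["precip"] - inp["evap_rate"] * inp["soil_moisture"]
        return {"soil_moisture": max(sm, 0.0), "runoff": max(sm - 1.0, 0.0)}

    w = ModelWrapper(toy_bucket_model, {"soil_moisture": 0.4})
    out = w.step({"precip": 0.02}, {"evap_rate": 0.1})
    ```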

  14. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
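    To make the restricted three-body idea concrete, here is a minimal 2-D sketch: two point-mass galaxy centres orbit each other while massless disk particles respond to both. The masses, softening, orbit, and particle setup are arbitrary, and JSPAM's alternate potential and dynamical-friction treatment are not reproduced:

    ```python
    import numpy as np

    G, m1, m2 = 1.0, 1.0, 0.3            # code units; masses are illustrative

    def acc_particles(r, c1, c2, soft=0.05):
        """Acceleration of massless disk particles from the two galaxy centres."""
        a = np.zeros_like(r)
        for m, c in ((m1, c1), (m2, c2)):
            d = c - r
            a += G * m * d / ((d * d).sum(axis=1) + soft**2)[:, None]**1.5
        return a

    def acc_centre(c_self, c_other, m_other, soft=0.05):
        """Mutual acceleration of the two galaxy centres (the two-body part)."""
        d = c_other - c_self
        return G * m_other * d / ((d * d).sum() + soft**2)**1.5

    rng = np.random.default_rng(0)
    n = 500
    rad = 0.5 + 1.5 * rng.random(n)                       # disk radii, galaxy 1
    ang = 2.0 * np.pi * rng.random(n)
    r = np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
    v = np.column_stack([-np.sin(ang), np.cos(ang)]) * np.sqrt(G * m1 / rad)[:, None]
    c1, v1 = np.zeros(2), np.zeros(2)
    c2, v2 = np.array([6.0, 2.0]), np.array([-0.5, 0.0])  # incoming companion

    dt = 0.01
    for _ in range(2000):                                 # leapfrog, kick-drift-kick
        v += 0.5 * dt * acc_particles(r, c1, c2)
        v1 += 0.5 * dt * acc_centre(c1, c2, m2)
        v2 += 0.5 * dt * acc_centre(c2, c1, m1)
        r += dt * v; c1 += dt * v1; c2 += dt * v2
        v += 0.5 * dt * acc_particles(r, c1, c2)
        v1 += 0.5 * dt * acc_centre(c1, c2, m2)
        v2 += 0.5 * dt * acc_centre(c2, c1, m1)
    # r now traces the disturbed disk; close passages raise tails and bridges
    ```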

  15. How People Actually Use Thermostats

    SciTech Connect

    Meier, Alan; Aragon, Cecilia; Hurwitz, Becky; Mujumdar, Dhawal; Peffer, Therese; Perry, Daniel; Pritoni, Marco

    2010-08-15

    Residential thermostats have been a key element in controlling heating and cooling systems for over sixty years. However, today's programmable thermostats (PTs) are complicated and difficult for users to understand, leading to errors in operation and wasted energy. Four separate tests of usability were conducted in preparation for a larger study. These tests included personal interviews, an on-line survey, photographing actual thermostat settings, and measurements of the ability to accomplish four tasks related to effective use of a PT. The interviews revealed that many occupants used the PT as an on-off switch and most demonstrated little knowledge of how to operate it. The on-line survey found that 89% of the respondents rarely or never used the PT to set a weekday or weekend program. The photographic survey (in low-income homes) found that only 30% of the PTs were actually programmed. In the usability test, we found that we could quantify the difference in usability of two PTs as measured by the time to accomplish tasks. Users accomplished the tasks in consistently shorter times with the touchscreen unit than with buttons. None of these studies is representative of the entire population of users but, together, they illustrate the importance of improving user interfaces in PTs.

  16. Code System to Solve the Few-Group Neutron Diffusion Equation Utilizing the Nodal Expansion Method (NEM) for Eigenvalue, Adjoint, and Fixed-Source

    2004-04-21

    Version 04 NESTLE solves the few-group neutron diffusion equation utilizing the NEM. The NESTLE code can solve the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two- or four-energy groups can be utilized, with all energy groups being thermal groups (i.e., upscatter exists) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed.
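
    As a hedged illustration of the eigenvalue (criticality) problem, the Python sketch below solves a one-group, 1-D slab by finite differences with power iteration; NESTLE itself uses the nodal expansion method in multigroup, multidimensional geometries, and all cross sections here are made up:

        import numpy as np

        N, h = 50, 1.0
        D, sig_a, nu_sig_f = 1.0, 0.02, 0.025    # illustrative constants

        # loss operator -D d2/dx2 + sigma_a with zero-flux boundaries
        A = np.zeros((N, N))
        for i in range(N):
            A[i, i] = 2 * D / h**2 + sig_a
            if i > 0: A[i, i - 1] = -D / h**2
            if i < N - 1: A[i, i + 1] = -D / h**2

        phi, k = np.ones(N), 1.0
        for _ in range(200):                 # power iteration on fission source
            phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
            k *= phi_new.sum() / phi.sum()   # update k-effective
            phi = phi_new
        print("k-effective ~", round(k, 5))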

  17. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    SciTech Connect

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 µCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
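
    A hedged sketch of the MIRD-style sum such a code evaluates: dose per unit cumulated activity is a sum, over radiation types, of emitted energy times yield times the absorbed fraction, divided by target mass. The emission data below are illustrative placeholders, not SFACTOR output:

        # (mean energy per decay [MeV], yield per decay) for each radiation
        emissions = [(0.50, 0.9),          # e.g. a beta component
                     (1.00, 0.8)]          # e.g. a photon component
        absorbed_fraction = [1.0, 0.05]    # phi_i(X <- Y), assumed values
        m_target_g = 310.0                 # target organ mass in grams

        DECAYS_PER_UCI_DAY = 3.7e4 * 86400  # decays in 1 uCi over one day
        MEV_TO_G_RAD = 1.602e-8             # 1 MeV = 1.602e-8 g*rad

        s = sum(E * y * phi
                for (E, y), phi in zip(emissions, absorbed_fraction))
        s_factor = s * DECAYS_PER_UCI_DAY * MEV_TO_G_RAD / m_target_g
        print("S(X <- Y) ~", s_factor, "rad per microcurie-day")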

  18. The cel3 gene of Agaricus bisporus codes for a modular cellulase and is transcriptionally regulated by the carbon source.

    PubMed Central

    Chow, C M; Yagüe, E; Raguz, S; Wood, D A; Thurston, C F

    1994-01-01

    A 52-kDa protein, CEL3, has been separated from the culture filtrate of Agaricus bisporus during growth on cellulose. A PCR-derived probe, made with a degenerate oligodeoxynucleotide derived from the amino acid sequence of a CEL3 CNBr cleavage product, was used to select cel3 cDNA clones from an A. bisporus cDNA library. Two allelic cDNAs were isolated. They showed 98.8% identity of their nucleotide sequences. The deduced amino acid sequence and domain architecture of CEL3 showed a high degree of similarity to those of cellobiohydrolase II of Trichoderma reesei. Functional expression of cel3 cDNA in Saccharomyces cerevisiae was achieved by placing it under the control of a constitutive promoter and fusing it to the yeast invertase signal sequence. Recombinant CEL3 secreted by yeast showed enzymatic activity towards crystalline cellulose. At long reaction times, CEL3 was also able to degrade carboxymethyl cellulose. Northern (RNA) analysis showed that cel3 gene expression was induced by cellulose and repressed by glucose, fructose, 2-deoxyglucose, and lactose. Glycerol, mannitol, sorbitol, and maltose were neutral carbon sources. Nuclear run-on analysis showed that the rate of synthesis of cel3 mRNA in cellulose-grown cultures was 13 times higher than that in glucose-grown cultures. A low basal rate of cel3 mRNA synthesis was observed in the nuclei isolated from glucose-grown mycelia. PMID:8085821

  19. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  20. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactor nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005, the U.S. needs to invest in developing fission reactor technology for ground-based terrestrial power plants. Such plants would make a significant contribution toward drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal the Next Generation Nuclear Plant Project (NGNP) has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas turbine driven electric power AC generators. However, the advantages of transmitting power in high-voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternate heat source, the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid-1960s, would offer much higher energy yields than current nuclear plants by using an inherently safe energy conversion scheme based on the Thorium → U-233 fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and also to provide thermal energy for hydrogen (H2) production during "off peak" periods. This approach will both supply electric power by using environmentally clean nuclear heat which does not generate greenhouse gases, and also provide a clean fuel, H2, for the future, when, due to increased global demand and the decline in discovering new deposits, our supply of liquid fossil fuels will have been used up. This is

  1. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
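
    A hedged sketch of the adaptive option selection underlying such a coder: for each block of nonnegative integers, a family of simple variable-length codes (unary-coded quotient plus k retained low bits, in the style of Rice coding) is tried, and the option yielding the shortest encoding wins:

        def rice_length(block, k):
            # bits for sample n under option k: unary(n >> k) + 1 + k
            return sum((n >> k) + 1 + k for n in block)

        def encode_sample(n, k):
            lsb = format(n & ((1 << k) - 1), f"0{k}b") if k else ""
            return "1" * (n >> k) + "0" + lsb    # unary quotient, then k LSBs

        def encode_block(block, options=range(8)):
            best_k = min(options, key=lambda k: rice_length(block, k))
            return best_k, "".join(encode_sample(n, best_k) for n in block)

        k, bits = encode_block([3, 1, 4, 1, 5, 9, 2, 6])
        print("chosen option k =", k, "| encoded length =", len(bits), "bits")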

  2. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    NASA Astrophysics Data System (ADS)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been proved in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, 2 cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of

  3. Development and implementation in the Monte Carlo code PENELOPE of a new virtual source model for radiotherapy photon beams and portal image calculation.

    PubMed

    Chabert, I; Barat, E; Dautremer, T; Montagu, T; Agelou, M; Croc de Suray, A; Garcia-Hernandez, J C; Gempp, S; Benkreira, M; de Carlan, L; Lazaro, D

    2016-07-21

    This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the E, r_s, φ_d and θ_d distribution details. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and heterogeneous phantom, were compared using a 1.5%-0 mm and a 2%-0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%-0 mm global gamma index. Performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme. PMID:27353090

  4. Development and implementation in the Monte Carlo code PENELOPE of a new virtual source model for radiotherapy photon beams and portal image calculation

    NASA Astrophysics Data System (ADS)

    Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.

    2016-07-01

    This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the E, r_s, φ_d and θ_d distribution details. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and heterogeneous phantom, were compared using a 1.5%–0 mm and a 2%–0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%–0 mm global gamma index. Performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.
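
    A hedged, toy-scale sketch of the central idea, sampling particles from a correlated multidimensional histogram so that the joint (not just marginal) distributions are preserved; the binning and values below are stand-ins, not the paper's adaptive schemes:

        import numpy as np

        rng = np.random.default_rng(1)
        # pretend phase-space records: columns E, r_s, phi_d, theta_d
        ps = rng.normal(size=(100_000, 4)) * [1.0, 2.0, 0.1, 0.3]

        # build a 4D correlated histogram (coarse fixed bins for brevity)
        counts, edges = np.histogramdd(ps, bins=(8, 8, 8, 8))

        # sample new particles: pick a 4D cell by weight, then a point in it
        flat_p = counts.ravel() / counts.sum()
        cells = rng.choice(flat_p.size, size=5, p=flat_p)
        for c in cells:
            idx = np.unravel_index(c, counts.shape)
            sample = [rng.uniform(edges[d][i], edges[d][i + 1])
                      for d, i in enumerate(idx)]
            print(np.round(sample, 3))   # one (E, r_s, phi_d, theta_d) draw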

  5. The actual goals of geoethics

    NASA Astrophysics Data System (ADS)

    Nemec, Vaclav

    2014-05-01

    The most topical goals of geoethics were formulated as results of the International Conference on Geoethics (October 2013), held at Pribram (Czech Republic), the birthplace of geoethics: In the sphere of education and public enlightenment, an appropriate minimum of Earth-science know-how should be intensively promoted, together with the cultivation of an ethical way of thinking and acting for the sustainable well-being of society. The current activities of the Intergovernmental Panel on Climate Change are not sustainable with the existing knowledge of the Earth sciences (as presented in the results of the 33rd and 34th International Geological Congresses); this knowledge should be incorporated into any further work of the IPCC. In the sphere of legislation, broad international co-operation is needed on the following steps: to re-formulate the term "false alarm" and its legal consequences; to demand consistent evaluation of existing risks; and to solve problems of the rights of individuals and minorities in cases of the optimum use of mineral resources and of the optimum protection of the local population against emergency dangers and disasters. The common good (well-being) must be considered the priority when solving ethical dilemmas, and the precautionary principle should be applied in any decision-making process. Earth scientists presenting their expert opinions are not exempted from civil, administrative or even criminal liabilities; details must be established by national law and jurisprudence. The well-known case of the L'Aquila earthquake (2009) should serve as a serious warning because of the proven misuse of geoethics to protect top Italian seismologists who were held responsible, and sentenced, for inadequate and superficial behaviour that cost many human lives. Another recent scandal, involving the Himalayan fossil fraud, will also be documented. Support is needed for any effort to analyze and to disclose the problems of the deformation of the contemporary

  6. Multicast Reduction Network Source Code

    SciTech Connect

    Lee, G.

    2006-12-19

    MRNet is a software tree-based overlay network developed at the University of Wisconsin, Madison, that provides a scalable communication mechanism for parallel tools. MRNet uses a tree topology of networked processes between a user tool and distributed tool daemons. This tree topology allows scalable multicast communication from the tool to the daemons. The internal nodes of the tree can be used to distribute computation and analysis on data sent from the tool daemons to the tool. This release covers minor implementation changes to port this software to the BlueGene/L architecture and for use with a new implementation of the Dynamic Probe Class Library.

  7. Multicast Reduction Network Source Code

    2006-12-19

    MRNet is a software tree-based overlay network developed at the University of Wisconsin, Madison, that provides a scalable communication mechanism for parallel tools. MRNet uses a tree topology of networked processes between a user tool and distributed tool daemons. This tree topology allows scalable multicast communication from the tool to the daemons. The internal nodes of the tree can be used to distribute computation and analysis on data sent from the tool daemons to the tool. This release covers minor implementation changes to port this software to the BlueGene/L architecture and for use with a new implementation of the Dynamic Probe Class Library.

  8. Evaluation of the area factor used in the RESRAD code for the estimation of airborne contaminant concentrations of finite area sources

    SciTech Connect

    Chang, Y.S.; Yu, C.; Wang, S.K.

    1998-07-01

    The area factor is used in the RESRAD code to estimate the airborne contaminant concentrations for a finite area of contaminated soils. The area factor model used in RESRAD version 5.70 and earlier (referred to as the old area factor) was a simple, but conservative, mixing model that tended to overestimate the airborne concentrations of radionuclide contaminants. An improved and more realistic model for the area factor (referred to here as the new area factor) is described in this report. The new area factor model is designed to reflect site-specific soil characteristics and meteorological conditions. The site-specific parameters considered include the size of the source area, average particle diameter, and average wind speed. Other site-specific parameters (particle density, atmospheric stability, raindrop diameter, and annual precipitation rate) were assumed to be constant. The model uses the Gaussian plume model combined with contaminant removal processes, such as dry and wet deposition of particulates. Area factors estimated with the new model are compared with old area factors that were based on the simple mixing model. In addition, sensitivity analyses are conducted for parameters assumed to be constant. The new area factor model has been incorporated into RESRAD version 5.75 and later.
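
    A hedged sketch of the physical ingredient the new area factor rests on: a ground-level Gaussian plume from a point source, superposed over a finite source area. The dispersion coefficients below are crude power-law placeholders, not RESRAD's parameterization:

        import numpy as np

        def plume_conc(x, y, Q=1.0, u=3.0, H=0.0):
            # ground-level concentration at downwind x, crosswind y
            sig_y = 0.08 * x ** 0.9          # assumed dispersion curves
            sig_z = 0.06 * x ** 0.85
            return (Q / (np.pi * u * sig_y * sig_z)
                    * np.exp(-y**2 / (2 * sig_y**2))
                    * np.exp(-H**2 / (2 * sig_z**2)))

        # finite area source: superpose point sources over a square patch
        xs, ys = np.meshgrid(np.linspace(5, 50, 10), np.linspace(-20, 20, 10))
        c = sum(plume_conc(60 - x, -y) for x, y in zip(xs.ravel(), ys.ravel()))
        print("receptor concentration ~", c / xs.size)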

  9. Sharing the Code.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2003-01-01

    Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)

  10. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  11. Coding Issues in Grounded Theory

    ERIC Educational Resources Information Center

    Moghaddam, Alireza

    2006-01-01

    This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…

  12. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high-quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
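
    A hedged sketch of the first idea the review covers, predictive quantization (DPCM): quantize the prediction residual instead of the waveform itself, and keep the encoder's predictor synchronized with the decoder's reconstruction:

        def quantize(e, step=4):
            return step * round(e / step)

        def dpcm(signal):
            pred, recon = 0, []
            for s in signal:
                err_q = quantize(s - pred)   # coded residual (what is sent)
                rec = pred + err_q           # decoder-side reconstruction
                recon.append(rec)
                pred = rec                   # first-order predictor: last sample
            return recon

        print(dpcm([10, 12, 15, 20, 26, 30, 31, 29]))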

  13. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur on the conclusions so we can embark on a plan to use the proposed uplink system; (4) identify the need for the development of appropriate technology and its infusion in the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  14. Moral Reasoning in Hypothetical and Actual Situations.

    ERIC Educational Resources Information Center

    Sumprer, Gerard F.; Butter, Eliot J.

    1978-01-01

    Results of this investigation suggest that moral reasoning of college students, when assessed using the DIT format, is the same whether the dilemmas involve hypothetical or actual situations. Subjects, when presented with hypothetical situations, become deeply immersed in them and respond as if they were actual participants. (Author/BEF)

  15. Factors Related to Self-Actualization.

    ERIC Educational Resources Information Center

    Hogan, H. Wayne; McWilliams, Jettie M.

    1978-01-01

    Provides data to further support the notions that females score higher in self-actualization measures and that self-actualization scores correlate inversely to the degree of undesirability individuals assign to their heights and weights. Finds that, contrary to predictions, greater androgyny was related to lower, not higher, self-actualization…

  16. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  17. Two Applications of the Hamming-Golay Code

    ERIC Educational Resources Information Center

    Liu, Andy

    2009-01-01

    In this paper, we give two unexpected applications of a Hamming code. The first one, also known as the "Hat Problem," is based on the fact that a small portion of the available code words are actually used in a Hamming code. The second one is a magic trick based on the fact that a Hamming code is perfect for single-error correction.
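
    A hedged sketch of the "Hat Problem" strategy in Python: with the [7,4] Hamming code only 16 of the 128 hat configurations are codewords, and the strategy below loses exactly on those, winning 112/128 = 7/8 of the time:

        from itertools import product

        def syndrome(word):
            # XOR of the (1-based) positions holding a 1; zero iff codeword
            s = 0
            for i, bit in enumerate(word, start=1):
                if bit:
                    s ^= i
            return s

        def play(config):
            guesses = []
            for i in range(7):
                w0, w1 = list(config), list(config)
                w0[i], w1[i] = 0, 1
                if syndrome(w0) == 0:
                    guesses.append((i, 1))   # guess the non-codeword option
                elif syndrome(w1) == 0:
                    guesses.append((i, 0))
            # the team wins if someone guessed and nobody guessed wrong
            return bool(guesses) and all(config[i] == g for i, g in guesses)

        wins = sum(play(c) for c in product([0, 1], repeat=7))
        print(wins, "/ 128 configurations won")   # expect 112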

  18. Computer Code

    NASA Technical Reports Server (NTRS)

    1985-01-01

    COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods which can be compared for accuracy. MINIVER saved Marquardt three months in computer time and $15,000.

  19. DNA codes

    SciTech Connect

    Torney, D. C.

    2001-01-01

    We have begun to characterize a variety of codes, motivated by potential implementation as (quaternary) DNA n-sequences, with letters denoted A, C, G, and T. The first codes we studied are the most reminiscent of conventional group codes. For these codes, Hamming similarity was generalized so that the score for matched letters takes more than one value, depending upon which letters are matched [2]. These codes consist of n-sequences satisfying an upper bound on the similarities, summed over the letter positions, of distinct codewords. We chose similarity 2 for matches of the letters A and T and 3 for matches of the letters C and G, providing a rough approximation to double-strand bond energies in DNA. An inherent novelty of DNA codes is 'reverse complementation'. The latter may be defined, as follows, not only for alphabets of size four but, more generally, for any even-size alphabet. All that is required is a matching of the letters of the alphabet: a partition into pairs. Then, the reverse complement of a codeword is obtained by reversing the order of its letters and replacing each letter by its match. For DNA, the matching is A/T and C/G because these are the Watson-Crick bonding pairs. Reversal arises because two DNA sequences form a double strand with opposite relative orientations. Thus, as will be described in detail, because in vitro decoding involves the formation of double-stranded DNA from two codewords, it is reasonable to assume - for universal applicability - that the reverse complement of any codeword is also a codeword. In particular, self-reverse complementary codewords are expressly forbidden in reverse-complement codes. Thus, an appropriate distance between all pairs of codewords must, when large, effectively prohibit the respective codewords from binding to form a double strand; only reverse-complement pairs of codewords should be able to bind. For most applications, a DNA code is to be bi-partitioned, such that the reverse-complementary pairs are separated
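
    A hedged sketch of the two ingredients described above, with toy codewords: the weighted similarity (score 2 for matched A/T letters, 3 for matched C/G letters) and closure of the code under reverse complementation:

        MATCH = {"A": "T", "T": "A", "C": "G", "G": "C"}
        SCORE = {"A": 2, "T": 2, "C": 3, "G": 3}

        def reverse_complement(word):
            return "".join(MATCH[ch] for ch in reversed(word))

        def similarity(u, v):
            # sum the per-letter scores over positions where u and v agree
            return sum(SCORE[a] for a, b in zip(u, v) if a == b)

        code = {"ACGTACG", "TTGCACC"}
        code |= {reverse_complement(w) for w in code}      # enforce closure
        assert all(w != reverse_complement(w) for w in code)  # no self-RC words
        print(sorted(code))
        print("similarity:", similarity("ACGTACG", "ACGAACG"))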

  20. A mathematical approach to the study of the United States Code

    NASA Astrophysics Data System (ADS)

    Bommarito, Michael J.; Katz, Daniel M.

    2010-10-01

    The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language.
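
    A hedged sketch of the representation in Python (using networkx): the same vertices carry both hierarchy edges and citation edges, and simple counts over each edge type give the kind of structure and interdependence measurements the paper tracks over time; the section names are invented examples:

        import networkx as nx

        G = nx.DiGraph()
        hierarchy = [("Title 26", "Ch. 1"), ("Ch. 1", "Sec. 61"),
                     ("Ch. 1", "Sec. 63")]
        citations = [("Sec. 63", "Sec. 61")]
        G.add_edges_from(hierarchy, kind="hierarchy")
        G.add_edges_from(citations, kind="citation")

        n_struct = sum(1 for *_, d in G.edges(data=True)
                       if d["kind"] == "hierarchy")
        n_cite = sum(1 for *_, d in G.edges(data=True)
                     if d["kind"] == "citation")
        print("structural edges:", n_struct, "| citation edges:", n_cite)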

  1. Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems

    SciTech Connect

    Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter

    2010-01-01

    From a safeguards perspective, gamma-ray imaging has the potential to reduce manpower and cost for effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values, while the pinhole gave a broad representation of source distributions.

  2. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  3. Code of Ethics.

    ERIC Educational Resources Information Center

    American Sociological Association, Washington, DC.

    The American Sociological Association's code of ethics for sociologists is presented. For sociological research and practice, 10 requirements for ethical behavior are identified, including: maintaining objectivity and integrity; fully reporting findings and research methods, without omission of significant data; reporting fully all sources of…

  4. Realizing actual feedback control of complex network

    NASA Astrophysics Data System (ADS)

    Tu, Chengyi; Cheng, Yuhua

    2014-06-01

    In this paper, we present the concept of feedbackability and how to identify the Minimum Feedbackability Set of an arbitrary complex directed network. Furthermore, we design an estimator and a feedback controller accessing one MFS to realize actual feedback control, i.e. control the system to our desired state according to the estimated system internal state from the output of estimator. Last but not least, we perform numerical simulations of a small linear time-invariant dynamics network and a real simple food network to verify the theoretical results. The framework presented here could make an arbitrary complex directed network realize actual feedback control and deepen our understanding of complex systems.

  5. Algorithms for high-speed universal noiseless coding

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Yeh, Pen-Shu; Miller, Warner

    1993-01-01

    This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers. Laboratory tests of one of these implementations recently demonstrated coding rates of up to 900 Mbits/s. A companion 'decoding module' can operate at up to half the coder's rate. The functionality provided by these modules should be applicable to most of NASA's science data. The hardware modules incorporate a powerful adaptive noiseless coder for 'standard form' data sources (i.e., sources whose symbols can be represented by uncorrelated nonnegative integers where the smaller integers are more likely than the larger ones). Performance close to the data entropy can be expected over a 'dynamic range' of from 1.5 to 12-15 bits/sample (depending on the implementation). This is accomplished by adaptively choosing the best of many Huffman-equivalent codes to use on each block of 1-16 samples. Because of the extreme simplicity of these codes, no table lookups are actually required in an implementation, thus leading to the expected very high data rate capabilities already noted.

  6. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  7. 50 CFR 253.16 - Actual cost.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Actual cost. 253.16 Section 253.16 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES FISHERIES ASSISTANCE PROGRAMS Fisheries Finance Program §...

  8. 50 CFR 253.16 - Actual cost.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 11 2013-10-01 2013-10-01 false Actual cost. 253.16 Section 253.16 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES FISHERIES ASSISTANCE PROGRAMS Fisheries Finance Program §...

  9. Humanistic Education and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1984-01-01

    Stresses the need for theoretical justification for the development of humanistic education programs in today's schools. Explores Abraham Maslow's hierarchy of needs and theory of self-actualization. Argues that Maslow's theory may be the best available for educators concerned with educating the whole child. (JHZ)

  10. Children's Rights and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1982-01-01

    Educators need to seriously reflect upon the concept of children's rights. Though the idea of children's rights has been debated numerous times, the idea remains vague and shapeless; however, Maslow's theory of self-actualization can provide the children's rights idea with a needed theoretical framework. (Author)

  11. Culture Studies and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1983-01-01

    True citizenship education is impossible unless students develop the habit of intelligently evaluating cultures. Abraham Maslow's theory of self-actualization, a theory of innate human needs and of human motivation, is a nonethnocentric tool which can be used by teachers and students to help them understand other cultures. (SR)

  12. Group Counseling for Self-Actualization.

    ERIC Educational Resources Information Center

    Streich, William H.; Keeler, Douglas J.

    Self-concept, creativity, growth orientation, an integrated value system, and receptiveness to new experiences are considered to be crucial variables to the self-actualization process. A regular, year-long group counseling program was conducted with 85 randomly selected gifted secondary students in the Farmington, Connecticut Public Schools. A…

  13. Racial Discrimination in Occupations: Perceived and Actual.

    ERIC Educational Resources Information Center

    Turner, Castellano B.; Turner, Barbara F.

    The relationship between the actual representation of Blacks in certain occupations and individual perceptions of the occupational opportunity structure were examined. A scale which rated the degree of perceived discrimination against Blacks in 21 occupations was administered to 75 black male, 70 black female, 1,429 white male and 1,457 white…

  14. Developing Human Resources through Actualizing Human Potential

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    2012-01-01

    The key to human resource development is in actualizing individual and collective thinking, feeling and choosing potentials related to our minds, hearts and wills respectively. These capacities and faculties must be balanced and regulated according to the standards of truth, love and justice for individual, community and institutional development,…

  15. Whiteheadian Actual Entitities and String Theory

    NASA Astrophysics Data System (ADS)

    Bracken, Joseph A.

    2012-06-01

    In the philosophy of Alfred North Whitehead, the ultimate units of reality are actual entities, momentary self-constituting subjects of experience which are too small to be sensibly perceived. Their combination into "societies" with a "common element of form" produces the organisms and inanimate things of ordinary sense experience. According to the proponents of string theory, tiny vibrating strings are the ultimate constituents of physical reality which in harmonious combination yield perceptible entities at the macroscopic level of physical reality. Given that the number of Whiteheadian actual entities and of individual strings within string theory are beyond reckoning at any given moment, could they be two ways to describe the same non-verifiable foundational reality? For example, if one could establish that the "superject" or objective pattern of self- constitution of an actual entity vibrates at a specific frequency, its affinity with the individual strings of string theory would be striking. Likewise, if one were to claim that the size and complexity of Whiteheadian 'societies" require different space-time parameters for the dynamic interrelationship of constituent actual entities, would that at least partially account for the assumption of 10 or even 26 instead of just 3 dimensions within string theory? The overall conclusion of this article is that, if a suitably revised understanding of Whiteheadian metaphysics were seen as compatible with the philosophical implications of string theory, their combination into a single world view would strengthen the plausibility of both schemes taken separately. Key words: actual entities, subject/superjects, vibrating strings, structured fields of activity, multi-dimensional physical reality.

  16. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques, and that is often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the

  17. California Charter Oversight: Key Elements and Actual Costs. CRB 12-001

    ERIC Educational Resources Information Center

    Blanton, Rebecca E.

    2012-01-01

    This study was mandated by SB537 (Simitian, Chapter 650, Stats. of 2007, codified at Ed. Code Section 47613), which requires the California Research Bureau (CRB) to prepare and submit to the Legislature a report on the key elements and actual costs of charter school oversight. Charter schools are public schools that are operated by entities other…

  18. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    SciTech Connect

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei

    2012-11-15

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large field of view of 5 mm radius, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark" emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation, because of its amplification of statistical fluctuations; this artifact has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture is provided. By using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained by using the PSF of the actual aperture.
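
    A hedged, toy-scale sketch of GA-based reconstruction of a coded image: candidate source distributions are scored by convolving them with an assumed point spread function and comparing against the recorded coded image (the PSF, sizes, and GA settings below are stand-ins):

        import random
        import numpy as np

        psf = np.array([1.0, 2.0, 1.0])                  # assumed aperture PSF
        true_src = np.array([0, 3, 5, 1, 0], float)
        coded = np.convolve(true_src, psf, mode="same")  # "measured" image

        def fitness(src):
            return -np.sum((np.convolve(src, psf, mode="same") - coded) ** 2)

        pop = [np.random.uniform(0, 5, 5) for _ in range(40)]
        for _ in range(300):
            pop.sort(key=fitness, reverse=True)
            parents, children = pop[:10], []
            for _ in range(30):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, 5)
                child = np.concatenate([a[:cut], b[cut:]])  # crossover
                child += np.random.normal(0, 0.1, 5)        # mutation
                children.append(np.clip(child, 0, None))
            pop = parents + children

        print("best source estimate:", np.round(max(pop, key=fitness), 2))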

  19. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.

  20. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.

    2012-12-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also

  1. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Ranero, C. R.

    2012-04-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also
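
    A hedged sketch of the graph step of such a hybrid ray tracer: Dijkstra's algorithm over a small grid of velocity nodes returns the minimum (first-arrival) travel time, which the bending step would then refine into a smooth ray; the velocity model and grid are toy values:

        import heapq, math

        NX, NZ, h = 20, 10, 1.0
        vel = [[2.0 + 0.1 * z for _ in range(NX)] for z in range(NZ)]

        def neighbors(x, z):
            for dx in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    if (dx or dz) and 0 <= x + dx < NX and 0 <= z + dz < NZ:
                        yield x + dx, z + dz, h * math.hypot(dx, dz)

        def travel_time(src, rcv):
            best = {src: 0.0}
            pq = [(0.0, src)]
            while pq:
                t, (x, z) = heapq.heappop(pq)
                if (x, z) == rcv:
                    return t
                if t > best.get((x, z), float("inf")):
                    continue
                for x2, z2, dl in neighbors(x, z):
                    # segment time = length times the average slowness
                    t2 = t + dl * 0.5 * (1 / vel[z][x] + 1 / vel[z2][x2])
                    if t2 < best.get((x2, z2), float("inf")):
                        best[(x2, z2)] = t2
                        heapq.heappush(pq, (t2, (x2, z2)))

        print("first arrival:", round(travel_time((0, 0), (19, 0)), 3))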

  2. The Actual Apollo 13 Prime Crew

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The actual Apollo 13 lunar landing mission prime crew, from left to right: Commander James A. Lovell Jr., Command Module pilot John L. Swigert Jr., and Lunar Module pilot Fred W. Haise Jr. The original Command Module pilot for this mission was Thomas 'Ken' Mattingly II but, due to exposure to German measles, he was replaced by his backup, John L. 'Jack' Swigert Jr.

  3. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  4. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

    A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
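
    A hedged numerical sketch of the correlation step in such a method: every point source casts a shifted copy of the aperture pattern, and correlating the summed shadow image with the (mean-removed) aperture recovers the source positions; the random mask here stands in for an actual coded aperture:

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(0)
        aperture = rng.integers(0, 2, (15, 15)).astype(float)

        scene = np.zeros((15, 15))
        scene[4, 7] = 1.0                  # two point sources
        scene[10, 3] = 0.5

        # shadow image: each source shifts and scales the aperture pattern
        shadow = fftconvolve(scene, aperture, mode="same")

        # decode by correlating the shadow with the mean-removed aperture
        decoder = (aperture - aperture.mean())[::-1, ::-1]
        recon = fftconvolve(shadow, decoder, mode="same")
        peak = np.unravel_index(np.argmax(recon), recon.shape)
        print("brightest reconstructed pixel:", peak)   # near (4, 7)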

  5. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards' equation). A literature review of the HELP model and the proposed codes results in two codes recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations with those obtained with the HELP code and with field data. From the results of this work, we conclude that the two new codes perform nearly the same, although, moving forward, we recommend HYDRUS-2D3D.

  6. Air resistance measurements on actual airplane parts

    NASA Technical Reports Server (NTRS)

    Weiselsberger, C

    1923-01-01

    For the calculation of the parasite resistance of an airplane, a knowledge of the resistance of the individual structural and accessory parts is necessary. The most reliable basis for this is given by tests with actual airplane parts at airspeeds which occur in practice. The data given here relate to the landing gear of a Siemens-Schuckert DI airplane; the landing gear of a 'Luftfahrzeug-Gesellschaft' airplane (type Roland D IIa); the landing gear of a 'Flugzeugbau Friedrichshafen' G airplane; a machine gun; and the exhaust manifold of a 269 HP engine.

  7. RELM (the Working Group for the Development of Region Earthquake Likelihood Models) and the Development of new, Open-Source, Java-Based (Object Oriented) Code for Probabilistic Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2001-12-01

    Given problems with virtually all previous earthquake-forecast models for southern California, and a current lack of consensus on how such models should be constructed, a joint SCEC-USGS sponsored working group for the development of Regional Earthquake Likelihood Models (RELM) has been established (www.relm.org). The goals are as follows: 1) To develop and test a range of viable earthquake-potential models for southern California (not just one "consensus" model); 2) To examine and compare the implications of each model with respect to probabilistic seismic-hazard estimates (which will not only quantify existing hazard uncertainties, but will also indicate how future research should be focused in order to reduce the uncertainties); and 3) To design and document conclusive tests of each model with respect to existing and future geophysical observations. The variety of models under development reflects the variety of geophysical constraints available; these include geological fault information, historical seismicity, geodetic observations, stress-transfer interactions, and foreshock/aftershock statistics. One reason for developing and testing a range of models is to evaluate the extent to which any one can be exported to another region where the options are more limited. RELM is not intended to be a one-time effort. Rather, we are building an infrastructure that will facilitate an ongoing incorporation of new scientific findings into seismic-hazard models. The effort involves the development of several community models and databases, one of which is new Java-based code for probabilistic seismic hazard analysis (PSHA). Although several different PSHA codes presently exist, none are open source, well documented, and written in an object-oriented programming language (which is ideally suited for PSHA). Furthermore, we need code that is flexible enough to accommodate the wide range of models currently under development in RELM. The new code is being developed under

  8. Code inspection instructional validation

    NASA Technical Reports Server (NTRS)

    Orr, Kay; Stancil, Shirley

    1992-01-01

    The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option

  9. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 10 2012-01-01 2012-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  10. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 10 2014-01-01 2014-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  11. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 10 2013-01-01 2013-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  12. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  13. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  14. The actual status of Astronomy in Moldova

    NASA Astrophysics Data System (ADS)

    Gaina, A.

    Astronomical research in the Republic of Moldova after Nicolae Donitch (Donici) (1874-1956(?)) was renewed in 1957, when a satellite-observation station was opened in Chisinau. Photometric observations and the rotation of the first Soviet artificial satellites were investigated under the SPIN program carried out by the Academies of Sciences of the former socialist countries. The work was led by Assoc. Prof. Dr. V. Grigorevskij, who also conducted research on variable stars. Later, at the beginning of the 1960s, an astronomical observatory of the Chisinau State University named after Lenin (now the State University of Moldova) was opened near the villages of Lozovo and Ciuciuleni; its work was coordinated by Odessa State University (Prof. V.P. Tsesevich) and the Astrosovet of the USSR. Two main groups worked in this area: the first led by V. Grigorevskij (until 1971) and the second led by L.I. Shakun (until 1988), both graduates of Odessa State University. Other astronomical observations were also made: comet observations, and astroclimate and atmospheric-optics studies in collaboration with the Institute of Atmospheric Optics of the Siberian branch of the USSR Academy of Sciences (V. Chernobai, I. Nacu, C. Usov and A.F. Poiata). Comet observations were also made from 1988 by D.I. Gorodetskij, who came to Chisinau from Alma-Ata and collaborated with Ukrainian astronomers led by K.I. Churyumov. Further space research was carried out at the Tiraspol State Pedagogical University from the beginning of the 1970s by a group of its teaching staff: M.D. Polanuer and V.S. Sholokhov. No collaboration between Moldovan and Transdniestrian astronomers currently exists, owing to the 1992 war in Transdniestria. An important area of research concerned the radiophysics of the ionosphere, conducted in Beltsy at the Beltsy State Pedagogical Institute from the beginning of the 1970s by a group of its teaching staff: N. D. Filip, E

  15. What Galvanic Vestibular Stimulation Actually Activates

    PubMed Central

    Curthoys, Ian S.; MacDougall, Hamish Gavin

    2012-01-01

    In a recent paper in Frontiers, Cohen et al. (2012) asked “What does galvanic vestibular stimulation actually activate?” and concluded that galvanic vestibular stimulation (GVS) causes predominantly otolithic behavioral responses. In this Perspective paper we show that such a conclusion does not follow from the evidence. The evidence from neurophysiology is very clear: galvanic stimulation activates primary otolithic neurons as well as primary semicircular-canal neurons (Kim and Curthoys, 2004). Irregular neurons are activated at lower currents. The answer to what behavior is activated depends on what is measured and how it is measured, including not just technical details, such as the frame rate of video, but the exact experimental context in which the measurement took place (visual fixation vs. total darkness). Both canal- and otolith-dependent responses are activated by GVS. PMID:22833733

  16. MODIS Solar Diffuser: Modelled and Actual Performance

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiong, Xiao-Xiong; Esposito, Joe; Wang, Xin-Dong; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument's solar diffuser is used in its radiometric calibration of the reflective solar bands (VIS, NIR, and SWIR), ranging from 0.41 to 2.1 microns. The sun illuminates the solar diffuser either directly or through an attenuation screen. The attenuation screen consists of a regular array of pinholes. The attenuated illumination pattern on the solar diffuser is not uniform, but consists of a multitude of pinhole images of the sun. This non-uniform illumination produces small but noticeable radiometric effects. A description of the computer model used to simulate the effects of the attenuation screen is given, and the predictions of the model are compared with actual, on-orbit calibration measurements.

  17. PARAVT: Parallel Voronoi Tessellation code

    NASA Astrophysics Data System (ADS)

    Gonzalez, Roberto E.

    2016-01-01

    We present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. Several serial Voronoi tessellation codes exist; however, no open-source, parallel implementation has been available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT computation uses the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density, and the Voronoi cell volume for each particle, and can compute the density on a regular grid.
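
    As a point of reference for what such a code computes, a serial version of the same quantities can be sketched with SciPy, which, like PARAVT, wraps the Qhull library (an illustration only; this is not PARAVT's interface):

        import numpy as np
        from scipy.spatial import ConvexHull, Voronoi

        # Random 3-D particle positions standing in for an N-body snapshot.
        rng = np.random.default_rng(0)
        points = rng.uniform(0.0, 100.0, size=(1000, 3))

        vor = Voronoi(points)  # Qhull-based tessellation

        # Neighbor list: two particles are neighbors if their cells share a face.
        neighbors = {i: set() for i in range(len(points))}
        for p, q in vor.ridge_points:
            neighbors[p].add(q)
            neighbors[q].add(p)

        # Voronoi density proxy: inverse cell volume. Cells touching the
        # unbounded region are skipped here; PARAVT's boundary handling
        # is more careful.
        volume = np.full(len(points), np.nan)
        for i, region_index in enumerate(vor.point_region):
            region = vor.regions[region_index]
            if region and -1 not in region:
                volume[i] = ConvexHull(vor.vertices[region]).volume
        density = 1.0 / volume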

  18. MELCOR computer code manuals

    SciTech Connect

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  19. Discomfort Glare: What Do We Actually Know?

    SciTech Connect

    Clear, Robert D.

    2012-04-19

    We reviewed glare models with an eye for missing conditions or inconsistencies. We found ambiguities as to when to use small-source versus large-source models, and as to what constitutes a glare source in a complex scene. We also found surprisingly little information validating the assumed independence of the factors driving glare. A barrier to progress in glare research is the lack of a standardized dependent measure of glare. We inverted the glare models to predict luminance, and compared model predictions against the 1949 Luckiesh and Guth data that form the basis of many of them. The models perform surprisingly poorly, particularly with regard to the luminance-size relationship and additivity. Evaluating glare in complex scenes may require fundamental changes to the form of the glare models.

  20. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    PubMed

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2 s, 6 s and 18 s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies: multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired: one for the 2-s sample and a "default" rule for any other duration. In retention-interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5 s, the geometric mean of 2 s and 6 s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. PMID:25894105

  1. Caustic-Side Solvent Extraction: Prediction of Cesium Extraction for Actual Wastes and Actual Waste Simulants

    SciTech Connect

    Delmau, L.H.; Haverlock, T.J.; Sloop, F.V., Jr.; Moyer, B.A.

    2003-02-01

    This report presents the work that followed the CSSX model development completed in FY2002. The developed cesium and potassium extraction model was based on extraction data obtained from simple aqueous media. It was tested to ensure the validity of the prediction for the cesium extraction from actual waste. Compositions of the actual tank waste were obtained from the Savannah River Site personnel and were used to prepare defined simulants and to predict cesium distribution ratios using the model. It was therefore possible to compare the cesium distribution ratios obtained from the actual waste, the simulant, and the predicted values. It was determined that the predicted values agree with the measured values for the simulants. Predicted values also agreed, with three exceptions, with measured values for the tank wastes. Discrepancies were attributed in part to the uncertainty in the cation/anion balance in the actual waste composition, but likely more so to the uncertainty in the potassium concentration in the waste, given the demonstrated large competing effect of this metal on cesium extraction. It was demonstrated that the upper limit for the potassium concentration in the feed should not exceed 0.05 M in order to maintain suitable cesium distribution ratios.

  2. Codes with special correlation.

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.

    1964-01-01

    Uniform binary codes with special correlation, including transorthogonality and simplex codes, Hadamard matrices, and difference sets.

  3. Radioactive Doses - Predicted and Actual - and Likely Health Effects.

    PubMed

    Nagataki, S; Takamura, N

    2016-04-01

    Five years have passed since the nuclear accident at the Fukushima Daiichi Nuclear Power Station on 11 March 2011. Here we use reports from international organisations as sources of predicted values obtained from environmental monitoring and dose-estimation models, and reports from various institutes in Japan as sources of actual individual values. The World Health Organization, based on information available up to 11 September 2011 (and published in 2012), reported that characteristic effective doses in the first year after the accident, to all age groups, were estimated to be in the 10-50 mSv dose band in example locations in evacuation areas. Estimated characteristic thyroid doses to infants in Namie Town were within the 100-200 mSv dose band. A report from the United Nations Scientific Committee on the Effects of Atomic Radiation published in 2014 shows that the effective dose received by adults in evacuation areas during the first year after the accident was 1.1-13 mSv. The absorbed dose to the thyroid in evacuated settlements was 7.2-35 mSv in adults and 15-83 mSv in 1-year-old infants. Individual external radiation exposure in the initial 4 months after the accident, estimated by superimposing individual behaviour data on to a daily dose-rate map, was less than 3 mSv in 93.9% of residents (maximum 15 mSv) in evacuation areas. Actual individual thyroid equivalent doses were less than 15 mSv in 98.8% of children (maximum 25 mSv) in evacuation areas. When uncertainty exists in dose-estimation models, it may be sensible to err on the side of caution, and final estimated doses are often much greater than actual radiation doses. However, overestimation of the dose at the time of an accident has a great influence on the psychology of residents. More than 100 000 residents have not returned to the evacuation areas 5 years after the Fukushima accident because of the social and mental effects during the initial period of the disaster. Estimates of

  4. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  5. DNAD, a simple tool for automatic differentiation of Fortran codes using dual numbers

    NASA Astrophysics Data System (ADS)

    Yu, Wenbin; Blair, Maxwell

    2013-05-01

    DNAD (dual number automatic differentiation) is a simple, general-purpose tool to automatically differentiate Fortran codes written in modern Fortran (F90/95/2003) or legacy codes written in previous versions of the Fortran language. It implements the forward mode of automatic differentiation using the arithmetic of dual numbers and the operator-overloading feature of F90/95/2003. Only minimal changes to the source code are needed to compute the first derivatives of Fortran programs. The advantages of DNAD in comparison to other existing similar computer codes are its programming simplicity, extensibility, and computational efficiency. Specifically, DNAD is more accurate and efficient than the popular complex-step approximation. Several examples are used to demonstrate its applications and advantages. Program summary: Program title: DNAD. Catalogue identifier: AEOS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3922. No. of bytes in distributed program, including test data, etc.: 18 275. Distribution format: tar.gz. Programming language: Fortran 90/95/2003. Computer: All computers with a modern Fortran compiler. Operating system: All platforms with a modern Fortran compiler. Classification: 4.12, 6.2. Nature of problem: Derivatives of outputs with respect to inputs of a Fortran code are often needed in physics, chemistry, and engineering. The author of the analysis code may no longer be available and the user may not have a deep knowledge of the code. Thus a simple tool is necessary to automatically differentiate the code with very minimal changes to the source code. This can be achieved using dual-number arithmetic and operator overloading. Solution method: A new data type is defined with the first scalar
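
    The dual-number arithmetic at the heart of DNAD is easy to illustrate outside Fortran. The sketch below (in Python, our illustration of the method rather than DNAD's own code) carries a derivative component alongside every value and propagates it through overloaded operators by the chain rule:

        class Dual:
            """A value a + b*eps with eps**2 = 0; dot carries d(value)/d(input)."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot

            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__

            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                # Product rule: (uv)' = u'v + uv'
                return Dual(self.val * o.val,
                            self.dot * o.val + self.val * o.dot)
            __rmul__ = __mul__

        def f(x):                      # any code built from overloaded ops
            return 3 * x * x + 2 * x

        y = f(Dual(2.0, 1.0))          # seed dx/dx = 1
        print(y.val, y.dot)            # 16.0 14.0, i.e. f(2) and f'(2)

    Exactly as in DNAD, the analysis code itself (here f) is left untouched; only the type of its inputs changes.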

  6. LFSC - Linac Feedback Simulation Code

    SciTech Connect

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for the simulation of beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations for the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical-noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain-text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  7. Strongly Secure Linear Network Coding

    NASA Astrophysics Data System (ADS)

    Harada, Kunihiko; Yamamoto, Hirosuke

    In a network with capacity h for multicast, information X^h = (X1, X2, …, Xh) can be transmitted from a source node to sink nodes without error by a linear network code. Furthermore, secret information S^r = (S1, S2, …, Sr) can be transmitted securely against wiretappers by k-secure network coding for k ≤ h − r. In this case, no information about the secret leaks out even if an adversary wiretaps k edges, i.e., channels. However, if an adversary wiretaps k+1 edges, some Si may leak out explicitly. In this paper, we propose strongly k-secure network coding based on strongly secure ramp secret-sharing schemes. In this coding, no information leaks out about any (S_{i1}, S_{i2}, …, S_{i(r−j)}) even if an adversary wiretaps k+j channels. We also give an algorithm to construct a strongly k-secure network code directly, and a transform to convert a non-secure network code into a strongly k-secure network code. Furthermore, some sufficient conditions on the alphabet size to realize strongly k-secure network coding are derived for the case of k
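
    The masking idea behind k-secure coding can be shown in miniature for h = 2, r = 1, k = 1 over GF(2) (our own toy example, not the paper's construction): the source mixes the secret with a uniformly random key so that any single wiretapped edge carries pure randomness.

        import secrets

        S = 1                       # secret bit
        R = secrets.randbits(1)     # uniformly random key bit

        # The two edges of a capacity-2 network carry:
        X1, X2 = R, S ^ R

        # Any one wiretapped edge (k = 1) is a uniform bit independent of S;
        # the sink, receiving both symbols, recovers the secret.
        assert X1 ^ X2 == S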

  8. Evaluation of high-energy brachytherapy source electronic disequilibrium and dose from emitted electrons

    SciTech Connect

    Ballester, Facundo; Granero, Domingo; Perez-Calatayud, Jose; Melhus, Christopher S.; Rivard, Mark J.

    2009-09-15

    Purpose: The region of electronic disequilibrium near photon-emitting brachytherapy sources of high-energy radionuclides (60Co, 137Cs, 192Ir, and 169Yb) and contributions to total dose from emitted electrons were studied using the GEANT4 and PENELOPE Monte Carlo codes. Methods: Hypothetical sources with active and capsule materials mimicking those of actual sources but with spherical shape were examined. Dose contributions due to source photons, x rays, and bremsstrahlung; source beta-minus, Auger electrons, and internal-conversion electrons; and water collisional kerma were scored. To determine whether conclusions about electronic-equilibrium conditions and the electron dose contribution to total dose obtained for the representative spherical sources could be applied to actual sources, the 192Ir mHDR-v2 source model (Nucletron B.V., Veenendaal, The Netherlands) was simulated for comparison to the spherical-source results and to published data. Results: Electronic equilibrium within 1% is reached for 60Co, 137Cs, 192Ir, and 169Yb at distances greater than 7, 3.5, 2, and 1 mm from the source center, respectively, in agreement with other published studies. At 1 mm from the source center, the electron contributions to total dose are 1.9% and 9.4% for 60Co and 192Ir, respectively. Electron emissions become important (i.e., >0.5%) within 3.3 mm of 60Co and 1.7 mm of 192Ir sources, yet are negligible over all distances for 137Cs and 169Yb. Electronic-equilibrium conditions along the transverse source axis for the mHDR-v2 source are comparable to those of the spherical sources, while the electron dose contributions to total dose are quite different. Conclusions: Electronic-equilibrium conditions obtained for spherical sources can be generalized to actual sources, while the electron contribution to total dose depends strongly on source dimensions, material composition, and electron spectra.

  9. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    SciTech Connect

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available.

  10. Consequences of Predicted or Actual Asteroid Impacts

    NASA Astrophysics Data System (ADS)

    Chapman, C. R.

    2003-12-01

    Earth impact by an asteroid could have enormous physical and environmental consequences. Impactors larger than 2 km diameter could be so destructive as to threaten civilization. Since such events greatly exceed any other natural or man-made catastrophe, much extrapolation is necessary just to understand environmental implications (e.g. sudden global cooling, tsunami magnitude, toxic effects). Responses of vital elements of the ecosystem (e.g. agriculture) and of human society to such an impact are conjectural. For instance, response to the Blackout of 2003 was restrained, but response to 9/11 terrorism was arguably exaggerated and dysfunctional; would society be fragile or robust in the face of global catastrophe? Even small impacts, or predictions of impacts (accurate or faulty), could generate disproportionate responses, especially if news media reports are hyped or inaccurate or if responsible entities (e.g. military organizations in regions of conflict) are inadequately aware of the phenomenology of small impacts. Asteroid impact is the one geophysical hazard of high potential consequence with which we, fortunately, have essentially no historical experience. It is thus important that decision makers familiarize themselves with the hazard and that society (perhaps using a formal procedure, like a National Academy of Sciences study) evaluate the priority of addressing the hazard by (a) further telescopic searches for dangerous but still-undiscovered asteroids and (b) development of mitigation strategies (including deflection of an oncoming asteroid and on-Earth civil defense). I exemplify these issues by discussing several representative cases that span the range of parameters. Many of the specific physical consequences of impact involve effects like those of other geophysical disasters (flood, fire, earthquake, etc.), but the psychological and sociological aspects of predicted and actual impacts are distinctive. Standard economic cost/benefit analyses may not

  11. Phobos lander coding system: Software and analysis

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.

    1988-01-01

    The software developed for the decoding system used in the telemetry link of the Phobos Lander mission is described. Encoders and decoders are provided to cover the three possible telemetry configurations. The software can be used to decode actual data or to simulate the performance of the telemetry system. The theoretical properties of the codes chosen for this mission are analyzed and discussed.

  12. Indicated and actual mass inventory measurements for an inverted U-tube steam generator

    SciTech Connect

    Loomis, G.G.; Plessinger, M.P.; Boucher, T.J.

    1986-01-01

    Results from an experimental investigation of actual versus indicated secondary liquid level in a steam generator at steaming conditions are presented. The experimental investigation was performed in two different small-scale U-tube-in-shell steam generators at typical pressurized-water-reactor operating conditions (5-7 MPa, saturated) in the Semiscale facility. During steaming conditions, the indicated secondary liquid level was found to vary considerably from the actual ''bottled-up'' liquid level. These differences between indicated and actual liquid levels are related to the frictional pressure drop associated with the two-phase steaming conditions in the riser. Data from a series of bottle-up experiments (in which the primary heat source and the secondary feed and steam flows are terminated simultaneously) are tabulated, and the actual liquid level is correlated to the indicated liquid level.

  13. Student Codes of Conduct: A Guide to Policy Review and Code Development.

    ERIC Educational Resources Information Center

    New Jersey State Dept. of Education, Trenton. Div. of General Academic Education.

    Designed to assist New Jersey school districts in developing and implementing student codes of conduct, this document begins by examining the need for policy and clearly established rules, the rationale for codes of conduct, and the areas that such codes should address. Following a discussion of substantive and procedural rights and sources of…

  14. Coded apertures for efficient pyroelectric motion tracking.

    PubMed

    Gopinathan, U; Brady, D; Pitsianis, N

    2003-09-01

    Coded apertures may be designed to modulate the visibility between source and measurement spaces such that the position of a source among N resolution cells may be discriminated using a number of measurements on the order of the logarithm of N. We use coded apertures as reference structures in a pyroelectric motion-tracking system. This sensor system is capable of detecting source motion in any one of 15 cells uniformly distributed over a 1.6 m x 1.6 m domain using 4 pyroelectric detectors. PMID:19466102
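
    The logarithmic scaling is essentially binary coding of the cell index: with each detector seeing a different half of the cells, the pattern of detector responses identifies the occupied cell, so 15 cells need only 4 detectors, as in the system described. A toy sketch of the principle (ours, not the paper's mask design):

        import math

        N = 15                                  # resolution cells
        M = math.ceil(math.log2(N + 1))         # detectors needed: 4

        # Detector j "sees" cell c iff bit j of (c + 1) is set; optically,
        # the coded aperture implements this visibility pattern.
        def measurements(cell):
            return [((cell + 1) >> j) & 1 for j in range(M)]

        def decode(bits):
            return sum(b << j for j, b in enumerate(bits)) - 1

        for cell in range(N):
            assert decode(measurements(cell)) == cell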

  15. Compressed image transmission based on fountain codes

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Wu, Xinhong; Jiao, L. C.

    2011-11-01

    In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over a wireless channel. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, are described in the paper; the UEP scheme performs better than the EEP scheme. The proposed scheme not only adaptively adjusts the length of the fountain code according to the channel loss rate, but can also reconstruct the image even on a bad channel.
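
    The rateless property works roughly as follows: each output symbol is the XOR of a random subset of source symbols, and the receiver peels degree-one symbols until everything is resolved, collecting however many symbols the channel happens to demand. A minimal LT-style sketch (with a uniform toy degree distribution instead of the soliton distribution used in practice; not the paper's codec):

        import random

        def encode_symbol(source, rng):
            """One fountain output: XOR of a random subset of source bytes."""
            d = rng.randint(1, len(source))
            idx = set(rng.sample(range(len(source)), d))
            val = 0
            for i in idx:
                val ^= source[i]
            return idx, val

        def decode(packets, k):
            """Peeling decoder: repeatedly resolve degree-one packets."""
            decoded = [None] * k
            work = [[set(idx), val] for idx, val in packets]
            progress = True
            while progress and any(v is None for v in decoded):
                progress = False
                for pkt in work:
                    idx = pkt[0]
                    for i in list(idx):          # subtract known symbols
                        if decoded[i] is not None:
                            idx.discard(i)
                            pkt[1] ^= decoded[i]
                    if len(idx) == 1:
                        (i,) = idx
                        if decoded[i] is None:
                            decoded[i] = pkt[1]
                            progress = True
            return decoded

        rng = random.Random(1)
        source = [7, 42, 99, 5]                  # k = 4 source symbols
        packets = []
        while True:                              # collect until decodable
            packets.append(encode_symbol(source, rng))
            out = decode(packets, len(source))
            if None not in out:
                break
        assert out == source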

  16. Longwave infrared (LWIR) coded aperture dispersive spectrometer.

    PubMed

    Fernandez, C; Guenther, B D; Gehm, M E; Brady, D J; Sullivan, M E

    2007-04-30

    We describe a static aperture-coded, dispersive longwave infrared (LWIR) spectrometer that uses a microbolometer array at the detector plane. The two-dimensional aperture code is based on a row-doubled Hadamard mask with transmissive and opaque openings. The independent-column-code nature of the matrix makes for a mathematically well-defined pattern that spatially and spectrally maps the source information to the detector plane. Post-processing techniques on the data provide spectral estimates of the source. Comparative experimental results between a slit and a coded aperture for emission spectroscopy from a CO(2) laser are demonstrated. PMID:19532832
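
    The Hadamard multiplexing principle can be sketched numerically. The toy below uses a plain S-matrix rather than the instrument's row-doubled layout, and is our illustration, not the spectrometer's processing code: each mask row passes about half the light per reading, and the well-conditioned 0/1 code is inverted in post-processing.

        import numpy as np
        from scipy.linalg import hadamard

        # Order-7 S-matrix: drop the first row and column of an order-8
        # Hadamard matrix, then map +1 -> 0 (opaque) and -1 -> 1 (open).
        H = hadamard(8)
        S = (1 - H[1:, 1:]) // 2

        true_spectrum = np.array([5., 1., 0., 3., 2., 0., 4.])

        # Each detector reading sums the spectral channels passed by one row.
        readings = S @ true_spectrum

        # Recover the spectral estimate by inverting the code.
        estimate = np.linalg.solve(S.astype(float), readings)
        assert np.allclose(estimate, true_spectrum)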

  17. Hybrid concatenated codes and iterative decoding

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)

    2000-01-01

    Several improved turbo code apparatuses and methods. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.
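
    The first class (a source feeding one encoder directly and, through an interleaver, a second encoder) can be sketched abstractly, with two-state rate-1 accumulators standing in for the constituent encoders (a toy of the parallel-concatenation structure, not the patented apparatus):

        import random

        def accumulator(bits):
            """Rate-1 recursive encoder: running XOR of the input."""
            out, state = [], 0
            for b in bits:
                state ^= b
                out.append(state)
            return out

        def parallel_concatenate(data, perm):
            """Encoder 1 sees d; encoder 2 sees the interleaved pi(d)."""
            code1 = accumulator(data)
            code2 = accumulator([data[i] for i in perm])
            return code1, code2      # class (2) would also transmit d itself

        data = [1, 0, 1, 1, 0, 0, 1, 0]
        perm = list(range(len(data)))
        random.Random(7).shuffle(perm)           # the interleaver
        c1, c2 = parallel_concatenate(data, perm)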

  18. The moving mesh code SHADOWFAX

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  19. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label-set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  20. 40 CFR 63.2853 - How do I determine the actual solvent loss?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 13 2012-07-01 2012-07-01 false How do I determine the actual solvent loss? 63.2853 Section 63.2853 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards...

  1. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides real-time processing of the 3-D movement coordinates of up to 20 markers at a time. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in the cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation. PMID:25805426
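
    The event-trigger idea (function 3 above) amounts to watching marker coordinates cross into a region of interest and firing an external device at that moment. A conceptual sketch follows (in Python; the toolbox itself is MATLAB and its actual function names differ):

        import numpy as np

        def first_entry(trajectory, center, radius):
            """First frame at which a marker enters a spherical region
            of interest, or None if it never does."""
            d = np.linalg.norm(trajectory - center, axis=1)
            inside = np.flatnonzero(d < radius)
            return int(inside[0]) if inside.size else None

        # 3-D coordinates of one marker over time (frames x 3).
        rng = np.random.default_rng(3)
        traj = np.cumsum(rng.normal(0.0, 2.0, (500, 3)), axis=0)

        frame = first_entry(traj, center=np.array([30., 10., 5.]), radius=15.0)
        if frame is not None:
            print(f"trigger reinforcer (light/sound/odor) at frame {frame}")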

  2. Coding of Neuroinfectious Diseases.

    PubMed

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789

  3. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  4. Authentication codes that permit arbitration

    SciTech Connect

    Simmons, G.J.

    1987-01-01

    The objective of authentication is to detect attempted deceptions in a communications channel. Traditionally this has been restricted to providing the authorized receiver with a capability of detecting unauthentic messages. The known codes have all left open the possibility either for the transmitter to disavow a message that he actually sent to the receiver, i.e., an authentic message, or else for the receiver to falsely attribute a message of his own devising to the transmitter. Of course the party being deceived would know that he was the victim of a deception by the other, but would be unable to ''prove'' this to a third party. Ideally, authentication should provide a means to detect attempted deceptions by insiders (the transmitter or receiver) as well as outsiders (the opponent). It has been an open question whether it was possible to devise authentication codes that would permit a third party, an arbiter, to decide (in probability) whether the transmitter or the receiver was cheating in the event of a dispute. We answer this question in the affirmative by first constructing an example of an authentication code that both permits the receiver to detect outsider deceptions and permits a designated arbiter to detect insider deceptions, and then by generalizing this construction to an infinite class of such codes.

  5. Gauging triple stores with actual biological data

    PubMed Central

    2012-01-01

    Background: Semantic Web technologies have been developed to overcome the limitations of the current Web and conventional data-integration solutions. The Semantic Web is expected to link all the data present on the Internet instead of linking just documents. One of the foundations of Semantic Web technologies is the knowledge-representation language Resource Description Framework (RDF). Knowledge expressed in RDF is typically stored in so-called triple stores (also known as RDF stores), from which it can be retrieved with SPARQL, a language designed for querying RDF-based models. Semantic Web technologies should allow federated queries over multiple triple stores. In this paper we compare the efficiency of a set of biologically relevant queries as applied to a number of different triple-store implementations. Results: Previously we developed a library of queries to guide the use of our knowledge base Cell Cycle Ontology, implemented as a triple store. We have now compared the performance of these queries on five non-commercial triple stores: OpenLink Virtuoso (Open-Source Edition), Jena SDB, Jena TDB, SwiftOWLIM and 4Store. We examined three performance aspects: the data uploading time, the query execution time and the scalability. The queries we had chosen addressed diverse ontological or biological questions, and we found that individual store performance was quite query-specific. We identified three groups of queries displaying similar behaviour across the different stores: 1) relatively short response-time queries, 2) moderate response-time queries and 3) relatively long response-time queries. SwiftOWLIM proved to be the winner in the first group, 4Store in the second and Virtuoso in the third. Conclusions: Our analysis showed that some queries behaved idiosyncratically, in a triple-store-specific manner, mainly with SwiftOWLIM and 4Store. Virtuoso, as expected, displayed a very balanced performance - its load time and its response time for all the
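
    For readers unfamiliar with the query side being benchmarked, a minimal RDF-plus-SPARQL round trip looks like this (sketched with Python's rdflib on made-up triples; the paper itself benchmarks server-class stores such as Virtuoso and 4Store):

        from rdflib import Graph, Namespace, RDF

        EX = Namespace("http://example.org/")
        g = Graph()

        # Two toy triples standing in for a biological knowledge base.
        g.add((EX.cyclinB, RDF.type, EX.Protein))
        g.add((EX.cyclinB, EX.participatesIn, EX.mitosis))

        # SPARQL: which proteins participate in mitosis?
        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?p WHERE {
            ?p a ex:Protein .
            ?p ex:participatesIn ex:mitosis .
        }
        """
        for row in g.query(q):
            print(row.p)        # -> http://example.org/cyclinB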

  6. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial-unit-memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit-memory codes. The effect of phase-locked-loop (PLL) tracking error on coding-system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. The study of optimum modulation signal sets for a non-white Gaussian channel considered a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  7. Leveraging Code Comments to Improve Software Reliability

    ERIC Educational Resources Information Center

    Tan, Lin

    2009-01-01

    Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…

  8. A comprehensive approach to actual polychlorinated biphenyls environmental contamination.

    PubMed

    Risso, F; Magherini, A; Ottonelli, M; Magi, E; Lottici, S; Maggiolo, S; Garbarino, M; Narizzano, R

    2016-05-01

    Worldwide polychlorinated biphenyl (PCB) pollution is due to complex mixtures with a high number of congeners, making the determination of total PCBs in the environment an open challenge. Because the bulk of PCB production consisted of Aroclor mixtures, this analysis is usually approached through empirical mixture identification via visual inspection of the chromatogram. However, the reliability of such identification is questionable, as patterns in real samples are strongly affected by the frequent occurrence of more than one mixture. Our approach is based on the determination of a limited number of congeners chosen to enable objective criteria for Aroclor identification, combining the advantages of congener-specific analysis with those of total PCB determination. A quantitative relationship is established between the congeners and any single mixture, or combination of mixtures, leading to the identification of the actual contamination composition. The approach, owing to its generality, allows the use of different sets of congeners and any technical mixture, including non-Aroclor ones. The results confirm that PCB environmental pollution in northern Italy is based on Aroclor. Our methodology represents an important tool for understanding the source and fate of PCB contamination. PMID:26805927
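
    One standard way to realize such a quantitative congener-to-mixture relationship is non-negative least-squares unmixing; the sketch below illustrates the general idea with invented numbers and is not the paper's published algorithm:

        import numpy as np
        from scipy.optimize import nnls

        # Columns: fractional congener profiles of two hypothetical technical
        # mixtures; rows: the measured marker congeners. Values are invented.
        profiles = np.array([
            [0.30, 0.05],
            [0.25, 0.15],
            [0.20, 0.30],
            [0.15, 0.25],
            [0.10, 0.25],
        ])

        # Congener concentrations measured in a sample that is in fact
        # 70% mixture A + 30% mixture B.
        sample = profiles @ np.array([0.7, 0.3])

        weights, residual = nnls(profiles, sample)
        print(weights)          # ~[0.7, 0.3]: estimated mixture composition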

  9. Are presolar dust grains from novae actually from supernovae?

    NASA Astrophysics Data System (ADS)

    Nittler, L. R.; Hoppe, P.

    2005-05-01

    Meteorites contain presolar stardust grains that formed in prior generations of stars and exhibit large isotopic anomalies reflecting the nuclear processes that occurred in their individual parent stars. RGB and AGB stars and supernovae are well established as sources of many of these grains. Novae have been proposed as sources for a few SiC and graphite grains with low 12C/13C and 14N/15N ratios and unusual Si isotopic ratios (Amari et al., ApJ, 551, 1065). We have found three SiC grains from the Murchison meteorite with C and N isotopic ratios similar to the previously reported putative nova grains. However, the isotopic signatures of Si, Ca, Al and Ti in one of the grains (334-2) clearly indicate a supernova origin, especially excess 28Si correlated with excess 44Ca. The latter signature is attributable to in situ decay of 44Ti (half-life = 50 yr). Another 13C- and 15N-rich grain (151-4) has a large 47Ti enrichment. This signature is not expected for nova nucleosynthesis. Thus, the new isotopic data raise the possibility that the grains previously reported to have formed in novae actually formed in supernovae, and that novae have not left a record in the presolar grain populations that have been studied so far. Moreover, the results for grain 334-2 indicate that supernovae contain regions highly enriched in both 13C and 15N. This is not predicted by current models but may bear on the cosmic origin of 15N. This work was funded in part by NASA.

  10. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    SciTech Connect

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities: 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal was demonstrated in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-averaged Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge-plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker

  11. Constant-quality constrained-rate allocation for FGS video coded bitstreams

    NASA Astrophysics Data System (ADS)

    Zhang, Xi Min; Vetro, Anthony; Shi, Yun-Qing; Sun, Huifang

    2002-01-01

    This paper proposes an optimal rate-allocation scheme for Fine-Granular-Scalability (FGS) coded bitstreams that can achieve constant-quality reconstruction of frames under a dynamic rate-budget constraint while also minimizing the overall distortion. To achieve this, we propose a novel R-D labeling scheme to characterize the R-D relationship of the source coding process. Specifically, sets of R-D points are extracted during the encoding process, and linear interpolation is used to estimate the actual R-D curve of the enhancement-layer signal. The extracted R-D information is then used by an enhancement-layer transcoder to determine the bits that should be allocated per frame. A sliding-window-based rate-allocation method is proposed to realize constant quality among frames. This scheme is first considered for a single FGS-coded source, then extended to operate on multiple sources. With the proposed scheme, rate allocation can be performed in a single pass, hence the complexity is quite low. Experimental results confirm the effectiveness of the proposed scheme under static and dynamic bandwidth conditions.
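
    The interpolation-and-allocation step can be sketched as follows (a simplified illustration of the idea, not the authors' implementation): linear interpolation of the extracted (rate, distortion) points gives each frame a D(R) curve, and bisecting on a common distortion level yields a constant-quality bit budget per frame.

        import numpy as np

        def rate_for(rd_points, target_d):
            """Rate needed to reach target_d on the piecewise-linear
            D(R) curve estimated from extracted R-D points."""
            rates, dists = zip(*sorted(rd_points))   # rate up, distortion down
            return float(np.interp(target_d, dists[::-1], rates[::-1]))

        def allocate(frames, budget, iters=50):
            """Bisect on a common distortion so summed rates meet the budget."""
            lo, hi = 0.0, max(d for f in frames for _, d in f)
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if sum(rate_for(f, mid) for f in frames) <= budget:
                    hi = mid          # affordable: try lower distortion
                else:
                    lo = mid
            return [rate_for(f, hi) for f in frames]

        # Hypothetical extracted R-D points (bits, MSE) for two frames.
        frame_a = [(100, 40.0), (200, 20.0), (400, 8.0)]
        frame_b = [(100, 60.0), (200, 35.0), (400, 15.0)]
        print(allocate([frame_a, frame_b], budget=500))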

  12. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    An improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from a low-information-content source. The method of coding is implemented in relatively simple, high-speed arithmetic and logic circuits. It also increases coding efficiency beyond that of the established Huffman coding method in that the average number of bits per code symbol can be less than 1, which is the lower bound for a Huffman code.
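
    Why 1 bit/symbol floors symbol-by-symbol Huffman coding, and how coding blocks of symbols jointly breaks through it, can be shown concretely (a generic demonstration of the principle, not the encoding method of this work):

        import heapq
        from itertools import product

        def huffman_avg_length(probs):
            """Average Huffman codeword length in bits: equal to the sum
            of the weights of all pairwise merges."""
            heap = list(probs)
            heapq.heapify(heap)
            total = 0.0
            while len(heap) > 1:
                a, b = heapq.heappop(heap), heapq.heappop(heap)
                total += a + b
                heapq.heappush(heap, a + b)
            return total

        # Low-entropy binary source: P(0) = 0.95, P(1) = 0.05 (H ~ 0.29 bits).
        p = {0: 0.95, 1: 0.05}

        print(huffman_avg_length(p.values()))      # 1.0 bit/symbol: the floor

        # Jointly code blocks of 4 symbols: block probabilities multiply.
        blocks = [p[a] * p[b] * p[c] * p[d]
                  for a, b, c, d in product(p, repeat=4)]
        print(huffman_avg_length(blocks) / 4)      # < 1 bit per source symbol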

  13. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel-coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low-Density Parity-Check (LDPC) codes; thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable-node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable-node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel-capacity thresholds. Iterative-decoding simulation results are provided. ARA codes also have a projected-graph or protograph representation that allows for high-speed decoder implementation.
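
    The encoder chain (precode with an accumulator, repeat, interleave, accumulate) is simple enough to sketch directly; the toy below is our reading of that structure, with puncturing omitted:

        import random

        def accumulate(bits):
            """Running mod-2 sum: y[i] = x[0] ^ ... ^ x[i]."""
            out, s = [], 0
            for b in bits:
                s ^= b
                out.append(s)
            return out

        def ara_encode(data, q, perm):
            """Accumulate -> repeat q times -> interleave -> accumulate."""
            pre = accumulate(data)                   # accumulator as precoder
            rep = [b for b in pre for _ in range(q)]
            return accumulate([rep[i] for i in perm])

        data = [1, 0, 1, 1]
        q = 3
        perm = list(range(len(data) * q))
        random.Random(0).shuffle(perm)               # the interleaver
        codeword = ara_encode(data, q, perm)         # rate 1/q before puncturing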

  14. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties, a technique known as Trellis-Coded Modulation (TCM) was developed for providing spectrally efficient error-correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep-space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.

  15. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  16. Random coding strategies for minimum entropy

    NASA Technical Reports Server (NTRS)

    Posner, E. C.

    1975-01-01

    This paper proves that there exists a fixed random coding strategy for block coding a memoryless information source to achieve the absolute epsilon entropy of the source. That is, the strategy can be chosen independent of the block length. The principal new tool is an easy result on the semicontinuity of the relative entropy functional of one probability distribution with respect to another. The theorem generalizes a result from rate-distortion theory to the 'zero-infinity' case.

  18. The Integrated TIGER Series Codes

    SciTech Connect

    Kensek, Ronald P.; Franke, Brian C.; Laub, Thomas W.

    2006-01-15

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  19. Finding the key to a better code: code team restructure to improve performance and outcomes.

    PubMed

    Prince, Cynthia R; Hines, Elizabeth J; Chyou, Po-Huang; Heegeman, David J

    2014-09-01

    Code teams respond to acute, life-threatening changes in a patient's status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team's resuscitation could be hindered. To improve the overall performance of our hospital's code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time from when the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements were identified in team members' confidence in the skills specific to their role and in the clarity of their role's position. Smaller improvements were seen in team leadership and in the reduction of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year after the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as a decreased average time-to-defibrillation, making it beneficial to many, including the team members, the medical institution, and patients. PMID:24667218

  20. Orthogonal-state-based deterministic secure quantum communication without actual transmission of the message qubits

    NASA Astrophysics Data System (ADS)

    Shukla, Chitra; Pathak, Anirban

    2014-09-01

    Recently, an orthogonal-state-based protocol of direct quantum communication without actual transmission of particles was proposed by Salih et al. (Phys Rev Lett 110:170502, 2013) using the chained quantum Zeno effect. The counterfactual condition (claim) of Salih et al. is weakened here to the extent that transmission of particles is allowed, but transmission of the message qubits (the qubits on which the secret information is encoded) is not. Remaining within this weaker (non-counterfactual) condition, an orthogonal-state-based protocol of deterministic secure quantum communication is proposed using entanglement swapping, where actual transmission of the message qubits is not required. Further, it is shown that there exists a large class of quantum states that can be used to implement the proposed protocol. The security of the proposed protocol originates from the monogamy of entanglement. As the protocol can be implemented without using conjugate coding, its security is independent of non-commutativity.

  1. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  2. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  3. Binary primitive alternant codes

    NASA Technical Reports Server (NTRS)

    Helgert, H. J.

    1975-01-01

    In this note we investigate the properties of two classes of binary primitive alternant codes that are generalizations of the primitive BCH codes. For these codes we establish certain equivalence and invariance relations and obtain values of d and d*, the minimum distances of the prime and dual codes.

  4. Algebraic geometric codes

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1991-01-01

    The performance characteristics are discussed of certain algebraic geometric codes. Algebraic geometric codes have good minimum distance properties. On many channels they outperform other comparable block codes; therefore, one would expect them eventually to replace some of the block codes used in communications systems. It is suggested that it is unlikely that they will become useful substitutes for the Reed-Solomon codes used by the Deep Space Network in the near future. However, they may be applicable to systems where the signal to noise ratio is sufficiently high so that block codes would be more suitable than convolutional or concatenated codes.

  5. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. PMID:17153952

  6. 26 CFR 1.953-2 - Actual United States risks.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Excerpt (Title 26 (Internal Revenue), Vol. 10, revised 2014-04-01), § 1.953-2, Actual United States risks: "…and water damage risks incurred when property is actually located in the United States and marine… coverage as '.825% plus .3% fire, etc. risks plus .12% water risks = 1.245%', a reasonable basis exists…"

  7. Self-actualization: Its Use and Misuse in Teacher Education.

    ERIC Educational Resources Information Center

    Ivie, Stanley D.

    1982-01-01

    The writings of Abraham Maslow are analyzed to determine the meaning of the psychological term "self-actualization." After pointing out that self-actualization is a rare quality and that it has little to do with formal education, the author concludes that the concept has little practical relevance for teacher education. (PP)

  8. The Self-Actualization of Polk Community College Students.

    ERIC Educational Resources Information Center

    Pearsall, Howard E.; Thompson, Paul V., Jr.

    This article investigates the concept of self-actualization introduced by Abraham Maslow (1954). A summary of Maslow's Needs Hierarchy, along with a description of the characteristics of the self-actualized person, is presented. An analysis of humanistic education reveals it has much to offer as a means of promoting the principles of…

  9. From Self-Awareness to Self-Actualization

    ERIC Educational Resources Information Center

    Cangemi, Joseph P.; Englander, Meryl R.

    1974-01-01

    Highest priority of education is to help students utilize as much of their talent as is possible. Third Force psychologists would interpret this as becoming self-actualized. Self-awareness is required for psychological growth. Without self-awareness there can be no growth, no mental hygiene, and no self-actualization. (Author)

  10. 12 CFR 1806.203 - Selection Process, actual award amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Excerpt (Title 12 (Banks and Banking), Vol. 7, revised 2010-01-01), Department of the Treasury, Bank Enterprise Award Program, Awards, § 1806.203, Selection Process, actual award amounts: "…round: (1) To select Applicants not previously selected, using the calculation and selection…"

  11. Self-Actualization and the Effective Social Studies Teacher.

    ERIC Educational Resources Information Center

    Farmer, Rodney B.

    1980-01-01

    Discusses a study undertaken to investigate the relationship between social studies teachers' degrees of self-actualization and their teacher effectiveness. Investigates validity of using Maslow's theory of self-actualization as a way of identifying the effective social studies teacher personality. (Author/DB)

  12. Facebook as a Library Tool: Perceived vs. Actual Use

    ERIC Educational Resources Information Center

    Jacobson, Terra B.

    2011-01-01

    As Facebook has come to dominate the social networking site arena, more libraries have created their own library pages on Facebook to create library awareness and to function as a marketing tool. This paper examines reported versus actual use of Facebook in libraries to identify discrepancies between intended goals and actual use. The results of a…

  13. Perceived and Actual Student Support Needs in Distance Education.

    ERIC Educational Resources Information Center

    Visser, Lya; Visser, Yusra Laila

    2000-01-01

    This study sought to determine the academic, affective, and administrative support expectations of distance education students, and to compare actual expectations of distance education students with the instructor's perceptions of such expectations. Results demonstrated divergence between perceived and actual expectations of student support in…

  14. Gebrauchstexte im Fremdsprachenunterricht ("Actual" Texts in Foreign Language Teaching)

    ERIC Educational Resources Information Center

    Ziegesar, Detlef von

    1976-01-01

    Presents for analysis actual texts and texts specially written for teaching, arriving at a basis for a typology of actual texts. Defines teaching aims using such texts, and develops, from a TV program, a teaching unit used in a Karlsruhe school. (Text is in German.) (IFS/WGA)

  15. Self-Actualizing Men and Women: A Comparison Study.

    ERIC Educational Resources Information Center

    Hall, Eleanor G.; Hansen, Jan B.

    1997-01-01

    The self-actualization of 167 women who lived in the Martha Cook (MC) dormitory of the University of Michigan (1950-1970) was compared to that of a group of Ivy League men researched in another study. In addition, two groups of MC women were compared to each other to identify differences which might explain why some self-actualized while others did…

  16. Self-Actualization and the Utilization of Talent.

    ERIC Educational Resources Information Center

    French, John R. P.; Miller, Daniel R.

    This study attempted (1) to develop a theory of the causes and consequences of self-actualization as related to the utilization of talent, (2) to fit the theory to existing data, and (3) to plan one or more research projects to test the theory. Two articles on identity and motivation and self-actualization and self-identity theory reported the…

  17. Self-Actualization Effects Of A Marathon Growth Group

    ERIC Educational Resources Information Center

    Jones, Dorothy S.; Medvene, Arnold M.

    1975-01-01

    This study examined the effects of a marathon group experience on university students' level of self-actualization two days and six weeks after the experience. Gains in self-actualization as a result of marathon group participation depended upon an individual's level of ego strength upon entering the group. (Author)

  18. 26 CFR 1.962-3 - Treatment of actual distributions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Excerpt (Title 26 (Internal Revenue), Vol. 10, revised 2013-04-01), Income Taxes (continued), Controlled Foreign Corporations, § 1.962-3, Treatment of actual distributions: "…a foreign corporation. (ii) Treatment of section 962 earnings and profits under § 1.959-3…"

  19. School Guidance Counselors' Perceptions of Actual and Preferred Job Duties

    ERIC Educational Resources Information Center

    Edwards, John Dexter

    2010-01-01

    The purpose of this study was to provide process data for school counselors, administrators, and the public regarding school counselors' actual roles, comparing guidance counselors' preferred job duties with their actual job duties. In addition, factors including National Certification or no National Certification, years of counseling experience, and…

  20. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.

  1. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  2. An efficient code for the simulation of nonhydrostatic stratified flow over obstacles

    NASA Technical Reports Server (NTRS)

    Pihos, G. G.; Wurtele, M. G.

    1981-01-01

    The physical model and computational procedure of the code is described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will receive as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.

  3. The Tractor: Probabilistic astronomical source detection and measurement

    NASA Astrophysics Data System (ADS)

    Lang, Dustin; Hogg, David W.; Mykytyn, David

    2016-04-01

    The Tractor optimizes or samples from models of astronomical objects. The approach is generative: given astronomical sources and a description of the image properties, the code produces pixel-space estimates or predictions of what will be observed in the images. This estimate can be used to produce a likelihood for the observed data given the model: assuming the model space actually includes the truth (it doesn’t, in detail), then if we had the optimal model parameters, the predicted image would differ from the actually observed image only by noise. Given a noise model of the instrument and assuming pixelwise independent noise, the log-likelihood is -chi^2/2, where chi^2 is the sum over pixels of ((image - model)/noise)^2.
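
    The likelihood computation the entry describes reduces to a few lines. Below is a sketch under the stated assumptions (Gaussian, pixelwise-independent noise); the point-source "truth" image is invented for the example.

```python
import numpy as np

def log_likelihood(image, model, sigma):
    """Gaussian pixelwise log-likelihood up to an additive constant:
    -chi^2 / 2, with chi^2 = sum(((image - model) / sigma) ** 2)."""
    return -0.5 * np.sum(((image - model) / sigma) ** 2)

rng = np.random.default_rng(0)
sigma = 0.1
truth = np.zeros((16, 16)); truth[8, 8] = 1.0     # a single point source
observed = truth + sigma * rng.normal(size=truth.shape)

# The correct model scores higher than a misplaced one, as optimization expects.
shifted = np.roll(truth, 1, axis=0)
print(log_likelihood(observed, truth, sigma) >
      log_likelihood(observed, shifted, sigma))    # True
```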

  4. Arithmetic coding as a non-linear dynamical system

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Vaidya, Prabhakar G.; Bhat, Kishor G.

    2009-04-01

    In order to perform source coding (data compression), we treat messages emitted by independent and identically distributed sources as imprecise measurements (symbolic sequence) of a chaotic, ergodic, Lebesgue measure preserving, non-linear dynamical system known as Generalized Luröth Series (GLS). GLS achieves Shannon's entropy bound and turns out to be a generalization of arithmetic coding, a popular source coding algorithm, used in international compression standards such as JPEG2000 and H.264. We further generalize GLS to piecewise non-linear maps (Skewed-nGLS). We motivate the use of Skewed-nGLS as a framework for joint source coding and encryption.
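
    The correspondence between arithmetic coding and iterating a piecewise-linear map can be made concrete. The following is a floating-point toy for short i.i.d. binary messages (real coders use integer arithmetic with renormalization); decoding is literally iteration of the skewed binary map, the simplest GLS.

```python
def encode(bits, p0):
    """Arithmetic coding: shrink [0, 1) onto the subinterval of the message."""
    low, width = 0.0, 1.0
    for b in bits:
        if b == 0:
            width *= p0                   # keep the left piece, length p0
        else:
            low += width * p0             # keep the right piece, length 1 - p0
            width *= 1.0 - p0
    return low + width / 2                # any point in the final interval

def decode(x, p0, n):
    """Decoding = iterating the skewed binary (GLS) map on the code point."""
    bits = []
    for _ in range(n):
        if x < p0:
            bits.append(0); x /= p0                    # left branch, stretched
        else:
            bits.append(1); x = (x - p0) / (1.0 - p0)  # right branch, stretched
    return bits

msg = [0, 1, 1, 0, 0, 0, 1, 0]
assert decode(encode(msg, 0.7), 0.7, len(msg)) == msg
```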

  5. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  6. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder encodes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
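
    A simplified sketch of the threshold-driven selection follows. Plain coefficient truncation stands in for the vector-quantized DCT coders of the actual method, and the block size, coder rates, and distortion threshold are invented parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_coder(block, keep):
    """Crude DCT source coder: keep the `keep` largest-magnitude coefficients."""
    c = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(c).ravel())[-keep]
    c[np.abs(c) < thresh] = 0.0
    return idctn(c, norm="ortho")

def mixture_code(block, rates=(4, 16, 40), max_mse=25.0):
    """Try coders from cheapest to most expensive; accept the first whose
    distortion passes the threshold test, as in mixture block coding."""
    for keep in rates:
        rec = dct_coder(block, keep)
        if np.mean((block - rec) ** 2) <= max_mse:
            return rec, keep
    return rec, rates[-1]          # fall back to the highest-rate coder

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
_, chosen = mixture_code(block)
print(f"coder keeping {chosen} coefficients met the distortion threshold")
```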

  7. Nuclear and dosimetric features of an isotopic neutron source

    NASA Astrophysics Data System (ADS)

    Vega-Carrillo, H. R.; Hernández-Dávila, V. M.; Rivera, T.; Sánchez, A.

    2014-02-01

    A multisphere neutron spectrometer was used to determine the features of a 239PuBe neutron source that is used to operate the ESFM-IPN Subcritical Reactor. To determine the source's main features, it was located 100 cm from the spectrometer, which consisted of a 6LiI(Eu) scintillator and 2, 3, 5, 8, 10 and 12 in.-diameter polyethylene spheres. Count rates obtained with the spectrometer were unfolded using the NSDUAZ code, and the neutron spectrum, total fluence, and ambient dose equivalent were determined. A Monte Carlo calculation was carried out to estimate the spectrum and integral features; the calculated values were less than those obtained experimentally due to the presence of 241Pu in the Pu used to fabricate the source. The actual neutron yield and the mass fraction of 241Pu were estimated.

  8. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  9. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  10. Length-Limited Variable-to-Variable Length Codes for High-Performance Entropy Coding

    SciTech Connect

    Duchaineau, M; Joy, K I; Senecal, J

    2003-11-17

    Arithmetic coding achieves a superior coding rate when encoding a binary source, but its lack of speed makes it an inferior choice when true high-performance encoding is needed. We present our work on a practical implementation of fast entropy coders for binary messages utilizing only bit shifts and table lookups. To limit code table size we limit our code lengths with a type of variable-to-variable (VV) length code created from source string merging. We refer to these codes as ''merged codes''. With merged codes it is possible to achieve a desired level of speed by adjusting the number of bits read from the source at each step. The most efficient merged codes yield a coder with a worst-case inefficiency of 0.4%, relative to the Shannon entropy. Using a hybrid Golomb-VV Bin Coder we are able to achieve a compression ratio that is competitive with other state-of-the-art coders, at a superior throughput.

  12. Safety of patients--actual problem of modern medicine (review).

    PubMed

    Tsintsadze, Neriman; Samnidze, L; Beridze, T; Tsintsadze, M; Tsintsadze, Nino

    2011-09-01

    Patient safety is a pressing problem of modern medicine. Successful treatment of various illnesses is currently achieved by introducing into clinical practice medications characterized by high therapeutic activity, low toxicity, and prolonged effect. Despite these pharmacotherapeutic advances, the frequency of complications from medication has grown, which is why patient safety remains an acute problem for medicine and for the ecological state of the human population today. PMID:22156680

  13. Cellulases and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-01-01

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  14. Cellulases and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-02-20

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  15. Multiple Turbo Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    A description is given of multiple turbo codes and a suitable decoder structure derived from an approximation to the maximum a posteriori probability (MAP) decision rule, which is substantially different from the decoder for two-code-based encoders.

  16. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…

  17. Some practical universal noiseless coding techniques

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1979-01-01

    Some practical adaptive techniques for the efficient noiseless coding of a broad class of data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. These algorithms are broadly applicable to practical problems because most real data sources can be simply transformed into this form by appropriate preprocessing. The algorithms have exhibited performance only slightly above the source entropy when applied to real data with stationary characteristics over the measurement span. Performance considerably below the measured average data entropy may be observed when data characteristics are changing over the measurement span.
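
    The flavor of these techniques is easy to capture: the split-sample (Rice/Golomb power-of-two) option encodes a nonnegative integer with shifts and masks only, and an adaptive coder picks the parameter k that best fits recent data. A hedged sketch of that one option, not the original implementation:

```python
def rice_encode(n, k):
    """Rice code with parameter k: the quotient n >> k in unary,
    then the k low-order bits of n -- shifts and masks only."""
    quotient = n >> k
    remainder = n & ((1 << k) - 1)
    suffix = format(remainder, "b").zfill(k) if k > 0 else ""
    return "1" * quotient + "0" + suffix

# Small values get short codewords when k matches the data statistics.
for n in (0, 1, 5, 12):
    print(n, rice_encode(n, k=2))   # 000, 001, 1001, 111000
```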

  18. Redundancy reduction in image coding

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.

    1993-01-01

    We assess redundancy reduction in image coding in terms of the information acquired by the image-gathering process and the amount of data required to convey this information. A clear distinction is made between the theoretically minimum rate of data transmission, as measured by the entropy of the completely decorrelated data, and the actual rate of data transmission, as measured by the entropy of the encoded (incompletely decorrelated) data. It is shown that the information efficiency of the visual communication channel depends not only on the characteristics of the radiance field and the decorrelation algorithm, as is generally perceived, but also on the design of the image-gathering device, as is commonly ignored.

  19. STEEP32 computer code

    NASA Technical Reports Server (NTRS)

    Goerke, W. S.

    1972-01-01

    A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.

  20. Construction and performance research on variable-length codes for multirate OCDMA multimedia networks

    NASA Astrophysics Data System (ADS)

    Li, Chuan-qi; Yang, Meng-jie; Luo, De-jun; Lu, Ye; Kong, Yi-pu; Zhang, Dong-chuang

    2014-09-01

    A new kind of variable-length code with good correlation properties for multirate asynchronous optical code division multiple access (OCDMA) multimedia networks is proposed, called non-repetition interval (NRI) codes. The NRI codes are constructed by structuring interval sets with no repetition, and the code length depends on the number of users and the code weight. From the structural characteristics of NRI codes, a formula for the bit error rate (BER) is derived. Compared with other variable-length codes, the NRI codes have lower BER. A multirate OCDMA multimedia simulation system is designed and built, in which the longer codes are assigned to users requiring lower data rates, while the shorter codes are assigned to users requiring higher data rates. Analysis of the eye diagrams shows that users with lower data rates have lower BER, consistent with the actual demands of multimedia data transport.

  1. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.

  2. Color code identification in coded structured light.

    PubMed

    Zhang, Xu; Li, Youfu; Zhu, Limin

    2012-08-01

    Color code is widely employed in coded structured light to reconstruct the three-dimensional shape of objects. Before determining the correspondence, a very important step is to identify the color code. Until now, the lack of an effective evaluation standard has hindered the progress in this unsupervised classification. In this paper, we propose a framework based on the benchmark to explore the new frontier. Two basic facets of the color code identification are discussed, including color feature selection and clustering algorithm design. First, we adopt analysis methods to evaluate the performance of different color features, and the order of these color features in the discriminating power is concluded after a large number of experiments. Second, in order to overcome the drawback of K-means, a decision-directed method is introduced to find the initial centroids. Quantitative comparisons affirm that our method is robust with high accuracy, and it can find or closely approach the global peak. PMID:22859022

  3. The Particle Accelerator Simulation Code PyORBIT

    SciTech Connect

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M; Shishlo, Andrei P

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  4. The multiple codes of nucleotide sequences.

    PubMed

    Trifonov, E N

    1989-01-01

    Nucleotide sequences carry genetic information of many different kinds, not just instructions for protein synthesis (triplet code). Several codes of nucleotide sequences are discussed including: (1) the translation framing code, responsible for correct triplet counting by the ribosome during protein synthesis; (2) the chromatin code, which provides instructions on appropriate placement of nucleosomes along the DNA molecules and their spatial arrangement; (3) a putative loop code for single-stranded RNA-protein interactions. The codes are degenerate and corresponding messages are not only interspersed but actually overlap, so that some nucleotides belong to several messages simultaneously. Tandemly repeated sequences frequently considered as functionless "junk" are found to be grouped into certain classes of repeat unit lengths. This indicates some functional involvement of these sequences. A hypothesis is formulated according to which the tandem repeats are given the role of weak enhancer-silencers that modulate, in a copy number-dependent way, the expression of proximal genes. Fast amplification and elimination of the repeats provides an attractive mechanism of species adaptation to a rapidly changing environment. PMID:2673451

  5. Testing two temporal upscaling schemes for the estimation of the time variability of the actual evapotranspiration

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.

    2015-10-01

    Temporal availability of grape actual evapotranspiration is an emerging issue, since vineyard farms are increasingly converted from rainfed to irrigated agricultural systems. The manuscript aims to verify the accuracy of actual evapotranspiration retrieval obtained by coupling a single-source energy balance approach with two different temporal upscaling schemes. The first scheme tests the temporal upscaling of the main input variables, namely the NDVI, albedo and LST; the second scheme tests the temporal upscaling of the energy balance output, the actual evapotranspiration. The temporal upscaling schemes were implemented on: i) airborne remote sensing data acquired monthly during a whole irrigation season over a Sicilian vineyard; ii) low-resolution MODIS products released daily or weekly; iii) meteorological data acquired by standard gauge stations. Daily MODIS LST products (MOD11A1) were disaggregated using the DisTrad model, 8-day black- and white-sky albedo products (MCD43A) allowed modeling the total albedo, and 8-day NDVI products (MOD13Q1) were modeled using the Fisher approach. Results were validated both in time and space. The temporal validation was carried out using the actual evapotranspiration measured in situ by a flux tower through the eddy covariance technique. The spatial validation involved airborne images acquired at different times from June to September 2008. The results test whether the upscaling of the energy balance inputs or of its output performs better.

  6. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  7. Limitations of Phased Array Beamforming in Open Rotor Noise Source Imaging

    NASA Technical Reports Server (NTRS)

    Horvath, Csaba; Envia, Edmane; Podboy, Gary G.

    2013-01-01

    Phased array beamforming results of the F31/A31 historical baseline counter-rotating open rotor blade set were investigated for measurement data taken on the NASA Counter-Rotating Open Rotor Propulsion Rig in the 9- by 15-Foot Low-Speed Wind Tunnel of NASA Glenn Research Center as well as data produced using the LINPROP open rotor tone noise code. The planar microphone array was positioned broadside and parallel to the axis of the open rotor, roughly 2.3 rotor diameters away. The results provide insight as to why the apparent noise sources of the blade passing frequency tones and interaction tones appear at their nominal Mach radii instead of at the actual noise sources, even if those locations are not on the blades. Contour maps corresponding to the sound fields produced by the radiating sound waves, taken from the simulations, are used to illustrate how the interaction patterns of circumferential spinning modes of rotating coherent noise sources interact with the phased array, often giving misleading results, as the apparent sources do not always show where the actual noise sources are located. This suggests that a more sophisticated source model would be required to accurately locate the sources of each tone. The results of this study also have implications with regard to the shielding of open rotor sources by airframe empennages.

  8. DLLExternalCode

    SciTech Connect

    Greg Flach, Frank Smith

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software, with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
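
    The coupling pattern the abstract describes (write inputs, run the external application, read its outputs back) is sketched below in Python; the real component is a C/C++ DLL called from GoldSim, and the executable name and file formats here are invented for illustration.

```python
import subprocess

def run_external(inputs, exe="external_code", infile="run.in", outfile="run.out"):
    """File-driven coupling: inputs -> input file -> external run -> outputs."""
    with open(infile, "w") as f:                    # write 'name = value' lines
        f.writelines(f"{name} = {value}\n" for name, value in inputs.items())
    subprocess.run([exe, infile], check=True)       # run the external application
    with open(outfile) as f:                        # read results back
        return {line.split("=")[0].strip(): float(line.split("=")[1])
                for line in f if "=" in line}
```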

  10. Reusable State Machine Code Generator

    NASA Astrophysics Data System (ADS)

    Hoffstadt, A. A.; Reyes, C.; Sommer, H.; Andolfato, L.

    2010-12-01

    The State Machine model is frequently used to represent the behaviour of a system, allowing one to express and execute this behaviour in a deterministic way. A graphical representation such as a UML State Chart diagram tames the complexity of the system, thus facilitating changes to the model and communication between developers and domain experts. We present a reusable state machine code generator, developed by the Universidad Técnica Federico Santa María and the European Southern Observatory. The generator itself is based on the open source project architecture, and uses UML State Chart models as input. This allows for a modular design and a clean separation between generator and generated code. The generated state machine code has well-defined interfaces that are independent of the implementation artefacts such as the middle-ware. This allows using the generator in the substantially different observatory software of the Atacama Large Millimeter Array and the ESO Very Large Telescope. A project-specific mapping layer for event and transition notification connects the state machine code to its environment, which can be the Common Software of these projects, or any other project. This approach even allows tests for a generated state machine to be created automatically, using techniques from software testing such as path coverage.

  11. Code generation of RHIC accelerator device objects

    SciTech Connect

    Olsen, R.H.; Hoff, L.; Clifford, T.

    1995-12-01

    A RHIC Accelerator Device Object is an abstraction which provides a software view of a collection of collider control points known as parameters. A grammar has been defined which allows these parameters, along with code describing methods for acquiring and modifying them, to be specified efficiently in compact definition files. These definition files are processed to produce C++ source code. This source code is compiled to produce an object file which can be loaded into a front end computer. Each loaded object serves as an Accelerator Device Object class definition. The collider will be controlled by applications which set and get the parameters in instances of these classes using a suite of interface routines. Significant features of the grammar are described with details about the generated C++ code.
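
    A miniature of the definition-file-to-C++ flow is sketched below. The class name, parameters, and generated interface are invented for illustration; the actual ADO grammar is richer and also carries method code for acquiring and modifying parameters.

```python
SPEC = {"class": "BpmADO",
        "params": [("position", "double"), ("gain", "int")]}

def generate(spec):
    """Expand a compact parameter spec into C++ get/set boilerplate."""
    out = [f"class {spec['class']} {{", "public:"]
    for name, ctype in spec["params"]:
        out.append(f"  {ctype} get_{name}() const {{ return {name}_; }}")
        out.append(f"  void set_{name}({ctype} v) {{ {name}_ = v; }}")
    out.append("private:")
    out.extend(f"  {ctype} {name}_;" for name, ctype in spec["params"])
    out.append("};")
    return "\n".join(out)

print(generate(SPEC))   # emits a small C++ class definition
```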

  12. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
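
    The quantity being bounded is easy to compute for any given source. A short check follows; the five-symbol distribution is an arbitrary example, and the redundancy it yields lands close to 0, as the report notes is typical.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary prefix (Huffman) code."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1              # symbols in a merge gain one bit
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.45, 0.25, 0.15, 0.10, 0.05]
H = -sum(p * log2(p) for p in probs)                          # source entropy
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))  # mean length
print(f"entropy {H:.3f}, mean length {L:.3f}, redundancy {L - H:.3f}")
```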

  13. Systems Improved Numerical Fluids Analysis Code

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1990-01-01

    Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to April, 1983, version of SINDA. Additional routines provide for mathematical modeling of active heat-transfer loops. Simulates steady-state and pseudo-transient operations of 16 different components of heat-transfer loops, including radiators, evaporators, condensers, mechanical pumps, reservoirs, and many types of valves and fittings. Program contains property-analysis routine used to compute thermodynamic properties of 20 different refrigerants. Source code written in FORTRAN 77.

  14. The Los Alamos accelerator code group

    SciTech Connect

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-05-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes, and the dissemination of information on the INTERNET.

  15. Experimental philosophy of actual and counterfactual free will intuitions.

    PubMed

    Feltz, Adam

    2015-11-01

    Five experiments suggested that everyday free will and moral responsibility judgments about some hypothetical thought examples differed from free will and moral responsibility judgments about the actual world. Experiment 1 (N=106) showed that free will intuitions about the actual world measured by the FAD-Plus poorly predicted free will intuitions about a hypothetical person performing a determined action (r=.13). Experiments 2-5 replicated this result and found the relations between actual free will judgments and free will judgments about hypothetical determined or fated actions (rs=.22-.35) were much smaller than the differences between them (ηp(2)=.2-.55). These results put some pressure on theoretical accounts of everyday intuitions about freedom and moral responsibility. PMID:26126174

  16. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption scheme based on real-valued coding and subtracting is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between intensities of the two decryption light beams.

  17. Steady, Nonrotating, Blade-to-Blade Potential Transonic Cascade Flow Analysis Code

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1983-01-01

    CAS2D computer program numerically solves artificially time-dependent form of actual full potential equation, providing steady, nonrotating, blade-to-blade potential transonic cascade flow analysis code. CAS2D written in FORTRAN IV.

  19. GeoPhysical Analysis Code

    SciTech Connect

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  20. Secure Communication with Network Coding

    NASA Astrophysics Data System (ADS)

    Cao, Zhanghua; Tang, Yuansheng; Luo, Jinquan

    In this paper, we consider the problem of secure communication over wiretap multicast networks. Noticing that network coding requires the intermediate nodes to mix information from different data flows, we propose a secure communication scheme based on cryptographic means and network coding. Specifically, we employ a confidential cryptosystem to encrypt the source message packets, then treat the secret key as a message packet and mix the key with the obtained cryptograms. Furthermore, we can prove that, under suitable conditions, the wiretapper is unable to gain the secret key. Meanwhile, the confidential cryptosystem prohibits the wiretapper from extracting meaningful information from the obtained cryptograms. Our scheme doesn't need a private channel to transmit the secret key, and it enables the utilization of network capacity to reach 1 - 1/n.

  1. A Flawed Argument Against Actual Infinity in Physics

    NASA Astrophysics Data System (ADS)

    Perez Laraudogoitia, Jon

    2010-12-01

    In “Nonconservation of Energy and loss of Determinism II. Colliding with an Open Set” (2010) Atkinson and Johnson argue in favour of the idea that an actual infinity should be excluded from physics, at least in the sense that physical systems involving an actual infinity of component elements should not be admitted. In this paper I show that the argument Atkinson and Johnson use is erroneous and that an analysis of the situation considered by them is possible without requiring any type of rejection of the idea of infinity.

  2. Pilot Eye Scanning under Actual Single Pilot Instrument Flight

    NASA Astrophysics Data System (ADS)

    Rinoie, Kenichi; Sunada, Yasuto

    Operation under single-pilot instrument flight rules (IFR) for general aviation aircraft is known to be one of the most demanding pilot tasks. Scanning numerous instruments plays a key role in perception and decision-making during flight. Flight experiments were conducted with a single-engine light airplane to investigate pilot eye scanning techniques during IFR flight. Comparisons are made between the results from actual flight and those from a PC-based flight simulator. The experimental difficulties of measuring pilot eye scanning during actual IFR flight are discussed.

  3. Comparison of simulated and actual wind shear radar data products

    NASA Technical Reports Server (NTRS)

    Britt, Charles L.; Crittenden, Lucille H.

    1992-01-01

    Prior to the development of the NASA experimental wind shear radar system, extensive computer simulations were conducted to determine the performance of the radar in combined weather and ground clutter environments. The simulation of the radar used analytical microburst models to determine weather returns and synthetic aperture radar (SAR) maps to determine ground clutter returns. These simulations were used to guide the development of hazard detection algorithms and to predict their performance. The structure of the radar simulation is reviewed. Actual flight data results from the Orlando and Denver tests are compared with simulated results. Areas of agreement and disagreement of actual and simulated results are shown.

  4. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive, buffer-instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels, which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to that of the corresponding ideal 2-D ECSBC system. PMID:18296138

  5. Results and code predictions for ABCOVE aerosol code validation - Test AB5

    SciTech Connect

    Hilliard, R K; McCormack, J D; Postma, A K

    1983-11-01

    A program for aerosol behavior code validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The first large-scale test in the ABCOVE program, AB5, was performed in the 850-m³ CSTF vessel using a sodium spray as the aerosol source. Seven organizations made pretest predictions of aerosol behavior using seven different computer codes (HAA-3, HAA-4, HAARM-3, QUICK, MSPEC, MAEROS and CONTAIN). Three of the codes were used by more than one user so that the effect of user input could be assessed, as well as the codes themselves. Detailed test results are presented and compared with the code predictions for eight key parameters.

  6. Maximal codeword lengths in Huffman codes

    NASA Technical Reports Server (NTRS)

    Abu-Mostafa, Y. S.; Mceliece, R. J.

    1992-01-01

    The following question about Huffman coding, which is an important technique for compressing data from a discrete source, is considered. If p is the smallest source probability, how long, in terms of p, can the longest Huffman codeword be? It is shown that if p is in the range 0 < p ≤ 1/2, and if K is the unique index such that 1/F(K+3) < p ≤ 1/F(K+2), where F(K) denotes the Kth Fibonacci number, then the longest Huffman codeword for a source whose least probability is p is at most K, and no better bound is possible. Asymptotically, this implies the surprising fact that for small values of p, a Huffman code's longest codeword can be as much as 44 percent larger than that of the corresponding Shannon code.
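
    The bound is easy to probe numerically: build a Huffman code and inspect its longest codeword as the least probability shrinks. The snippet below is a minimal sketch with arbitrary probabilities, not the paper's construction.

      import heapq
      from itertools import count

      def huffman_code_lengths(probs):
          """Return the Huffman codeword length assigned to each symbol."""
          tie = count()  # tie-breaker so heapq never has to compare symbol tuples
          heap = [(p, next(tie), (i,)) for i, p in enumerate(probs)]
          heapq.heapify(heap)
          depth = [0] * len(probs)
          while len(heap) > 1:
              p1, _, s1 = heapq.heappop(heap)
              p2, _, s2 = heapq.heappop(heap)
              for sym in s1 + s2:   # each merge adds one bit to every symbol beneath it
                  depth[sym] += 1
              heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
          return depth

      probs = [0.4, 0.3, 0.2, 0.06, 0.03, 0.01]     # smallest probability p = 0.01
      print(max(huffman_code_lengths(probs)))       # length of the longest codeword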

  7. Proof-Carrying Code with Correct Compilers

    NASA Technical Reports Server (NTRS)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  8. Point-Kernel Shielding Code System.

    1982-02-17

    Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
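
    The point-kernel method itself reduces to a short formula: each source point contributes S * B(mu*r) * exp(-mu*r) / (4*pi*r^2) at the detector, summed over all sources. The sketch below illustrates that kernel with a linear buildup factor as a placeholder; the actual code uses tabulated infinite-medium buildup factors and traces rays through the shield geometry, which this toy omits.

      import math

      def point_kernel_flux(sources, detector, mu, buildup=lambda mux: 1.0 + mux):
          """Sum exponential-attenuation point-kernel contributions at one detector.

          sources : list of (x, y, z, strength) photon point sources
          mu      : linear attenuation coefficient of the medium (1/cm)
          buildup : infinite-medium buildup factor B(mu*r); the linear form is a stand-in
          """
          total = 0.0
          for x, y, z, s in sources:
              r = math.dist((x, y, z), detector)
              total += s * buildup(mu * r) * math.exp(-mu * r) / (4.0 * math.pi * r * r)
          return total  # flux; multiply by a flux-to-dose factor for dose rate

      # two sources and one detector point, water-like attenuation (values illustrative)
      print(point_kernel_flux([(0, 0, 0, 1e9), (10, 0, 0, 5e8)], (30, 0, 0), mu=0.07))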

  9. Mechanical code comparator

    DOEpatents

    Peter, Frank J.; Dalton, Larry J.; Plummer, David W.

    2002-01-01

    A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.

  10. Theory of epigenetic coding.

    PubMed

    Elder, D

    1984-06-01

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward. PMID:6748695

  11. Updating the Read Codes

    PubMed Central

    Robinson, David; Schulz, Erich; Brown, Philip; Price, Colin

    1997-01-01

    Abstract The Read Codes are a hierarchically-arranged controlled clinical vocabulary introduced in the early 1980s and now consisting of three maintained versions of differing complexity. The code sets are dynamic, and are updated quarterly in response to requests from users including clinicians in both primary and secondary care, software suppliers, and advice from a network of specialist healthcare professionals. The codes' continual evolution of content, both across and within versions, highlights tensions between different users and uses of coded clinical data. Internal processes, external interactions and new structural features implemented by the NHS Centre for Coding and Classification (NHSCCC) for user interactive maintenance of the Read Codes are described, and over 2000 items of user feedback episodes received over a 15-month period are analysed. PMID:9391934

  12. Who actually read Exner? Returning to the source of the frontal "writing centre" hypothesis.

    PubMed

    Roux, Franck-Emmanuel; Draper, Louisa; Köpke, Barbara; Démonet, Jean-François

    2010-10-01

    We have translated the most famous text of Sigmund Exner (1846-1926), which relates to the existence of a localised "writing centre" in the brain. We discuss its relevance to modern studies and understanding of writing and agraphia. In his most famous text, Exner hypothesised about the eponymous "Exner's Area", a discrete area within the brain, located in the left middle frontal gyrus, that was dedicated to the function of writing. This text in German, included in a book published in 1881, "Untersuchungen über die Lokalisation der Functionen in der Grosshirnrinde des Menschen" (Studies on the localisation of functions in the cerebral cortex of humans), lent itself to passionate debates during the following decades on the possibility of finding a specific writing centre in the left middle frontal gyrus. Modern authors still refer back to the evidence cited in this seminal text. However, over the 281 pages of Exner's book, only a few chapters dealt with agraphia. Only four of the 167 case reports in the book explicitly mention agraphia. Although Exner describes the anatomical details of these lesions (from autopsies), no patient had pure agraphia, and only one case had an isolated lesion of the posterior part of the middle frontal gyrus. The small number of patients, the absence of pure agraphia symptoms, and the variation in the anatomy of these lesions are the main reasons why Exner's hypothesis of a writing centre in the left middle frontal gyrus has been continually debated until now. More than the seminal publication of Sigmund Exner on agraphia, we think that the diffusion of his hypothesis was partly due to the influence that Exner and his family had within the scientific community at the turn of the 20th century. PMID:20392443

  13. Doubled Color Codes

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey

    Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise, but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLGs) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLGs are highly desirable since they introduce no overhead and do not spread errors. It has been known before that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π/2 phase shift. The second code, which we call a doubled color code, provides a transversal T-gate, where T is the π/4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on joint work with Andrew Cross.

  14. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  15. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. PMID:25150679

  16. Bar Code Labels

    NASA Technical Reports Server (NTRS)

    1988-01-01

    American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed anodized-aluminum process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700 degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.

  17. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  18. Tokamak Systems Code

    SciTech Connect

    Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.

  19. FAA Smoke Transport Code

    SciTech Connect

    Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos

    2006-10-27

    FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.

  20. Expander chunked codes

    NASA Astrophysics Data System (ADS)

    Tang, Bin; Yang, Shenghao; Ye, Baoliu; Yin, Yitong; Lu, Sanglu

    2015-12-01

    Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During the network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes that have an analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91 to 97% of the optimum and outperform the state-of-the-art overlapped chunked codes significantly.

  1. Actualizing Concepts in Home Management: Proceedings of a National Conference.

    ERIC Educational Resources Information Center

    American Home Economics Association, Washington, DC.

    The booklet prints the following papers delivered at a national conference: Actualizing Concepts in Home Management: Decision Making, Dorothy Z. Price; Innovations in Teaching: Ergonomics, Fern E. Hunt; Relevant Concepts of Home Management: Innovations in Teaching, Kay P. Edwards; Standards in a Managerial Context, Florence S. Walker; Organizing:…

  2. 26 CFR 513.8 - Addressee not actual owner.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... CONVENTIONS IRELAND Withholding of Tax § 513.8 Addressee not actual owner. (a) If any person with an address in Ireland who receives a dividend from a United States corporation with respect to which United... such reduced rate of 15 percent, such recipient in Ireland will withhold an additional amount of...

  3. Remote sensing estimates of actual evapotranspiration in an irrigation district

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate estimates of the spatial distribution of actual evapotranspiration (AET) are useful in hydrology, but can be difficult to obtain. Remote sensing provides a potential capability for routinely monitoring AET by combining remotely sensed surface temperature and vegetation cover observations w...

  4. Self Actualization of Females in an Experimental Orientation Program

    ERIC Educational Resources Information Center

    Vander Wilt, Robert B.; Klocke, Ronald A.

    1971-01-01

    An alternative to the traditional orientation program was developed that forced students to consider their physical and psychological outer limits. Students were confronted in a new and unique way that contributed to the self actualization process of the female portion of the group. (Author/BY)

  5. Actual Leisure Participation of Norwegian Adolescents with Down Syndrome

    ERIC Educational Resources Information Center

    Dolva, Anne-Stine; Kleiven, Jo; Kollstad, Marit

    2014-01-01

    This article reports the actual participation in leisure activities by a sample of Norwegian adolescents with Down syndrome aged 14. Representing a first generation to grow up in a relatively inclusive context, they live with their families, attend mainstream schools, and are part of common community life. Leisure information was obtained in…

  6. Research into Students' Perceptions of Preferred and Actual Learning Environment.

    ERIC Educational Resources Information Center

    Hattie, John A.; And Others

    Measures of both preferred and actual classroom and school environment were administered to 1,675 secondary school students in New South Wales (Australia). Shortened versions of the My Class Inventory, Classroom Environment Scale, and Individualized Classroom Environment Questionnaire, as well as the Quality of School Life questionnaire were…

  7. MLCMS Actual Use, Perceived Use, and Experiences of Use

    ERIC Educational Resources Information Center

    Asiimwe, Edgar Napoleon; Grönlund, Åke

    2015-01-01

    Mobile learning involves use of mobile devices to participate in learning activities. Most e-learning activities are available to participants through learning systems such as learning content management systems (LCMS). Due to certain challenges, LCMS are not equally accessible on all mobile devices. This study investigates actual use, perceived…

  8. What Does the Force Concept Inventory Actually Measure?

    ERIC Educational Resources Information Center

    Huffman, Douglas; Heller, Patricia

    1995-01-01

    The Force Concept Inventory (FCI) is a 29-question, multiple-choice test designed to assess students' Newtonian and non-Newtonian conceptions of force. Presents an analysis of FCI results as one way to determine what the inventory actually measures. (LZ)

  9. Progressive Digressions: Home Schooling for Self-Actualization.

    ERIC Educational Resources Information Center

    Rivero, Lisa

    2002-01-01

    Maslow's (1971) theory of primary creativeness is used as the basis for a self-actualization model of education. Examples of how to use the model in creative homeschooling are provided. Key elements include digressive and immersion learning, self-directed learning, and the integration of work and play. Teaching suggestions are provided. (Contains…

  10. A Taxometric Analysis of Actual Internet Sports Gambling Behavior

    ERIC Educational Resources Information Center

    Braverman, Julia; LaBrie, Richard A.; Shaffer, Howard J.

    2011-01-01

    This article presents findings from the first taxometric study of actual gambling behavior to determine whether we can represent the characteristics of extreme gambling as qualitatively distinct (i.e., taxonic) or as a point along a dimension. We analyzed the bets made during a 24-month study period by the 4,595 most involved gamblers among a…

  11. 41 CFR 304-5.4 - May we authorize an employee to exceed the maximum subsistence allowances (per diem, actual...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 4 2012-07-01 2012-07-01 false May we authorize an employee to exceed the maximum subsistence allowances (per diem, actual expense, or conference lodging) prescribed in applicable travel regulations where we have authorized acceptance of payment from a non-Federal source for such allowances?...

  12. 41 CFR 304-5.4 - May we authorize an employee to exceed the maximum subsistence allowances (per diem, actual...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false May we authorize an employee to exceed the maximum subsistence allowances (per diem, actual expense, or conference lodging) prescribed in applicable travel regulations where we have authorized acceptance of payment from a non-Federal source for such allowances?...

  13. Version 4.00 of the MINTEQ geochemical code

    SciTech Connect

    Eary, L.E.; Jenne, E.A.

    1992-09-01

    The MINTEQ code is a thermodynamic model that can be used to calculate solution equilibria for geochemical applications. Included in the MINTEQ code are formulations for ionic speciation, ion exchange, adsorption, solubility, redox, gas-phase equilibria, and the dissolution of finite amounts of specified solids. Since the initial development of the MINTEQ geochemical code, a number of undocumented versions of the source code and data files have come into use at the Pacific Northwest Laboratory (PNL). This report documents these changes, describes source code modifications made for the Aquifer Thermal Energy Storage (ATES) program, and provides comprehensive listings of the data files. A version number of 4.00 has been assigned to the MINTEQ source code and the individual data files described in this report.
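
    The core operation of such a speciation code, solving coupled mass-action and balance equations for the equilibrium distribution of species, fits in a few lines for a simple system. The sketch below solves the charge balance of a pure carbonate solution by log-space bisection; the equilibrium constants are the usual 25 C values, and the example is purely illustrative of the numerical idea, not of MINTEQ's scope.

      import math

      def h_plus_carbonate(ct, k1=10**-6.35, k2=10**-10.33, kw=1e-14):
          """[H+] of a pure H2CO3*/water system with total carbonate ct (mol/L)."""
          def charge_imbalance(h):
              alpha1 = 1.0 / (1 + h / k1 + k2 / h)              # HCO3- fraction
              alpha2 = 1.0 / (1 + h / k2 + h * h / (k1 * k2))   # CO3^2- fraction
              return h - kw / h - ct * (alpha1 + 2 * alpha2)    # H+ - OH- - anions
          lo, hi = 1e-14, 1.0
          for _ in range(200):                                  # bisect in log space
              mid = math.sqrt(lo * hi)
              if charge_imbalance(mid) > 0:
                  hi = mid
              else:
                  lo = mid
          return math.sqrt(lo * hi)

      print(-math.log10(h_plus_carbonate(1e-3)))   # pH of 1 mM carbonic acid, about 4.7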

  14. Version 4.00 of the MINTEQ geochemical code

    SciTech Connect

    Eary, L.E.; Jenne, E.A.

    1992-09-01

    The MINTEQ code is a thermodynamic model that can be used to calculate solution equilibria for geochemical applications. Included in the MINTEQ code are formulations for ionic speciation, ion exchange, adsorption, solubility, redox, gas-phase equilibria, and the dissolution of finite amounts of specified solids. Since the initial development of the MINTEQ geochemical code, a number of undocumented versions of the source code and data files have come into use at the Pacific Northwest Laboratory (PNL). This report documents these changes, describes source code modifications made for the Aquifer Thermal Energy Storage (ATES) program, and provides comprehensive listings of the data files. A version number of 4.00 has been assigned to the MINTEQ source code and the individual data files described in this report.

  15. The Impact of Codes of Conduct on Stakeholders

    ERIC Educational Resources Information Center

    Newman, Wayne R.

    2015-01-01

    The purpose of this study was to determine how an urban school district's code of conduct aligned with actual school/class behaviors, and how stakeholders perceived the ability of this document to achieve its number one goal: safe and productive learning environments. Twenty participants including students, teachers, parents, and administrators…

  16. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively Parallel Processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  17. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.
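
    The cyclic-correlation property is simple to demonstrate in one dimension. The sketch below uses a pseudo-noise (m-sequence) aperture, whose balanced-correlation point spread function is an exact delta function, in place of the patent's two-dimensional mosaicked URA; sizes and source positions are illustrative.

      import numpy as np

      def m_sequence(n=5, taps=(5, 3)):
          """Fibonacci LFSR bit stream; taps (5, 3) give a maximal-length sequence."""
          reg = [1] * n
          out = []
          for _ in range(2**n - 1):
              out.append(reg[-1])
              reg = [reg[taps[0] - 1] ^ reg[taps[1] - 1]] + reg[:-1]
          return np.array(out)

      N = 31
      a = m_sequence()                 # 0/1 aperture: open where a == 1
      g = 2 * a - 1                    # balanced-correlation decoding array

      obj = np.zeros(N)
      obj[3], obj[17] = 5.0, 2.0       # two off-axis point sources

      # each object point casts a cyclically shifted copy of the aperture pattern
      d = np.array([sum(obj[i] * a[(i + j) % N] for i in range(N)) for j in range(N)])
      # correlating with g collapses every shifted copy back onto a single point
      rec = np.array([sum(d[j] * g[(m + j) % N] for j in range(N)) for m in range(N)])
      assert np.allclose(rec, (N + 1) / 2 * obj)   # artifact-free image, scaled by 16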

  18. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741

  19. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
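
    As a toy illustration of the kind of annotations involved (not actual AUTOBAYES output), the sketch below writes a loop invariant and a postcondition as executable assertions; a certification pipeline would instead discharge such annotations symbolically, via a verification condition generator and a prover such as E-SETHEO.

      def sum_of_squares(xs):
          total = 0
          for i in range(len(xs)):
              # loop invariant: total equals the sum of squares over the prefix xs[:i]
              assert total == sum(v * v for v in xs[:i])
              total += xs[i] * xs[i]
          # postcondition: total equals the sum of squares over the whole input
          assert total == sum(v * v for v in xs)
          return total

      print(sum_of_squares([1, 2, 3]))   # 14, with both annotations checked at run time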

  20. Lichenase and coding sequences

    DOEpatents

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2000-08-15

    The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-.beta.-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.

  1. Codes of Conduct

    ERIC Educational Resources Information Center

    Million, June

    2004-01-01

    Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision of instilling code of conducts to students based on work quality, respect, safety and courtesy. She suggests that communicating the code…

  2. Code of Ethics

    ERIC Educational Resources Information Center

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  3. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  4. Modified JPEG Huffman coding.

    PubMed

    Lakhani, Gopal

    2003-01-01

    It is a well observed characteristic that when a DCT block is traversed in the zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, which all move the end-of-block marker up in the middle of the DCT block and use it to indicate the band boundaries. Experimental results are presented to compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. One of our methods reduces the total image code size by an average of 4%. Our methods can also be used for progressive image transmission and hence, experimental results are also given to compare them with the two-, three-, and four-band implementations of the JPEG spectral selection method. PMID:18237897
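
    The band-splitting idea can be sketched compactly: traverse the block in zigzag order, then slice the AC coefficients into bands, each of which would get its own code table. The band boundaries below are made up for illustration, and the sketch omits the article's end-of-block-marker mechanism entirely.

      def zigzag_indices(n=8):
          """(row, col) pairs of an n x n block in JPEG zigzag order."""
          return sorted(((r, c) for r in range(n) for c in range(n)),
                        key=lambda rc: (rc[0] + rc[1],
                                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

      def split_into_bands(block, boundaries=(6, 21, 64)):
          """Return the DC coefficient and the AC coefficients sliced into bands."""
          zz = [block[r][c] for r, c in zigzag_indices(len(block))]
          bands, start = [], 1          # index 0 is the DC term, coded separately
          for end in boundaries:
              bands.append(zz[start:end])
              start = end
          return zz[0], bands

      # toy block whose magnitudes decay along the zigzag, as the article observes
      block = [[max(0, 50 - 6 * (r + c)) for c in range(8)] for r in range(8)]
      dc, bands = split_into_bands(block)
      print(dc, [len(b) for b in bands])   # 50 and band sizes [5, 15, 43]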

  5. Binary concatenated coding system

    NASA Technical Reports Server (NTRS)

    Monford, L. G., Jr.

    1973-01-01

    Coding, using 3-bit binary words, is applicable to any measurement having an integer scale up to 100. A system using 6-bit data words can be expanded to read from 1 to 10,000, and 9-bit data words can increase the range to 1,000,000. The code may be "read" directly by observation after memorizing a simple listing of 9's and 10's.

  6. Computerized mega code recording.

    PubMed

    Burt, T W; Bock, H C

    1988-04-01

    A system has been developed to facilitate recording of advanced cardiac life support mega code testing scenarios. By scanning a paper "keyboard" using a bar code wand attached to a portable microcomputer, the person assigned to record the scenario can easily generate an accurate, complete, timed, and typewritten record of the given situations and the obtained responses. PMID:3354937

  7. Coding for optical channels

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.; Mceliece, R. J.; Rumsey, H., Jr.

    1979-01-01

    In a previous paper Pierce considered the problem of optical communication from a novel viewpoint, and concluded that performance will likely be limited by issues of coding complexity rather than by thermal noise. This paper reviews the model proposed by Pierce and presents some results on the analysis and design of codes for this application.

  8. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  9. Energy Conservation Code Decoded

    SciTech Connect

    Cole, Pam C.; Taylor, Zachary T.

    2006-09-01

    Designing an energy-efficient, affordable, and comfortable home is a lot easier thanks to a slim, easier-to-read booklet, the 2006 International Energy Conservation Code (IECC), published in March 2006. States, counties, and cities have begun reviewing the new code as a potential upgrade to their existing codes. Maintained under the public consensus process of the International Code Council, the IECC is designed to do just what its title says: promote the design and construction of energy-efficient homes and commercial buildings. "Homes" in this case means traditional single-family homes, duplexes, condominiums, and apartment buildings having three or fewer stories. The U.S. Department of Energy, which played a key role in proposing the changes that resulted in the new code, is offering a free training course that covers the residential provisions of the 2006 IECC.

  10. Sensitivity of coded mask telescopes.

    PubMed

    Skinner, Gerald K

    2008-05-20

    Simple formulas are often used to estimate the sensitivity of coded mask x-ray or gamma-ray telescopes, but these are strictly applicable only if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available a procedure is given that allows the calculation of the sensitivity. We consider certain aspects of the optimization of the design of a coded mask telescope and show that when the detector spatial resolution and the mask to detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels. PMID:18493279

  11. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1993-01-01

    The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed-Solomon outer code and a convolutional inner code versus a Reed-Solomon-only code scheme has been investigated, as well as the effectiveness of a periodic convolutional interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
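
    A periodic convolutional interleaver is itself only a bank of delay lines serviced in rotation; the deinterleaver applies the complementary delays. The sketch below is a minimal model with made-up parameters, useful for seeing how a burst of channel errors gets dispersed before reaching a Reed-Solomon decoder.

      from collections import deque

      def conv_interleave(symbols, delays, fill=0):
          """Pass a symbol stream through per-branch delay lines, serviced cyclically."""
          lines = [deque([fill] * d) for d in delays]
          out = []
          for i, s in enumerate(symbols):
              line = lines[i % len(delays)]
              line.append(s)
              out.append(line.popleft())
          return out

      B, M = 4, 3                                   # branch count and delay step
      ileave = [b * M for b in range(B)]            # interleaver delays 0, M, 2M, ...
      dileave = [(B - 1 - b) * M for b in range(B)] # complementary deinterleaver delays

      data = list(range(1, 41))
      pad = [0] * (B * (B - 1) * M)                 # flush so trailing symbols emerge
      tx = conv_interleave(data + pad, ileave)      # adjacent symbols now far apart
      rx = conv_interleave(tx, dileave)
      assert rx[len(pad):] == data                  # round trip restores the stream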

  12. Recent developments in the Los Alamos radiation transport code system

    SciTech Connect

    Forster, R.A.; Parsons, K.

    1997-06-01

    A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results.

  13. Urban rail transit projects: Forecast versus actual ridership and costs. Final report

    SciTech Connect

    Pickrell, D.H.

    1989-10-01

    Substantial errors in forecasting ridership and costs for the ten rail transit projects reviewed in the report put forth the possibility that more accurate forecasts would have led decision-makers to select projects other than those reviewed. The study examines the accuracy of forecasts prepared for ten major capital improvement projects in nine urban areas during 1971-1987. Each project includes construction of a fixed transit guideway: Rapid Rail or Metrorail (Washington DC, Atlanta, Baltimore, Miami); Light Rail Transit (Buffalo, Pittsburgh, Portland, Sacramento); and Downtown Peoplemover (Miami and Detroit). The study examines why actual costs and ridership differed so markedly from their forecast values. It focuses on the accuracy of projections made available to local decision-makers at the time when the choice among alternative projects was actually made. The study compares forecast and actual values for four types of measures: ridership, capital costs and financing, operating and maintenance costs, and cost-effectiveness. The report is organized into 6 chapters, numerous tables, and an appendix that documents the sources of all data appearing in the tables presented in the report.

  14. Actual curriculum development practices instrument: Testing for factorial validity

    NASA Astrophysics Data System (ADS)

    Foi, Liew Yon; Bakar, Kamariah Abu; Hamzah, Mohd Sahandri Gani; Alwi, Nor Hayati

    2014-09-01

    The Actual Curriculum Development Practices Instrument (ACDP-I) was developed and the factorial validity of the ACDP-I was tested (n = 107) using exploratory factor analysis procedures in the earlier work of [1]. Although the ACDP-I appears to be a content- and construct-valid instrument with very high internal reliability qualities for use in Malaysia, accumulated evidence is still needed to provide a sound scientific basis for the proposed score interpretations. Therefore, the present study addresses this concern by utilising confirmatory factor analysis to further confirm the theoretical structure of the variable Actual Curriculum Development Practices (ACDP) and enrich the psychometric properties of the ACDP-I. Results of this study have practical implications for both researchers and educators whose concerns focus on teachers' classroom practices and the instrument development and validation process.

  15. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  16. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible in this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.

  17. FRACTIONAL CRYSTALLIZATION FLOWSHEET TESTS WITH ACTUAL TANK WASTE

    SciTech Connect

    HERTING, D.L.

    2006-10-18

    Laboratory-scale flowsheet tests of the fractional crystallization process were conducted with actual tank waste samples in a hot cell at the 222-S Laboratory. The process is designed to separate medium-curie liquid waste into a low-curie stream for feeding to supplemental treatment and a high-curie stream for double-shell tank storage. Separations criteria (for Cs-137, sulfate, and sodium) were exceeded in all three of the flowsheet tests that were performed.

  18. FRACTIONAL CRYSTALLIZATION FLOWSHEET TESTS WITH ACTUAL TANK WASTE

    SciTech Connect

    HERTING, D.L.

    2007-04-13

    Laboratory-scale flowsheet tests of the fractional crystallization process were conducted with actual tank waste samples in a hot cell at the 222-S Laboratory. The process is designed to separate medium-curie liquid waste into a low-curie stream for feeding to supplemental treatment and a high-curie stream for double-shell tank storage. Separations criteria (for cesium-137, sulfate, and sodium) were exceeded in all three of the flowsheet tests that were performed.

  19. Northrop Triga facility decommissioning plan versus actual results

    SciTech Connect

    Gardner, F.W.

    1986-01-01

    This paper compares the Triga facility decontamination and decommissioning plan to the actual results and discusses key areas where operational activities were impacted by the final US Nuclear Regulatory Commission (NRC)-approved decontamination and decommissioning plan. Total exposures for fuel transfer were a factor of 4 less than planned. The design of the Triga reactor components allowed the majority of the components to be unconditionally released.

  20. 63. VIEW OF AUTOTRANSFORMERS. THE ACTUAL AUTOTRANSFORMERS ARE ENCLOSED IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    63. VIEW OF AUTOTRANSFORMERS. THE ACTUAL AUTOTRANSFORMERS ARE ENCLOSED IN THE OIL FILLED CYLINDERS ON THE RIGHT OF THE PHOTOGRAPH. THESE ELECTRICAL DEVICES BOOSTED THE GENERATOR OUTPUT OF 11,000 VOLTS TO 22,000 VOLTS PRIOR TO TRANSMISSION OUT TO THE MAIN FEEDER LINES. A SPARE INNER UNIT IS CONTAINED IN THE METAL BOX AT THE LEFT OF THE PHOTOGRAPH. - New York, New Haven & Hartford Railroad, Cos Cob Power Plant, Sound Shore Drive, Greenwich, Fairfield County, CT

  1. GNOME: an earth-penetrator code

    SciTech Connect

    Davie, N.T.; Richgels, M.A.

    1983-05-01

    The earth penetrator code GNOME is described, and its capabilities are illustrated by comparisons of computed results with actual field test data. GNOME uses decoupled approximate solution techniques to calculate the rigid body response of an earth penetrator. A modular structured programming method is employed, which allows a variety of pressure generating algorithms to be used without altering the basic program modules which consist of a time integrator and output routines. GNOME calculates axial and lateral loading on a cylindrical penetrator with an ogival or conical nose, but other geometrical shapes may be easily substituted for these by utilizing the modular program structure.
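
    The decoupled, modular structure described above amounts to a time integrator that repeatedly asks a pluggable algorithm for the resisting force. The sketch below is a generic stand-in, not GNOME itself: it integrates one-dimensional rigid-body penetration with a Poncelet-type resistance law, and every constant in it is invented for illustration.

      def integrate_penetration(force, mass, v0, dt=1e-5):
          """Explicit time integration of 1-D rigid-body penetration.

          force(depth, velocity) is the pluggable pressure/force algorithm,
          mirroring the modular structure the abstract describes.
          """
          x, v, t = 0.0, v0, 0.0
          while v > 0:
              v -= force(x, v) / mass * dt
              x += v * dt
              t += dt
          return x, t   # final depth (m) and time to rest (s)

      # Poncelet-type law F = A*(a + b*v^2) with invented soil-like constants
      poncelet = lambda x, v: 0.01 * (5e7 + 2000.0 * v * v)
      print(integrate_penetration(poncelet, mass=400.0, v0=300.0))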

  2. Federal Act amending the Penal Code and the Code of Penal Procedure (Penal Code Amendments 1989), 27 April 1989.

    PubMed

    1989-01-01

    Austria's Federal Act amending the Penal Code and the Code of Penal Procedure (Penal Code Amendments 1989), April 27, 1989, rewrites sections of the Penal Code relating to sexual crimes. Among other things, it makes these sections sex-neutral and criminalizes rape within marriage and cohabitation. Section 201 states that 1) whoever, by means of serious force or threat of actual serious danger to life or limb, compels a person to engage in sexual intercourse or an equivalent sexual act will be punished with imprisonment from 1 to 10 years. Rendering a person unconscious will be considered using serious force; 2) apart from the above subsection 1, whoever, by means of force or deprivation of personal freedom, or threat of actual danger to life or limb, compels a person to engage in sexual intercourse or an equivalent sexual act will be punished with imprisonment from 6 months to 5 years; and 3) specified circumstances will result in enhanced punishments. Section 202 states that 1) apart from the above Section 201, whoever by means of force or serious threat compels a sexual act shall be punished with imprisonment for up to 3 years and 2) there will be enhanced punishments for special circumstances. Section 203 deals with perpetration of the crime in marriage or cohabitation, and states: 1) whoever perpetrates one of the acts described in Section 201 and Section 202 against a spouse or cohabiting partner will be prosecuted only upon the complaint of the injured party in so far as none of the results described in Sections 201 or 202 occurs, and the criminal act contains none of the circumstances specified in those sections. Special commutation provisions are available when the injured party declares their wish to continue to live with the perpetrator. PMID:12344063

  3. Perceived accessibility versus actual physical accessibility of healthcare facilities.

    PubMed

    Sanchez, J; Byfield, G; Brown, T T; LaFavor, K; Murphy, D; Laud, P

    2000-01-01

    This study addressed how healthcare clinics perceive themselves in regard to accessibility for persons with spinal cord injuries (SCI). All 40 of the clinics surveyed reported that they were wheelchair accessible; however, there was significant variability in the number of sites that actually met the guidelines of the Americans with Disabilities Act. In general, a person using a wheelchair could enter the building, the examination room, and the bathroom. The majority of sites did not have an examination table that could be lowered to wheelchair level. Most reported limited experience in working with persons with SCI, yet they claimed to be able to assist with difficult transfers. Only one site knew about autonomic dysreflexia. Problems of accessibility appeared to be seriously compounded by the clinics' perception of how they met physical accessibility guidelines without consideration of the actual needs of persons with SCI. This study addressed the perception of accessibility as reported by clinic managers versus actual accessibility in healthcare clinics in a Midwestern metropolitan area for persons using wheelchairs. PMID:10754921

  4. The actual citation impact of European oncological research.

    PubMed

    López-Illescas, Carmen; de Moya-Anegón, Félix; Moed, Henk F

    2008-01-01

    This study provides an overview of the research performance of major European countries in the field Oncology, the most important journals in which they published their research articles, and the most important academic institutions publishing them. The analysis was based on Thomson Scientific's Web of Science (WoS) and calculated bibliometric indicators of publication activity and actual citation impact. Studying the time period 2000-2006, it gives an update of earlier studies, but at the same time it expands their methodologies, using a broader definition of the field, calculating indicators of actual citation impact, and analysing new and policy relevant aspects. Findings suggest that the emergence of Asian countries in the field Oncology has displaced European articles more strongly than articles from the USA; that oncologists who have published their articles in important, more general journals or in journals covering other specialties, rather than in their own specialist journals, have generated a relatively high actual citation impact; and that universities from Germany, and--to a lesser extent--those from Italy, the Netherlands, UK, and Sweden, dominate a ranking of European universities based on number of articles in oncology. The outcomes illustrate that different bibliometric methodologies may lead to different outcomes, and that outcomes should be interpreted with care. PMID:18039565

  5. Coded aperture computed tomography

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Brady, David J.

    2009-08-01

    Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most X-ray transform based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. This multiplexing of coded measurements allows for compression of measurements from a compressed sensing perspective. Compressed sensing (CS) holds that if the object has a sparse representation in some basis, then a certain number of random projections, typically far fewer than prescribed by Shannon's sampling rate, captures enough information for a highly accurate reconstruction of the object. This paper investigates the role of coded apertures in X-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. To conduct this, we construct a unified analysis using RST and CASSI measurement models. Also, we propose a novel compressive X-ray tomography measurement scheme which also exploits coding and multiplexing, and hence shares the analysis of the other two XTMs. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical random projections by "regularizing" the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.
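
    The CS principle the abstract invokes can be demonstrated in a few lines: draw random projections of a sparse object, then recover it greedily. The sketch below uses orthogonal matching pursuit on a random Gaussian system, purely as an illustration of sparse recovery from multiplexed measurements, not as a model of RST or CASSI.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 80, 5                   # signal length, measurements, sparsity

      x = np.zeros(n)
      x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse object
      A = rng.normal(size=(m, n)) / np.sqrt(m)                  # random projections
      y = A @ x                                                 # coded measurements

      # orthogonal matching pursuit: pick the column most correlated with the
      # residual, then least-squares refit on the support selected so far
      support, residual = [], y.copy()
      for _ in range(k):
          support.append(int(np.argmax(np.abs(A.T @ residual))))
          coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
          residual = y - A[:, support] @ coef

      x_hat = np.zeros(n)
      x_hat[support] = coef
      print(np.linalg.norm(x - x_hat))   # should be near zero in this regime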

  6. Report number codes

    SciTech Connect

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute standard Z39.23-1983, Standard Technical Report Number (STRN) - Format and Creation. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  7. BERNAS ION SOURCE DISCHARGE SIMULATION

    SciTech Connect

    RUDSKOY,I.; KULEVOY, T.V.; PETRENKO, S.V.; KUIBEDA, R.P.; SELEZNEV, D.N.; PERSHIN, V.I.; HERSHCOVITCH, A.; JOHNSON, B.M.; GUSHENETS, V.I.; OKS, E.M.; POOLE, H.J.

    2007-08-26

    The joint research and development program to develop a steady-state decaborane-beam ion source for the ion implantation industry is continuing. The Bernas ion source is the most widely used ion source in the ion implantation industry. A new simulation code was developed for simulation of the Bernas ion source discharge. We present the first results of the simulation for several materials of interest for semiconductors, as well as a comparison of the results with experimental data obtained at the ITEP ion source test bench.

  8. Radionuclide daughter inventory generator code: DIG

    SciTech Connect

    Fields, D.E.; Sharp, R.D.

    1985-09-01

    The Daughter Inventory Generator (DIG) code accepts a tabulation of radionuclides initially present in a waste stream, specified as amounts present either by mass or by activity, and produces a tabulation of radionuclides present after a user-specified elapsed time. This resultant radionuclide inventory characterizes wastes that have undergone daughter ingrowth during subsequent processes, such as leaching and transport, and includes daughter radionuclides that should be considered in these subsequent processes or for inclusion in a pollutant source term. Output of the DIG code also summarizes radionuclide decay constants. The DIG code was developed specifically to assist the user of the PRESTO-II methodology and code in preparing data sets and accounting for possible daughter ingrowth in wastes buried in shallow-land disposal areas. The DIG code is also useful in preparing data sets for the PRESTO-EPA code. Daughter ingrowth is considered both for buried radionuclides and for radionuclides that have been leached from the wastes and are undergoing hydrologic transport, and the quantities of daughter radionuclides are calculated. Radionuclide decay constants generated by DIG and included in the DIG output are required in the PRESTO-II code input data set. The DIG accesses some subroutines written for use with the CRRIS system and accesses files containing radionuclide data compiled by D.C. Kocher. 11 refs.
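    The daughter-ingrowth calculation at the heart of such a code can be sketched with the two-member Bateman solution, which gives the daughter population grown in from a pure parent. The Sr-90/Y-90 half-lives below are standard handbook values quoted for illustration; nothing here comes from the DIG data files.

```python
import numpy as np

def daughter_ingrowth(N1_0, lam1, lam2, t):
    """Two-member Bateman solution: daughter atoms at time t, starting
    from N1_0 parent atoms and zero daughter atoms."""
    return N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

# Illustrative chain: Sr-90 (half-life ~28.8 y) -> Y-90 (half-life ~64.1 h).
lam_parent = np.log(2) / 28.8                 # decay constant, 1/years
lam_daughter = np.log(2) / (64.1 / 8766.0)    # hours converted to years
t = np.linspace(0.0, 1.0, 5)                  # elapsed times in years
N_d = daughter_ingrowth(1.0e20, lam_parent, lam_daughter, t)
activity_d = lam_daughter * N_d               # activity = lambda * N
for ti, ai in zip(t, activity_d):
    print(f"t = {ti:5.2f} y   daughter activity = {ai:.3e} decays/y")
```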

  9. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the "blocks" of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
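    The Lagrangian selection of source and channel coding rates can be sketched as follows. This is a hypothetical toy, not the paper's optimizer: the candidate rates, loss model, and distortion model are invented placeholders, and only the selection rule (minimize D + lambda*R over candidate rate pairs) reflects the approach described.

```python
# Toy Lagrangian rate selection: pick the (source rate, channel rate) pair
# minimizing expected distortion plus lambda times transmitted rate.

def expected_distortion(src_rate, chan_rate, loss_prob):
    """Invented toy model: distortion falls with source rate; a lower
    channel code rate means more redundancy and fewer residual losses."""
    residual_loss = loss_prob * chan_rate
    return 1.0 / (1.0 + 10.0 * src_rate) + 5.0 * residual_loss

def lagrangian_allocation(lam, loss_prob, src_rates, chan_rates):
    best = None
    for rs in src_rates:
        for rc in chan_rates:
            rate = rs / rc                 # channel bits sent per source bit budget
            cost = expected_distortion(rs, rc, loss_prob) + lam * rate
            if best is None or cost < best[0]:
                best = (cost, rs, rc)
    return best

cost, rs, rc = lagrangian_allocation(
    lam=0.05, loss_prob=0.1,
    src_rates=[0.25, 0.5, 1.0, 2.0],       # candidate source rates (bits/pixel)
    chan_rates=[1/3, 1/2, 2/3, 8/9])       # candidate channel code rates
print(f"chosen source rate {rs} bpp, channel code rate {rc:.2f}, cost {cost:.3f}")
```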

  10. Forward error correcting codes in fiber-optic synchronous code-division multiple access networks

    NASA Astrophysics Data System (ADS)

    Srivastava, Anand; Kar, Subrat; Jain, V. K.

    2002-02-01

    In optical code-division multiple access (OCDMA) networks, the performance is limited by optical multiple access interference (OMAI), amplified spontaneous emission (ASE) noise and receiver noise. To reduce OMAI and noise effects, the use of forward error correcting (FEC) (18880, 18865) and (2370, 2358) Hamming codes is explored for an STM-1 (155 Mbps) bit stream. The encoding is carried out at the multiplex-section layer. The check bits are embedded in the unused bytes of the multiplex section overhead (MSOH). The expression for probability of error is derived taking into consideration OMAI, various sources of noise, and the effect of group velocity dispersion (GVD). It is observed that for a BER of 10^-9, use of FEC gives a coding gain of 1.4-2.1 dB depending upon the type of coding scheme used. It is also seen that there is a sensitivity improvement of about 3 dB if the source is suitably pre-chirped.
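    The (18880, 18865) and (2370, 2358) codes are far too large to list, but the single-error-correcting principle behind any Hamming code can be shown with the tiny (7,4) member of the family. The sketch below is purely illustrative and unrelated to the paper's implementation.

```python
import numpy as np

# Systematic generator and parity-check matrices for the (7,4) Hamming code.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(bits4):
    return (bits4 @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                    # nonzero syndrome identifies the bad bit
        idx = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[idx] ^= 1
    return word7[:4]                      # systematic part carries the data

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[5] ^= 1                                # inject a single bit error
print("decoded:", decode(cw), "matches:", (decode(cw) == msg).all())
```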

  11. TRANSF code user manual

    SciTech Connect

    Weaver, H.J.

    1981-11-01

    The TRANSF code is a semi-interactive FORTRAN IV program which is designed to calculate the model parameters of a (structural) system by performing a least-squares parameter fit to measured transfer function data. The code is available at LLNL on both the 7600 and the Cray machines. The transfer function data to be fit are read into the code via a disk file. The primary mode of output is FR80 graphics, although it is also possible to have results written either to the TTY or to a disk file.
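    The kind of fit TRANSF performs can be sketched in a few lines: a least-squares fit of a single-degree-of-freedom transfer function magnitude to measured data. The sketch below uses synthetic data standing in for the disk-file input and is a hypothetical stand-in, in no way based on the actual FORTRAN IV routines.

```python
import numpy as np
from scipy.optimize import least_squares

def sdof_frf(f, fn, zeta, gain):
    """Magnitude of a single-degree-of-freedom transfer function."""
    r = f / fn
    return gain / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

# Synthetic "measured" transfer function data with 2% multiplicative noise.
f = np.linspace(5.0, 50.0, 200)
true = dict(fn=22.0, zeta=0.03, gain=1.5)
rng = np.random.default_rng(1)
measured = sdof_frf(f, **true) * (1 + 0.02 * rng.standard_normal(f.size))

# Least-squares fit of (natural frequency, damping ratio, gain).
residual = lambda p: sdof_frf(f, *p) - measured
fit = least_squares(residual, x0=[20.0, 0.05, 1.0])
print("fn, zeta, gain =", fit.x)
```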

  12. Office of Codes and Standards resource book. Section 1, Building energy codes and standards

    SciTech Connect

    Hattrup, M.P.

    1995-01-01

    The US Department of Energy's (DOE's) Office of Codes and Standards has developed this Resource Book to provide: a discussion of DOE involvement in building codes and standards; a current and accurate set of descriptions of residential, commercial, and Federal building codes and standards; information on State contacts, State code status, State building construction unit volume, and State needs; and a list of stakeholders in the building energy codes and standards arena. The Resource Book is considered an evolving document and will be updated occasionally. Users are requested to submit additional data (e.g., more current, widely accepted, and/or documented data) and suggested changes to the address listed below. Please provide sources for all data provided.

  13. FAST2 Code validation

    SciTech Connect

    Wilson, R.E.; Freeman, L.N.; Walker, S.N.

    1995-09-01

    The FAST2 Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data at two wind speeds for the ESI-80 are given. The FAST2 Code models a two-bladed HAWT with degrees of freedom for blade flap, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffness, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST2 Code and test results is good.
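    Of the comparison forms listed, the azimuth-averaged bin plot is simple enough to sketch: bin a load channel by rotor azimuth and average within each bin. The signal below is synthetic (a once-per-rev wind-shear component plus noise) and has no relation to the ESI-80 data.

```python
import numpy as np

def azimuth_average(azimuth_deg, load, n_bins=36):
    """Average a load channel in fixed rotor-azimuth bins over 0-360 deg."""
    bins = np.linspace(0.0, 360.0, n_bins + 1)
    idx = np.digitize(azimuth_deg % 360.0, bins) - 1
    return np.array([load[idx == b].mean() for b in range(n_bins)]), bins

# Synthetic flap-bending signal over 50 rotor revolutions.
rng = np.random.default_rng(2)
az = np.linspace(0.0, 360.0 * 50, 20000) % 360.0
load = 100 + 15 * np.cos(np.radians(az)) + 5 * rng.standard_normal(az.size)
avg, bins = azimuth_average(az, load)
print(avg.round(1))
```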

  14. Compressible Astrophysics Simulation Code

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  15. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    NASA Astrophysics Data System (ADS)

    Anderson, T.

    2015-12-01

    Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. The quantification and further understanding of drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely accrediting all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.

  16. The coding region of the UFGT gene is a source of diagnostic SNP markers that allow single-locus DNA genotyping for the assessment of cultivar identity and ancestry in grapevine (Vitis vinifera L.)

    PubMed Central

    2013-01-01

    Background Vitis vinifera L. is one of society's most important agricultural crops with a broad genetic variability. The difficulty in recognizing grapevine genotypes based on ampelographic traits and secondary metabolites prompted the development of molecular markers suitable for achieving variety genetic identification. Findings Here, we propose a comparison between a multi-locus barcoding approach based on six chloroplast markers and a single-copy nuclear gene sequencing method using five coding regions combined with a character-based system, with the aim of reconstructing cultivar-specific haplotypes and genotypes to be exploited for the molecular characterization of 157 V. vinifera accessions. The analysis of the chloroplast target regions proved the inadequacy of the DNA barcoding approach at the subspecies level, and hence further DNA genotyping analyses were targeted on the sequences of five nuclear single-copy genes amplified across all of the accessions. The sequencing of the coding region of the UFGT nuclear gene (UDP-glucose: flavonoid 3-O-glucosyltransferase, the key enzyme for the accumulation of anthocyanins in berry skins) enabled the discovery of discriminant SNPs (1/34 bp) and the reconstruction of 130 distinct V. vinifera genotypes. Most of the genotypes proved to be cultivar-specific, and only a few genotypes were shared by multiple, though strictly related, cultivars. Conclusion On the whole, this technique was successful for inferring SNP-based genotypes of grapevine accessions suitable for assessing the genetic identity and ancestry of international cultivars, and was also useful for corroborating some hypotheses regarding the origin of local varieties, suggesting several issues of misidentification (synonymy/homonymy). PMID:24298902

  17. Seals Flow Code Development

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.

  18. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  19. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, or deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  20. Actual drawing of histological images improves knowledge retention.

    PubMed

    Balemans, Monique C M; Kooloos, Jan G M; Donders, A Rogier T; Van der Zee, Catharina E E M

    2016-01-01

    Medical students have to process a large amount of information during the first years of their study, which has to be retained over long periods of nonuse. Therefore, it would be beneficial when knowledge is gained in a way that promotes long-term retention. Paper-and-pencil drawing for learning the form-function relationships of basic tissues has long been a teaching tool, but now seems redundant with virtual microscopy on computer screens and printers everywhere. Several studies claimed that, apart from learning from pictures, actual drawing of images significantly improved knowledge retention. However, these studies applied only immediate post-tests. We investigated the effects of actual drawing of histological images, using a randomized cross-over design and different retention periods. The first part of the study concerned esophageal and tracheal epithelium, with 384 medical and biomedical sciences students randomly assigned to either the drawing or the nondrawing group. For the second part of the study, concerning heart muscle cells, students from the previous drawing group were now assigned to the nondrawing group and vice versa. One, four, and six weeks after the experimental intervention, the students were given a free recall test and a questionnaire or drawing exercise, to determine the amount of knowledge retention. The data from this study showed that knowledge retention was significantly improved in the drawing groups compared with the nondrawing groups, even after four or six weeks. This suggests that actual drawing of histological images can be used as a tool to improve long-term knowledge retention. PMID:26033842

  1. The Frictional Force with Respect to the Actual Contact Surface

    NASA Technical Reports Server (NTRS)

    Holm, Ragnar

    1944-01-01

    Hardy's statement that the frictional force is largely adhesion and, to a lesser extent, deformation energy is proved by a simple experiment. The actual contact surface of sliding contacts, and hence the friction per unit of contact surface, was determined in several cases. It was found for contacts in normal atmosphere to be about one-third to one-half as high as the macroscopic tearing strength of the softest contact link, while contacts annealed in vacuum and then tested disclosed frictional forces which are greater than the macroscopic strength.

  2. Time experiences, self-actualizing values, and creativity.

    PubMed

    Yonge, G D

    1975-12-01

    The Personal Orientation Inventory (POI), the Inventory of Temporal Experiences (ITE), and the Adjective Check List (ACL) were administered to 80 subjects. Sixteen scores were derived from the POI, 4 from the ITE, and a Creativity score from the ACL. The resulting intercorrelations were interpreted in the light of the theories of Maslow and Hugenholtz, which postulate a convergence of self-actualization, creativity, and certain experiences of time. The present study presents some evidence for the expected convergence and contributes to the construct validity of several of the variables studied. PMID:1202191

  3. Power Delivery from an Actual Thermoelectric Generation System

    NASA Astrophysics Data System (ADS)

    Kaibe, Hiromasa; Kajihara, Takeshi; Nagano, Kouji; Makino, Kazuya; Hachiuma, Hirokuni; Natsuume, Daisuke

    2014-06-01

    Similar to photovoltaic (PV) and fuel cells, thermoelectric generators (TEGs) supply direct-current (DC) power, essentially requiring DC/alternating current (AC) conversion for delivery as electricity into the grid network. Use of PVs is already well established through power conditioning systems (PCSs) that enable DC/AC conversion with maximum-power-point tracking, which enables commercial use by customers. From the economic, legal, and regulatory perspectives, a commercial PCS for PVs should also be available for TEGs, preferably as is or with just simple adjustment. Herein, we report use of a PV PCS with an actual TEG. The results are analyzed, and proper application for TEGs is proposed.

  4. Interferometric measurement of actual oblique astigmatism of ophthalmic lenses

    NASA Astrophysics Data System (ADS)

    Wihardjo, Erning

    1995-03-01

    A technique for measuring the oblique astigmatism error of ophthalmic lenses is described. The technique is based on a Mach-Zehnder interferometer, which allows us to simulate the actual conditions of the eye. The effects of the lens power, the pupillary aperture size, and the viewing distance in calculating a projected pupil zone on the lens are discussed. The projected pupil size on the lens affects the measurement result of the oblique astigmatism error. Conversion of the interferogram to astigmatism error in diopters is given.

  5. New York State Code Adoption Analysis: Lighting Requirements

    SciTech Connect

    Richman, Eric E.

    2004-10-20

    The adoption of the IECC 2003 Energy code will include a set of Lighting Power Density (LPD) values that are effectively a subset of the values in Addendum g to the ASHRAE/IESNA/ANSI 90.1-2001 Standard, which will soon be printed as part of the 90.1-2004 version. An analysis of the effectiveness of this adoption for New York State can be provided by a direct comparison of these values with existing LPD levels represented in the current IECC 2000 code, which are themselves a subset of the current ASHRAE/IESNA/ANSI 90.1-2001 Standard (without addenda). Because the complete ASHRAE 2001 and 2004 sets of LPDs are supported by a set of detailed models, they are best suited to provide the basis for an analysis comparison of the two code levels of lighting power density stringency. It is important to note that this kind of analysis is a point-to-point comparison where a fixed level of real-world activity is assumed. It is understood that buildings are not built precisely to code levels and that the actual percentage of compliance above and below codes will vary among individual buildings and building types. However, without specific knowledge of this real-world activity for all buildings in existence and in the future (post-code adoption), it is not possible to analyze the actual effects of code adoption. It is, however, possible to compare code levels and determine the potential effect of changes from one code requirement level to another. This is the comparison and effectiveness assessment presented here.

  6. Bring out your codes! Bring out your codes! (Increasing Software Visibility and Re-use)

    NASA Astrophysics Data System (ADS)

    Allen, A.; Berriman, B.; Brunner, R.; Burger, D.; DuPrie, K.; Hanisch, R. J.; Mann, R.; Mink, J.; Sandin, C.; Shortridge, K.; Teuben, P.

    2013-10-01

    Progress is being made in code discoverability and preservation, but as discussed at ADASS XXI, many codes still remain hidden from public view. With the Astrophysics Source Code Library (ASCL) now indexed by the SAO/NASA Astrophysics Data System (ADS), the introduction of a new journal, Astronomy & Computing, focused on astrophysics software, and the increasing success of education efforts such as Software Carpentry and SciCoder, the community has the opportunity to set a higher standard for its science by encouraging the release of software for examination and possible reuse. We assembled representatives of the community to present issues inhibiting code release and sought suggestions for tackling these factors. The session began with brief statements by panelists; the floor was then opened for discussion and ideas. Comments covered a diverse range of related topics and points of view, with apparent support for the propositions that algorithms should be readily available, code used to produce published scientific results should be made available, and there should be discovery mechanisms to allow these to be found easily. With increased use of resources such as GitHub (for code availability), ASCL (for code discovery), and a stated strong preference from the new journal Astronomy & Computing for code release, we expect to see additional progress over the next few years.

  7. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
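    A toy version of the constrained minimization TEA performs is sketched below: minimize the total Gibbs energy of an ideal-gas mixture over species mole numbers subject to elemental mass balance. SciPy's SLSQP solver stands in for TEA's iterative Lagrangian scheme, and the three-species system and free-energy values are illustrative placeholders, not TEA's thermodynamic data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy system: H2, O2, H2O at fixed T and P = 1. mu0 are made-up dimensionless
# standard chemical potentials (g_i/RT), not real thermodynamic data.
species = ["H2", "O2", "H2O"]
mu0 = np.array([0.0, 0.0, -30.0])
# Element-abundance matrix A[e, i]: atoms of element e per molecule of species i.
A = np.array([[2, 0, 2],     # H
              [0, 2, 1]])    # O
b = np.array([4.0, 2.0])     # total H and O atoms available

def gibbs(n):
    """Dimensionless total Gibbs energy of an ideal-gas mixture."""
    n = np.maximum(n, 1e-12)                   # keep the logarithms finite
    return np.sum(n * (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([1.0, 1.0, 1.0]),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(0, None)] * 3, method="SLSQP")
print(dict(zip(species, res.x.round(4))))      # expect nearly all H2O
```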

  8. Experimental study on the regenerator under actual operating conditions

    NASA Astrophysics Data System (ADS)

    Nam, Kwanwoo; Jeong, Sangkwon

    2002-05-01

    An experimental apparatus was prepared to investigate the thermal and hydrodynamic characteristics of the regenerator under its actual operating conditions. The apparatus included a compressor to pressurize and depressurize the regenerator at various operating frequencies. The cold end of the regenerator was maintained around 100 K by means of a liquid nitrogen container and heat exchanger. Instantaneous gas temperature and mass flow rate were measured at both ends of the regenerator during the whole pressure cycle. Pulsating pressure and pressure drop across the regenerator were also measured. The operating frequency of the pressure cycle was varied between 3 and 60 Hz, which are typical operating frequencies of Gifford-McMahon, pulse tube, and Stirling cryocoolers. First, the friction factor for the wire screen mesh was directly determined from room temperature experiments. When the operating frequency was less than 9 Hz, the oscillating flow friction factor was nearly the same as the steady flow friction factor for Reynolds numbers up to 100. For 60 Hz operation, the ratio of the oscillating flow friction factor to the steady flow one increased as the hydraulic Reynolds number became higher. When the Reynolds number was 100, this ratio was about 1.6. Second, the ineffectiveness of the regenerator was obtained with the cold end maintained around 100 K and the warm end at 300 K to simulate the actual operating condition of the regenerator in a cryocooler. The effect of the operating frequency on the ineffectiveness of the regenerator was discussed for the low frequency range.

  9. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combination continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
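    The iterative continuum step can be illustrated with a generic multiplicative (Gold-style) unfolding iteration: scale a trial photon spectrum by the ratio of measured to predicted counts until the response-folded spectrum matches the measurement. The toy response matrix below, a full-energy peak with a flat low-energy tail, is invented for the demonstration and is unrelated to CUGEL's generated response function.

```python
import numpy as np

def unfold(R, measured, n_iter=200):
    """Gold-style iterative unfolding: multiply the trial spectrum by the
    ratio of back-projected measured counts to back-projected predictions."""
    phi = measured.astype(float).copy()          # trial photon spectrum
    for _ in range(n_iter):
        predicted = R @ phi
        phi *= (R.T @ measured) / np.maximum(R.T @ predicted, 1e-12)
    return phi

# Toy detector response: 60% full-energy peak, 40% flat tail below the peak.
n = 50
R = np.eye(n) * 0.6
for j in range(1, n):
    R[:j, j] += 0.4 / j
true_spectrum = np.zeros(n)
true_spectrum[[12, 30, 44]] = [500.0, 300.0, 200.0]   # monoenergetic lines
measured = R @ true_spectrum
print(unfold(R, measured).round(1)[[12, 30, 44]])     # recovered line intensities
```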

  10. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  11. Codes, Ciphers, and Cryptography--An Honors Colloquium

    ERIC Educational Resources Information Center

    Karls, Michael A.

    2010-01-01

    At the suggestion of a colleague, I read "The Code Book", [32], by Simon Singh to get a basic introduction to the RSA encryption scheme. Inspired by Singh's book, I designed a Ball State University Honors Colloquium in Mathematics for both majors and non-majors, with material coming from "The Code Book" and many other sources. This course became…

  12. 41 CFR 109-26.203 - Activity address codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    41 Public Contracts and Property Management; PROCUREMENT SOURCES AND PROGRAM; 26.2 Federal Requisitioning System; § 109-26.203 Activity address codes. (a) DOE field organizations designated by OCMA are responsible for processing routine activity...

  13. 48 CFR 501.105-1 - Publication and code arrangement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily issue of the Federal Register. (b) Annual Code of Federal Regulations (CFR), as Chapter 5 of Title 48.

  14. 48 CFR 501.105-1 - Publication and code arrangement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily issue of the Federal Register. (b) Annual Code of Federal Regulations (CFR), as Chapter 5 of Title 48.

  15. 48 CFR 501.105-1 - Publication and code arrangement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily issue of the Federal Register. (b) Annual Code of Federal Regulations (CFR), as Chapter 5 of Title 48.

  16. 48 CFR 501.105-1 - Publication and code arrangement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily issue of the Federal Register. (b) Annual Code of Federal Regulations (CFR), as Chapter 5 of Title 48.

  17. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode
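    The encoding loop described above is compact enough to sketch directly: draw a degree from the Robust Soliton distribution, choose that many information symbols, and XOR them, restricting low-degree code symbols to the high-priority prefix until that prefix is covered. The degree threshold, block sizes, and byte-symbol alphabet below are illustrative choices, not values from the original design.

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton degree distribution over degrees 1..k."""
    S = c * math.log(k / delta) * math.sqrt(k)
    pivot = max(2, int(round(k / S)))
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * k
    for d in range(1, pivot):
        tau[d - 1] = S / (k * d)
    tau[pivot - 1] = S * math.log(S / delta) / k
    total = sum(r + t for r, t in zip(rho, tau))
    return [(r + t) / total for r, t in zip(rho, tau)]

def encode_symbol(message, dist, hi_count, hi_covered, max_low_degree=3):
    """One prioritized LT code symbol: XOR of randomly chosen info symbols.
    Until the high-priority prefix is covered, low-degree symbols pick
    their neighbors from that prefix only (the UEP restriction)."""
    k = len(message)
    degree = random.choices(range(1, k + 1), weights=dist)[0]
    if not hi_covered and degree <= max_low_degree:
        pool = list(range(hi_count))       # high-priority symbols only
    else:
        pool = list(range(k))              # conventional LT selection
    neighbors = random.sample(pool, min(degree, len(pool)))
    value = 0
    for i in neighbors:                    # XOR the selected symbols together
        value ^= message[i]
    return neighbors, value

message = [random.randrange(256) for _ in range(32)]   # 32 byte-symbols
dist = robust_soliton(len(message))
covered = set()
code_symbols = []
for _ in range(60):                        # generate 60 code symbols
    nbrs, val = encode_symbol(message, dist, hi_count=8,
                              hi_covered=covered >= set(range(8)))
    covered.update(nbrs)
    code_symbols.append((nbrs, val))
print("high-priority coverage:", covered >= set(range(8)))
```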

  18. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    SciTech Connect

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-08

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  19. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    NASA Astrophysics Data System (ADS)

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-01

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  20. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
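    As an example of the error-detection component, below is a bitwise CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1, the generator commonly associated with the CCSDS recommendation; treat the parameter choices here (initial value 0xFFFF, no bit reflection) as assumptions of this sketch rather than a statement of the standard.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
check = crc16_ccitt(frame)
print(hex(check))
# A receiver recomputes the CRC; any corruption changes the result,
# flagging the frame for retransmission or treatment as an erasure.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print("intact detected:   ", crc16_ccitt(frame) == check)
print("corruption detected:", crc16_ccitt(corrupted) != check)
```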