Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
47 CFR 80.100 - Morse code requirement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... requirement. The code employed for telegraphy must be the Morse code specified in the Telegraph Regulations annexed to the International Telecommunication Convention. Pertinent extracts from the...
MORSE Monte Carlo radiation transport code system
Emmett, M.B.
1983-02-01
This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)
Recent development and applications of the MORSE Code
Cramer, S.N.
1993-06-01
Several recent analyses using the multigroup MORSE Monte Carlo code are presented. In the calculation of a highly directional-dependent neutron streaming experiment it is shown that P7 cross section representation produces results virtually identical with those from an analog code. Use has been made here of a recently released ENDF/B-VI data set. In the analysis of neutron distributions inside the water-cooled ORELA accelerator target and positron source, an analytic hydrogen scattering model is incorporated into the otherwise multigroup treatment. The radiation from a nuclear weapon is analyzed in a large concrete building in Nagasaki by coupling MORSE and the DOT discrete ordinates code. The spatial variation of the DOT-generated free-field radiation is utilized, and the building is modeled with the array feature of the MORSE geometry package. An analytic directional biasing, applicable to the discrete scattering angle procedure in MORSE, is combined with the exponential transform. As in more general studies, it is shown that the combined biasing is more efficient than either biasing used separately. Other tracking improvements are included in a difficult streaming and penetration radiation analysis through a concrete structure. Proposals are given for the code generation of the required biasing parameters.
Applications guide to the MORSE Monte Carlo code
Cramer, S.N.
1985-08-01
A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction types and nuclides from a multigroup Monte Carlo code are explained in terms of cross-section and geometry data manipulation. Examples of standard cross-section data input and output are shown. Many other features of the code system are also reviewed, including (1) the concept of primary and secondary particles, (2) fission neutron generation, (3) albedo data capability, (4) DOMINO coupling, (5) history file use for post-processing of results, (6) adjoint mode operation, (7) variance reduction, and (8) input/output. In addition, examples of the combinatorial geometry are given, and the new array of arrays geometry feature (MARS) and its three-dimensional plotting code (JUNEBUG) are presented. Realistic examples of user routines for source, estimation, path-length stretching, and cross-section data manipulation are given. A detailed explanation of the coupling between the random walk and estimation procedure is given in terms of both code parameters and physical analogies. The operation of the code in the adjoint mode is covered extensively. The basic concepts of adjoint theory and dimensionality are discussed and examples of adjoint source and estimator user routines are given for all common situations. Adjoint source normalization is explained, a few sample problems are given, and the concept of obtaining forward differential results from adjoint calculations is covered. Finally, the documentation of the standard MORSE-CG sample problem package is reviewed and on-going and future work is discussed.
Morse code application for wireless environmental control systems for severely disabled individuals.
Yang, Cheng-Hong; Chuang, Li-Yeh; Yang, Cheng-Huei; Luo, Ching-Hsing
2003-12-01
Some physically-disabled people with neuromuscular diseases such as amyotrophic lateral sclerosis, multiple sclerosis, muscular dystrophy, or other conditions that hinder their ability to write, type, and speak, require an assistive tool for purposes of augmentative and alternative communication in their daily lives. In this paper, we designed and implemented a wireless environmental control system using Morse code as an adapted access communication tool. The proposed system includes four parts: input-control module; recognition module; wireless-control module; and electronic-equipment-control module. The signals are transmitted using adopted radio frequencies, which permits long distance transmission without space limitation. Experimental results revealed that three participants with physical handicaps were able to gain access to electronic facilities after two months' practice with the new system. PMID:14960124
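For readers unfamiliar with the encoding used as the access method in systems like this one, a minimal sketch of International Morse encoding (the letter table is the standard one; the `encode` helper and its spacing convention are illustrative, not part of the system described in the paper):

```python
# International Morse code for the letters A-Z.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def encode(text):
    # Separate letters with spaces and words with ' / ', a common
    # plain-text convention for written Morse.
    return ' / '.join(' '.join(MORSE[c] for c in word)
                      for word in text.upper().split())

print(encode('SOS'))  # ... --- ...
```

In an assistive-input setting the interesting direction is the reverse mapping (dot/dash keypresses to characters), but the table is the same.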
An Evaluation of Modality Preference Using a "Morse Code" Recall Task
ERIC Educational Resources Information Center
Hansen, Louise; Cottrell, David
2013-01-01
Advocates of modality preference posit that individuals have a dominant sense and that when new material is presented in this preferred modality, learning is enhanced. Despite the widespread belief in this position, there is little supporting evidence. In the present study, the authors implemented a Morse code-like recall task to examine whether…
A STRUCTURAL THEORY FOR THE PERCEPTION OF MORSE CODE SIGNALS AND RELATED RHYTHMIC PATTERNS.
ERIC Educational Resources Information Center
WISH, MYRON
The primary purpose of this dissertation is to develop a structural theory, along facet-theoretic lines, for the perception of Morse code signals and related rhythmic patterns. As steps in the development of this theory, models for two sets of signals are proposed and tested. The first model is for a set comprised of all signals of the…
[Morse Fall Scale: translation and transcultural adaptation for the Portuguese language].
de Urbanetto, Janete Souza; Creutzberg, Marion; Franz, Flávia; Ojeda, Beatriz Sebben; da Gustavo, Andreia Silva; Bittencourt, Hélio Radke; Steinmetz, Quézia Lidiane; Farina, Veronica Alacarini
2013-06-01
The study aimed to translate and adapt the Morse Fall Scale from English into the Portuguese language. This was performed in seven steps: authorization by the author of the scale; translation into Portuguese; evaluation and structuring of the translated scale; reverse translation into English; evaluation and validation of the scale by a committee of experts; evaluation of the clarity of items and operational definitions with 45 professionals; and evaluation of agreement between raters and the reliability of reproducibility, related to data from the evaluation of 90 patients, performed by four evaluators/judges. The clarity of the scale was considered very satisfactory, with a confidence interval of 73.0% to 100% for the "very clear" option. For the concordance of responses, the results showed Kappa coefficients of approximately 0.80 or higher. It was concluded that the adaptation of the scale was successful, indicating that its use is appropriate for the population of Brazilian patients. PMID:24601131
Towards a Morse Code-Based Non-invasive Thought-to-Speech Converter
NASA Astrophysics Data System (ADS)
Nicolaou, Nicoletta; Georgiou, Julius
This paper presents our investigations towards a non-invasive custom-built thought-to-speech converter that decodes mental tasks into Morse code, then text, and then speech. The proposed system is aimed primarily at people who have lost the ability to communicate via conventional means. The investigations presented here are part of our greater search for an appropriate set of features, classifiers and mental tasks that would maximise classification accuracy in such a system. Here, autoregressive (AR) coefficients and Power Spectral Density (PSD) features have been classified using a Support Vector Machine (SVM). The classification accuracy was higher with AR features than with PSD features. In addition, the use of an SVM to classify the AR coefficients increased the classification rate by up to 16.3% compared to that reported in other work, where other classifiers were used. It was also observed that the combination of mental tasks for which the highest classification was obtained varied from subject to subject; hence the mental tasks to be used should be carefully chosen to match each subject.
1991-08-01
Version: 00 The original MORSE code was a multipurpose neutron and gamma-ray transport Monte Carlo code. It was designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems could be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry could be used with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution was allowed. MORSE-CG incorporated the Mathematical Applications, Inc. (MAGI) combinatorial geometry routines. MORSE-B modifies the Monte Carlo neutron and photon transport computer code MORSE-CG by adding routines which allow various flexible options.
Webster, Michael A.
2011-01-01
Visual coding is a highly dynamic process, continuously adapting to the current viewing context. The perceptual changes that result from adaptation to recently viewed stimuli remain a powerful and popular tool for analyzing sensory mechanisms and plasticity. Over the last decade, the footprints of this adaptation have been tracked to both higher and lower levels of the visual pathway and over a wider range of timescales, revealing that visual processing is much more adaptable than previously thought. This work has also revealed that the pattern of aftereffects is similar across many stimulus dimensions, pointing to common coding principles in which adaptation plays a central role. However, why visual coding adapts has yet to be fully answered. PMID:21602298
1991-05-01
Version 00 MORSE-CGA was developed to add the capability of modelling rectangular lattices for nuclear reactor cores or for multipartitioned structures. It thus enhances the capability of the MORSE code system. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. It has been designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems may be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution is allowed.
Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen
2015-11-01
Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping us to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, limiting the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were detected from EEG signals and mapped to special commands. According to permutation theory, an sMI task of length N allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the averaged accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control.
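The command count quoted in the abstract, 2 × (2^N − 1), is simply the number of non-empty left/right sequences of length at most N. A quick sketch verifying it by enumeration (the function name and 'L'/'R' labels are ours, not the paper's):

```python
from itertools import product

def smi_commands(max_len):
    """All left/right motor-imagery sequences of length 1..max_len.
    Under self-paced operation each sequence is delimited by rest
    ('no motion'), so every non-empty sequence is a distinct command."""
    seqs = []
    for n in range(1, max_len + 1):
        seqs.extend(''.join(p) for p in product('LR', repeat=n))
    return seqs

# N = 2 already yields the six classes used in the robot-arm experiment.
print(len(smi_commands(2)))  # 6 == 2 * (2**2 - 1)
print(len(smi_commands(3)))  # 14 == 2 * (2**3 - 1)
```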
Telescope Adaptive Optics Code
2005-07-28
The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
The MASH 1.0 code system: Utilization of morse in the adjoint mode
Johnson, J.O.; Santoro, R.T.
1993-06-01
The Monte Carlo Adjoint Shielding Code System -- MASH 1.0, principally developed at Oak Ridge National Laboratory (ORNL), represents an advanced method of calculating neutron and gamma-ray environments and radiation protection factors for complex shielding configurations by coupling a forward discrete ordinates radiation environment (i.e. air-over-ground) transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. The primary application to date has been to determine the radiation shielding characteristics of armored vehicles exposed to prompt radiation from a nuclear weapon detonation. Other potential applications include analyses of the mission equipment associated with space exploration, the civilian airline industry, and other problems associated with an external neutron and gamma-ray radiation environment. This paper will provide an overview of the MASH 1.0 code system, including the verification, validation, and application to "benchmark" experimental data. Attention will be given to the adjoint Monte Carlo calculation, the use of "in-group" biasing to control the weights of the adjoint particles, and the coupling of a new graphics package for the diagnosis of combinatorial geometry descriptions and visualization of radiation transport results.
Driver Code for Adaptive Optics
NASA Technical Reports Server (NTRS)
Rao, Shanti
2007-01-01
A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.
AEST: Adaptive Eigenvalue Stability Code
NASA Astrophysics Data System (ADS)
Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.
2002-11-01
An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is just a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as the plasma rotation effect, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, the transport barrier physics in tokamak discharges.
1983-04-13
Version 00 MORSE-C is based on the original ORNL versions of CCC-127/MORSE and CCC-261/MORSE-L but is restricted to criticality problems. Continued efforts in criticality safety calculations led to the development of techniques which resulted in improvements in energy resolution of cross sections, upscatter in the thermal region, and a better cross section library. Only time-independent problems are treated in the packaged version.
Adaptive differential pulse-code modulation with adaptive bit allocation
NASA Astrophysics Data System (ADS)
Frangoulis, E. D.; Yoshida, K.; Turner, L. F.
1984-08-01
Studies have been conducted regarding the possibility of obtaining good-quality speech at data rates in the range from 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM). At 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.
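A toy sketch of the first-order DPCM loop underlying such a coder (the step size and predictor coefficient here are arbitrary choices; the system in the abstract adds adaptive quantization and adaptive bit allocation on top of this skeleton):

```python
def dpcm_encode(samples, step=2.0, a=1.0):
    # Predict each sample from the previous *reconstruction* (so the
    # encoder and decoder stay in lockstep), quantize the residual.
    pred, codes = 0.0, []
    for x in samples:
        q = round((x - a * pred) / step)  # transmitted quantizer index
        codes.append(q)
        pred = a * pred + q * step        # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=2.0, a=1.0):
    pred, out = 0.0, []
    for q in codes:
        pred = a * pred + q * step
        out.append(pred)
    return out

# Reconstruction tracks the input to within half a quantizer step.
print(dpcm_decode(dpcm_encode([0, 3, 5, 4, 2])))
```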
Bolle, Caroline; Gustin, Marie-Paule; Fau, Didier; Boivin, Georges; Exbrayat, Patrick; Grosgogeat, Brigitte
2016-01-01
The purpose of this study was to investigate peri-implant tissue adaptation on platform-switched implants with a Morse cone-type connection, after 3 and 12 weeks of healing in dogs. Ten weeks after mandibular premolar extractions, eight beagle dogs received three implants each. At each biopsy interval, four animals were sacrificed and biopsies were processed for histologic analysis. The height of the peri-implant mucosa was 2.32 mm and 2.88 mm, respectively, whereas the bone level in relation to the implant platform was -0.39 mm and -0.67 mm, respectively, after 3 and 12 weeks of healing. Within the limits of the present study, platform-switched implants exhibited reduced values of biologic width and marginal bone loss when compared with previous data. PMID:26901300
Adaptive predictive image coding using local characteristics
NASA Astrophysics Data System (ADS)
Hsieh, C. H.; Lu, P. C.; Liou, W. G.
1989-12-01
The paper presents an efficient adaptive predictive coding method using the local characteristics of images. In this method, three coding schemes, namely, mean, subsampling combined with fixed DPCM, and ADPCM/PCM, are used and one of these is chosen adaptively based on the local characteristics of images. The prediction parameters of the two-dimensional linear predictor in the ADPCM/PCM are extracted on a block by block basis. Simulation results show that the proposed method is effective in reducing the slope overload distortion and the granular noise at low bit rates, and thus it can improve the visual quality of reconstructed images.
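The mode decision outlined above can be caricatured as a per-block activity test. A sketch under assumed variance thresholds (the thresholds, function name, and use of variance as the activity measure are our illustrative choices; the paper's actual decision rule may differ):

```python
import numpy as np

def pick_coder(block, t_flat=4.0, t_smooth=64.0):
    """Choose one of three coding schemes from local block activity.
    t_flat / t_smooth are illustrative thresholds, not from the paper."""
    v = float(np.var(block))
    if v < t_flat:
        return 'mean'             # flat block: transmit the mean only
    if v < t_smooth:
        return 'subsample+DPCM'   # gently varying block
    return 'ADPCM/PCM'            # busy block: full adaptive coding

flat = np.full((4, 4), 120)
busy = np.array([[0, 255], [255, 0]])
print(pick_coder(flat), pick_coder(busy))  # mean ADPCM/PCM
```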
ERIC Educational Resources Information Center
Bruce, Guy V.
1985-01-01
Mechanically-minded middle school students who have been studying electromagnetism can construct inexpensive telegraphs resembling Samuel Morse's 1844 invention. Instructions (with diagrams), list of materials needed, and suggestions are given for a simple telegraph and for a two-way system. (DH)
Robust Morse decompositions of piecewise constant vector fields.
Szymczak, Andrzej; Zhang, Eugene
2012-06-01
In this paper, we introduce a new approach to computing a Morse decomposition of a vector field on a triangulated manifold surface. The basic idea is to convert the input vector field to a piecewise constant (PC) vector field, whose trajectories can be computed using simple geometric rules. To overcome the intrinsic difficulty in PC vector fields (in particular, discontinuity along mesh edges), we borrow results from the theory of differential inclusions. The input vector field and its PC variant have similar Morse decompositions. We introduce a robust and efficient algorithm to compute Morse decompositions of a PC vector field. Our approach provides subtriangle precision for Morse sets. In addition, we describe a Morse set classification framework which we use to color code the Morse sets in order to enhance the visualization. We demonstrate the benefits of our approach with three well-known simulation data sets, for which our method has produced Morse decompositions that are similar to or finer than those obtained using existing techniques, and is over an order of magnitude faster. PMID:21747131
Adaptive Dynamic Event Tree in RAVEN code
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur
2014-11-01
RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET me
ICAN Computer Code Adapted for Building Materials
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Simpler Adaptive Selection of Golomb Power-of-Two Codes
NASA Technical Reports Server (NTRS)
Kiely, Aaron
2007-01-01
An alternative method of adaptive selection of Golomb power-of-two (GPO2) codes has been devised for use in efficient, lossless encoding of sequences of non-negative integers from discrete sources. The method is intended especially for use in compression of digital image data. This method is somewhat suboptimal, but offers the advantage that it involves significantly less computation than a prior method of adaptive selection of optimum codes through brute-force application of all code options to every block of samples.
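For context, the brute-force selection that the simpler method approximates can be written in a few lines (the code-length formula is the standard GPO2/Rice one; the function names and the `kmax` cap are our illustrative choices):

```python
def gpo2_len(n, k):
    # Bits to encode non-negative integer n with GPO2 parameter k:
    # unary-coded quotient (n >> k) plus a terminator bit,
    # followed by the k low-order remainder bits.
    return (n >> k) + 1 + k

def best_k(block, kmax=16):
    # Brute-force selection: total coded length for every candidate k,
    # keep the minimum (this is the costly prior method the paper's
    # simpler heuristic avoids).
    return min(range(kmax + 1),
               key=lambda k: sum(gpo2_len(n, k) for n in block))

print(best_k([0, 1, 0, 2]))   # small samples favor small k
print(best_k([100] * 4))      # large samples favor larger k
```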
MORSE Monte Carlo shielding calculations for the zirconium hydride reference reactor
NASA Technical Reports Server (NTRS)
Burgart, C. E.
1972-01-01
Verification of DOT-SPACETRAN transport calculations of a lithium hydride and tungsten shield for a SNAP reactor was performed using the MORSE (Monte Carlo) code. Transport of both neutrons and gamma rays was considered. Importance sampling was utilized in the MORSE calculations. Several quantities internal to the shield, as well as dose at several points outside of the configuration, were in satisfactory agreement with the DOT calculations of the same configuration.
Adaptive Modulation and Coding for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Hadi, S. S.; Tiong, T. C.
2015-04-01
Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeted to become the first global mobile phone standard, regardless of the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose modulation types and the forward error correction (FEC) coding rate.
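The adaptation loop described above amounts to a lookup from reported channel quality to a modulation/coding pair. A sketch with made-up SNR thresholds (in real LTE the mobile reports a 4-bit CQI that the base station maps to an MCS entry defined in the 3GPP specifications; none of these threshold numbers come from the standard):

```python
# Illustrative thresholds only -- NOT the 3GPP CQI/MCS mapping.
MCS_TABLE = [
    (5.0,  'QPSK',  1 / 2),
    (11.0, '16QAM', 1 / 2),
    (16.0, '16QAM', 3 / 4),
    (20.0, '64QAM', 3 / 4),
]

def select_mcs(snr_db):
    """Pick the highest-rate entry whose SNR threshold is met,
    falling back to the most robust scheme in poor channels."""
    chosen = ('QPSK', 1 / 2)
    for threshold, modulation, rate in MCS_TABLE:
        if snr_db >= threshold:
            chosen = (modulation, rate)
    return chosen

print(select_mcs(3.0))   # poor channel -> robust QPSK
print(select_mcs(18.0))  # good channel -> denser constellation
```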
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
The multidimensional Self-Adaptive Grid code, SAGE, version 2
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1995-01-01
This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.
The multidimensional self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1992-01-01
This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Morse Code, Scrabble, and the Alphabet
ERIC Educational Resources Information Center
Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss
2004-01-01
In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…
Scalable hologram video coding for adaptive transmitting service.
Seo, Young-Ho; Lee, Yoon-Hyuk; Yoo, Ji-Sang; Kim, Dong-Wook
2013-01-01
This paper discusses processing techniques for an adaptive digital holographic video service in various reconstruction environments, and proposes two new scalable coding schemes. The proposed schemes are constructed according to the hologram generation or acquisition schemes: hologram-based resolution-scalable coding (HRS) and light source-based signal-to-noise ratio scalable coding (LSS). HRS is applied for holograms that are already acquired or generated, while LSS is applied to the light sources before generating digital holograms. In the LSS scheme, the light source information is lossless coded because it is too important to lose, while the HRS scheme adopts a lossy coding method. In an experiment, we provide eight stages of an HRS scheme whose data compression ratios range from 1:1 to 100:1 for each layered data. For LSS, four layers and 16 layers of scalable coding schemes are provided. We experimentally show that the proposed techniques make it possible to service a digital hologram video adaptively to the various displays with different resolutions, computation capabilities of the receiver side, or bandwidths of the network.
ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS
Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others
2014-04-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
The Rotating Morse-Pekeris Oscillator Revisited
ERIC Educational Resources Information Center
Zuniga, Jose; Bastida, Adolfo; Requena, Alberto
2008-01-01
The Morse-Pekeris oscillator model for the calculation of the vibration-rotation energy levels of diatomic molecules is revisited. This model is based on the realization of a second-order exponential expansion of the centrifugal term about the minimum of the vibrational Morse oscillator and the subsequent analytical resolution of the resulting…
FLY: a Tree Code for Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.
FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.
Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways
Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.
2013-01-01
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101
Resonance and Revivals II. Morse Oscillator and Double Morse Well Dynamics
NASA Astrophysics Data System (ADS)
Li, Alvason Zhenhua; Harter, William G.
2012-06-01
Analytical solutions for the Morse oscillator are applied to investigate the quantum resonance and revivals that occur in position and momentum spaces. The anharmonicity of this oscillator appears to cause interesting space-time phenomena that includes relatively simple Farey-sum revival structure. Furthermore, a simple sum of two Morse oscillators leads to a double Morse well whose geometric symmetry provides a quasi-analytical solution. The resonant beats and revivals of wavepacket propagation involve quantum tunneling between the double Morse wells and mode dynamics local to each well. Such quantum dynamic systems may have applications for quantum information processing and quantum computing.
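The anharmonicity underlying these beats and revivals is visible directly in the Morse term values, E_v = ω_e(v + 1/2) − ω_e x_e (v + 1/2)², whose level spacing shrinks linearly with v and which supports only a finite number of bound states. The sketch below evaluates this standard closed form; the HCl-like spectroscopic constants are approximate and used only for illustration.

```python
# Bound-state term values of a Morse oscillator (in cm^-1):
#   E_v = we*(v + 1/2) - wexe*(v + 1/2)**2
# The level spacing decreases by 2*wexe per quantum, and levels stop
# being bound once the spacing would become negative.

def morse_level(v, we, wexe):
    x = v + 0.5
    return we * x - wexe * x * x

def bound_levels(we, wexe):
    """All levels up to the dissociation limit (monotonically increasing)."""
    levels = []
    v = 0
    while morse_level(v + 1, we, wexe) > morse_level(v, we, wexe):
        levels.append(morse_level(v, we, wexe))
        v += 1
    levels.append(morse_level(v, we, wexe))
    return levels
```

With ω_e ≈ 2990.9 cm⁻¹ and ω_e x_e ≈ 52.8 cm⁻¹ (approximately the HCl values), the fundamental spacing is ω_e − 2ω_e x_e and the well holds 29 bound levels.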
Adaptive shape coding for perceptual decisions in the human brain.
Kourtzi, Zoe; Welchman, Andrew E
2015-01-01
In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511
Adaptive neural coding: from biological to behavioral decision-making
Louie, Kenway; Glimcher, Paul W.; Webb, Ryan
2015-01-01
Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
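The divisive normalization referenced above is a simple, concrete computation: each response is rescaled by the pooled activity of the population, r_i = x_i / (σ + Σ_j x_j). The sketch below shows why this yields a relative value code; the σ value is an illustrative assumption.

```python
# Divisive normalization: each input is divided by a saturation constant
# plus the summed activity of the normalization pool. The coded value of
# any one option therefore depends on the context of the other options.

def divisive_normalization(x, sigma=1.0):
    pool = sum(x)
    return [xi / (sigma + pool) for xi in x]
```

Adding a third option to a two-option choice set shrinks the normalized value of the original options, which is the kind of context dependence the review links to behavioral violations of classical normative theory.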
AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics
NASA Astrophysics Data System (ADS)
Plewa, T.; Müller, E.
2001-08-01
Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.
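The operator-splitting approach mentioned in this abstract advances the state through each physical process in turn rather than solving the coupled system at once. A minimal sketch of symmetric (Strang) splitting for du/dt = A(u) + B(u) follows, using exact sub-solves of two linear operators; the operators and update rules are illustrative assumptions, not amra's source terms.

```python
# Strang (symmetric) operator splitting for du/dt = a*u + b*u:
# half step of A, full step of B, half step of A. With exact sub-solves
# of commuting linear operators, the split solution is exact.
import math

def strang_step(u, dt, a, b):
    u = u * math.exp(a * dt / 2)   # half step of process A
    u = u * math.exp(b * dt)       # full step of process B
    u = u * math.exp(a * dt / 2)   # half step of process A
    return u

def integrate(u0, t_end, n_steps, a, b):
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = strang_step(u, dt, a, b)
    return u
```

For non-commuting operators (the realistic case, e.g. hydrodynamics plus nuclear burning) the symmetric ordering keeps the splitting error at second order in dt.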
Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination
Thomas, Blake T.; Blalock, Davis W.; Levy, William B.
2015-01-01
Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
An Adaptive Motion Estimation Scheme for Video Coding
Liu, Pengyu; Gao, Yuan; Jia, Kebin
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the computational redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to statistical results on motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
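The baseline that fast searches like UMHexagonS approximate is exhaustive block matching: for each block of the current frame, test every candidate displacement in a search window of the reference frame and keep the one minimizing the sum of absolute differences (SAD). The sketch below uses plain 2-D lists and is an illustrative baseline, not JM reference code.

```python
# Exhaustive block-matching motion estimation with a SAD cost. Fast
# searches aim to find (nearly) the same motion vector while visiting
# far fewer candidate points than this full search.

def sad(cur, ref, r, c, rr, rc, bs):
    """Sum of absolute differences between a bs x bs block pair."""
    return sum(abs(cur[r + i][c + j] - ref[rr + i][rc + j])
               for i in range(bs) for j in range(bs))

def full_search(cur, ref, r, c, bs, search_range):
    """Return the motion vector (dy, dx) minimizing SAD over the window."""
    h, w = len(ref), len(ref[0])
    best = (float("inf"), (0, 0))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rr, rc = r + dy, c + dx
            if 0 <= rr and rr + bs <= h and 0 <= rc and rc + bs <= w:
                cost = sad(cur, ref, r, c, rr, rc, bs)
                if cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]
```

The full search visits (2·search_range + 1)² points per block; the paper's contribution is precisely in pruning that candidate set adaptively without losing the rate-distortion optimum.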
Cooperative solutions coupling a geometry engine and adaptive solver codes
NASA Technical Reports Server (NTRS)
Dickens, Thomas P.
1995-01-01
Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
Conforming Morse-Smale Complexes
Gyulassy, Attila; Gunther, David; Levine, Joshua A.; Tierny, Julien; Pascucci, Valerio
2014-08-11
Morse-Smale (MS) complexes have been gaining popularity as a tool for feature-driven data analysis and visualization. However, the quality of their geometric embedding and the sole dependence on the input scalar field data can limit their applicability when expressing application-dependent features. In this paper we introduce a new combinatorial technique to compute an MS complex that conforms to both an input scalar field and an additional, prior segmentation of the domain. The segmentation constrains the MS complex computation guaranteeing that boundaries in the segmentation are captured as separatrices of the MS complex. We demonstrate the utility and versatility of our approach with two applications. First, we use streamline integration to determine numerically computed basins/mountains and use the resulting segmentation as an input to our algorithm. This strategy enables the incorporation of prior flow path knowledge, effectively resulting in an MS complex that is as geometrically accurate as the employed numerical integration. Our second use case is motivated by the observation that often the data itself does not explicitly contain features known to be present by a domain expert. We introduce edit operations for MS complexes so that a user can directly modify their features while maintaining all the advantages of a robust topology-based representation.
Olfactory coding in Drosophila larvae investigated by cross-adaptation.
Boyle, Jennefer; Cobb, Matthew
2005-09-01
In order to reveal aspects of olfactory coding, the effects of sensory adaptation on the olfactory responses of first-instar Drosophila melanogaster larvae were tested. Larvae were pre-stimulated with a homologous series of acetic esters (C3-C9), and their responses to each of these odours were then measured. The overall patterns suggested that methyl acetate has no specific pathway but was detected by all the sensory pathways studied here, that butyl and pentyl acetate tended to have similar effects to each other and that hexyl acetate was processed separately from the other odours. In a number of cases, cross-adaptation transformed a control attractive response into a repulsive response; in no case was an increase in attractiveness observed. This was investigated by studying changes in dose-response curves following pre-stimulation. These findings are discussed in light of the possible intra- and intercellular mechanisms of adaptation and the advantage of altered sensitivity for the larva. PMID:16155221
MORSE/STORM: A generalized albedo option for Monte Carlo calculations
Gomes, I.C.; Stevens, P.N.
1991-09-01
The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs.
N-Body Code with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Yahagi, Hideki; Yoshii, Yuzuru
2001-09-01
We have developed a simulation code with the techniques that enhance both spatial and time resolution of the particle-mesh (PM) method, for which the spatial resolution is restricted by the spacing of structured mesh. The adaptive-mesh refinement (AMR) technique subdivides the cells that satisfy the refinement criterion recursively. The hierarchical meshes are maintained by the special data structure and are modified in accordance with the change of particle distribution. In general, as the resolution of the simulation increases, its time step must be shortened and more computational time is required to complete the simulation. Since the AMR enhances the spatial resolution locally, we reduce the time step locally also, instead of shortening it globally. For this purpose, we used a technique of hierarchical time steps (HTS), which changes the time step, from particle to particle, depending on the size of the cell in which particles reside. Some test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many of halo objects have density profiles that are well fitted to the universal profile proposed in 1996 by Navarro, Frenk, & White over the entire range of their radius.
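The hierarchical time steps (HTS) described above tie the time step to the refinement level: a particle in a level-l cell advances with dt0 / 2^l, so two level-(l+1) substeps nest inside one level-l step. The scheduling sketch below is an illustrative assumption, not the paper's implementation.

```python
# Hierarchical time steps: finer cells take proportionally smaller,
# more numerous substeps, so every particle covers the same physical
# interval dt0 per coarsest-level step.

def step_counts(levels, base_steps=1):
    """Substeps taken per refinement level during one coarse step."""
    return {l: base_steps * 2 ** l for l in levels}

def advance(positions, velocities, levels, dt0):
    """Advance each particle with the time step of its refinement level."""
    new_pos = []
    for x, v, l in zip(positions, velocities, levels):
        dt = dt0 / 2 ** l
        for _ in range(2 ** l):      # substeps sum to exactly dt0
            x += v * dt
        new_pos.append(x)
    return new_pos
```

The payoff is that only the particles in refined, rapidly evolving regions pay for the short time step, rather than the whole simulation.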
3D Finite Element Trajectory Code with Adaptive Meshing
NASA Astrophysics Data System (ADS)
Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien
2004-11-01
Beam Optics Analysis, a new, 3D charged particle program is available and in use for the design of complex, 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and a robust mesh generator that is fully automatic. Complex problems can be set up, and analysis initiated in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is generated automatically and analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
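The third-order TVD Runge-Kutta scheme cited in this abstract is the standard Shu-Osher method: each stage is a convex combination of forward-Euler updates, which preserves the total variation diminishing property of the spatial operator. Shown below on a scalar ODE du/dt = L(u), the form the method of lines reduces a discretized PDE to; the test problem is illustrative.

```python
# Third-order TVD Runge-Kutta (Shu & Osher): three forward-Euler
# substeps combined convexly, giving O(dt^3) global accuracy while
# keeping the TVD property of the underlying spatial operator L.

def tvd_rk3_step(u, dt, L):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

def integrate(u0, t_end, n, L):
    u, dt = u0, t_end / n
    for _ in range(n):
        u = tvd_rk3_step(u, dt, L)
    return u
```

Because every substep is a plain Euler update, the same convex-combination structure applies unchanged when u is a full solution vector and L is a WENO or PPM spatial discretization.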
Composite Thue-Morse zone plates.
Ma, Wenzhuo; Tao, Shaohua; Cheng, Shubo
2016-06-13
We present a new family of diffractive lenses, composite Thue-Morse zone plates (CTMZPs), formed by multiple orders of Thue-Morse zone plates (TMZPs). The typical structure of a CTMZP is a composite of two concentric TMZPs. The focusing properties of the CTMZPs with different parameters have been investigated both theoretically and experimentally. Compared with the TMZPs, the CTMZPs have higher performance in axial intensity and imaging resolution. The CTMZP beams are also found to possess the self-reconstruction property, and would be useful for three-dimensional optical tweezers, laser machining, and optical imaging. PMID:27410293
Adaptive distributed video coding with correlation estimation using expectation propagation
NASA Astrophysics Data System (ADS)
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-01
Distributed video coding (DVC) is rapidly gaining popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. The ability to obtain a good statistical correlation estimate is therefore becoming increasingly important in practical DVC implementations. Existing correlation estimation methods in DVC generally fall into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other schemes without correlation tracking, and achieves comparable decoding performance at significantly lower complexity than sampling-based methods.
Estimation of Some Parameters from Morse-Morse-Spline-Van Der Waals Intermolecular Potential
Coroiu, I.
2007-04-23
Parameters such as transport cross sections and the isotopic thermal diffusion factor have been calculated from an improved intermolecular potential, the Morse-Morse-Spline-van der Waals (MMSV) potential proposed by R.A. Aziz et al. The treatment was completely classical, and no corrections for quantum effects were made. The results can be employed for isotope separation of different spherical and quasi-spherical molecules.
Adaptive phase-coded reconstruction for cardiac CT
NASA Astrophysics Data System (ADS)
Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su
2000-04-01
Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by cardiac motion. Most of the algorithms make use of the projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike previously proposed schemes, in which all projection sector sizes are identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.
Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets
NASA Technical Reports Server (NTRS)
Cheung, K-M.; Smyth, P.
1993-01-01
We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
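The connection can be illustrated concretely: for a geometrically distributed source, the Golomb code of Gallager and van Voorhis with parameter m = 2^k reduces to the k-th Rice subcode, a unary-coded quotient followed by k remainder bits. The sketch below uses an illustrative function name, not code from either paper:

```python
def rice_encode(n, k):
    """Encode a non-negative integer n with the Rice subcode of
    parameter k (the Golomb code with m = 2**k): the quotient n >> k
    in unary (terminated by '0'), then the low k bits of n."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"          # unary quotient with terminator
    if k:
        bits += format(r, "0" + str(k) + "b")   # k-bit remainder
    return bits
```

For example, n = 9 with k = 2 splits into quotient 2 and remainder 1, giving the codeword "11001"; with k = 0 the code degenerates to pure unary, the first Rice subcode.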
Seaborg, David M
2010-08-01
The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of "adaptive bridges," neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept.
Unexpected Properties of the Morse Oscillator
NASA Astrophysics Data System (ADS)
McCoy, Anne B.
2011-06-01
Analytical solutions for the Morse oscillator are used to evaluate ⟨V⟩_n and ⟨T⟩_n. For all bound states, ⟨V⟩_n = (ℏω_e/2)(n + 1/2). This result is identical to that obtained for the harmonic oscillator with the same quadratic force constant. Consequently, all of the anharmonicity in the energy of the quantum states of a Morse oscillator is incorporated in ⟨T⟩_n. This finding is tested for realistic diatomic potential functions for Ar-Xe, Be_2, and the E state of Li_2. Analysis of ⟨V⟩_n/(n + 1/2) for these systems shows that this quantity is well approximated by ℏω_e/2 over large ranges of n. Implications of this result for polyatomic systems and for vibration-to-translation collisional energy transfer are discussed. A. B. McCoy, Chem. Phys. Lett., 501, 603-607 (2011).
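The energy bookkeeping implied by this result can be checked numerically: with the standard Morse level energies E_n = ω_e(n + 1/2) - ω_e x_e (n + 1/2)^2 (working in units where ℏ = 1), the quoted ⟨V⟩_n forces all of the anharmonic term into ⟨T⟩_n = E_n - ⟨V⟩_n. The spectroscopic constants below are illustrative, not values from the paper:

```python
# Illustrative spectroscopic constants (cm^-1), not from the paper.
w_e, w_e_x_e = 200.0, 2.0

def morse_energy(n):
    """Standard Morse level energy: E_n = w_e*v - w_e*x_e*v**2, v = n + 1/2."""
    v = n + 0.5
    return w_e * v - w_e_x_e * v * v

def mean_potential(n):
    """The harmonic-looking result quoted above: <V>_n = (w_e/2)(n + 1/2)."""
    return 0.5 * w_e * (n + 0.5)

def mean_kinetic(n):
    """<T>_n = E_n - <V>_n, which therefore carries all the anharmonicity."""
    return morse_energy(n) - mean_potential(n)
```

For the ground state this gives ⟨V⟩_0 = ω_e/4 = 50 and ⟨T⟩_0 = 49.5, so the entire anharmonic defect ω_e x_e/4 appears in the kinetic expectation value, as the abstract states.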
Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B
2016-08-01
A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM, and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM perform even better than LDPC-coded QPSK.
Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.
Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura
2016-02-01
Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413
Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes
Parsons, I D; Solberg, J M
2006-02-03
This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.
Adaptation reduces variability of the neuronal population code
NASA Astrophysics Data System (ADS)
Farkhooi, Farzad; Muller, Eilif; Nawrot, Martin P.
2011-05-01
Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their intervals of events. Here, we employ a master equation for generalized non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of neurons with spike-frequency adaptation, this results in the regularization of the population activity and an enhanced postsynaptic signal decoding. We confirm our theoretical results in a population of cortical neurons recorded in vivo.
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite
Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda
2011-12-01
People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects, but only when the task directly taps the use of the face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition.
Adaptive Zero-Coefficient Distribution Scan for Inter Block Mode Coding of H.264/AVC
NASA Astrophysics Data System (ADS)
Wang, Jing-Xin; Su, Alvin W. Y.
Scanning quantized transform coefficients is an important tool for video coding. For example, the MPEG-4 video coder adopts three different scans to get better coding efficiency. This paper proposes an adaptive zero-coefficient distribution scan in inter block coding. The proposed method attempts to improve H.264/AVC zero coefficient coding by modifying the scan operation. Since the zero-coefficient distribution is changed by the proposed scan method, new VLC tables for syntax elements used in context-adaptive variable length coding (CAVLC) are also provided. The savings in bit-rate range from 2.2% to 5.1% in the high bit-rate cases, depending on different test sequences.
NASA Astrophysics Data System (ADS)
Bhowmik, Deepayan; Abhayaratne, Charith
2009-02-01
A framework for evaluating wavelet based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversing of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet based watermark algorithms, by picking and mixing available tools and finding the optimum design parameters.
Deficits in context-dependent adaptive coding of reward in schizophrenia.
Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan
2016-01-01
Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism's ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM
Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark E-mail: cmcnally@amnh.org
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
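The moving-least-squares step described above can be sketched in one dimension: fit a local cubic to neighbouring particle values by weighted least squares, then read off the field value and spatial derivative at the evaluation point. The Gaussian weight and scale below are illustrative choices, not the Phurbas kernel:

```python
import numpy as np

def mls_value_and_derivative(x0, xs, fs, degree=3):
    """Sketch of a 1D moving-least-squares (MLS) interpolation step:
    a weighted polynomial fit to neighbour samples (xs, fs) centred on
    x0, returning the interpolated value and first derivative there."""
    scale = np.ptp(xs) / 2.0 + 1e-12          # illustrative weight scale
    w = np.exp(-((xs - x0) / scale) ** 2)     # Gaussian MLS weights
    # np.polyfit applies its weights to the residuals, so pass sqrt(w)
    # to weight the squared residuals by w.
    coeffs = np.polyfit(xs - x0, fs, degree, w=np.sqrt(w))
    p = np.poly1d(coeffs)
    return p(0.0), p.deriv()(0.0)
```

Because a cubic fit reproduces any quadratic field exactly, evaluating on samples of f(x) = x^2 returns the exact value and derivative, a quick sanity check on the third-order accuracy claim.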
Effects of adaptation on neural coding by primary sensory interneurons in the cricket cercal system.
Clague, H; Theunissen, F; Miller, J P
1997-01-01
Methods of stochastic systems analysis were applied to examine the effect of adaptation on frequency encoding by two functionally identical primary interneurons of the cricket cercal system. Stimulus reconstructions were obtained from a linear filtering transformation of spike trains elicited in response to bursts of broadband white noise air current stimuli (5-400 Hz). Each linear reconstruction was compared with the actual stimulus in the frequency domain to obtain a measure of waveform coding accuracy as a function of frequency. The term adaptation in this paper refers to the decrease in firing rate of a cell after the onset or increase in power of a white noise stimulus. The increase in firing rate after stimulus offset or decrease in stimulus power is assumed to be a complementary aspect of the same phenomenon. As the spike rate decreased during the course of adaptation, the total amount of information carried about the velocity waveform of the stimulus also decreased. The quality of coding of frequencies between 70 and 400 Hz decreased dramatically. The quality of coding of frequencies between 5 and 70 Hz decreased only slightly or even increased in some cases. The disproportionate loss of information about the higher frequencies could be attributed in part to the more rapid loss of spikes correlated with high-frequency stimulus components than of spikes correlated with low-frequency components. An increase in the responsiveness of a cell to frequencies > 70 Hz was correlated with a decrease in the ability of that cell to encode frequencies in the 5-70 Hz range. This nonlinear property could explain the improvement seen in some cases in the coding accuracy of frequencies between 5 and 70 Hz during the course of adaptation. Waveform coding properties also were characterized for fully adapted neurons at several stimulus intensities. The changes in coding observed through the course of adaptation were similar in nature to those found across stimulus powers.
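The linear stimulus-reconstruction technique used here is commonly computed in the frequency domain, where the optimal filter is the ratio of the spike-stimulus cross-spectrum to the spike power spectrum. The sketch below is a generic implementation of that idea with simple periodogram averaging, not the authors' analysis code:

```python
import numpy as np

def linear_reconstruction_filter(spikes, stimulus, nfft=256):
    """Sketch of optimal linear stimulus reconstruction: estimate the
    frequency-domain filter H(f) = S_rx(f) / S_rr(f) from the cross-
    spectrum of the response and stimulus and the response power
    spectrum, averaged over non-overlapping segments. Applying H(f)
    to the spike-train spectrum yields the reconstructed stimulus."""
    segs = len(spikes) // nfft
    num = np.zeros(nfft, dtype=complex)   # accumulated cross-spectrum
    den = np.zeros(nfft)                  # accumulated response power
    for s in range(segs):
        r = np.fft.fft(spikes[s * nfft:(s + 1) * nfft])
        x = np.fft.fft(stimulus[s * nfft:(s + 1) * nfft])
        num += np.conj(r) * x
        den += np.abs(r) ** 2
    return num / np.maximum(den, 1e-12)
```

Comparing the reconstruction against the actual stimulus band by band then gives the frequency-resolved coding accuracy the abstract describes.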
Image subband coding using context-based classification and adaptive quantization.
Yoo, Y; Ortega, A; Yu, B
1999-01-01
Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most of the state-of-the-art coders in the literature.
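The key point, that backward adaptation needs no side information because the decoder can mirror every update from already-decoded data, can be sketched as follows. The classification rule and update constants here are illustrative, not those of the paper:

```python
import numpy as np

def backward_adaptive_quantize(coeffs, n_classes=4):
    """Minimal sketch of backward-adaptive quantization: classify each
    subband coefficient by the activity of its already-quantized causal
    neighbours (left and above), and let every class adapt its uniform
    step size from the running mean magnitude of coefficients it has
    seen. A decoder repeating the same updates stays in lockstep, so no
    side information is transmitted."""
    h, w = coeffs.shape
    q = np.zeros((h, w), dtype=int)
    mean_mag = np.ones(n_classes)   # running |coeff| estimate per class
    seen = np.ones(n_classes)
    for i in range(h):
        for j in range(w):
            # causal context: quantized left and above neighbours only
            ctx = (abs(q[i, j - 1]) if j else 0) + (abs(q[i - 1, j]) if i else 0)
            c = min(ctx, n_classes - 1)
            step = max(2.0 * mean_mag[c] / 3.0, 1e-6)   # illustrative rule
            q[i, j] = int(round(coeffs[i, j] / step))
            seen[c] += 1
            mean_mag[c] += (abs(coeffs[i, j]) - mean_mag[c]) / seen[c]
    return q
```

Active regions thus drift toward coarser steps and flat regions toward finer ones, which is the spatially varying behaviour the abstract targets.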
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must consider saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) of up to 10 dB.
[The Morse taper junction in modular revision hip replacement].
Gravius, S; Mumme, T; Andereya, S; Maus, U; Müller-Rath, R; Wirtz, D C
2007-01-01
Morse taper junctions of modular hip revision replacements are predilection sites for fretting, crevice corrosion, dissociation and breakage of the components. In this report we present the results of a retrieval analysis of a Morse taper junction of an MRP-Titanium modular revision replacement (MRP-Titanium, Peter Brehm GmbH, Weisendorf, Germany) after 11.5 years of in vivo use. In the context of this case report, the significance of Morse taper junctions in modular hip revision replacement is also discussed in light of the current literature.
Quantum revivals of Morse oscillators and Farey-Ford geometry
NASA Astrophysics Data System (ADS)
Li, Alvason Zhenhua; Harter, William G.
2015-07-01
Analytical eigensolutions for Morse oscillators are used to investigate quantum resonance and revivals and show how Morse anharmonicity affects revival times. A minimum semi-classical Morse revival time T_min-rev found by Heller is related to a complete quantum revival time T_rev using a quantum deviation parameter δ_N that in turn relates T_rev to the maximum quantum beat period T_max-beat. Also, the number theory of Farey and the Thales-circle geometry of Ford are shown to elegantly analyze and display fractional revivals. Such quantum dynamical analysis may have applications for spectroscopy or quantum information processing and computing.
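For a Morse spectrum E_n = ω(n + 1/2) − ωx(n + 1/2)^2 (with ħ = 1), the quadratic anharmonic term sets the full revival time T_rev = 2π/ωx, since T_rev = 4πħ/|d²E/dn²|. A minimal numerical sketch (parameter values are illustrative; the paper's δ_N refinement is not reproduced):

```python
import math

def morse_energy(n, w, wx):
    """Morse level E_n = w(n + 1/2) - wx(n + 1/2)^2, in units hbar = 1."""
    v = n + 0.5
    return w * v - wx * v * v

def revival_time(wx):
    """Full quantum revival time T_rev = 2*pi / wx for the quadratic
    Morse spectrum (second difference of E_n is -2*wx)."""
    return 2.0 * math.pi / wx
```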
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable-length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be achieved simply by utilizing previous-line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
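The per-block code selection of the Basic Compressor can be sketched with Rice/Golomb codes: compute the encoded length of the block under each candidate code and keep the cheapest. The three fixed Rice parameters below are an illustrative stand-in for the three concatenated codes of the original system.

```python
def rice_len(v, k):
    """Bits needed for value v >= 0 under a Rice code with parameter k:
    unary quotient + 1 stop bit + k remainder bits."""
    return (v >> k) + 1 + k

def best_code(block, ks=(0, 1, 2)):
    """Per-block adaptive code selection: try each candidate Rice
    parameter on the whole block and return (best_k, total_bits)."""
    costs = {k: sum(rice_len(v, k) for v in block) for k in ks}
    k = min(costs, key=costs.get)
    return k, costs[k]
```

Small-valued blocks (smooth image regions) select k = 0; blocks of large prediction residuals select a larger k.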
Application of adaptive subband coding for noisy bandlimited ECG signal processing
NASA Astrophysics Data System (ADS)
Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.
1996-03-01
An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection when compared with subband coding using the Daubechies D4 wavelet without the bandlimited adaptive wavelet transform.
NASA Astrophysics Data System (ADS)
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) method is presented. The idea of the improvement is to use a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm. The authors' proposal for such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate savings compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction of the decoding time while still maintaining high data-compression efficiency.
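Both CABAC's probability estimation and the context-tree weighting refinement mentioned above reduce to per-context estimates of binary symbol probabilities. A minimal stand-in is the Krichevsky-Trofimov estimator, the per-context building block of context-tree weighting (this is not the table-driven HEVC CABAC state machine):

```python
def kt_sequence_prob(bits):
    """Krichevsky-Trofimov probability of a binary sequence: after seeing
    c0 zeros and c1 ones, the next symbol b is predicted with probability
    (c_b + 1/2) / (c0 + c1 + 1). Returns the product over the sequence."""
    p, c0, c1 = 1.0, 0, 0
    for b in bits:
        p *= ((c1 + 0.5) if b else (c0 + 0.5)) / (c0 + c1 + 1)
        c0, c1 = c0 + (b == 0), c1 + (b == 1)
    return p
```

An arithmetic coder driven by this estimate spends about -log2(p) bits on the sequence, so skewed sources are coded cheaply without any trained tables.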
QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding
Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah
2015-01-01
Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network conditions and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
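Network-coding-based error recovery at its simplest sends one XOR repair packet per generation of data packets, letting the receiver rebuild any single lost packet without retransmission; adaptivity then amounts to scaling the number of repair packets with the monitored loss rate. A minimal sketch of the non-adaptive core (not the paper's mechanism):

```python
def xor_parity(packets):
    """One XOR-coded repair packet over equal-length byte packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild the single missing packet of a generation: XOR of the
    received packets and the parity equals the lost packet."""
    return xor_parity(list(received) + [parity])
```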
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise
2014-06-01
Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PMID:24684315
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
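The gain-adaptation idea above can be sketched with forward adaptation: estimate an RMS gain for each input vector, code the normalized vector against a gain-normalized codebook, and multiply the decoded codevector back by the gain. The codebook and the RMS gain estimator below are toy assumptions, not the authors' optimized designs.

```python
import math

def nearest(codebook, v):
    """Index of the codevector closest to v in squared error."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def gain_adaptive_vq(vectors, codebook, floor=1e-6):
    """Forward gain-adaptive VQ: normalize each vector by its RMS gain,
    quantize the normalized vector, and rescale at the decoder side."""
    out = []
    for v in vectors:
        g = max(math.sqrt(sum(x * x for x in v) / len(v)), floor)
        i = nearest(codebook, [x / g for x in v])
        out.append([g * c for c in codebook[i]])   # decoder multiplies back
    return out
```

Backward adaptation would instead derive g from previously decoded vectors, saving the bits needed to transmit the gain.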
Microlensing observations rapid search for exoplanets: MORSE code for GPUs
NASA Astrophysics Data System (ADS)
McDougall, Alistair; Albrow, Michael D.
2016-02-01
The rapid analysis of ongoing gravitational microlensing events has been integral to the successful detection and characterization of cool planets orbiting low-mass stars in the Galaxy. In this paper, we present an implementation of search and fit techniques on graphical processing unit (GPU) hardware. The method allows for the rapid identification of candidate planetary microlensing events and their subsequent follow-up for detailed characterization.
Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC
NASA Astrophysics Data System (ADS)
Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo
We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of the spatial parameters. Experiments show that it achieves a significant lossless bit reduction of 9.93% to 12.14% for the spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.
The development and application of the self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.
1993-01-01
The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; adaption to this solution will not result in any improvement, and only grid refinement can yield an improved solution. These are complex issues that need to be explored within the context of each specific problem.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted-residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.
NASA Astrophysics Data System (ADS)
Mahalanobis, A.; Reyner, C.; Patel, H.; Haberfelde, T.; Brady, David; Neifeld, Mark; Kumar, B. V. K. Vijaya; Rogers, Stanley
2007-09-01
Adaptive coded aperture sensing is an emerging technology enabling real-time, wide-area IR/visible sensing and imaging. Exploiting unique imaging architectures, adaptive coded aperture sensors achieve wide field of view, near-instantaneous optical path repositioning, and high resolution while reducing the weight, power consumption and cost of air- and space-borne sensors. Such sensors may be used for military, civilian, or commercial applications in all optical bands, but there is special interest in diffraction imaging sensors for IR applications. Extension of coded apertures from the visible to the MWIR introduces the effects of diffraction and other distortions not observed in shorter-wavelength systems. A new approach is being developed under the DARPA/SPO-funded LACOSTE (Large Area Coverage Optical Search-while-Track and Engage) program that addresses the effects of diffraction while gaining the benefits of coded apertures, thus providing flexibility to vary resolution, possess sufficient light-gathering power, and achieve a wide field of view (WFOV). The photonic MEMS-Eyelid "sub-aperture" array technology is currently being instantiated in this DARPA program to be the heart of conducting the flow (heartbeat) of the incoming signal. However, packaging and scalability are critical factors for the MEMS "sub-aperture" technology which will determine system efficacy as well as military and commercial usefulness. As larger arrays with 1,000,000+ sub-apertures are produced for this LACOSTE effort, the available degrees of freedom (DOF) will enable better spatial resolution, control and refinement of the coding for the system. Studies (SNR simulations) will be performed (based on the Adaptive Coded Aperture algorithm implementation) to determine the efficacy of this diffractive MEMS approach and to determine the available system budget based on simulated bi-static shutter-element DOF degradation (1%, 5%, 10%, 20%, etc.) trials until the degradation level where it is
GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS
Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong
2010-02-01
We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.
NASA Astrophysics Data System (ADS)
den, M.; Yamashita, K.; Ogawa, T.
Three-dimensional (3D) hydrodynamic (HD) and magnetohydrodynamic (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution in other areas. Thus the AMR scheme can provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.
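The refinement criterion that drives such an AMR scheme can be as simple as flagging cells where the local solution jump exceeds a threshold, which concentrates fine grids on shocks. A 1-D sketch (illustrative, not the authors' code):

```python
def refine_flags(values, thresh):
    """Mark cell interfaces whose jump exceeds `thresh` for refinement.
    In an AMR hierarchy, flagged regions receive finer child grids."""
    return [abs(values[i + 1] - values[i]) > thresh
            for i in range(len(values) - 1)]
```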
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the code rate matching the OSNR range that the current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme which, in addition to amplitude, phase, and polarization state, employs the spatial modes as additional basis functions for multidimensional coded modulation.
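The joint rate/constellation adaptation described above amounts to choosing, from a configured mode table, the (bits per symbol, code rate) pair whose product is as close to the monitored channel capacity as possible without exceeding it. A sketch with an assumed mode table:

```python
def pick_mode(capacity_bits, modes):
    """Select (bits_per_symbol, code_rate) maximizing spectral efficiency
    m * r subject to m * r <= capacity_bits (per-symbol capacity estimate
    from the monitoring channels). Returns None if no mode fits."""
    feasible = [(m, r) for m, r in modes if m * r <= capacity_bits]
    if not feasible:
        return None   # channel too poor for any configured mode
    return max(feasible, key=lambda mr: mr[0] * mr[1])
```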
Context adaptive lossless and near-lossless coding for digital angiographies.
dos Santos, Rafael A P; Scharcanski, Jacob
2007-01-01
This paper presents a context-adaptive coding method for image sequences in hemodynamics. The proposed method implements motion compensation through a two-stage context-adaptive linear predictor. It is robust to the local intensity changes and the noise that often degrade these image sequences, and provides lossless and near-lossless quality. Our preliminary experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed in [1], respectively. The performance tends to improve for near-lossless compression.
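A two-stage predictor of the general kind described, one spatial stage (left neighbor) and one temporal stage (co-located pixel of the previous frame), combined with the standard near-lossless residual quantization that bounds the per-pixel reconstruction error by delta, can be sketched as follows. The averaging predictor is an illustrative stand-in, not the authors' design.

```python
def encode_row(row, prev_row, delta=0):
    """Predictive coding of one row against the co-located previous-frame
    row. delta = 0 gives lossless coding; delta > 0 guarantees
    |x - x_rec| <= delta (near-lossless, as in JPEG-LS).
    Returns (residual symbols, decoder reconstruction)."""
    step = 2 * delta + 1
    rec_row, symbols = [], []
    for i, x in enumerate(row):
        left = rec_row[i - 1] if i else prev_row[i]       # spatial stage
        pred = (left + prev_row[i]) // 2                  # + temporal stage
        r = x - pred
        q = (r + delta) // step if r >= 0 else -((-r + delta) // step)
        symbols.append(q)
        rec_row.append(pred + q * step)                   # decoder's value
    return symbols, rec_row
```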
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Volumetric data analysis using Morse-Smale complexes
Natarajan, V; Pascucci, V
2005-10-13
The 3D Morse-Smale complex is a fundamental topological construct that partitions the domain of a real-valued function into regions having uniform gradient flow behavior. In this paper, we consider the construction and selective presentation of cells of the Morse-Smale complex and their use in the analysis and visualization of scientific datasets. We take advantage of the fact that cells of different dimension often characterize different types of features present in the data. For example, critical points pinpoint changes in topology by showing where components of the level sets are created, destroyed or modified in genus. Edges of the Morse-Smale complex extract filament-like features that are not explicitly modeled in the original data. Interactive selection and rendering of portions of the Morse-Smale complex introduces fundamental data management challenges due to the unstructured nature of the complex even for structured inputs. We describe a data structure that stores the Morse-Smale complex and allows efficient selective traversal of regions of interest. Finally, we illustrate the practical use of this approach by applying it to cryo-electron microscopy data of protein molecules.
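Construction of a Morse-Smale complex starts from the critical points of the sampled function. For a regularly sampled 2-D scalar field, the extrema among them can be found with a simple neighbor comparison (real implementations break ties with simulation of simplicity and also classify saddles):

```python
def critical_points(grid):
    """Flag interior vertices of a 2-D scalar grid that are local minima
    or maxima under a 4-neighbor test: the 0-dimensional cells from
    which a Morse-Smale complex construction begins."""
    crit = []
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid[0]) - 1):
            v = grid[i][j]
            nb = [grid[i - 1][j], grid[i + 1][j], grid[i][j - 1], grid[i][j + 1]]
            if all(v < n for n in nb):
                crit.append((i, j, 'min'))
            elif all(v > n for n in nb):
                crit.append((i, j, 'max'))
    return crit
```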
FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids
Burton, D.E.; Miller, D.S.; Palmer, T.
1995-07-01
The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons, which move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in three months, using an object-oriented FORTRAN approach.
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.
2011-06-01
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user-friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
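The RoI-proximity-dependent channel protection described above can be sketched as a parity-length schedule that grows with closeness to the RoI and with the estimated channel noise. The linear weighting and all constants below are illustrative assumptions, not the paper's code construction.

```python
def parity_bytes(dist_to_roi, noise_est, base=4, max_extra=12):
    """Parity length for one image subblock: stronger protection close
    to the region of interest and on noisier channels.

    dist_to_roi : subblock distance to the RoI (0 = inside the RoI)
    noise_est   : receiver's channel-noise estimate, clipped to [0, 1]
    """
    proximity = 1.0 / (1.0 + dist_to_roi)        # 1 at the RoI, -> 0 far away
    extra = round(max_extra * proximity * min(noise_est, 1.0))
    return base + extra
```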
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed, and an obtainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
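The bit-allocation problem mentioned above is commonly solved greedily under the high-rate model D_i = sigma_i^2 * 2^(-2*b_i): repeatedly grant one bit to the coefficient whose distortion would drop the most. Because each coefficient's marginal gain is decreasing, this greedy scheme is optimal among integer allocations for this model (a generic textbook approach, not the dissertation's algorithm):

```python
def allocate_bits(variances, total_bits):
    """Greedy integer bit allocation for transform coefficients under
    the high-rate model D_i = var_i * 2^(-2*b_i)."""
    bits = [0] * len(variances)
    for _ in range(total_bits):
        # distortion reduction from one more bit: 0.75 * var * 4^(-b)
        gains = [0.75 * s * (4.0 ** -b) for s, b in zip(variances, bits)]
        bits[gains.index(max(gains))] += 1
    return bits
```

High-variance (hard-to-code) coefficients receive the most bits, mirroring the thesis's principle of matching coder complexity to local coding difficulty.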
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
Less can be more: RNA-adapters may enhance coding capacity of replicators.
de Boer, Folkert K; Hogeweg, Paulien
2012-01-01
It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to
Multiplane gravitational lensing. I. Morse theory and image counting.
NASA Astrophysics Data System (ADS)
Petters, A. O.
1995-08-01
The image counting problem for gravitational lensing by general matter deflectors distributed over finitely many lens planes is considered. Counting formulas and lower bounds are found via Morse theory for the number of images of a point source not on a caustic. Images are counted within a compact region D not necessarily assumed to properly contain the deflector space. In addition, it is shown that Morse theory is applicable because multiplane time-delay maps Ty generically satisfy the Morse boundary conditions relative to D. All results obtained depend only on the topological properties induced in the lens planes by the deflector potentials and the behavior of grad Ty at boundary points of D.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
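The generalized Lagrange multiplier (GLM) divergence cleaning mentioned above couples a scalar field ψ to the induction equation. In the usual Dedner-style formulation (a sketch of the standard scheme; PLUTO's implementation may differ in details):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  + \nabla\cdot\left(\mathbf{v}\,\mathbf{B} - \mathbf{B}\,\mathbf{v}\right)
  + \nabla\psi = 0,
\qquad
\frac{\partial \psi}{\partial t} + c_h^2\,\nabla\cdot\mathbf{B}
  = -\frac{c_h^2}{c_p^2}\,\psi,
```

so divergence errors are advected away at the speed c_h (the hyperbolic part) while being damped on a time scale set by c_p^2/c_h^2 (the parabolic part).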
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into consideration. The previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Adaptation of TRIPND Field Line Tracing Code to a Shaped, Poloidal Divertor Geometry
NASA Astrophysics Data System (ADS)
Monat, P.; Moyer, R. A.; Evans, T. E.
2001-10-01
The magnetic field line tracing code TRIPND(T.E. Evans, Proc. 18th Conf. on Control. Fusion and Plasma Phys., Berlin, Germany, Vol. 15C, Part II (European Physical Society, 1991) p. 65.) has been modified to use the axisymmetric equilibrium magnetic fields from an EFIT reconstruction in place of circular equilibria with multi-filament current profile expansions. This adaptation provides realistic plasma current profiles in shaped geometries. A major advantage of this modification is that it allows investigation of magnetic field line trajectories in any device for which an EFIT reconstruction is available. The TRIPND code has been used to study the structure of the magnetic field line topology in circular, limiter tokamaks, including Tore Supra and TFTR and has been benchmarked against the GOURDON code used in Europe for magnetic field line tracing. The new version of the code, called TRIP3D, is used to investigate the sensitivity of various shaped equilibria to non-axisymmetric perturbations such as a shifted F coil or error field correction coils.
Hierarchical prediction and context adaptive coding for lossless color image compression.
Kim, Seyun; Cho, Nam Ik
2014-01-01
This paper presents a new lossless color image compression algorithm, based on the hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, it is first decorrelated by a reversible color transform and then Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. An appropriate context model for the prediction error is also defined and the arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
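The reversible-color-transform step can be illustrated with the JPEG2000 reversible color transform (RCT), a well-known lossless RGB decorrelation; the paper's exact transform may differ:

```python
def rct_forward(r, g, b):
    """JPEG2000 reversible color transform: integer-only, exactly
    invertible despite the floor division in the luma term."""
    y = (r + 2 * g + b) >> 2   # floor((R + 2G + B) / 4)
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact inverse: the floored chroma average reconstructs G."""
    g = y - ((cb + cr) >> 2)
    r = cr + g
    b = cb + g
    return r, g, b
```

The transform is exactly invertible because the rounding error discarded in the forward luma computation is recovered from the chroma pair.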
Radiographic image sequence coding using adaptive finite-state vector quantization
NASA Astrophysics Data System (ADS)
Joo, Chang-Hee; Choi, Jong S.
1990-11-01
Vector quantization is an effective spatial-domain image coding technique at rates under 1.0 bits per pixel. To achieve the same quality at lower rates it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. A finite-state vector quantizer can achieve the same performance as memoryless VQ at lower rates. This paper describes an adaptive finite-state vector quantization scheme for radiographic image sequence coding. A simulation experiment has been carried out with 4x4 blocks of pixels from a sequence of cardiac angiograms consisting of 40 frames of size 256x256 pixels each. At 0.45 bpp the resulting adaptive FSVQ encoder achieves performance comparable to earlier memoryless VQs at 0.8 bpp.
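The core of a memoryless VQ encoder is a nearest-codeword search over the codebook; a finite-state VQ, as studied here, additionally conditions the codebook on an encoder state. A minimal sketch of the memoryless search (codebook and block values are illustrative):

```python
def quantize_block(block, codebook):
    """Memoryless VQ encoding step: return the index of the codeword
    closest to the pixel block in squared-error distance. An FSVQ
    would select the codebook itself from the current encoder state,
    exploiting inter-block redundancy at no extra rate."""
    best, best_d = 0, float("inf")
    for i, codeword in enumerate(codebook):
        d = sum((x - y) ** 2 for x, y in zip(block, codeword))
        if d < best_d:
            best, best_d = i, d
    return best
```

Only the index is transmitted, so the rate is log2(codebook size) bits per block.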
Coupling of MASH-MORSE Adjoint Leakages with Space- and Time-Dependent Plume Radiation Sources
Slater, C.O.
2001-04-20
In the past, forward-adjoint coupling procedures in air-over-ground geometry have typically involved forward fluences arising from a point source a great distance from a target or vehicle system. Various processing codes were used to create localized forward fluence files that could be used to couple with the MASH-MORSE adjoint leakages. In recent years, radiation plumes that result from reactor accidents or similar incidents have been modeled by others, and the source space and energy distributions as a function of time have been calculated. Additionally, with the point kernel method, they were able to calculate in relatively quick fashion free-field radiation doses for targets moving within the fluence field or for stationary targets within the field, the time dependence for the latter case coming from the changes in position, shape, source strength, and spectra of the plume with time. The work described herein applies the plume source to the MASH-MORSE coupling procedure. The plume source replaces the point source for generating the forward fluences that are folded with MASH-MORSE adjoint leakages. Two types of source calculations are described. The first is a ''rigorous'' calculation using the TORT code and a spatially large air-over-ground geometry. For each time step desired, directional fluences are calculated and are saved over a predetermined region that encompasses a structure within which it is desired to calculate dose rates. Processing codes then create the surface fluences (which may include contributions from radiation sources that deposit on the roof or plateout) that will be coupled with the MASH-MORSE adjoint leakages. Unlike the point kernel calculations of the free-field dose rates, the TORT calculations in practice include the effects of ground scatter on dose rates and directional fluences, although the effects may be underestimated or overestimated because of the use of necessarily coarse mesh and quadrature in order to reduce computational
Continuous Morse-Smale flows with three equilibrium positions
NASA Astrophysics Data System (ADS)
Zhuzhoma, E. V.; Medvedev, V. S.
2016-05-01
Continuous Morse-Smale flows on closed manifolds whose nonwandering set consists of three equilibrium positions are considered. Necessary and sufficient conditions for topological equivalence of such flows are obtained and the topological structure of the underlying manifolds is described. Bibliography: 36 titles.
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS
McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L. E-mail: jmaron@amnh.org
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^-6 amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.
Channel Error Propagation In Predictor Adaptive Differential Pulse Code Modulation (DPCM) Coders
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; Rao, K. R.
1980-11-01
New adaptive differential pulse code modulation (ADPCM) coders with adaptive prediction are proposed and compared with existing non-adaptive DPCM coders, for processing composite National Television System Commission (NTSC) television signals. Comparisons are based on quantitative criteria as well as subjective evaluation of the processed still frames. The performance of the proposed predictors is shown to be independent of well-designed quantizers and better than existing predictors in such critical regions of the pictures as edges ind contours. Test data consists of four color images with varying levels of activity, color and detail. The adaptive predictors, however, are sensitive to channel errors. Propagation of transmission noise is dependent on the type of prediction and on location of noise i.e., whether in an uniform region or in an active region. The transmission error propagation for different predictors is investigated. By introducing leak in predictor output and/or predictor function it is shown that this propagation can be significantly reduced. The combination predictors not only attenuate and/or terminate the channel error propagation but also improve the predictor performance based on quantitative evaluation such as essential peak value and mean square error between the original and reconstructed images.
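The effect of predictor leak on channel-error propagation can be seen in a first-order DPCM decoder sketch; the leak factor and structure below are illustrative, not the paper's specific predictors:

```python
def dpcm_decode(residuals, leak=0.95):
    """First-order DPCM decoder with a prediction 'leak': the
    predictor uses leak * previous reconstruction, so a one-sample
    channel error decays geometrically (by the leak factor per
    sample) instead of propagating undamped. leak=1.0 gives the
    conventional leak-free predictor."""
    recon, prev = [], 0.0
    for e in residuals:
        x = leak * prev + e
        recon.append(x)
        prev = x
    return recon
```

Feeding a single unit error followed by zeros shows the attenuation directly: with leak 0.5 the error halves every sample, whereas with leak 1.0 it persists indefinitely.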
Optimal joint power-rate adaptation for error resilient video coding
NASA Astrophysics Data System (ADS)
Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew
2008-01-01
In recent years digital imaging devices have become an integral part of our daily lives due to advancements in imaging, storage, and wireless communication technologies. Power-Rate-Distortion (P-R-D) efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error-resilient source coding techniques should be considered in conjunction with P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, the probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error-resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gain under different power constraints.
Automatic network-adaptive ultra-low-bit-rate video coding
NASA Astrophysics Data System (ADS)
Chien, Wei-Jung; Lam, Tuyet-Trang; Abousleman, Glen P.; Karam, Lina J.
2006-05-01
This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbits/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is UDP/IP guaranteed throughput, which is used to transmit the compressed video stream in real time, while the second is TCP/IP guaranteed delivery, which is used for two-way control and compression parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the vast capabilities of the proposed video coding system.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio
2014-02-01
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through the use of extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work would be one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
Zou, Ding; Djordjevic, Ivan B
2016-09-01
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^{-15} for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate of the corresponding LDPC code. PMID:27607718
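The rate arithmetic behind adaptation by shortening is simple: fixing s information bits to zero (and not transmitting them) turns an (n, k) mother code into an (n - s, k - s) code with the same parity count but lower rate and higher overhead. A sketch of that arithmetic only (the parameter values in the usage below are illustrative, not the paper's actual LDPC dimensions):

```python
def shortened_code_params(n, k, s):
    """Code-rate adaptation by shortening: s information bits are
    fixed to zero at the encoder and omitted from transmission, so
    the (n, k) mother code acts as an (n - s, k - s) code. The
    number of parity bits (n - k) is unchanged, so overhead
    (parity bits per information bit) grows with s."""
    assert 0 <= s < k
    n_s, k_s = n - s, k - s
    rate = k_s / n_s
    overhead = (n_s - k_s) / k_s
    return rate, overhead
```

For example, a hypothetical (20000, 16000) mother code has rate 0.8 and 25% overhead; increasing s lowers the rate and raises the overhead continuously, which is what makes a single mother code serve a range of channel conditions.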
Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.
Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo
2015-08-01
The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537
Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes
2016-01-01
Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
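The item-selection step of a CAT can be sketched with the dichotomous Rasch model (the paper uses the partial-credit generalization of this model): pick the unanswered item with maximum Fisher information at the current ability estimate, which is the item whose difficulty is closest to that estimate. The difficulty values below are illustrative:

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of a correct (or endorsed)
    response for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, asked):
    """CAT item selection: choose the unasked item with maximum Fisher
    information I(theta) = p * (1 - p), i.e. the item whose difficulty
    best matches the current ability estimate."""
    best, best_info = None, -1.0
    for i, b in enumerate(difficulties):
        if i in asked:
            continue
        p = rasch_prob(theta, b)
        info = p * (1.0 - p)
        if info > best_info:
            best, best_info = i, info
    return best
```

Iterating this selection, updating theta after each response, is what lets the adaptive survey stop after far fewer items than the fixed 70-item form.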
Takahasi Nearest-Neighbour Gas Revisited II: Morse Gases
NASA Astrophysics Data System (ADS)
Matsumoto, Akira
2011-12-01
Some thermodynamic quantities for the Morse potential are analytically evaluated at an isobaric process. The parameters of Morse gases for 21 substances are obtained from second virial coefficient data and the spectroscopic data of diatomic molecules. Also, some thermodynamic quantities for water are calculated numerically and drawn graphically. The inflexion point of the length L, which depends on temperature T and pressure P, corresponds physically to a boiling point. L indicates the liquid phase from lower temperature to the inflexion point and the gaseous phase from the inflexion point to higher temperature. The boiling temperatures indicate reasonable values compared with experimental data. The behaviour of L suggests a chance of a first-order phase transition in one dimension.
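The Morse pair potential, and a brute-force numerical second virial coefficient built from it, can be sketched as follows. The paper's treatment is analytic and isobaric, so this is only an illustration of the underlying quantities, and the parameter values used in testing are made up rather than fitted to any substance:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def morse_potential(r, d_e, a, r_e):
    """Morse pair potential shifted so the minimum is -d_e and
    V -> 0 as r -> infinity:
    V(r) = d_e * (1 - exp(-a * (r - r_e)))**2 - d_e."""
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2 - d_e

def second_virial(t, d_e, a, r_e, r_max=5e-9, n=4000):
    """Classical second virial coefficient
    B(T) = -2*pi*N_A * integral_0^inf (exp(-V(r)/kT) - 1) * r^2 dr,
    evaluated here by a crude rectangle rule truncated at r_max
    (illustrative numerics, not the paper's analytic treatment)."""
    n_a = 6.02214076e23
    h = r_max / n
    total = 0.0
    for i in range(1, n):
        r = i * h
        total += (math.exp(-morse_potential(r, d_e, a, r_e) / (K_B * t)) - 1.0) * r * r
    return -2.0 * math.pi * n_a * total * h
```

At low temperature the attractive well dominates the integrand, so B(T) is negative, consistent with the qualitative behaviour used in fitting virial data.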
Adaptive coded spreading OFDM signal for dynamic-λ optical access network
NASA Astrophysics Data System (ADS)
Liu, Bo; Zhang, Lijia; Xin, Xiangjun
2015-12-01
This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network unit (ONU). The ACS can provide dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10x13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method may be viewed as a promising candidate for future optical metro access networks.
Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation
NASA Technical Reports Server (NTRS)
Locicero, J. L.; Schilling, D. L.
1977-01-01
An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
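A software sketch of the PCM-to-ADM idea: track the PCM sample stream with a one-bit adaptive delta modulator whose step size grows on consecutive equal bits and shrinks otherwise. The Jayant-style adaptation rule and the constants below are illustrative assumptions, not the hardware converter actually proposed in the paper:

```python
def pcm_to_adm(samples, step_min=1.0, step_max=64.0, k=1.5):
    """Convert PCM samples to a 1-bit ADM stream. Each output bit says
    whether the input is above (1) or below (0) the tracking estimate;
    the step size multiplies by k when consecutive bits agree (slope
    overload) and divides by k when they alternate (granular noise),
    clamped to [step_min, step_max]."""
    bits, est, step, prev_bit = [], 0.0, step_min, None
    for x in samples:
        bit = 1 if x >= est else 0
        if prev_bit is not None:
            step = step * k if bit == prev_bit else step / k
            step = min(max(step, step_min), step_max)
        est += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits
```

A steadily rising input produces a run of 1s (with the step growing to catch up), which is exactly the slope-overload behaviour the adaptation is designed to shorten.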
Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R
1987-07-01
A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363
NASA Astrophysics Data System (ADS)
Jayaweera, Sudharman K.; Poor, H. Vincent
2003-12-01
A downlink receiver is proposed for space-time block coded CDMA systems operating in multipath channels. By combining the powerful RAKE receiver concept for a frequency selective channel with space-time decoding, it is shown that the performance of mobile receivers operating in the presence of channel fading can be improved significantly. The proposed receiver consists of a bank of decorrelating filters designed to suppress the multiple access interference embedded in the received signal before the space-time decoding. The new receiver performs the space-time decoding along each resolvable multipath component and then the outputs are diversity combined to obtain the final decision statistic. The proposed receiver relies on a key constraint imposed on the output of each filter in the bank of decorrelating filters in order to maintain the space-time block code structure embedded in the signal. The proposed receiver can easily be adapted blindly, requiring only the desired user's signature sequence, which is also attractive in the context of wireless mobile communications. Simulation results are provided to confirm the effectiveness of the proposed receiver in multipath CDMA systems.
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
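The classical ingredient named in this abstract, threshold secret sharing built on Lagrange interpolation polynomials, can be sketched as a minimal Shamir-style (t, n) scheme over a prime field. This is an illustration only, not the paper's hybrid construction; the prime, threshold, and share count below are assumptions chosen for the sketch.

```python
# Minimal (t, n) threshold secret sharing via Lagrange interpolation.
# Any t shares reconstruct the secret; fewer reveal nothing about it.
import random

P = 2**61 - 1  # a Mersenne prime defining the field GF(P); illustrative choice

def make_shares(secret, t, n):
    """Hide `secret` as the constant term of a random degree-(t-1) polynomial
    and hand out its evaluations at x = 1..n as shares."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any subset of t = 3 shares, in any order, yields the same reconstruction, which is what makes the threshold adaptable to departing or arriving participants.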
White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification
NASA Astrophysics Data System (ADS)
Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun
2016-03-01
The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitational and rotational forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-15
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code are demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes
NASA Astrophysics Data System (ADS)
Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun
The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.
NASA Astrophysics Data System (ADS)
Ki, Dae Wook; Kim, Jae Ho
2013-07-01
We propose a fast new multiple run_before decoding method in context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a Total Speed-up Factor of 144% to 205% over various resolutions and quantization steps.
Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM
NASA Astrophysics Data System (ADS)
Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.
2004-01-01
In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow fading frequency selective channels and channels with a shaped spectrum like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way to minimize the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimize the probability of catastrophic error, which is characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than the VBC stream. Besides having the advantage of being analytical and also numerically solvable, the algorithm is based on a new formula developed to estimate the distortion rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst case instantaneous minimum peak signal to noise ratio (PSNR) does not differ greatly from the average MSE, while this is not the case for the optimal single stream (UEP) system. Although both the average PSNR of our method and that of the optimal single stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), which is a well-known and increasingly popular modulation scheme to combat ISI (Inter Symbol Interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a
NASA Astrophysics Data System (ADS)
Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu
2010-07-01
An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames with decoded bit-planes. So far, there are two major approaches: the Maximum a Posteriori (MAP) reconstruction and the Minimum Mean Square Error (MMSE) reconstruction algorithms. However, these approaches do not exploit smoothness constraints in natural images. In this paper, we model a Wyner-Ziv frame by Markov random fields (MRFs), and produce reconstruction results by finding an MAP estimation of the MRF model. In the MRF model, the energy function consists of two terms: a data term, MSE distortion metric in this paper, measuring the statistical correlation between side-information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at decoders. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension to the MAP and MMSE approaches by exploiting the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.
Quasibound states and heteroclinic structures in the driven Morse potential
Jarukanont, Daungruthai; Na, Kyungsun; Reichl, L. E.
2007-02-15
We have studied the classical and quantum dynamics of the Morse system driven by a time-periodic external field. Floquet energies and Husimi probability distributions of quasibound states of the driven system are obtained using the exterior complex scaling method and Floquet theory. As we increase the external field strength, the number of quasibound states decreases, and the Husimi distribution of the quasibound state shows an enhanced positive momentum distribution that appears to be supported by the classical homoclinic tangles that develop on the positive momentum side of the phase space.
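The undriven Morse potential underlying this study, V(r) = D(1 - e^(-a(r - re)))^2, has closed-form bound-state energies and a finite number of bound states, which a short sketch can enumerate. This is an illustration of the textbook spectrum only (units with ħ = 1 and illustrative parameters), not the paper's driven Floquet calculation.

```python
# Morse oscillator: potential and analytic bound-state spectrum.
# E_n = hbar*w0*(n + 1/2) - [hbar*w0*(n + 1/2)]^2 / (4*D),  w0 = a*sqrt(2*D/m),
# with bound states only for n <= lambda - 1/2, lambda = sqrt(2*m*D)/(a*hbar).
import math

def morse_potential(r, D, a, re):
    """Morse potential; its minimum is 0 at r = re and it saturates at D."""
    return D * (1.0 - math.exp(-a * (r - re)))**2

def morse_levels(D, a, m, hbar=1.0):
    """Return the finite list of bound-state energies of the Morse oscillator."""
    w0 = a * math.sqrt(2.0 * D / m)
    lam = math.sqrt(2.0 * m * D) / (a * hbar)
    n_max = int(lam - 0.5)  # anharmonicity terminates the ladder
    return [hbar * w0 * (n + 0.5) - (hbar * w0 * (n + 0.5))**2 / (4.0 * D)
            for n in range(n_max + 1)]
```

With D = 5, a = 1, m = 1 this gives three bound levels below the dissociation energy, the finite ladder that the driving field then depletes in the paper's setting.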
Analytical expressions for vibrational matrix elements of Morse oscillators
Zuniga, J.; Hidalgo, A.; Frances, J.M.; Requena, A.; Lopez Pineiro, A.; Olivares del Valle, F.J.
1988-10-15
Several exact recursion relations connecting different Morse oscillator matrix elements associated with the operators q^α e^(-βaq) and q^α e^(-βaq)(d/dr) are derived. Matrix elements of other useful operators may then be obtained easily. In particular, analytical expressions for the (y^k d/dr) and (y^k d/dr + (d/dr) y^k) matrix elements, of interest in the study of the internuclear motion in polyatomic molecules, are obtained.
NASA Astrophysics Data System (ADS)
Shin, Frances B.; Kil, David H.
1998-09-01
One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in an extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) a wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals
Entropy, local order, and the freezing transition in Morse liquids.
Chakraborty, Somendra Nath; Chakravarty, Charusita
2007-07-01
The behavior of the excess entropy of Morse and Lennard-Jones liquids is examined as a function of temperature, density, and the structural order metrics. The dominant pair correlation contribution to the excess entropy is estimated from simulation data for the radial distribution function. The pair correlation entropy (S2) of these simple liquids is shown to have a threshold value of (-3.5 ± 0.3) kB at freezing. Moreover, S2 shows a T^(-2/5) temperature dependence. The temperature dependence of the pair correlation entropy as well as the behavior at freezing closely correspond to earlier predictions, based on density functional theory, for the excess entropy of repulsive inverse power and Yukawa potentials [Rosenfeld, Phys. Rev. E 62, 7524 (2000)]. The correlation between the pair correlation entropy and the local translational and bond orientational order parameters is examined, and, in the case of the bond orientational order, is shown to be sensitive to the definition of the nearest neighbors. The order map between translational and bond orientational order for Morse liquids and solids shows a very similar pattern to that seen in Lennard-Jones-type systems. PMID:17677432
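The pair correlation entropy estimated here from simulation data has the standard integral form S2/kB = -2πρ ∫ [g ln g - g + 1] r² dr, which can be evaluated numerically from a tabulated radial distribution function. A minimal sketch, assuming a precomputed g(r) on a grid (function and variable names are illustrative, not from the paper):

```python
# Two-body excess entropy per particle (in units of kB) from a tabulated g(r):
#   S2 = -2*pi*rho * integral of [g ln g - g + 1] * r^2 dr,
# taking g*ln(g) -> 0 wherever g = 0 (e.g., inside the repulsive core).
import numpy as np

def pair_correlation_entropy(r, g, rho):
    r = np.asarray(r, dtype=float)
    g = np.asarray(g, dtype=float)
    glng = np.zeros_like(g)
    mask = g > 0.0
    glng[mask] = g[mask] * np.log(g[mask])
    integrand = (glng - g + 1.0) * r**2
    # trapezoidal rule; works for non-uniform grids as well
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return -2.0 * np.pi * rho * integral
```

An ideal gas (g ≡ 1) gives S2 = 0, and any structured liquid gives S2 < 0, consistent with the negative freezing threshold quoted in the abstract.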
Adaptive quarter-pel motion estimation and motion vector coding algorithm for the H.264/AVC standard
NASA Astrophysics Data System (ADS)
Jung, Seung-Won; Park, Chun-Su; Ha, Le Thanh; Ko, Sung-Jea
2009-11-01
We present an adaptive quarter-pel (Qpel) motion estimation (ME) method for H.264/AVC. Instead of applying Qpel ME to all macroblocks (MBs), the proposed method selectively performs Qpel ME in an MB level. In order to reduce the bit rate, we also propose a motion vector (MV) encoding technique that adaptively selects a different variable length coding (VLC) table according to the accuracy of the MV. Experimental results show that the proposed method can achieve about 3% average bit rate reduction.
Adaptive mesh simulations of astrophysical detonations using the ASCI flash code
NASA Astrophysics Data System (ADS)
Fryxell, B.; Calder, A. C.; Dursi, L. J.; Lamb, D. Q.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Truran, J. W.; Tufo, H. M.; Zingale, M.
2001-08-01
The Flash code was developed at the University of Chicago as part of the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The code was designed specifically to simulate thermonuclear flashes in compact stars (white dwarfs and neutron stars). This paper will give a brief introduction to the astrophysics problems we wish to address, followed by a description of the current version of the Flash code. Finally, we discuss two simulations of astrophysical detonations that we have carried out with the code. The first is of a helium detonation in an X-ray burst. The other simulation models a carbon detonation in a Type Ia supernova explosion.
NASA Astrophysics Data System (ADS)
Muta, Osamu; Akaiwa, Yoshihiko
In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of a codeword in a BCH-coded OFDM system. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying weighting factors (WFs) so as to minimize the PAPR of the OFDM signal, symbol by symbol. At the receiver, these WFs are estimated based on the property of BCH decoding. When a primitive BCH code with single error correction such as the (31,26) code is used, to estimate the WFs, the proposed method employs a significant bit protection method which assigns a significant bit to the best subcarrier selected among all possible subcarriers. With computer simulation, when the (31,26), (31,21) and (32,21) BCH codes are employed, the PAPR of the OFDM signal at a CCDF (Complementary Cumulative Distribution Function) of 10^-4 is reduced by about 1.9, 2.5 and 2.5 dB by applying the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
Adverse local tissue response lesion of the knee associated with Morse taper corrosion.
McMaster, William C; Patel, Jay
2013-02-01
Modularity in arthroplasty components has increased options for solving complex issues in primary and revision procedures. However, this technology introduces the risk of accelerated metal ion release as a result of fretting or passive crevice corrosion within the Morse taper junction. Cobalt toxicity locally and systemically has been described with hip metal bearing surfaces and may be accentuated with ion release from Morse tapers. This is a case report of a knee adverse local tissue response lesion associated with corrosion within the Morse taper of a revision knee arthroplasty in the absence of systemic metal allergy.
Generalized Morse wavelets for the phase evaluation of projected fringe pattern
NASA Astrophysics Data System (ADS)
Kocahan Yılmaz, Özlem; Coşkun, Emre; Özder, Serhat
2014-10-01
Generalized Morse wavelets are proposed to evaluate the phase information from a projected fringe pattern with the spatial carrier frequency in the x direction. The height profile of the object is determined through the phase change distribution by using the phase of the continuous wavelet transform. The choice of an appropriate mother wavelet is an important step in the calculation of phase. As the mother wavelet, the zero-order generalized Morse wavelet is chosen because of its flexible spatial and frequency localization and because it is exactly analytic. Experimental results for the Morlet and Paul wavelets are compared with the results of the generalized Morse wavelet analysis.
Exciton photoluminescence in resonant quasi-periodic Thue-Morse quantum wells.
Hsueh, W J; Chang, C H; Lin, C T
2014-02-01
This Letter investigates exciton photoluminescence (PL) in resonant quasi-periodic Thue-Morse quantum wells (QWs). The results show that the PL properties of quasi-periodic Thue-Morse QWs are quite different from those of resonant Fibonacci QWs. The maximum and minimum PL intensities occur under the anti-Bragg and Bragg conditions, respectively. The maxima of the PL intensity gradually decline when the filling factor increases from 0.25 to 0.5. Accordingly, the squared electric field at the QWs decreases as the Thue-Morse QW deviates from the anti-Bragg condition. PMID:24487847
Robust Computation of Morse-Smale Complexes of Bilinear Functions
Norgard, G; Bremer, P T
2010-11-30
The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
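For a single bilinear cell the critical-point analysis described above is explicit: f(x, y) = a + bx + cy + dxy has at most one critical point, and whenever it exists it is a saddle, with asymptotes along x = -c/d and y = -b/d. A minimal sketch of that classification (names are illustrative; the paper's provably correct combinatorial algorithm does far more, handling cell edges and integral lines):

```python
# Critical point of a bilinear function f(x, y) = a + b*x + c*y + d*x*y.
# Gradient: (b + d*y, c + d*x).  Hessian: [[0, d], [d, 0]] with eigenvalues
# +d and -d, so any isolated critical point is a saddle.  The level set
# through the saddle factors as d*(x - x0)*(y - y0) = 0, giving the two
# asymptote lines x = x0 = -c/d and y = y0 = -b/d.
def bilinear_critical_point(a, b, c, d):
    """Return the (saddle) critical point of f, or None if f is linear (d == 0)."""
    if d == 0.0:
        return None  # no isolated critical point: gradient never vanishes (or f is constant)
    return (-c / d, -b / d)
```

This is why, inside a bilinear cell, only saddles can appear away from the vertices; minima and maxima of the field live on the mesh vertices themselves.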
Comparison between the Morse eigenfunctions and deformed oscillator wavefunctions
Recamier, J.; Mochan, W. L.; Gorayeb, M.; Paz, J. L.
2008-04-15
In this work we introduce deformed creation and annihilation operators which differ from the usual harmonic oscillator operators a, a† by a number operator function: Â = â f(n̂), Â† = f(n̂) â†. We construct the deformed coordinate and momentum in terms of the deformed operators and maintain only up to first order terms in the deformed operators. By application of the deformed annihilation operator upon the vacuum state we get the ground state wavefunction in the configuration space, and the wavefunctions for excited states are obtained by repeated application of the deformed creation operator. Finally, we compare the wavefunctions obtained with the deformed operators with the corresponding Morse eigenfunctions.
García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2010-11-22
In this paper, a new and simple rate-adaptive transmission scheme for free-space optical (FSO) communication systems with intensity modulation and direct detection (IM/DD) over atmospheric turbulence channels is analyzed. This scheme is based on the joint use of repetition coding and variable silence periods, exploiting the potential time-diversity order (TDO) available in the turbulent channel as well as allowing the increase of the peak-to-average optical power ratio (PAOPR). Here, repetition coding is first used in order to accommodate the transmission rate to the channel conditions until the whole time diversity order available in the turbulent channel by interleaving is exploited. Then, once no more diversity gain is available, the rate reduction can be increased by using variable silence periods in order to increase the PAOPR. Novel closed-form expressions for the average bit-error rate (BER) as well as their corresponding asymptotic expressions are presented when the irradiance of the transmitted optical beam follows negative exponential and gamma-gamma distributions, covering a wide range of atmospheric turbulence conditions. Obtained results show a diversity order as in the corresponding rate-adaptive transmission scheme based only on repetition codes, but provide a relevant improvement in coding gain. Simulation results further confirm the analytical results. Here, not only rectangular pulses are considered but also OOK formats with any pulse shape, corroborating the advantage of using pulses with high PAOPR, such as Gaussian or squared hyperbolic secant pulses. We also determine the achievable information rate for the rate-adaptive transmission schemes here analyzed.
Multi-level adaptive particle mesh (MLAPM): a c code for cosmological simulations
NASA Astrophysics Data System (ADS)
Knebe, Alexander; Green, Andrew; Binney, James
2001-08-01
We present a computer code written in c that is designed to simulate structure formation from collisionless matter. The code is purely grid-based and uses a recursively refined Cartesian grid to solve Poisson's equation for the potential, rather than obtaining the potential from a Green's function. Refinements can have arbitrary shapes and in practice closely follow the complex morphology of the density field that evolves. The time-step shortens by a factor of 2 with each successive refinement. Competing approaches to N-body simulation are discussed from the point of view of the basic theory of N-body simulation. It is argued that an appropriate choice of softening length ɛ is of great importance and that ɛ should be at all points an appropriate multiple of the local interparticle separation. Unlike tree and P3M codes, multigrid codes automatically satisfy this requirement. We show that at early times and low densities in cosmological simulations, ɛ needs to be significantly smaller relative to the interparticle separation than in virialized regions. Tests of the ability of the code's Poisson solver to recover the gravitational fields of both virialized haloes and Zel'dovich waves are presented, as are tests of the code's ability to reproduce analytic solutions for plane-wave evolution. The times required to conduct a ΛCDM cosmological simulation for various configurations are compared with the times required to complete the same simulation with the ART, AP3M and GADGET codes. The power spectra, halo mass functions and halo-halo correlation functions of simulations conducted with different codes are compared. The code is available from http://www-thphys.physics.ox.ac.uk/users/MLAPM.
Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity
Latinus, Marianne; Belin, Pascal
2011-01-01
We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices and were then tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice adaptors, but not for non-anti-voice adaptors. These results are strikingly similar to the adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384
2012-06-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution.
Jedidiah Morse and the Bavarian Illuminati: An Essay in the Rhetoric of Conspiracy.
ERIC Educational Resources Information Center
Griffin, Charles J. G.
1989-01-01
Focuses on three widely publicized sermons given by the Reverend Jedidiah Morse to examine the role of the jeremiad (or political sermon) in shaping public attitudes toward political dissent during the Franco-American Crisis of 1798-1799. (MM)
Generalized Morse and Poeschl-Teller potentials: The connection via Schroedinger equation
Yahiaoui, S.-A.; Hattou, S.; Bentaiba, M.
2007-11-15
A systematic and unified treatment to connect the Schroedinger equation for generalized Morse and Poeschl-Teller potentials, generated by supersymmetry quantum mechanics, is used. An algebraic treatment of bound-state problems is presented.
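For reference, the two potentials being connected have well-known standard forms, and in supersymmetric quantum mechanics both arise from simple superpotentials. The forms below are the textbook one-parameter versions; the paper's generalized potentials carry additional parameters.

```latex
% Morse potential (D_e: well depth, a: range parameter, x_e: equilibrium position)
V_{\mathrm{M}}(x) = D_e\left(1 - e^{-a(x - x_e)}\right)^{2}

% Poeschl-Teller potential well
V_{\mathrm{PT}}(x) = -\frac{V_0}{\cosh^{2}(\alpha x)}

% SUSY QM partner potentials from a superpotential W(x):
% V_{\pm}(x) = W^{2}(x) \pm \frac{\hbar}{\sqrt{2m}}\, W'(x),
% with W(x) = A - B e^{-a x} generating the Morse family and
% W(x) = A \tanh(\alpha x) generating the Poeschl-Teller family.
```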
On the homotopy type of spaces of Morse functions on surfaces
Kudryavtseva, Elena A
2013-01-31
Let M be a smooth closed orientable surface. Let F be the space of Morse functions on M with a fixed number of critical points of each index such that at least χ(M)+1 critical points are labelled by different labels (numbered). The notion of a skew cylindric-polyhedral complex is introduced, which generalizes the notion of a polyhedral complex. The skew cylindric-polyhedral complex K̃ ('the complex of framed Morse functions') associated with the space F is defined. In the case M = S² the polytope K̃ is finite; its Euler characteristic χ(K̃) is calculated and the Morse inequalities for its Betti numbers β_j(K̃) are obtained. The relation between the homotopy types of the polytope K̃ and the space F of Morse functions equipped with the C∞-topology is indicated. Bibliography: 51 titles.
Solutions of the Klein-Gordon equation with the improved Rosen-Morse potential energy model
NASA Astrophysics Data System (ADS)
Chen, Tao; Lin, Shu-Rong; Jia, Chun-Sheng
2013-07-01
We solve the Klein-Gordon equation with the improved Rosen-Morse empirical potential energy model. The bound-state energy equation has been obtained by using the supersymmetric shape invariance approach. The relativistic vibrational transition frequencies for the 3³Σg⁺ state of the Cs₂ molecule have been computed by using the improved Rosen-Morse potential model, and they are in good agreement with the experimental RKR values and DPF values.
The bound state solution for the Morse potential with a localized mass profile
NASA Astrophysics Data System (ADS)
Miraboutalebi, S.
2016-10-01
We investigate an analytical solution for the Schrödinger equation with a position-dependent mass distribution, with the Morse potential via Laplace transformations. We considered a mass function localized around the equilibrium position. The mass distribution depends on the energy spectrum of the state and the intrinsic parameters of the Morse potential. An exact bound state solution is obtained in the presence of this mass distribution.
Morse taper dental implants and platform switching: The new paradigm in oral implantology
Macedo, José Paulo; Pereira, Jorge; Vahey, Brendan R.; Henriques, Bruno; Benfatti, Cesar A. M.; Magini, Ricardo S.; López-López, José; Souza, Júlio C. M.
2016-01-01
The aim of this study was to conduct a literature review on the potential benefits of Morse taper dental implant connections associated with small-diameter platform-switching abutments. A Medline bibliographical search (from 1961 to 2014) was carried out. The following search terms were explored: “bone loss and platform switching,” “bone loss and implant-abutment joint,” “bone resorption and platform switching,” “bone resorption and implant-abutment joint,” “Morse taper and platform switching,” “Morse taper and implant-abutment joint,” “Morse taper and bone resorption,” “crestal bone remodeling and implant-abutment joint,” “crestal bone remodeling and platform switching.” The selection criteria were: meta-analyses; randomized controlled trials; prospective cohort studies; and reviews written in English, Portuguese, or Spanish. Of the 287 studies identified, 81 relevant and recent studies were selected. Results indicated a reduced occurrence of peri-implantitis and bone loss at the abutment/implant level associated with Morse taper implants and a reduced-diameter platform-switching abutment. Extrapolation of data from previous studies indicates that Morse taper connections associated with platform switching show less inflammation and possibly less bone loss in the peri-implant soft tissues. However, more long-term studies are needed to confirm these trends. PMID:27011755
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise
2013-11-01
Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.
NASA Technical Reports Server (NTRS)
Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)
2001-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
Graphene mechanics: I. Efficient first principles based Morse potential.
Costescu, Bogdan I; Baldus, Ilona B; Gräter, Frauke
2014-06-28
We present a computationally efficient pairwise potential for use in molecular dynamics simulations of large graphene or carbon nanotube systems, in particular, for those under mechanical deformation, and also for mixed systems including biomolecules. Based on the Morse potential, it is only slightly more complex and computationally expensive than a harmonic bond potential, allowing such large or mixed simulations to reach experimentally relevant time scales. By fitting to data obtained from quantum mechanics (QM) calculations to represent bond breaking in graphene patches, we obtain a dissociation energy of 805 kJ mol⁻¹, which reflects the steepness of the QM potential up to the inflection point. A distinctive feature of our potential is its truncation at the inflection point, allowing a realistic treatment of ruptured C-C bonds without relying on a bond order model. The results obtained from equilibrium MD simulations using our potential compare favorably with results obtained from experiments and from similar simulations with more complex and computationally expensive potentials.
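The Morse form and the inflection-point truncation can be sketched directly. The parameter values below are illustrative, not the fitted graphene values from the paper; only the idea of continuing the potential flat beyond the inflection point (zero restoring force, i.e. a ruptured bond) is taken from the abstract.

```python
import numpy as np

def morse(r, D=0.5, beta=2.0, r0=1.4):
    """Morse pair potential V(r) = D * (1 - exp(-beta*(r - r0)))**2."""
    return D * (1.0 - np.exp(-beta * (r - r0))) ** 2

def r_inflection(beta=2.0, r0=1.4):
    """V''(r) = 0 where exp(-beta*(r - r0)) = 1/2, i.e. r = r0 + ln(2)/beta."""
    return r0 + np.log(2.0) / beta

def morse_truncated(r, D=0.5, beta=2.0, r0=1.4):
    """Beyond the inflection point the bond is treated as ruptured: the
    potential is continued at a constant value, so the force vanishes."""
    ri = r_inflection(beta, r0)
    r = np.asarray(r, dtype=float)
    return np.where(r <= ri, morse(r, D, beta, r0), morse(ri, D, beta, r0))
```

Up to the inflection point the curvature is positive (restoring force stiffens), beyond it the Morse curve flattens out, which is why the inflection point is a natural rupture criterion.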
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
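The MCS-assignment problem can be illustrated with a toy exhaustive search standing in for the ILP. All rates, spectral efficiencies, coverage fractions, and the symbol budget below are invented for illustration; the only structure taken from the abstract is that each SVC layer gets one MCS, the assignments share a time-resource budget, and a user benefits from an enhancement layer only if it can decode all lower layers.

```python
from itertools import product

# Illustrative numbers (not from the paper): per-layer payload in bits,
# bits-per-symbol of each MCS, and the fraction of multicast users whose
# channel is good enough to decode each MCS.
LAYER_BITS = [800.0, 600.0, 400.0]   # base layer, enhancement 1, enhancement 2
MCS_EFF    = [1.0, 2.0, 4.0]         # MCS index -> bits per symbol
MCS_REACH  = [1.0, 0.7, 0.4]         # MCS index -> fraction of users covered
SYMBOL_BUDGET = 1500.0               # time resource per frame, in symbols

def utility(mcs_per_layer):
    """Total delivered bits across users; a user receives layer i only if its
    channel supports the MCS of every layer up to i (SVC layer dependency)."""
    symbols = sum(LAYER_BITS[i] / MCS_EFF[m] for i, m in enumerate(mcs_per_layer))
    if symbols > SYMBOL_BUDGET:
        return 0.0                   # infeasible assignment: over the time budget
    total, reach = 0.0, 1.0
    for i, m in enumerate(mcs_per_layer):
        reach = min(reach, MCS_REACH[m])   # cumulative decodability
        total += reach * LAYER_BITS[i]
    return total

# brute force over all MCS assignments (3 layers x 3 MCSs = 27 candidates)
best = max(product(range(3), repeat=3), key=utility)
```

With these numbers the search prefers robust MCSs for the lower layers and a fast MCS for the top enhancement layer, which is the qualitative behavior the ILP formulation targets at realistic scale.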
fMR-Adaptation Reveals Invariant Coding of Biological Motion on the Human STS
Grossman, Emily D.; Jardine, Nicole L.; Pyles, John A.
2009-01-01
Neuroimaging studies of biological motion perception have found a network of coordinated brain areas, the hub of which appears to be the human posterior superior temporal sulcus (STSp). Understanding the functional role of the STSp requires characterizing the response tuning of neuronal populations underlying the BOLD response. Thus far our understanding of these response properties comes from single-unit studies of the monkey anterior STS, which has individual neurons tuned to body actions, with a small population invariant to changes in viewpoint, position and size of the action being viewed. To measure for homologous functional properties on the human STS, we used fMR-adaptation to investigate action, position and size invariance. Observers viewed pairs of point-light animations depicting human actions that were either identical, differed in the action depicted, locally scrambled, or differed in the viewing perspective, the position or the size. While extrastriate hMT+ had neural signals indicative of viewpoint specificity, the human STS adapted for all of these changes, as compared to viewing two different actions. Similar findings were observed in more posterior brain areas also implicated in action recognition. Our findings are evidence for viewpoint invariance in the human STS and related brain areas, with the implication that actions are abstracted into object-centered representations during visual analysis. PMID:20431723
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.
Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng
2016-10-01
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support the efficient search over data with multiple sources, while the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost the search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359
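At query time, the bitwise-weighting idea reduces to ranking database codes by a weighted Hamming distance instead of the plain bit-count distance. A minimal sketch follows; how the per-bit weights are learned (the paper's query-adaptive part) is not reproduced here, the weights are simply taken as given.

```python
import numpy as np

def weighted_hamming_rank(query, codes, bit_weights):
    """Rank database codes by weighted Hamming distance to the query.
    query: (B,) 0/1 array; codes: (N, B) 0/1 array; bit_weights: (B,) per-bit
    reliabilities (e.g. query-adaptive weights). Returns indices, best first."""
    mismatch = codes != query              # (N, B) boolean mismatch mask
    dist = mismatch @ bit_weights          # weighted Hamming distances
    return np.argsort(dist, kind="stable") # stable sort keeps ties in index order

codes = np.array([[0, 0, 1, 1],
                  [1, 0, 1, 0],
                  [0, 0, 1, 0]])
query = np.array([0, 0, 1, 0])
order = weighted_hamming_rank(query, codes, np.ones(4))  # uniform = plain Hamming
```

With uniform weights the ranking degenerates to ordinary Hamming distance and ties are unresolved; non-uniform weights break such ties, which is exactly the fine-grained ranking the quantization loss destroys.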
A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms
Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A
2013-01-01
Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g., neural networks) and optimization techniques (e.g., genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
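The idea can be sketched as follows: estimate a pseudoderivative of the best fitness across successive generations, then raise the mutation scale on stagnation (to escape local optima) and lower it on progress (to refine). The update rule, objective, and all constants below are a plausible toy variant, not the paper's exact operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                  # toy objective; global optimum at x = 2
    return -(x - 2.0) ** 2

def adaptive_ga(pop_size=40, gens=150, lo=-10.0, hi=10.0):
    pop = rng.uniform(lo, hi, pop_size)
    best_hist = [f(pop).max()]
    sigma = 1.0                            # adaptive mutation scale
    for _ in range(gens):
        fit = f(pop)
        # pseudoderivative: change in best fitness between successive generations
        dfit = best_hist[-1] - best_hist[-2] if len(best_hist) > 1 else 0.0
        # stagnation -> raise mutation to escape; progress -> shrink to refine
        sigma = min(2.0, sigma * 1.5) if dfit <= 1e-12 else max(1e-3, sigma * 0.7)
        # binary tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where(fit[i] >= fit[j], pop[i], pop[j])
        children = np.clip(parents + rng.normal(0.0, sigma, pop_size), lo, hi)
        children[0] = pop[np.argmax(fit)]  # elitism: keep the best individual
        pop = children
        best_hist.append(f(pop).max())
    return pop, best_hist

pop, hist = adaptive_ga()
```

Elitism makes the best-fitness history monotone, so the pseudoderivative is nonnegative and cleanly separates "improving" from "stuck" generations.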
Application of a Morse filter in the processing of brain angiograms
NASA Astrophysics Data System (ADS)
Venegas Bayona, Santiago
2014-06-01
Angiograms are frequently used to find anomalies in the blood vessels. To improve the quality of such images, a Morse filter (based on the model of the Morse potential) was implemented and applied to a brain-vessel angiogram using the Maple® and ImageJ® software packages. The results of applying the Morse filter are presented. First, the image was processed in ImageJ with the Anisotropic Diffusion 2D plug-in, and then the filter was applied. As illustrated in the results, the edges of the stringy elements are emphasized. This is particularly useful in the medical image processing of blood vessels, such as angiograms, where narrowing or obstruction may be caused by aneurysms, thrombosis, or other diseases.
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating the near-shore tsunami waves from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-01-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by the WSNs which are deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed a NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
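The adaptive measurement-formation loop can be sketched with a generic sparse-recovery routine: the sink keeps requesting measurements until successive estimates stop changing. Orthogonal matching pursuit (OMP) stands in for the paper's CS decoder; the sizes, the increment of k measurements per round, and the termination tolerance are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    n = Phi.shape[1]
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))     # best-matching atom
        xs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ xs                          # update residual
    x = np.zeros(n)
    x[support] = xs
    return x

# ground-truth sparse "network data": n nodes, k significant readings
n, k = 100, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)

# the sink adaptively queries more measurements until the estimate stabilizes
Phi_full = rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)
m, x_prev = 4 * k, None
while True:
    x_hat = omp(Phi_full[:m], Phi_full[:m] @ x_true, k)
    if x_prev is not None and np.linalg.norm(x_hat - x_prev) < 1e-6:
        break                     # termination rule: estimate stopped changing
    x_prev, m = x_hat, m + k      # request k more measurements
```

The termination rule mirrors the abstract's idea of stopping once additional measurements no longer change the reconstruction, rather than fixing the sparsity and measurement count in advance.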
Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures
NASA Astrophysics Data System (ADS)
Vijayakumaran, Vineeth
Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With a higher and higher number of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed, enabling a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
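The CDMA principle behind such a MAC can be demonstrated with orthogonal Walsh spreading codes: two transmitter-receiver pairs share the channel simultaneously and each receiver separates its data by correlating against its own code. This is a toy sketch of the multiple-access mechanism, not the thesis's protocol.

```python
import numpy as np

def walsh(n):
    """n x n Walsh-Hadamard matrix (n a power of two); its rows are mutually
    orthogonal spreading codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(4)                       # 4 orthogonal spreading codes
bits_a = np.array([1, -1, 1])          # antipodal data of transmitter A
bits_b = np.array([-1, -1, 1])         # antipodal data of transmitter B

# each transmitter spreads its bits with its own code; chips add on the channel
channel = np.kron(bits_a, codes[1]) + np.kron(bits_b, codes[2])

# each receiver despreads by correlating the chips with the matching code
rx_a = channel.reshape(-1, 4) @ codes[1] / 4
rx_b = channel.reshape(-1, 4) @ codes[2] / 4
```

Because the codes are orthogonal, each correlation cancels the other transmitter's contribution exactly, which is what lets multiple pairs use the wireless channel at the same time.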
EVALUATION OF AN INDIVIDUALLY PACED COURSE FOR AIRBORNE RADIO CODE OPERATORS. FINAL REPORT.
ERIC Educational Resources Information Center
BALDWIN, ROBERT O.; JOHNSON, KIRK A.
In this study comparisons were made between an individually paced version of the Airborne Radio Code Operator (ARCO) course and two versions of the course in which the students progressed at a fixed pace. The ARCO course is a Class C school in which the student learns to send and receive military messages using the International Morse Code. The…
Application of Morse Theory to Analysis of Rayleigh-Taylor Topology
Miller, P L; Bremer, P T; Cabot, W H; Cook, A W; Laney, D E; Mascarenhas, A A; Pascucci, V
2007-01-24
We present a novel Morse Theory approach for the analysis of the complex topology of the Rayleigh-Taylor mixing layer. We automatically extract bubble structures at multiple scales and identify the resolution of interest. Quantitative analysis of bubble counts over time highlights distinct mixing trends for a high-resolution Direct Numerical Simulation (DNS) [1].
Application of DOT-MORSE coupling to the analysis of three-dimensional SNAP shielding problems
NASA Technical Reports Server (NTRS)
Straker, E. A.; Childs, R. L.; Emmett, M. B.
1972-01-01
The use of discrete ordinates and Monte Carlo techniques to solve radiation transport problems is discussed. A general discussion of two possible coupling schemes is given for the two methods. The calculation of the reactor radiation scattered from a docked service and command module is used as an example of coupling discrete ordinates (DOT) and Monte Carlo (MORSE) calculations.
A Mechanical Apparatus for Hands-On Experience with the Morse Potential
ERIC Educational Resources Information Center
Everest, Michael A.
2010-01-01
A simple pulley apparatus is described that gives the student hands-on experience with the Morse potential. Students develop an internalized sense of what a covalent bond would feel like if atoms in a molecule could be manipulated by hand. This kinesthetic learning enhances the student's understanding and intuition of several chemical phenomena.…
Continuous Spectrum of Trigonometric Rosen-Morse and Eckart Potentials from Free Particle Spectrum
NASA Astrophysics Data System (ADS)
Panahi, H.; Pouraram, H.
2011-06-01
The shape invariant symmetry of the Trigonometric Rosen-Morse and Eckart potentials has been studied through realization of so(3) and so(2,1) Lie algebras respectively. In this work, by using the free particle eigenfunction, we obtain continuous spectrum of these potentials by means of their shape invariance symmetry in an algebraic method.
Convergence of the Approximation Scheme to American Option Pricing via the Discrete Morse Semiflow
Ishii, Katsuyuki; Omata, Seiro
2011-12-15
We consider the approximation scheme to the American call option via the discrete Morse semiflow, which is a minimizing scheme of a time semi-discretized variational functional. In this paper we obtain a rate of convergence of approximate solutions and the convergence of approximate free boundaries. We mainly apply the theory of variational inequalities and that of viscosity solutions to prove our results.
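One step of a minimizing-movement scheme of this kind can be sketched for a toy obstacle problem, with the obstacle playing the role of the option payoff: each time step minimizes a penalized-movement functional subject to the constraint that the solution stays above the obstacle. The functional, grid, and constants below are illustrative, not the paper's exact formulation.

```python
import numpy as np

# One minimizing-movement ("discrete Morse semiflow") step for a toy obstacle
# problem: minimize J(u) = sum((u - u_prev)^2)/(2*tau)
#                         + 0.5 * sum(((u[i+1] - u[i]) / h)^2)
# subject to u >= psi, via projected gradient descent on a 1-D grid.
n = 101
h, tau = 0.01, 1e-3
x = np.linspace(0.0, 1.0, n)
psi = np.maximum(0.0, 0.3 - np.abs(x - 0.5))       # tent-shaped obstacle (payoff)
u_prev = psi.copy()                                # previous time level

def J(u):
    return ((u - u_prev) ** 2).sum() / (2.0 * tau) + \
           0.5 * (((u[1:] - u[:-1]) / h) ** 2).sum()

def semiflow_step(u_prev, iters=2000, lr=2e-5):
    u = u_prev.copy()
    for _ in range(iters):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2
        grad = (u - u_prev) / tau - lap            # gradient of J at interior nodes
        u[1:-1] -= lr * grad[1:-1]                 # endpoints stay fixed
        u = np.maximum(u, psi)                     # project onto constraint u >= psi
    return u

u1 = semiflow_step(u_prev)
```

Repeating this step advances the approximate free boundary in time; the paper's results concern the convergence of such iterates and of the free boundaries as the time step vanishes.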
Schramm, M; Wirtz, D C; Holzwarth, U; Pitto, R P
2000-04-01
All biomaterials used for total joint surgery are subjected to wear mechanisms. Morse taper junctions of modular hip revision implants are predilection sites for both fretting and crevice corrosion, dissociation and breakage of the components. The aim of this study is to quantify wear and study metallurgical changes of Morse taper junctions of in-vitro and in-vivo loaded modular revision stems. Three modular revision stems (MRP-Titan, Peter Brehm GmbH, Germany) were loaded by a servohydraulic testing machine. The loads and conditions used exceeded by far the values required by ISO-standard 7206. The tests were performed with maximum axial loads of 3,500 N to 4,000 N over 10-12 x 10(6) cycles at 2 Hz. Additionally, the female part of the taper junctions were coated with blood and bone debris. The free length of the implant was set to 200 mm. One other MRP stem was investigated after retrieval following 5.5 years of in-vivo use. All contact surfaces of the modular elements were assessed by visual inspection, optical microscopy and scanning electron microscopy (SEM). The degree of plastic deformation of the male part of the morse taper junction was determined by contouroscopy. None of the morse taper junctions broke or failed mechanically. Corrosion and wear affected all tapers, especially at the medial side. The retrieved implant showed no cracks and the amount of debris measured only one third of that for the stems tested in-vitro. The present retrieval and laboratory investigations have proven, that the morse taper junctions of the MRP-titanium stem are stable and resistant to relevant wear mechanisms. The longevity of the junctions for clinical use is given. If an optimal taper design is selected, the advantages of modular femoral components in total hip revision arthroplasty will outweigh the possible risks.
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Jung, Bongsoo; Jung, Jooyoung; Jeon, Byeungwoo
2012-11-01
The quarter-pel motion vector accuracy supported by H.264/advanced video coding (AVC) in motion estimation (ME) and compensation (MC) provides high compression efficiency, but it also increases the computational complexity. While various well-known fast integer-pel ME methods are already available, the lack of a good fast subpel ME method leaves the computational complexity relatively high. This paper presents one way of solving the complexity problem of subpel ME by making adaptive motion vector (MV) accuracy decisions in inter-mode selection. The proposed MV accuracy decision is made using inter-mode selection of a macroblock with two decision criteria. Pixels are classified as stationary (and/or homogeneous) or nonstationary (and/or nonhomogeneous). In order to avoid unnecessary interpolation and processing, a proper subpel ME level is chosen among four different combinations, each of which has a different MV accuracy and number of subpel ME iterations based on this classification. Simulation results using the open-source x264 software encoder show that, without any noticeable degradation (-0.07 dB on average), the proposed method reduces total encoding time and subpel ME time by 51.78% and 76.49% on average, respectively, as compared to the conventional full-pel pixel search.
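The decision logic can be sketched as a small two-cue classifier. The thresholds, the choice of cues (residual energy for stationarity, variance for homogeneity), and the four level definitions below are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

# Four hypothetical subpel ME levels: MV accuracy plus number of subpel
# refinement iterations (illustrative combinations, not the paper's).
LEVELS = {
    0: {"mv_accuracy": "full-pel",    "subpel_iters": 0},
    1: {"mv_accuracy": "half-pel",    "subpel_iters": 1},
    2: {"mv_accuracy": "quarter-pel", "subpel_iters": 1},
    3: {"mv_accuracy": "quarter-pel", "subpel_iters": 2},
}

def choose_level(block, ref_block, t_static=2.0, t_flat=4.0):
    """Pick a subpel ME level from two cues: residual energy against the
    co-located reference block (stationarity) and variance (homogeneity)."""
    residual_energy = float(np.mean((block - ref_block) ** 2))
    texture = float(np.var(block))
    if residual_energy < t_static and texture < t_flat:
        return 0            # stationary and homogeneous: skip subpel ME entirely
    if residual_energy < t_static or texture < t_flat:
        return 1            # one cue is weak: cheap half-pel refinement only
    if texture < 4.0 * t_flat:
        return 2
    return 3                # moving, highly textured: full quarter-pel refinement
```

Skipping interpolation for blocks classified as level 0 or 1 is where the encoding-time saving comes from: subpel interpolation and the extra search iterations are only spent where the residual can still improve.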
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... available for use by authorized ship stations equipped with crystal-controlled oscillators for A1A, J2A, J2B... frequencies for each geographic region. Ship stations with synthesized transmitters may operate on every...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... available for use by authorized ship stations equipped with crystal-controlled oscillators for A1A, J2A, J2B... frequencies for each geographic region. Ship stations with synthesized transmitters may operate on every...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... available for use by authorized ship stations equipped with crystal-controlled oscillators for A1A, J2A, J2B... frequencies for each geographic region. Ship stations with synthesized transmitters may operate on every...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... available for use by authorized ship stations equipped with crystal-controlled oscillators for A1A, J2A, J2B... frequencies for each geographic region. Ship stations with synthesized transmitters may operate on every...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... available for use by authorized ship stations equipped with crystal-controlled oscillators for A1A, J2A, J2B... frequencies for each geographic region. Ship stations with synthesized transmitters may operate on every...
Vision: Efficient Adaptive Coding
Burr, David; Cicchini, Guido Marco
2016-01-01
Recent studies show that perception is driven not only by the stimuli currently impinging on our senses, but also by the immediate past history. The influence of recent perceptual history on the present reflects the action of efficient mechanisms that exploit temporal redundancies in natural scenes. PMID:25458222
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
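A Granule record of the kind described above can be sketched as follows. The element names and identifier strings here are simplified placeholders for illustration, not the authoritative SPASE schema.

```python
# Sketch: build one "Granule" description tying a data file to its
# parent resource, as the abstract describes. Element names and IDs
# are illustrative placeholders, not the real SPASE schema.
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, access_url):
    granule = ET.Element('Granule')
    ET.SubElement(granule, 'ResourceID').text = resource_id   # ID for the file
    ET.SubElement(granule, 'ParentID').text = parent_id       # high-level resource
    source = ET.SubElement(granule, 'Source')
    ET.SubElement(source, 'URL').text = access_url            # access URL
    return ET.tostring(granule, encoding='unicode')

xml = make_granule('spase://Example/Granule/File001',
                   'spase://Example/NumericalData/Dataset',
                   'https://example.org/data/file001.cdf')
```

A nightly job like the one described would regenerate such records for every new, modified, or deleted file found in the repository listings.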
Kirk, B.L.; Sartori, E.
1997-06-01
Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.
Locking strength of Morse tapers used for modular segmental bone defect replacement prostheses.
Duda, G N; Elias, J J; Valdevit, A; Chao, E Y
1997-01-01
Mechanical testing has been performed to characterize the locking strength of Morse taper locks used for reconstruction of large bone defects. Taper joint pairs were locked with a series of compressive loads increasing from 500 to 3500 N. Following each load application the taper locks were distracted with either an axial load or a torsional load. Additional tapers were loaded with 2 million cycles of axial compression or 2 million cycles of cantilever bending combined with axial compression, followed by axial distraction. The torsional and axial distraction loads increased linearly with the compressive load. Compared to a single compressive load application, cyclic axial loading had little influence on the joint strength, while a combination of axial loading and bending increased the joint strength. Based on these results, in vivo loading should increase the locking strength of Morse taper locks used for bone defect reconstruction.
NASA Astrophysics Data System (ADS)
Jia, Chun-Sheng; Dai, Jian-Wei; Zhang, Lie-Hui; Liu, Jian-Yi; Zhang, Guang-Dong
2015-01-01
We solve the Klein-Gordon equation with the modified Rosen-Morse potential energy model in D spatial dimensions. The bound state energy equation has been obtained by using the supersymmetric WKB approximation approach. We find that the inter-dimensional degeneracy symmetry exists for the molecular system represented by the modified Rosen-Morse potential. For fixed vibrational and rotational quantum numbers, the relativistic energies for the 6^1Π_u state of the ^7Li_2 molecule and the X^3Π state of the SiC radical increase as D increases. We observe that the behavior of the relativistic vibrational energies in higher dimensions remains similar to that of the three-dimensional system.
A Multi-Resolution Data Structure for Two-Dimensional Morse Functions
Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V
2003-07-30
The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.
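The hierarchy-building step described above, progressively canceling critical points in pairs, is usually driven by persistence (the function-value difference of a pair). A minimal one-dimensional analogue of this idea can be sketched as follows; this is an illustrative toy, not the paper's Morse-Smale complex data structure.

```python
# Toy 1-D analogue of persistence-driven cancellation: adjacent
# minimum/maximum pairs of a sampled function are removed in order
# of increasing persistence, coarsening the critical-point set.

def critical_points(values):
    """Indices of strict local minima and maxima of a 1-D sequence."""
    crit = []
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] > values[i + 1]:
            crit.append(i)  # local maximum
        elif values[i - 1] > values[i] < values[i + 1]:
            crit.append(i)  # local minimum
    return crit

def cancel_by_persistence(values, crit):
    """Greedily cancel the adjacent extremum pair of least persistence."""
    order = []
    crit = list(crit)
    while len(crit) >= 2:
        # adjacent critical points alternate min/max in 1-D
        i = min(range(len(crit) - 1),
                key=lambda k: abs(values[crit[k]] - values[crit[k + 1]]))
        order.append((crit[i], crit[i + 1]))
        del crit[i:i + 2]
    return order

f = [0, 3, 1, 2, -1, 4, 0]
cp = critical_points(f)            # [1, 2, 3, 4, 5]
print(cancel_by_persistence(f, cp))  # shallow wiggles cancel first
```

Recording the cancellation order yields exactly the kind of multi-resolution hierarchy the abstract describes: undoing cancellations refines the model, applying them simplifies it.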
Energy level formula for the Morse oscillator with an additional kinetic coupling potential
NASA Astrophysics Data System (ADS)
Fan, Hong-yi; Chen, Bo-zhan; Fan, Yue
1996-02-01
Based on the ⟨η| representation, which is the common eigenstate of the relative position x_1 - x_2 and the total momentum P_1 + P_2 of two particles, we derive the energy level formula for a Morse oscillator with an additional kinetic coupling potential. The ⟨η| representation seems to provide a direct and convenient approach for solving certain dynamical problems for two-body systems.
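For context, the result above extends the ordinary Morse oscillator, whose bound-state energies (the standard textbook formula, not the coupled formula derived in this work) are:

```latex
V(x) = D_e\left(1 - e^{-a x}\right)^2, \qquad
E_n = \hbar\omega_0\left(n + \tfrac{1}{2}\right)
      - \frac{\left[\hbar\omega_0\left(n + \tfrac{1}{2}\right)\right]^2}{4 D_e},
\qquad \omega_0 = a\sqrt{\frac{2 D_e}{m}},
```

where the quadratic anharmonic correction caps the number of bound states by the finite well depth D_e.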
Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential
Inci, I.; Bonatsos, D.; Boztosun, I.
2011-08-15
Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.
Viewing MORSE-CG radiation transport with 3-D color graphics
Namito, Yoshihito; Jenkins, T.M.; Nelson, W.R.
1990-01-01
In this paper we present the coupling of MORSE-CG with the SLAC Unified Graphics System (UGS77) through an add-on package called MORSGRAF, which allows real-time display of neutron and photon tracks in the Monte Carlo simulation. In addition to displaying the myriad of complicated geometries that can be created with the MORSE Combinatorial Geometry program, MORSGRAF permits color tagging of neutrons (green) and photons (red), with track intensity indicating the energy of the particle. Particle types can be switched off and on by means of a mouse-icon system, and the perspective can be changed (i.e., rotated, translated, and zoomed). MORSGRAF also allows one to display the propagation of radiation through shields and mazes on an ordinary graphics terminal, as well as in documents printed on a laser printer. Several examples are given to demonstrate the various capabilities of MORSGRAF coupled to MORSE-CG. 12 refs., 8 figs.
Morse-Novikov cohomology of locally conformally Kähler manifolds
NASA Astrophysics Data System (ADS)
Ornea, Liviu; Verbitsky, Misha
2009-03-01
A locally conformally Kähler (LCK) manifold is a complex manifold admitting a Kähler covering, with the monodromy acting on this covering by holomorphic homotheties. We define three cohomology invariants of an LCK-structure: the Lee class, the Morse-Novikov class, and the Bott-Chern class. These invariants together play the same role as the Kähler class in Kähler geometry. If these classes coincide for two LCK-structures, the difference between the structures can be expressed by a smooth potential, similarly to the Kähler case. We show that the Morse-Novikov class and the Bott-Chern class of a Vaisman manifold vanish. Moreover, for any LCK-structure on a manifold admitting a Vaisman structure, we prove that its Morse-Novikov class vanishes. We show that a compact LCK-manifold M with vanishing Bott-Chern class admits a holomorphic embedding into a Hopf manifold if dim_C M ⩾ 3, a result which parallels the Kodaira embedding theorem.
NASA Astrophysics Data System (ADS)
Fabian, Dedecker; Peter, Cundall; Daniel, Billaux; Torsten, Groeger
Digging a shaft or drift inside a rock mass is common practice in civil engineering when a transportation route, such as a motorway or railway tunnel, or a storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation, which can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the micro-cracks that form modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed at the Bure site, at which the French nuclear waste agency
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-ups cause heavy table memory access and hence high table power consumption. To reduce the memory access of current methods and the resulting power consumption, a memory-efficient optimized table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce memory access during table look-up, and thereby table power consumption. Specifically, our scheme uses index search to reduce memory access by cutting down the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumed by table look-up. The experimental results show that our index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with table look-up by sequential search, saving considerable power for CAVLD in H.264/AVC.
2012-01-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210
Kumar, Ravi
2014-01-01
Semiblind channel estimation provides the best trade-off in terms of bandwidth overhead, computational complexity and latency. Using multiple input multiple output (MIMO) systems yields higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results have been compared in the semiblind setting, which shows improvement even for highly correlated antenna arrays and is found to be very close to the case in which the channel state information (CSI) is known. PMID:24688379
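The simplest member of the STBC family analyzed above is the classic rate-1 Alamouti code for two transmit antennas, which can be sketched as follows (a standard textbook construction, not the paper's proposed higher-rate codes):

```python
# Alamouti space-time block code for two transmit antennas.
# Two symbols s1, s2 are sent over two time slots:
#   slot 1: antenna 1 -> s1,    antenna 2 -> s2
#   slot 2: antenna 1 -> -s2*,  antenna 2 -> s1*
# The columns are orthogonal, which enables simple linear decoding.

def alamouti_encode(s1, s2):
    """Return the 2x2 transmission matrix (rows = time slots)."""
    return [
        [s1, s2],
        [-s2.conjugate(), s1.conjugate()],
    ]

X = alamouti_encode(1 + 1j, 2 - 1j)
print(X)  # [[(1+1j), (2-1j)], [(-2-1j), (1-1j)]]
```

Higher-rate STBCs trade some of this orthogonality for throughput, which is why their performance under imperfect (semiblind) channel estimates is the interesting comparison.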
Pansard, E; Fouilleron, N; Dereudre, G; Migaud, H; Girard, J
2012-04-01
Morse tapers are frequently used in total hip replacement to achieve precise adjustment of length and femoral offset. Mechanically, they do not raise any specific problems so long as strict positioning requirements are observed and elements from different manufacturers are not mixed. We report a case in which the implant induced unexplained pain at 2 years, related to a defective fit between the metallic head and the Morse taper. Asymmetric partial seating of the head on the taper was detected on control X-ray and was implicated as causing metallosis due to excessive release of metal debris from the Morse taper. Revision required femoral stem exchange because of the damage to the Morse taper, as well as replacement of the cup with new metal-on-metal bearings. The outcome was favorable at 3 years' follow-up. Most hip replacements include a Morse taper; the present clinical case is a reminder that strict positioning rules must be respected, without which corrosion and wear may lead to mechanical failure.
NASA Astrophysics Data System (ADS)
Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.
2016-10-01
We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphics processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb^3+:YAG ceramics shows perfect agreement.
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
NASA Astrophysics Data System (ADS)
Sierra-Suarez, Jonatan A.; Majumdar, Shubhaditya; McGaughey, Alan J. H.; Malen, Jonathan A.; Higgs, C. Fred
2016-04-01
This work formulates a rough surface contact model that accounts for adhesion through a Morse potential and plasticity through the Kogut-Etsion finite element-based approximation. Compared to the commonly used Lennard-Jones (LJ) potential, the Morse potential provides a more accurate and generalized description for modeling covalent materials and surface interactions. An extension of this contact model to describe composite layered surfaces is presented and implemented to study a self-assembled monolayer (SAM) grown on a gold substrate placed in contact with a second gold substrate. Based on a comparison with prior experimental measurements of the thermal conductance of this SAM junction [Majumdar et al., Nano Lett. 15, 2985-2991 (2015)], the more general Morse potential-based contact model provides a better prediction of the percentage contact area than an equivalent LJ potential-based model.
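The two pair potentials compared above can be written so they share the same well depth and equilibrium distance; the Morse width parameter is then the extra degree of freedom the Lennard-Jones form lacks. A minimal numerical sketch (parameter values arbitrary, for illustration only):

```python
# Morse vs. Lennard-Jones pair potentials, normalized so both have
# well depth D and minimum at r0. The Morse parameter `a` tunes the
# well width independently; LJ has no such knob.
import math

def morse(r, D=1.0, a=1.5, r0=1.0):
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2 - D

def lennard_jones(r, D=1.0, r0=1.0):
    return D * ((r0 / r) ** 12 - 2.0 * (r0 / r) ** 6)

# Both reach their minimum value -D at r = r0:
print(morse(1.0), lennard_jones(1.0))  # -> -1.0 -1.0
```

Away from r0 the two curves differ: Morse decays exponentially and its stiffness follows `a`, which is what makes it the more adjustable choice for covalent surface interactions as the abstract argues.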
Solutions of Morse potential with position-dependent mass by Laplace transform
NASA Astrophysics Data System (ADS)
Miraboutalebi, S.
2016-08-01
In the framework of position-dependent mass quantum mechanics, the three-dimensional Schrödinger equation is studied by applying the Laplace transform combined with point canonical transformations. For a potential analogous to the Morse potential, and via the Pekeris approximation, we introduce general solutions appropriate for any position-dependent mass profile that obeys a key condition. For a specific position-dependent mass profile, the bound state solutions are obtained in analytical form. The constant-mass solutions are also recovered.
The effects of blood and fat on Morse taper disassembly forces.
Lavernia, Carlos J; Baerga, Luis; Barrack, Robert L; Tozakoglou, Evangelos; Cook, Stephen D; Lata, Loren; Rossi, Mark D
2009-04-01
Biological debris between modular components using Morse tapers in hip arthroplasty can lead to weakening of the implant construct. We conducted a study to determine the effect of blood and fat within the taper interface. Tapers were divided into groups 1 (clean), 2 (surface covered with blood and fat), and 3 (blood and fat wiped off). Each taper was impacted and disassembled 5 times. There was a difference in mean disassembly force between pulls within group 2. Thus, blood and fat contamination can have a significant effect on the potential for disassembly.
Exact solution to laser rate equations: three-level laser as a Morse-like oscillator
NASA Astrophysics Data System (ADS)
León-Montiel, R. de J.; Moya-Cessa, Héctor M.
2016-08-01
It is shown how the rate equations that model a three-level laser can be cast into a single second-order differential equation, whose form describes a time-dependent harmonic oscillator. Using this result, we demonstrate that the resulting equation can be identified as a Schrödinger equation for a Morse-like potential, thus allowing us to derive exact closed-form expressions for the dynamics of the number of photons inside the laser cavity, as well as the atomic population inversion.
On embedding a Morse-Smale diffeomorphism on a 3-manifold in a topological flow
Grines, Vyacheslav Z; Gurevich, E Ya; Medvedev, Vladislav S; Pochinka, Olga V
2012-12-31
In this paper, for the case of 3-dimensional manifolds, we solve the Palis problem on finding necessary and sufficient conditions for a Morse-Smale cascade to embed in a topological flow. The set of such cascades is open in the space of all diffeomorphisms, while the set of arbitrary diffeomorphisms that embed in a smooth flow is nowhere dense. Also, we consider a class of diffeomorphisms that embed in a topological flow and prove that a complete topological invariant for this class is similar to the Andronova-Maier scheme and the Peixoto graph. Bibliography: 26 titles.
Electronic dynamics under effect of a nonlinear Morse interaction and a static electric field
NASA Astrophysics Data System (ADS)
Ranciaro Neto, A.; de Moura, F. A. B. F.
2016-11-01
Considering non-interacting electrons in a one-dimensional alloy in which atoms are coupled by a Morse potential, we study the system dynamics in the presence of a static electric field. Calculations are performed assuming a quantum mechanical treatment for the electronic transport and a classical Hamiltonian model for the lattice vibrations. We report numerical evidence of the existence of a soliton-electron pair, even when the electric field is turned on, and we describe how the existence of such a phase depends on the magnitude of the electric field and the electron-phonon interaction.
Inner Structure of Gauss-Bonnet-Chern Theorem and the Morse Theory
NASA Astrophysics Data System (ADS)
Duan, Yi-Shi; Zhang, Peng-Ming
We define a new one-form H_A based on the second fundamental tensor H_{ab}^{A}; the Gauss-Bonnet-Chern form can be expressed in a novel way through this one-form. Using the φ-mapping theory, we find that the Gauss-Bonnet-Chern density can be expressed in terms of the δ-function δ(φ), and the relationship between the Gauss-Bonnet-Chern theorem and the Hopf-Poincaré theorem follows straightforwardly. The topological current of the Gauss-Bonnet-Chern theorem and its topological structure are discussed in detail. Finally, the Morse theory formula for the Euler characteristic is generalized.
Alemgadmi, Khaled I. K. Suparmi; Cari; Deta, U. A.
2015-09-30
The approximate analytical solution of the Schrodinger equation for the q-deformed Rosen-Morse potential was investigated using the supersymmetric quantum mechanics (SUSY QM) method. The approximate bound-state energy is given in closed form, and the corresponding approximate wave function for an arbitrary l-state is given for the ground state. The first excited state is obtained by applying the raising operator to the ground-state wave function. The special case of the ground state is given for various values of q. The presence of the Rosen-Morse potential reduces the energy spectra of the system: the larger the value of q, the smaller the energy spectrum.
NASA Astrophysics Data System (ADS)
Xie, Xiang-Jun; Jia, Chun-Sheng
2015-03-01
We solve the Klein-Gordon equation with the Morse potential energy model to obtain the relativistic bound state energy equation in D spatial dimensions. We find that the inter-dimensional degeneracy symmetry exists for the molecular system represented by the Morse potential model. For a fixed vibrational quantum number and various rotational quantum numbers, the relativistic energies for the X^1Σ^+ state of the ScI molecule diverge as D increases. We observe that the behavior of the relativistic vibrational energies in higher dimensions remains similar to that of the three-dimensional system.
Ganapol, Barry; Maldonado, Ivan
2014-01-23
The generation of multigroup cross sections lies at the heart of very high temperature reactor (VHTR) core design, whether of the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved, and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will provide a detailed outline of the entire processing procedure for applying CENTRM in a final report, complete with a demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: thoroughly test the panel algorithm for neutron slowing down; develop the panel algorithm for multi-materials; establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; verify CENTRM in 1D plane geometry; create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing the effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.
Parameterizing the Morse potential for coarse-grained modeling of blood plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-15
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such coarse grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum based models fail to handle adequately.
Modeling of diatomic molecule using the Morse potential and the Verlet algorithm
NASA Astrophysics Data System (ADS)
Fidiani, Elok
2016-03-01
Molecular modeling is usually performed with dedicated Molecular Dynamics (MD) software such as GROMACS, NAMD or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for a simple model of some diatomic molecules: HCl, H2 and O2. MATLAB is matrix-based numerical software, so all the functions and equations describing the properties of the atoms and molecules had to be implemented manually. A Morse potential was used to describe the bond interaction between the two atoms. To analyze the motion of the molecules, the Verlet algorithm, derived from Newton's equations of motion (classical mechanics), was applied. Both the Morse potential and the Verlet algorithm were implemented in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in the form of a matrix; to visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale, and it can be very helpful for illustrating basic principles of molecular interaction for educational purposes.
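The workflow the abstract describes can be sketched in a few lines. The original used MATLAB; the sketch below uses Python, with reduced units and illustrative parameters (not fitted to HCl, H2 or O2), integrating the relative coordinate of a diatomic in a Morse potential with the velocity Verlet algorithm.

```python
# Minimal sketch: bond length of a diatomic in a Morse potential,
# integrated with velocity Verlet. Reduced units, illustrative values.
import math

D, a, r0 = 1.0, 1.0, 1.0     # well depth, width, equilibrium distance
m = 1.0                      # reduced mass
dt, steps = 0.01, 1000

def force(r):
    """F = -dV/dr for V(r) = D*(1 - exp(-a*(r - r0)))**2."""
    e = math.exp(-a * (r - r0))
    return -2.0 * D * a * e * (1.0 - e)

r, v = 1.2, 0.0              # start slightly stretched, at rest
f = force(r)
for _ in range(steps):       # velocity Verlet step
    r += v * dt + 0.5 * (f / m) * dt * dt
    f_new = force(r)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new

# total energy should be conserved to O(dt^2) by the integrator
energy = 0.5 * m * v * v + D * (1.0 - math.exp(-a * (r - r0))) ** 2
```

Because the integrator is symplectic, `energy` stays close to the initial potential energy V(1.2) over the whole run, and the bond length oscillates between its two classical turning points, which is the behavior one would inspect in VMD.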
Inci, I.; Boztosun, I.; Bonatsos, D.
2008-11-11
Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the Asymptotic Iteration Method (AIM). The obtained energy eigenvalue equations have been used to get the experimental excitation energy spectrum of Xe and Yb isotopes. The results are in good agreement with experimental data.
Strain analysis of different diameter Morse taper implants under overloading compressive conditions.
Castro, Carolina Guimarães; Zancopé, Karla; Veríssimo, Crisnicaw; Soares, Carlos José; Neves, Flávio Domingues das
2015-01-01
The aim of this study was to evaluate the amount of deformation from compression caused by different diameters of Morse taper implants and the residual deformation after load removal. Thirty Morse taper implants lacking external threads were divided into 3 groups (n = 10) according to diameter: 3.5 mm, 4.0 mm and 5.0 mm. Two-piece abutments were fixed into the implants, and the samples were subjected to compressive axial loading up to 1500 N of force. During the test, one strain gauge remained fixed to the cervical portion of each implant to measure the strain variation. The strain values were recorded at two time points: at the maximum load (1500 N) and 60 seconds after load removal. To calculate the strain at the implant/abutment interface, a mathematical formula was applied. Data were analyzed using one-way ANOVA and Tukey's test (α = 0.05). The 5.0 mm diameter implant showed a significantly lower strain (650.5 μS ± 170.0) than the 4.0 mm group (1170.2 μS ± 374.7) and the 3.5 mm group (1388.1 μS ± 326.6) (p < 0.001), regardless of the presence of load. The strain values decreased by approximately 50% after removal of the load, regardless of the implant diameter. The 5.0 mm implant showed a significantly lower strain at the implant/abutment interface (943.4 μS ± 504.5) than the 4.0 mm group (1057.4 μS ± 681.3) and the 3.5 mm group (1159.6 μS ± 425.9) (p < 0.001). According to these results, the diameter influenced the strain around the internal and external walls of the cervical region of Morse taper implants; all diameters demonstrated clinically acceptable strain values.
NASA Astrophysics Data System (ADS)
Melsa, J. L.; Mills, J. D.; Arora, A. A.
1983-06-01
This report describes the results of a fifteen-month study of the real-time implementation of an algorithm combining time-domain harmonic scaling and Adaptive Residual Coding at a transmission bit rate of 16 kb/s. The modifications of this encoding algorithm as originally presented by Melsa and Pande to allow real-time implementation are described in detail. A non-real-time FORTRAN simulation using a sixteen-bit word length was developed and tested to establish feasibility. The hardware implementation of a full-duplex, real-time system has demonstrated that this algorithm is capable of producing toll-quality speech digitization. This report has been divided into two volumes. The second volume discusses details of the hardware implementation, schematics for the system, and operating instructions.
NASA Astrophysics Data System (ADS)
Melsa, J. L.; Mills, J. D.; Arora, A. A.
1983-06-01
This report describes the results of a fifteen-month study of the real-time implementation of an algorithm combining time-domain harmonic scaling and Adaptive Residual Coding at a transmission bit rate of 16 kb/s. The modifications of this encoding algorithm as originally presented by Melsa and Pande to allow real-time implementation are described in detail. A non-real-time FORTRAN simulation using a sixteen-bit word length was developed and tested to establish feasibility. The hardware implementation of a full-duplex, real-time system has demonstrated that this algorithm is capable of producing toll-quality speech digitization. This report has been divided into two volumes. The first volume discusses the algorithm modifications and FORTRAN simulation. The details of the hardware implementation, schematics for the system, and operating instructions are included in Volume 2 of this final report.
Morse-type tapers: factors that may influence taper strength during total hip arthroplasty.
Pennock, Andrew T; Schmidt, Andrew H; Bourgeault, Craig A
2002-09-01
We studied the effect of varying impaction force, repeated impactions, and fluid contamination on the disassembly strength of Morse-type tapers in 4 commercially available, modular femoral total hip components. The effect of varying techniques of taper assembly on the distraction force was studied. Our results show a reproducible and linear relationship between the taper impaction force and the disassembly force. The force necessary to separate the taper for a given impaction force varied, however, among manufacturers. Repeated impactions added little strength, and we found that when multiple impactions of varying force are used, the strength is roughly equivalent to the expected strength from the single strongest blow. Fluid contamination at the taper interface had unpredictable effects on taper strength.
On the exact solubility in momentum space of the trigonometric Rosen-Morse potential
NASA Astrophysics Data System (ADS)
Compean, C. B.; Kirchbach, M.
2011-01-01
The Schrödinger equation with the trigonometric Rosen-Morse potential in a flat three-dimensional Euclidean space, E3, and its exact solutions are shown to be exactly Fourier transformable to momentum space, though the resulting equation is purely algebraic and cannot be cast into the canonical form of an integral Lippmann-Schwinger equation. This is because the cotangent function does not allow for an exact Fourier transform in E3. In addition, we recall that the above potential can also be viewed as an angular function of the second polar angle parametrizing the three-dimensional spherical surface, S3, of a constant radius, in which case the cotangent function would allow for an exact integral transform to momentum space. On that basis, we obtain a momentum space Lippmann-Schwinger-type equation, though the corresponding wavefunctions have to be obtained numerically.
Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion
2013-08-01
The most common change facing nurses today is new technology, particularly bar-coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign.
Lillo, Ricardo; Parra, Carlos; Fuentes, Ramón; Borie, Eduardo; Engelke, Wilfried; Beltrán, Víctor
2015-01-01
The aim of this study was to evaluate the compressive resistance under oblique loads of abutments with two different diameters and transmucosal heights used for cement-retained implant-supported prostheses in Morse-taper implants. Forty Morse-taper implants were divided into four groups with different abutment sizes for cement-retained prostheses in order to perform the compressive test. The groups were divided by abutment diameter and transmucosal height as follows: Group 1: 4.5 x 2.5 mm; Group 2: 4.5 x 3.5 mm; Group 3: 3.3 x 2.5 mm; and Group 4: 3.3 x 3.5 mm. An oblique compressive loading test was performed on each sample placed on a platform at 30° using a universal testing machine with a load cell of 1,000 kgf at a speed of 0.5 mm/min until deformation of the abutment's neck was achieved. The compressive resistance and the mechanical behavior were recorded for each group, and the data were analyzed using ANOVA and the Shapiro-Wilk and Scheffé tests. In addition, the detailed damage of all samples was recorded with a conventional camera linked to the endoscopic equipment. Significant differences were observed among the groups, except between Groups 2 and 3 (p > 0.05). All the abutments showed permanent deformations in the upper region and at the transmucosal portion, but the threads of the screws were intact. Fractures were only identified in Groups 3 and 4. Stronger mechanical behavior and compressive resistance were observed in the abutments with 4.5 mm diameter and 2.5 mm transmucosal height.
ERIC Educational Resources Information Center
Pattavina, Paul
1980-01-01
Excerpts from an interview with William C. Morse on teacher burnout concern special educators' sense of failure and impotence, the issues connected with individualized educational programs, and the importance of the first year of teaching. (CL)
Suparmi, A.; Cari, C.; Angraini, L. M.
2014-09-30
The bound-state solutions of the Dirac equation for the Hulthen and trigonometric Rosen-Morse non-central potentials are obtained using finite Romanovski polynomials. The approximate relativistic energy spectrum and the radial wave functions, which are given in terms of Romanovski polynomials, are obtained from the solution of the radial Dirac equation. The angular wave functions and the orbital quantum number are found from the solution of the angular Dirac equation. In the non-relativistic limit, the relativistic energy spectrum reduces to the non-relativistic energy.
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
Chan, A D; Lovely, D F; Hudgins, B
1998-03-01
Muscle activity produces an electrical signal termed the myo-electric signal (MES). The MES is a useful clinical tool, used in diagnostics and rehabilitation. This signal is typically stored in 2 bytes as 12-bit data, sampled at 3 kHz, resulting in a storage requirement of 6 kbytes per second. Processing MES data thus involves substantial bit manipulation and imposes heavy memory storage requirements. Adaptive differential pulse code modulation (ADPCM) is a popular and successful compression technique for speech. Its application to MES would reduce 12-bit data to a 4-bit representation, providing a 3:1 compression. As, in most practical applications, memory is organised in bytes, the realisable compression is 4:1, as pairs of data can be stored in a single byte. The performance of the ADPCM compression technique, using a real-time system at 1 kHz, 2 kHz and 4 kHz sampling rates, is evaluated. The data used include MES from both isometric and dynamic contractions. The percent residual difference (PRD) between an unprocessed and processed MES is used as a performance measure. Errors in computed parameters, such as median frequency and variance, which are used in clinical diagnostics, and waveform features employed in prosthetic control are also used to evaluate the system. The results of the study demonstrate that the ADPCM compression technique is an excellent solution for relieving the data storage requirements of MES in both isometric and dynamic situations. PMID:9684462
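As a concrete illustration of the storage arithmetic above, here is a minimal ADPCM-style encoder sketch (our own toy coder, not the paper's exact algorithm): each 12-bit sample is predicted from the previous reconstructed value, the residual is quantized to a signed 4-bit code with an adaptive step size, and two codes are packed per byte, which is where the realisable 4:1 reduction comes from.

```python
# Toy ADPCM-style coder (illustrative sketch only, not the published coder):
# predict each sample from the previous reconstruction, quantize the residual
# to a signed 4-bit code with an adaptive step, and pack two codes per byte.

def adpcm_encode(samples, step=16):
    codes, pred = [], 0
    for s in samples:
        code = max(-8, min(7, round((s - pred) / step)))
        pred += code * step  # decoder-matched reconstruction
        # crude step adaptation: grow on large codes, shrink on small ones
        if abs(code) >= 6:
            step = min(2048, step * 2)
        elif abs(code) <= 1:
            step = max(1, step // 2)
        codes.append(code & 0xF)  # store the signed code as an unsigned nibble
    if len(codes) % 2:  # pad to a whole number of bytes
        codes.append(0)
    return bytes((codes[i] << 4) | codes[i + 1] for i in range(0, len(codes), 2))
```

For the 3 kHz, 12-bit MES stream described in the abstract this turns 6 kbytes/s into 1.5 kbytes/s; a matching decoder must mirror the same prediction and step-adaptation rules to reconstruct the signal.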
Stress on external hexagon and Morse taper implants submitted to immediate loading
Odo, Caroline H.; Pimentel, Marcele J.; Consani, Rafael L.X.; Mesquita, Marcelo F.; Nóbilo, Mauro A.A.
2015-01-01
Background/Aims This study aimed to evaluate the stress distribution around external hexagon (EH) and Morse taper (MT) implants with different prosthetic systems of immediate loading (distal bar (DB), casting technique (CT), and laser welding (LW)) by using the photoelastic method. Methods Three infrastructures were manufactured on a model simulating an edentulous lower jaw. All models were composed of five implants (4.1 mm × 13.0 mm) simulating a conventional lower protocol. The samples were divided into six groups. G1: EH implants with DB and acrylic resin; G2: EH implants with titanium infrastructure CT; G3: EH implants with titanium infrastructure attached using LW; G4: MT implants with DB and acrylic resin; G5: MT implants with titanium infrastructure CT; G6: MT implants with titanium infrastructure attached using LW. After construction of the infrastructures, the photoelastic models were manufactured and a loading of 4.9 N was applied to the cantilever. Five pre-determined points were analyzed by Fringes software. Results Data showed significant differences between the connection types (p < 0.0001), and there was no significant difference among the techniques used for the infrastructure. Conclusion The reduction of the stress levels was more influenced by the MT connection (except for CT). Different bar types submitted to immediate loading did not influence stress concentration. PMID:26605142
Single tooth replacement by Morse taper connection implants: a retrospective study of 80 implants.
Mangano, C; Bartolucci, E G
2001-01-01
The goal of this study was to provide data relative to the use of a new implant system (Mac System, Cabon, Milan, Italy) with a Morse taper implant-abutment connection for single implant restorations. The implant system is composed of an endosseous screw made of commercially pure titanium grade 2, while the abutment is titanium alloy (Ti-6Al-4V). A total of 80 single implants were placed in 69 patients (36 women and 33 men, mean age 42 years, range 16 to 61). All patients gave their informed consent and received a thorough clinical and radiographic examination. Smokers and diabetics were excluded from the study. Three implants were placed in areas of previous tooth impaction, 5 were placed in posttraumatic edentulous areas, 2 were used in situations involving tooth agenesis, and 60 replaced teeth lost because of caries or periodontal disease. All patients were edentulous for at least 1 year prior to treatment. The implants received a definitive prosthesis and had been in function for a mean period of 3.5 years. At second-stage surgery, 2 implants were removed because of lack of osseointegration. After 2 years of loading, 1 implant showed evidence of peri-implantitis and was removed. In addition, 2 fractured abutments and 1 loosened abutment were observed. Few mechanical or infectious complications were seen, and this may have been the result of high stability of the conical connection.
Evaluation of torque maintenance of abutment and cylinder screws with Morse taper implants.
Ferreira, Mayara Barbosa; Delben, Juliana Aparecida; Barão, Valentim Adelino Ricardo; Faverani, Leonardo Perez; Dos Santos, Paulo Henrique; Assunção, Wirley Gonçalves
2012-11-01
The screw loosening of implant-supported prostheses is a common mechanical failure and is related to several factors, such as insertion torque and preload. The aim of this study was to evaluate the torque maintenance of retention screws of tapered abutments and cylinders of Morse taper implants submitted to retightening and detorque measurements. Two groups were obtained (n = 12): group I, tapered abutment connected to the implant with a titanium retention screw; and group II, cylinder with metallic base connected to the tapered abutment with a titanium retention screw. The detorque values were measured by an analog torque gauge 3 minutes after torque insertion. The detorque was measured 10 times for each retention screw of groups I and II, totaling 120 detorque measurements in each group. Data were submitted to ANOVA and the Fisher exact test (P < 0.05). Both groups presented reduced detorque values (P < 0.05) in comparison to the insertion torque in all measurement periods. There was a statistically significant difference (P < 0.05) between the detorque values of the first measurement and the other measurement periods for the abutment screw. However, there was no statistically significant difference (P > 0.05) for the detorque values of all measurement periods for the cylinder screw. In conclusion, the abutment and cylinder screws exhibited torque loss after insertion, which indicates the need for retightening during function of the implant-supported prostheses.
Construction of the Barut–Girardello quasi coherent states for the Morse potential
Popov, Dušan; Dong, Shi-Hai; Pop, Nicolina; Sajfert, Vjekoslav; Şimon, Simona
2013-12-15
The Morse oscillator (MO) potential occupies a privileged place among the anharmonic oscillator potentials due to its applications in quantum mechanics to diatomic or polyatomic molecules, spectroscopy and so on. For this potential some kinds of coherent states (especially of the Klauder–Perelomov and Gazeau–Klauder kinds) have been constructed previously. In this paper we construct the coherent states of the Barut–Girardello kind (BG-CSs) for the MO potential, which have received less attention in the scientific literature. We obtain these CSs and demonstrate that they fulfil all the conditions required of a coherent state. The Mandel parameter for the pure BG-CSs and Husimi’s and P-quasi distribution functions (for the mixed thermal states) are also presented. Finally, we show that all obtained results for the BG-CSs of the MO tend, in the harmonic limit, to the corresponding results for the coherent states of the one-dimensional harmonic oscillator (CSs for the HO-1D). -- Highlights: •We construct the coherent states of the Barut–Girardello kind (BG-CSs) for the MO potential. •They fulfil all the conditions required of a coherent state. •We present the Mandel parameter and Husimi’s and P-quasi distribution functions. •All results tend, in the harmonic limit, to those for the one-dimensional harmonic oscillator.
Quantum state engineering of spin-orbit-coupled ultracold atoms in a Morse potential
NASA Astrophysics Data System (ADS)
Ban, Yue; Chen, Xi; Muga, J. G.; Sherman, E. Ya
2015-02-01
Achieving full control of a Bose-Einstein condensate can have valuable applications in metrology, quantum information processing, and quantum condensed matter physics. We propose protocols to simultaneously control the internal (related to its pseudospin-1/2) and motional (position-related) states of a spin-orbit-coupled Bose-Einstein condensate confined in a Morse potential. In the presence of synthetic spin-orbit coupling, the state transition of a noninteracting condensate can be implemented by Raman coupling and detuning terms designed by invariant-based inverse engineering. The state transfer may also be driven by tuning the direction of the spin-orbit-coupling field and modulating the magnitude of the effective synthetic magnetic field. The results can be generalized for interacting condensates by changing the time-dependent detuning to compensate for the interaction. We find that a two-level algorithm for the inverse engineering remains numerically accurate even if the entire set of possible states is considered. The proposed approach is robust against the laser-field noise and systematic device-dependent errors.
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
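The compactly supported univariate B-splines underlying SP-BMARS can be evaluated with the standard Cox-de Boor recursion; the sketch below (generic knots and degree chosen for illustration, not the paper's adaptive scale-by-scale construction) shows how a bivariate tensor-product basis function is formed from univariate ones.

```python
# Cox-de Boor recursion for compactly supported B-spline basis functions,
# and the tensor product used for bivariate (e.g. spatio-temporal) modeling.
# Knot vectors and degree here are arbitrary illustrative choices.

def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline basis function of degree k at t
    (degree-0 functions use the half-open interval convention)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def tensor_product_basis(i, j, k, x, y, knots_x, knots_y):
    # Bivariate basis as a product of univariate ones, as in regional VTEC models
    return bspline_basis(i, k, x, knots_x) * bspline_basis(j, k, y, knots_y)
```

The model's unknowns are then the coefficients multiplying these basis functions, estimated jointly with the DCB parameters.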
Compressible Astrophysics Simulation Code
Howell, L.; Singer, M.
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Beauvais, Z S; Thompson, K H; Kearfott, K J
2009-07-01
Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y(-1). A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq(-1) y(-1) for soil and 0.00596 microSv m(3) Bq(-1) y(-1) for water (assuming a 1:1 234U:238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq(-1) y(-1) in soil and 13.0 microSv m(3) Bq(-1) y(-1) in water. PMID:19509509
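The dose factors quoted above are intended to be used multiplicatively against activity concentrations; a minimal sketch (the uranium-series factor values are taken from the abstract, while combining the soil and water pathways additively is our illustrative assumption):

```python
# Annual dose estimate from the uranium-series environmental dose factors
# quoted in the abstract. Factor pairs are
# (soil [uSv per Bq/kg per y], water [uSv per Bq/m^3 per y]);
# summing the two pathway contributions is our illustrative assumption.

DOSE_FACTORS_U = {
    "adult_industrial":   (0.00476, 0.00596),
    "infant_residential": (34.8, 13.0),
}

def annual_dose_uSv(scenario, c_soil_Bq_per_kg, c_water_Bq_per_m3):
    f_soil, f_water = DOSE_FACTORS_U[scenario]
    return f_soil * c_soil_Bq_per_kg + f_water * c_water_Bq_per_m3
```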
NASA Astrophysics Data System (ADS)
Pavlou, Andrew Theodore
The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
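The fit-and-evaluate idea described above can be sketched as follows (synthetic data and a low-order polynomial basis chosen purely for illustration; the dissertation's actual regression basis for the double differential thermal scattering distributions is more involved): tabulate a temperature-dependent quantity at a few temperatures once, fit coefficients by linear least squares, then evaluate the fit on the fly at any temperature during the random walk instead of storing data at every temperature.

```python
import numpy as np

# Pre-compute: fit a temperature-dependent quantity once per nuclide.
T_grid = np.array([300.0, 600.0, 900.0, 1200.0])          # temperatures [K]
sigma_grid = 2.0 + 0.001 * T_grid - 1e-7 * T_grid**2      # synthetic tabulated data

coeffs = np.polyfit(T_grid, sigma_grid, deg=2)            # stored fit coefficients

def sigma_at(T):
    """On-the-fly evaluation of the fitted quantity at an arbitrary T [K]."""
    return np.polyval(coeffs, T)
```

Only the fit coefficients need to be stored, which is the source of the memory reduction the abstract describes.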
NASA Technical Reports Server (NTRS)
Rice, R. F.; Lee, J. J.
1986-01-01
Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
Sefcik, Jan; Demiralp, Ersan; Cagin, Tahir; Goddard, William A
2002-12-01
We present the Dynamic Charge Equilibration (DQEq) method for a self-consistent treatment of charge transfer in force field modeling, where atomic charges are designed to reproduce electrostatic potentials calculated quantum mechanically. Force fields coupled with DQEq allow charges to readjust as geometry changes in classical simulations, using appropriate algorithms for periodic boundary conditions. The full electrostatic energy functional is used to derive the corresponding forces and the second derivatives (hessian) for vibrational calculations. Using DQEq electrostatics, we develop a simple nonbond force field for simulation of silica molecular sieves, where nonelectrostatic interactions are described by two-body Morse stretch terms. Energy minimization calculations with the new force field yield accurate unit cell geometries for siliceous zeolites. Relative enthalpies with respect to quartz and third-law entropies calculated from harmonic vibrational analysis agree very well with available calorimetric data: calculated SiO(2) enthalpies relative to alpha-quartz are within 2 kJ/mol and entropies at 298 K are within 3 J/mol K of the respective experimental values. Contributions from the zero point energy and vibrational degrees of freedom were found to be only about 1 kJ/mol for the free energy of mutual transformations between microporous silica polymorphs. The approach presented here can be applied to interfaces and other oxides as well and it is suitable for development of force fields for accurate modeling of geometry and energetics of microporous and mesoporous materials, while providing a realistic description of electrostatic fields near surfaces and inside pores of adsorbents and catalysts.
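For reference, the two-body Morse stretch term named above has the standard form E(r) = D(1 - exp(-a(r - r0)))^2; a minimal implementation follows (parameter values in the test are placeholders, not the fitted silica parameters, which the abstract does not give):

```python
import math

# Two-body Morse stretch term: E(r) = D * (1 - exp(-a * (r - r0)))**2
# with well depth D, width parameter a, and equilibrium separation r0.

def morse_energy(r, D, a, r0):
    x = 1.0 - math.exp(-a * (r - r0))
    return D * x * x

def morse_force(r, D, a, r0):
    # Radial force -dE/dr on the pair
    e = math.exp(-a * (r - r0))
    return -2.0 * D * a * e * (1.0 - e)
```

The energy is zero and the force vanishes at r = r0, and the energy tends to the dissociation limit D as the pair separates.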
Scattering States of l-Wave Schrödinger Equation with Modified Rosen-Morse Potential
NASA Astrophysics Data System (ADS)
Chen, Wen-Li; Shi, Yan-Wei; Wei, Gao-Feng
2016-08-01
Within a Pekeris-type approximation to the centrifugal term, we examine the approximately analytical scattering state solutions of the l-wave Schrödinger equation with the modified Rosen-Morse potential. The calculation formula of phase shifts is derived, and the corresponding bound state energy levels are also obtained from the poles of the scattering amplitude. Supported by the National Natural Science Foundation of China under Grant No. 11405128, and the Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 15JK2093.
NASA Astrophysics Data System (ADS)
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree, making it well suited to parallel computing. We show that the screening for far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case (e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees
Bremer, P; Pascucci, V; Hamann, B
2008-12-08
We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported. In particular, the resulting hierarchy is guaranteed to be of logarithmic height.
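Real Morse-Smale cancellation operates on two-manifold domains; as a hedged one-dimensional toy, the sketch below only illustrates the core idea of a cancellation sequence, i.e., repeatedly removing the extremum pair of lowest persistence to coarsen the topology:

```python
def extrema(f):
    """Indices of the boundary points and strict interior extrema."""
    ext = [0]
    for i in range(1, len(f) - 1):
        if (f[i] - f[i - 1]) * (f[i + 1] - f[i]) < 0:
            ext.append(i)
    ext.append(len(f) - 1)
    return ext

def cancel(f, eps):
    """Repeatedly remove an adjacent extremum pair whose height
    difference (persistence) is below eps, coarsening the topology."""
    ext = extrema(f)
    changed = True
    while changed:
        changed = False
        for i in range(len(ext) - 1):
            if abs(f[ext[i + 1]] - f[ext[i]]) < eps:
                del ext[i:i + 2]       # cancel one min/max pair
                changed = True
                break
    return ext

f = [0.0, 2.0, 1.9, 4.0, 0.0]          # one spurious dip at index 2
coarse = cancel(f, 0.5)                # the (2, 1.9) wiggle is removed
```

A cancellation tree, in contrast to this flat loop, records the dependencies between such cancellations so that any consistent subset can be applied adaptively.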
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Application of three-dimensional transport code to the analysis of the neutron streaming experiment
Chatani, K.; Slater, C.O.
1990-01-01
This paper summarizes the calculational results of neutron streaming through a Clinch River Breeder Reactor (CRBR) prototype coolant pipe chaseway. Particular emphasis is placed on results at bends in the chaseway. Calculations were performed with three three-dimensional codes: the discrete ordinates radiation transport code TORT and the Monte Carlo radiation transport code MORSE, which were developed by Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, which was developed in Japan. The purpose of the calculations is not only to compare the calculational results with the experimental results, but also to compare the results of TORT and MORSE with those of ENSEMBLE. In the TORT calculations, two types of difference methods, the weighted-difference and nodal methods, were used; only the weighted-difference method was applied in the ENSEMBLE calculation. Both TORT and ENSEMBLE produced nearly the same calculational results, but differed in the number of iterations required to converge each neutron group. Also, the two types of difference methods in the TORT calculations showed no appreciable variance in the number of iterations required. However, a noticeable disparity in the computer times and some variation in the calculational results did occur. Comparison of the calculational results with the experimental results showed generally good agreement for the epithermal neutron flux in the first and second legs and at the first bend, where two-dimensional modeling might be difficult. Results were fair to poor along the centerline of the first leg near the opening to the second leg because of discrete ordinates ray effects. Additionally, the agreement was good throughout the first and second legs for the thermal neutron region. Calculations with MORSE were also made; those calculational results and comparisons are described as well. 8 refs., 4 figs.
Onishi, Yasuo
2013-03-29
Four JAEA researchers visited PNNL for two weeks in February 2013 to learn the PNNL-developed, unsteady, one-dimensional river model, TODAM, and the PNNL-developed, time-dependent, three-dimensional coastal water model, FLESCOT. These codes predict sediment and contaminant concentrations by accounting for sediment-radionuclide interactions, e.g., adsorption/desorption and transport-deposition-resuspension of sediment-sorbed radionuclides. The objective of the river and coastal water modeling is to simulate • 134Cs and 137Cs migration in Fukushima rivers and the coastal water, and • their accumulation in the river and ocean bed along the Fukushima coast. Forecasting the future cesium behavior in the river and coastal water under various scenarios would enable JAEA to assess the effectiveness of various on-land remediation activities and, if required, possible river and coastal water clean-up operations to reduce the contamination of the river and coastal water, agricultural products, fish, and other aquatic biota. During the JAEA visit, PNNL presented • TODAM and FLESCOT theories and mathematical formulations, • TODAM and FLESCOT model structures, • past TODAM and FLESCOT applications, • demonstrations of the two codes' capabilities through application to simple hypothetical river and coastal water cases, and • the initial application of TODAM to the Ukedo River in Fukushima, with JAEA researchers' participation in its modeling. PNNL also presented topics relevant to the Fukushima environmental assessment and remediation, including • PNNL molecular modeling and EMSL computer facilities, • cesium adsorption/desorption characteristics, • experience in connecting molecular science research results to macro-scale model applications to the environment, • an EMSL tour, and • a Hanford Site road tour. PNNL and JAEA also developed a future course of action for joint research projects on the Fukushima environmental and remediation assessments.
Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.
2015-01-01
To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex, respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state of the art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974
Shareef, N; Levine, D
1996-03-01
This study examines the effect of manufacturing tolerances on the micromotion at the Morse taper interface in modular hip implants. The finite element technique was used as the tool of analysis. Special emphasis was placed on the transient dynamic conditions under which a prosthesis works inside the human body. To approximate the repetitive forces acting on a hip implant during the human walking cycle, a time-variant sinusoidal load was applied on the head of the taper. The locking of the Morse taper joint by the surgeon in the operating room at the time of implantation was simulated by specifying an axial displacement of the female taper component as an initial condition.
NASA Astrophysics Data System (ADS)
Westre, S. G.; Liu, X.; Getty, J. D.; Kelly, P. B.
1991-12-01
The local mode-coupled Morse oscillator model was utilized to determine the quadratic, cubic, and quartic force constants for the vibrational stretching potential energy functions of CH3, CD3, CH2D, and CHD2 using stretching fundamentals and overtones derived from resonance Raman studies. The Morse harmonic frequency and anharmonic constant of the methyl radical indicate that bonding in the methyl radical and in a variety of ethylenic molecules is primarily a function of the sp² hybridization of the central atom and that the bonding is not extensively influenced by the methyl radical's unpaired electron or the pi bonding in the ethylenic molecules. The vibrational states of the methyl radical are best described by wave functions containing significant amounts of normal mode character. The stretching frequencies for the tritiated methyl radical isotopomers are calculated.
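The relation between a Morse oscillator's harmonic frequency, anharmonic constant, and its level structure is standard; a minimal sketch, using illustrative (hypothetical) constants rather than the paper's fitted values:

```python
def morse_level(v, we, wexe):
    """Morse vibrational term value G(v) = we (v + 1/2) - wexe (v + 1/2)^2."""
    x = v + 0.5
    return we * x - wexe * x * x

def well_depth(we, wexe):
    """Morse well depth De = we^2 / (4 wexe)."""
    return we ** 2 / (4.0 * wexe)

# illustrative (hypothetical) constants for a C-H-like stretch, in cm^-1
we, wexe = 3100.0, 60.0
fundamental = morse_level(1, we, wexe) - morse_level(0, we, wexe)
# the fundamental is we - 2*wexe; overtone spacings shrink with v
```

Fitting `we` and `wexe` to measured fundamentals and overtones, as the study does per isotopomer, is a two-parameter least-squares problem on these term values.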
Xantheas, Sotiris S.; Werhahn, Jasper C.
2014-08-14
Based on the formulation of the analytical expression of the potential V(r) describing intermolecular interactions in terms of the dimensionless variables r* = r/rm and V* = V/ε, where rm is the separation at the minimum and ε the well depth, we propose more generalized scalable forms for the commonly used Lennard-Jones, Mie, Morse, and Buckingham exponential-6 potential energy functions (PEFs). These new generalized forms have an additional parameter and revert to the original ones for a particular choice of that parameter. In this respect, the original forms can be considered as special cases of the more general forms introduced here. We also propose a scalable 4-parameter extended Morse potential that is not revertible to the original form.
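The paper's exact generalized forms are not reproduced here; as a hedged sketch, the reduced Morse potential below shows the scaling idea, with the width parameter `a` standing in for the kind of additional shape parameter the generalized forms introduce:

```python
import numpy as np

def morse_reduced(rstar, a=6.0):
    """Dimensionless Morse potential V* = V/eps with its minimum
    V* = -1 at r* = r/rm = 1; 'a' (an extra shape parameter in the
    spirit of the generalized forms) controls the well width."""
    x = np.exp(-a * (rstar - 1.0))
    return x * x - 2.0 * x

r = np.linspace(0.8, 3.0, 2201)        # grid includes r* = 1 exactly
v = morse_reduced(r)
i = int(np.argmin(v))
# by construction the minimum sits at r* = 1 with depth -1
```

In reduced variables every member of such a family shares the same minimum location and depth, so only shape parameters like `a` distinguish them.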
Bressan, Eriberto; Lops, Diego; Tomasi, Cristiano; Ricci, Sara; Stocchero, Michele; Carniel, Emanuele Luigi
2014-07-01
Nowadays, dental implantology is a reliable technique for the treatment of partially and completely edentulous patients. The achievement of stable dentition is ensured by implant-supported fixed dental prostheses. A Morse taper conometric system may provide fixed retention between implants and dental prostheses. The aim of this study was to investigate the retentive performance and mechanical strength of a Morse taper conometric system used for implant-supported fixed dental prosthesis retention. Experimental and finite element investigations were performed. Experimental tests were carried out on a specific abutment-coping system, accounting for both cemented and non-cemented situations. The results from the experimental activities were processed to identify the mechanical behavior of the coping-abutment interface. Finally, the achieved information was applied to develop reliable finite element models of different abutment-coping systems. The analyses accounted for different geometrical conformations of the abutment-coping system, such as different taper angles. The results showed that the activation process, achieved through a suitable insertion force, could provide retentive performance equal to that of a cemented system without compromising the mechanical functionality of the system. These findings suggest that a Morse taper conometric system can provide a fixed connection between implants and dental prostheses if a proper insertion force is applied. The activation process does not compromise the mechanical functionality of the system.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. It requires 3 to 4 million multiplications and additions per second. It combines the advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
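The NASA coder itself is far more elaborate; as a hedged illustration of the predictive half of the idea only, the sketch below uses a simple LMS-adapted linear predictor (not the paper's algorithm) to show why coding the residual is cheaper than coding the signal:

```python
import math

def lms_predict(x, mu=0.05, order=2):
    """Adaptive linear prediction with the LMS rule; returns the
    residual sequence that a predictive coder would quantize."""
    w = [0.0] * order
    resid = []
    for n in range(len(x)):
        past = [x[n - k - 1] if n - k - 1 >= 0 else 0.0
                for k in range(order)]
        pred = sum(wk * pk for wk, pk in zip(w, past))
        e = x[n] - pred
        resid.append(e)
        w = [wk + mu * e * pk for wk, pk in zip(w, past)]
    return resid

x = [math.sin(0.2 * n) for n in range(500)]
resid = lms_predict(x)
# after adaptation the residual energy is a small fraction of the
# signal energy, which is what makes predictive coding pay off
```

Vector techniques quantize blocks of such residuals jointly, which is where the quality gains over scalar quantization come from.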
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Also, response data can be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur on conclusions so that a plan to use the proposed uplink system can be carried out; (4) identify the need for the development of appropriate technology and its infusion in the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).
Mangano, Francesco Guido; Zecca, Piero; Luongo, Fabrizia; Iezzi, Giovanna; Mangano, Carlo
2014-01-01
The aim of this study was to achieve aesthetically pleasing soft tissue contours in a severely compromised tooth in the anterior region of the maxilla. For a right-maxillary central incisor with localized advanced chronic periodontitis a tooth extraction followed by reconstructive procedures and delayed implant placement was proposed and accepted by the patient. Guided bone regeneration (GBR) technique was employed, with a biphasic calcium-phosphate (BCP) block graft placed in the extraction socket in conjunction with granules of the same material and a resorbable barrier membrane. After 6 months of healing, an implant was installed. The acrylic provisional restoration remained in situ for 3 months and then was substituted with the definitive crown. This ridge reconstruction technique enabled preserving both hard and soft tissues and counteracting vertical and horizontal bone resorption after tooth extraction and allowed for an ideal three-dimensional implant placement. Localized severe alveolar bone resorption of the anterior maxilla associated with chronic periodontal disease can be successfully treated by means of ridge reconstruction with GBR and delayed implant insertion; the placement of an early-loaded, Morse taper connection implant in the grafted site was effective to create an excellent clinical aesthetic result and to maintain it along time.
Menani, Luiz Ricardo; Tiossi, Rodrigo; de Torres, Érica Miranda; Ribeiro, Ricardo Faria; de Almeida, Rossana Pereira
2011-03-01
There is no consensus in the literature regarding the best plan for prosthetic rehabilitation with multiple adjacent partial implants to minimize the stress generated at the bone-implant interface. The aim of this study was to evaluate the biomechanical behavior of cemented fixed partial dentures, splinted and nonsplinted, on Morse taper implants and with different types of coating material (ceramic and resin), using photoelastic stress analysis. A photoelastic model of an interposed edentulous space, missing a second premolar and a first molar, and rehabilitated with 4 different types of cemented crowns supported by 2 adjacent implants was used. Groups were as follows: UC, splinted ceramic crowns; IC, nonsplinted ceramic crowns; UR, splinted resin crowns; and IR, nonsplinted resin crowns. Different vertical static loading conditions were applied: balanced occlusal load, 10 kgf; simultaneous punctiform load on the implanted premolar and molar, 10 kgf; and alternate punctiform load on the implanted premolar and molar, 5 kgf. Changes in stress distribution were analyzed in a polariscope, and digital photographs were taken of each condition to allow comparison of stress pattern distribution around the implants. Cementation of the fixed partial dentures generated stresses between implants. Splinted restorations distributed the stresses between the implants more evenly than nonsplinted ones when force was applied. Ceramic restorations presented better stress distribution than resin restorations. Based on the results obtained, it was concluded that splinted ceramic restorations promote better stress distribution around osseointegrated implants than nonsplinted crowns; metal-ceramic restorations present lower stress concentration and magnitude than metal-plastic restorations.
Constantoudis, Vassilios; Nicolaides, Cleanthes A
2005-02-22
The dissociation dynamics of a dichromatically laser-driven diatomic Morse molecule vibrating in the ground state is investigated by applying tools of the nonlinear theory of classical Hamiltonian systems. Emphasis is placed on the role of the relative phase of the two fields, phi. First, it is found that, just like in quantum mechanics, there is dependence of the dissociation probability on phi. Then, it is demonstrated that addition of the second laser leads to suppression of probability (stabilization), when the intensity of the first laser is kept constant just above or below the single laser dissociation threshold. This "chemical bond hardening" diminishes as phi increases. These effects are investigated and interpreted in terms of modifications in phase space topology. Variations of phi as well as of the intensity of the second laser may cause (i) appearance/disappearance of the stability island corresponding to the common resonance with the lowest energy and (ii) deformation and movement of the region of Kolmogorov-Arnold-Moser tori that survive from the undriven system. The latter is the main origin in phase space of stabilization and phi dependence. Finally, it is shown that the use of short laser pulses enhances both effects.
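The paper's phase-space analysis is not reproduced here; a minimal sketch, with arbitrary (non-physical) parameters, of the classical system it studies: a Morse oscillator under a dichromatic drive E1 cos(ωt) + E2 cos(2ωt + φ), integrated with a leapfrog scheme. The undriven case serves as a sanity check that energy is conserved:

```python
import math

def morse_force(q, De=0.2, a=1.0):
    """F = -dV/dq for the Morse potential V(q) = De (1 - exp(-a q))^2."""
    e = math.exp(-a * q)
    return -2.0 * De * a * e * (1.0 - e)

def energy(q, p, De=0.2, a=1.0, m=1.0):
    return p * p / (2.0 * m) + De * (1.0 - math.exp(-a * q)) ** 2

def evolve(q, p, t_end, dt=0.01, E1=0.0, E2=0.0, w=0.9, phi=0.0, m=1.0):
    """Leapfrog integration under a dichromatic drive
    E1 cos(w t) + E2 cos(2 w t + phi)."""
    t = 0.0
    while t < t_end:
        f = morse_force(q) + E1 * math.cos(w * t) + E2 * math.cos(2.0 * w * t + phi)
        p += 0.5 * dt * f
        q += dt * p / m
        t += dt
        f = morse_force(q) + E1 * math.cos(w * t) + E2 * math.cos(2.0 * w * t + phi)
        p += 0.5 * dt * f
    return q, p

e0 = energy(0.1, 0.0)
q, p = evolve(0.1, 0.0, 50.0)          # undriven check: energy conserved
```

Scanning `phi` with nonzero `E1`, `E2` and counting trajectories whose energy exceeds De is the classical dissociation-probability diagnostic the abstract describes.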
Chatani, K.
1992-08-01
This report summarizes the calculational results from analyses of a Clinch River Breeder Reactor (CRBR) prototypic coolant pipe chaseway neutron streaming experiment. Comparisons of calculated and measured results are presented, with major emphasis placed on results at bends in the chaseway. Calculations were performed with three three-dimensional radiation transport codes: the discrete ordinates code TORT and the Monte Carlo code MORSE, both developed by the Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, developed in Japan. The calculated results from the three codes are compared (1) with previously calculated DOT3.5 two-dimensional results, (2) among themselves, and (3) with measured results. Calculations with TORT used both the weighted-difference and nodal methods. Only the weighted-difference method was used in ENSEMBLE. When the calculated results were compared to measured results, it was found that calculation-to-experiment (C/E) ratios were good in the regions of the chaseway where two-dimensional modeling might be difficult and where there were no significant discrete ordinates ray effects. Excellent agreement was observed for responses dominated by thermal neutron contributions. MORSE-calculated results and comparisons are described also, and detailed results are presented in an appendix.
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
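The MAP (Ashikhmin-Lytsin) decoding of the local codes is not shown here; as a hedged sketch of what a Hamming component code contributes, plain syndrome decoding of the (7,4) Hamming code corrects any single bit error:

```python
import numpy as np

# parity-check matrix of the (7,4) Hamming code: column i holds the
# 3-bit binary representation of i+1, so a 1-bit error's syndrome
# spells out the (1-based) position of the flipped bit
H = np.array([[int(b) for b in f"{i + 1:03b}"] for i in range(7)]).T

def decode(r):
    """Syndrome decoding: correct at most one flipped bit."""
    s = H @ r % 2
    pos = int("".join(str(int(b)) for b in s), 2)
    out = r.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

c = np.zeros(7, dtype=int)     # the all-zero word is a codeword
r = c.copy()
r[4] ^= 1                      # channel flips bit 5
corrected = decode(r)          # syndrome 101 -> position 5, fixed
```

In a GLDPC code, each check node runs a soft-decision version of such a local decoder instead of a single parity check, which is the source of the extra coding gain.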
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
High Order Modulation Protograph Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
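Only the circulant (second) lifting stage is sketched below, on a hypothetical toy base graph rather than any protograph from the patent; each nonzero protograph entry becomes a cyclically shifted identity block:

```python
import numpy as np

def circulant(N, shift):
    """The N x N identity with columns cyclically shifted."""
    return np.roll(np.eye(N, dtype=int), shift, axis=1)

def lift(proto, shifts, N):
    """Replace each nonzero protograph entry with a shifted circulant
    block (zeros become zero blocks), expanding the graph N times."""
    m, n = proto.shape
    H = np.zeros((m * N, n * N), dtype=int)
    for i in range(m):
        for j in range(n):
            if proto[i, j]:
                H[i * N:(i + 1) * N, j * N:(j + 1) * N] = circulant(N, shifts[i][j])
    return H

proto = np.array([[1, 1, 1],
                  [1, 1, 0]])            # hypothetical toy base graph
H = lift(proto, [[0, 1, 2], [3, 4, 0]], N=5)
# row and column weights of the base graph carry over to H
```

The two-stage approach in the abstract first applies a small random-style lifting to break short cycles, then this circulant lifting to reach the target codeword length while keeping the encoder hardware-friendly.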
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
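CTH's actual block management is considerably more involved; as a hedged one-dimensional sketch, the 2:1 constraint just means adjacent blocks may differ by at most one refinement level, which can be enforced by propagating levels outward:

```python
def enforce_2to1(levels):
    """Raise neighboring blocks until adjacent refinement levels
    differ by at most one (the isotropic 2:1 balance condition)."""
    levels = list(levels)
    changed = True
    while changed:
        changed = False
        for i in range(len(levels) - 1):
            d = levels[i] - levels[i + 1]
            if d > 1:
                levels[i + 1] = levels[i] - 1
                changed = True
            elif d < -1:
                levels[i] = levels[i + 1] - 1
                changed = True
    return levels

balanced = enforce_2to1([0, 0, 3, 0, 0])   # one deeply refined block
# the refinement "ramps" down one level per neighboring block
```

This balance condition is what keeps inter-block interpolation stencils simple and the memory savings predictable.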
Alves, Deceles Cristina Costa; Carvalho, Paulo Sérgio Perri de; Martinez, Elizabeth Ferreira
2014-01-01
The objective of this study was to evaluate the bacterial seal at the implant-abutment interface in two Morse taper implant models, by means of an in vitro microbiological analysis. Fifteen implants with mini-abutments tightened by friction, without screws, were used (Group 1), along with 30 implants with screw-tightened abutments, of which 15 received 20 N.cm of closing torque (Group 2) and the other 15 received 30 N.cm (Group 3). Microbiological analysis was carried out using colonies of Escherichia coli transported directly from a culture dish to the prosthetic component. Friction implants (Group 1) were activated by tapping, and a torque wrench was used for screw-tightened implants (Groups 2 and 3). Each abutment/implant set was immersed in a test tube containing 5 mL of brain-heart infusion broth, incubated at 37 °C for 14 days, and observed daily for the presence of contamination. A statistically significant difference was observed in the number of contaminated implants. There was greater contamination in Group 2 implants (p<0.05), with no statistically significant difference between the other groups (Group 1 = 20% and Group 3 = 0%). It was concluded that there was no significant difference in in vitro bacterial sealing between implants with mini-abutments tightened by friction without screws and implants with screw-tightened abutments with 30 N.cm of closing torque. The difference in closing torque altered the in vitro sealing ability of the tested abutments, with greater contamination for components that received a closing torque of 20 N.cm.
Mangano, Francesco G; Mangano, Carlo; Ricci, Massimiliano; Sammons, Rachel L; Shibli, Jamil A; Piattelli, Adriano
2013-04-01
The aim of this study was to compare the esthetic outcome of single implants placed in fresh extraction sockets with those placed in fully healed sites of the anterior maxilla. This retrospective study was based on data from patients treated with single-tooth Morse taper connection implants placed in fresh extraction sockets and in fully healed sites of the anterior maxilla. Only single implant treatments were considered with both neighboring teeth present. Additional prerequisites for immediate implant treatment were intact socket walls and a thick gingival biotype. The esthetic outcome was objectively rated using the pink esthetic/white esthetic score (PES/WES). The Mann-Whitney U test was used to compare the PES and the WES between the 2 groups. Twenty-two patients received an immediate implant, and 18 patients had conventional implant surgery. The mean follow-up was 31.09 months (SD 5.57; range 24-46) and 34.44 months (SD 7.10; range 24-48) for immediately and conventionally inserted implants, respectively. No implants were lost. All implants fulfilled the success criteria. The mean PES/WES was 14.50 (SD 2.52; range 9-19) and 15.61 (SD 3.20; range 8-20) for immediately and conventionally placed implants, respectively. Immediate implants had a mean PES of 7.45 (SD 1.62; range 4-10) and a mean WES of 7.04 (SD 1.29; range 5-10). Conventional implants had a mean PES of 7.83 (SD 1.58; range 4-10) and a mean WES of 7.77 (SD 1.66; range 4-10). The difference between the 2 groups was not significant. Immediate and conventional single implant treatment yielded comparable esthetic outcomes.
Tritzant-Martinez, Yalina; Zeng, Tao; Broom, Aron; Meiering, Elizabeth; Le Roy, Robert J; Roy, Pierre-Nicholas
2013-06-21
We investigate the analytical representation of potentials of mean force (pmf) using the Morse/long-range (MLR) potential approach. The MLR method had previously been used to represent potential energy surfaces, and we assess its validity for representing free-energies. The advantage of the approach is that the potential of mean force data only needs to be calculated in the short to medium range region of the reaction coordinate while the long range can be handled analytically. This can result in significant savings in terms of computational effort since one does not need to cover the whole range of the reaction coordinate during simulations. The water dimer with rigid monomers whose interactions are described by the commonly used TIP4P model [W. Jorgensen and J. Madura, Mol. Phys. 56, 1381 (1985)] is used as a test case. We first calculate an "exact" pmf using direct Monte Carlo (MC) integration and term such a calculation as our gold standard (GS). Second, we compare this GS with several MLR fits to the GS to test the validity of the fitting procedure. We then obtain the water dimer pmf using metadynamics simulations in a limited range of the reaction coordinate and show how the MLR treatment allows the accurate generation of the full pmf. We finally calculate the transition state theory rate constant for the water dimer dissociation process using the GS, the GS MLR fits, and the metadynamics MLR fits. Our approach can yield a compact, smooth, and accurate analytical representation of pmf data with reduced computational cost.
NASA Astrophysics Data System (ADS)
Tritzant-Martinez, Yalina; Zeng, Tao; Broom, Aron; Meiering, Elizabeth; Le Roy, Robert J.; Roy, Pierre-Nicholas
2013-06-01
We investigate the analytical representation of potentials of mean force (pmf) using the Morse/long-range (MLR) potential approach. The MLR method had previously been used to represent potential energy surfaces, and we assess its validity for representing free-energies. The advantage of the approach is that the potential of mean force data only needs to be calculated in the short to medium range region of the reaction coordinate while the long range can be handled analytically. This can result in significant savings in terms of computational effort since one does not need to cover the whole range of the reaction coordinate during simulations. The water dimer with rigid monomers whose interactions are described by the commonly used TIP4P model [W. Jorgensen and J. Madura, Mol. Phys. 56, 1381 (1985)], 10.1080/00268978500103111 is used as a test case. We first calculate an "exact" pmf using direct Monte Carlo (MC) integration and term such a calculation as our gold standard (GS). Second, we compare this GS with several MLR fits to the GS to test the validity of the fitting procedure. We then obtain the water dimer pmf using metadynamics simulations in a limited range of the reaction coordinate and show how the MLR treatment allows the accurate generation of the full pmf. We finally calculate the transition state theory rate constant for the water dimer dissociation process using the GS, the GS MLR fits, and the metadynamics MLR fits. Our approach can yield a compact, smooth, and accurate analytical representation of pmf data with reduced computational cost.
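The full MLR form uses a polynomial exponent function β(y); the hedged sketch below keeps β constant at its limiting value β∞ = ln(2De/uLR(re)), with arbitrary parameters, which already reproduces the two properties the abstract relies on: a well minimum at re and an analytic −C6/r⁶ long-range tail:

```python
import math

def mlr(r, De=1.0, re=1.0, C6=0.5, p=3):
    """Morse/long-range form with constant beta = beta_infinity:
    V(re) = 0 at the minimum and V(r) -> De - C6/r^6 at long range."""
    def uLR(x):
        return C6 / x ** 6
    y = (r ** p - re ** p) / (r ** p + re ** p)
    beta_inf = math.log(2.0 * De / uLR(re))
    return De * (1.0 - (uLR(r) / uLR(re)) * math.exp(-beta_inf * y)) ** 2

# the well bottom and the analytic long-range tail come out by design
v_min = mlr(1.0)               # 0 at r = re
tail = 1.0 - mlr(10.0)         # ~ C6 / r^6 at r = 10
```

This built-in tail is exactly why pmf data only need to be sampled at short and medium range: the long range is carried by the functional form itself.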
Wilson, J.T.; Morlock, S.E.; Baker, N.T.
1997-01-01
Acoustic Doppler current profiler, global positioning system, and geographic information system technology were used to map the bathymetry of Morse and Geist Reservoirs, two artificial lakes used for public water supply in central Indiana. The project was a pilot study to evaluate the use of the technologies for bathymetric surveys. Bathymetric surveys were last conducted in 1978 on Morse Reservoir and in 1980 on Geist Reservoir; those surveys were done with conventional methods using networks of fathometer transects. The 1996 bathymetric surveys produced updated estimates of reservoir volumes that will serve as baseline data for future estimates of storage capacity and sedimentation rates. An acoustic Doppler current profiler and global positioning system receiver were used to collect water-depth and position data from April 1996 through October 1996. All water-depth and position data were imported to a geographic information system to create a data base. The geographic information system then was used to generate water-depth contour maps and to compute the volumes for each reservoir. The computed volume of Morse Reservoir was 22,820 acre-feet (7.44 billion gallons), with a surface area of 1,484 acres. The computed volume of Geist Reservoir was 19,280 acre-feet (6.29 billion gallons), with a surface area of 1,848 acres. The computed 1996 reservoir volumes are less than the design volumes and indicate that sedimentation has occurred in both reservoirs. Cross sections were constructed from the computer-generated surfaces for 1996 and compared to the fathometer profiles from the 1978 and 1980 surveys; analysis of these cross sections also indicates that some sedimentation has occurred in both reservoirs. The acoustic Doppler current profiler, global positioning system, and geographic information system technologies described in this report produced bathymetric maps and volume estimates more efficiently and with comparable or greater resolution than conventional methods.
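The GIS volume computation reduces to summing depth times cell area; a minimal sketch (the cell values are hypothetical, the unit conversion is the standard 325,851 gallons per acre-foot) that also reproduces the report's Morse Reservoir figure:

```python
def volume_acre_feet(depths_ft, cell_area_acres):
    """Volume as the sum of water depth (ft) times GIS cell area (acres)."""
    return sum(depths_ft) * cell_area_acres

def acre_feet_to_gallons(af):
    return af * 325851.0       # gallons per acre-foot (standard)

# two hypothetical 2-acre cells, 10 ft deep each -> 40 acre-feet
v = volume_acre_feet([10.0, 10.0], 2.0)
# the report's Morse Reservoir volume of 22,820 acre-feet converts to
# about 7.44 billion gallons, matching the stated figure
gal = acre_feet_to_gallons(22820)
```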
Hess, Peter
2014-08-07
An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.
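The cleavage-strength estimate rests on the maximum restoring force a Morse bond can supply before it breaks. A small sketch of that maximum for the generic Morse form V(r) = De(1 − exp(−a(r − re)))² (parameter values below are illustrative, not the paper's fitted material constants):

```python
# The force F(r) = dV/dr of the Morse pair potential peaks at the
# inflection point r = re + ln(2)/a, where F = a*De/2.
import math

def morse_force(r, De, a, re):
    """F(r) = dV/dr for V(r) = De*(1 - exp(-a*(r - re)))**2."""
    u = math.exp(-a * (r - re))
    return 2.0 * De * a * (1.0 - u) * u

def morse_max_force(De, a):
    """Analytic maximum cohesive force: a*De/2."""
    return a * De / 2.0

De, a, re = 4.0, 2.0, 1.2          # toy well depth, stiffness, equilibrium spacing
r_star = re + math.log(2.0) / a    # location of the force maximum
print(morse_force(r_star, De, a, re), morse_max_force(De, a))
```

Dividing this peak force by the area per bond gives a Morse-type theoretical cleavage stress of the kind compared against ab initio values in the abstract.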
Automated detection of semagram-laden images using adaptive neural networks
NASA Astrophysics Data System (ADS)
Cerkez, Paul S.; Cannady, James D.
2010-04-01
Digital steganography has been used extensively for electronic copyright stamping, but also for criminal or covert activities. While a variety of techniques exist for detecting steganography, the identification of semagrams (messages transmitted visually in a non-textual format) remains elusive. The work presented here describes the creation of a novel application that uses hierarchical neural network architectures to detect the likely presence of a semagram message in an image. The application was used to detect semagrams containing Morse code messages with over 80% accuracy. These preliminary results indicate a significant advance in the detection of complex semagram patterns.
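To make the target concrete: a Morse-code semagram hides a message as a visual mark/space pattern rather than as text. The sketch below (illustrative only, not the authors' system) maps a string to a row of binary "pixels" using the standard ITU timing convention (dot = 1 unit, dash = 3, symbol gap = 1, letter gap = 3); such a row is the kind of pattern a detector must recognize inside an image:

```python
# Tiny illustrative Morse table; a real encoder would cover the full alphabet.
MORSE = {'S': '...', 'O': '---', 'E': '.', 'T': '-'}

def to_pixels(text):
    """Encode text as a list of pixels: 1 = mark, 0 = space."""
    row = []
    for i, ch in enumerate(text.upper()):
        if i:
            row += [0, 0, 0]               # 3-unit gap between letters
        for j, sym in enumerate(MORSE[ch]):
            if j:
                row += [0]                 # 1-unit gap between symbols
            row += [1] if sym == '.' else [1, 1, 1]
    return row

print(''.join(map(str, to_pixels('SOS'))))
```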
Edge equilibrium code for tokamaks
Li, Xujing; Drozdov, Vladimir V.
2014-01-15
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link is essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
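The waveform-coding branch mentioned above can be illustrated with mu-law companding, the classic telephony example: small amplitudes are boosted before quantization so that quiet speech survives a coarse digital representation. A simplified G.711-style sketch (continuous form, without the 8-bit segment encoding an actual codec uses):

```python
# mu-law companding for samples normalized to [-1, 1]; mu = 255 as in
# North American/Japanese telephony.
import math

MU = 255.0

def mu_compress(x):
    """Compress a sample: y = sign(x) * ln(1 + mu*|x|) / ln(1 + mu)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse mapping, recovering the original sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.1
print(round(mu_expand(mu_compress(x)), 6))
```

The compressor is lossless here; in a real codec the quantization of y to 8 bits is where the bit-rate saving (and the controlled distortion) comes from.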
Cramer, S.N.
1984-01-01
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
Adaptation and perceptual norms
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
2007-02-01
We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
Jones, Dean P.
2015-01-01
Abstract Significance: The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O2 and H2O2 contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Recent Advances: Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Critical Issues: Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Future Directions: Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine. Antioxid. Redox Signal. 23, 734–746. PMID:25891126
AEDS Property Classification Code Manual.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…
Generalization of Prism Adaptation
ERIC Educational Resources Information Center
Redding, Gordon M.; Wallace, Benjamin
2006-01-01
Prism exposure produces 2 kinds of adaptive response. Recalibration is ordinary strategic remapping of spatially coded movement commands to rapidly reduce performance error. Realignment is the extraordinary process of transforming spatial maps to bring the origins of coordinate systems into correspondence. Realignment occurs when spatial…
Video coding with dynamic background
NASA Astrophysics Data System (ADS)
Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung
2013-12-01
Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, computational time in ME & MC, and memory buffers for coded frames limits the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as a reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance compared to the MRF techniques. It also has an inherent capability for scene change detection (SCD), used for adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames and two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
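A minimal sketch of dynamic background modeling in the spirit of McFIS (this is a generic running-average model, not the authors' algorithm): each pixel of the background "frame" is an exponential moving average of past frames, so transient foreground fades while static background persists, and the result can serve as an extra reference frame.

```python
# Frames are toy flat lists of pixel intensities; a real coder would use
# 2-D arrays and a more robust (e.g., mixture-of-Gaussians) model.

def update_background(bg, frame, alpha=0.1):
    """Per-pixel exponential moving average: bg <- (1-alpha)*bg + alpha*frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

frames = [[10, 10, 200], [10, 10, 10], [10, 10, 10]]  # one transient bright pixel
bg = [float(p) for p in frames[0]]
for f in frames[1:]:
    bg = update_background(bg, f)
print([round(p, 2) for p in bg])  # the transient pixel decays toward 10
```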
Deta, U. A.; Suparmi,; Cari,; Husein, A. S.; Yuliani, H.; Khaled, I. K. A.; Luqman, H.; Supriyanto
2014-09-30
The energy spectra and wave functions of the Schrodinger equation in D dimensions for the trigonometric Rosen-Morse potential were investigated analytically using the Nikiforov-Uvarov method. This potential captures the essential traits of the quark-gluon dynamics of Quantum Chromodynamics. The approximate energy spectra are given in closed form, and the corresponding approximate wave functions for arbitrary l-states (l ≠ 0) in D dimensions are formulated in terms of differential polynomials. The wave function of this potential is unnormalizable in the general case. The existence of extra dimensions (the centrifugal factor) and of this potential increases the energy spectra of the system.
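For orientation, the trigonometric Rosen-Morse potential in the form commonly used with the Nikiforov-Uvarov method is sketched below. The notation is one standard convention from the literature; the paper's parameters and D-dimensional constants may differ, and the quoted spectrum is the usual 1-D result up to an additive constant, not this paper's D-dimensional expression.

```latex
% Trigonometric Rosen-Morse potential (one common convention):
V(r) = \frac{\hbar^2}{2\mu d^2}\left[\nu(\nu+1)\csc^2\frac{r}{d}
       - 2b\cot\frac{r}{d}\right], \qquad 0 < \frac{r}{d} < \pi .

% Bound-state spectrum usually quoted in 1-D (up to an additive constant):
E_n = \frac{\hbar^2}{2\mu d^2}\left[(n+\nu+1)^2
      - \frac{b^2}{(n+\nu+1)^2}\right], \qquad n = 0, 1, 2, \dots
```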
NASA Astrophysics Data System (ADS)
Schuch, Dieter
2015-06-01
It is shown that a nonlinear reformulation of time-dependent and time-independent quantum mechanics in terms of Riccati equations not only provides additional information about the physical system, but also allows for formal comparison with other nonlinear theories. This is demonstrated for the nonlinear Burgers and Korteweg-de Vries equations with soliton solutions. As Riccati equations can be linearized to corresponding Schrödinger equations, this also applies to the Riccati equations that can be obtained by integrating the nonlinear soliton equations, resulting in a time-independent Schrödinger equation with Rosen-Morse potential and its supersymmetric partner. Because both soliton equations lead to the same Riccati equation, relations between the Burgers and Korteweg-de Vries equations can be established. Finally, a connection with the inverse scattering method is mentioned.
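The linearizations referred to above are standard and worth stating explicitly (textbook relations, in units where the factor ħ²/2m is absorbed into V and E):

```latex
% Logarithmic-derivative substitution y(x) = \psi'(x)/\psi(x) maps the
% time-independent Schrodinger equation
-\psi'' + V(x)\,\psi = E\,\psi
% to the Riccati equation
y' + y^2 = V(x) - E .

% Analogously, the Cole-Hopf substitution u = -2\nu\,\varphi_x/\varphi
% maps Burgers' equation
u_t + u\,u_x = \nu\,u_{xx}
% to the linear heat equation
\varphi_t = \nu\,\varphi_{xx}.
```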
Khorshidi, Hooman; Raoofi, Saeed; Moattari, Afagh; Bagheri, Atoosa; Kalantari, Mohammad Hassan
2016-01-01
Background and Aim. The geometry of the implant-abutment interface (IAI) affects the risk of bacterial leakage and invasion into the internal parts of the implant. The aim of this study was to compare the bacterial leakage of an 11-degree Morse taper IAI with that of a butt joint connection. Materials and Methods. Two implant systems were tested (n = 10 per group): CSM (submerged) and TBR (connect). The deepest inner parts of the implants were inoculated with 2 μL of Streptococcus mutans suspension with a concentration of 10^8 CFU/mL. The abutments were tightened on the implants. The specimens were stored in an incubator at 37°C for 14 days, and penetration of the bacterium into the surrounding area was determined by observation of solution turbidity and comparison with control specimens. A Kaplan-Meier survival curve was traced for the estimation of bacterial leakage, and the results for the two groups of implants were statistically analyzed by the chi-square test. Results. No case of the implant system with the internal conical connection design revealed bacterial leakage in 14 days, and no turbidity of the solution was reported for it. In the system with the butt joint implant-abutment connection, 1 case showed leakage on the third day, 1 case on the eighth day, and 5 cases on the 13th day. In total, 7 (70%) cases showed bacterial leakage in this system. Significant differences were found between the two groups of implants based on the incidence of bacterial leakage (p < 0.05). Conclusion. The 11-degree Morse taper demonstrated better resistance to microbial leakage than the butt joint connection. PMID:27242903
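The reported comparison (0/10 vs 7/10 leaking implants) can be re-computed with a 2x2 chi-square test using only the standard library. This is a hedged re-computation: the published analysis may have used a continuity correction or an exact method, so only the qualitative conclusion (p < 0.05) should be compared.

```python
# Pearson chi-square on a 2x2 table, with the 1-df p-value obtained from
# the identity  p = erfc(sqrt(x2/2))  for the chi-square survival function.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no Yates correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def p_from_chi2_1df(x2):
    """Survival function of chi-square with 1 df."""
    return math.erfc(math.sqrt(x2 / 2.0))

# rows: Morse taper, butt joint; columns: leaked, did not leak
x2 = chi2_2x2(0, 10, 7, 3)
print(round(x2, 3), p_from_chi2_1df(x2) < 0.05)
```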
Nevada Administrative Code for Special Education Programs.
ERIC Educational Resources Information Center
Nevada State Dept. of Education, Carson City. Special Education Branch.
This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and talented," "mental…
Optimality Of Variable-Length Codes
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.
1994-01-01
Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of a number of optional codes yields shortest codeword.
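The option codes the Rice coder selects among are Golomb-Rice codes: the quotient by 2^k goes in unary, followed by the k low-order remainder bits. The sketch below is a generic textbook version of that code, not the flight coder described in the report (which adds the preprocessor and per-block parameter selection).

```python
def rice_encode(n, k):
    """Encode nonnegative n: unary quotient, '0' stop bit, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, '0{}b'.format(k)) if k else ''
    return '1' * q + '0' + rem

def rice_decode(bits, k):
    """Invert rice_encode for a single codeword."""
    q = bits.index('0')                       # count leading 1s
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

print(rice_encode(9, 2))  # quotient 2 in unary, remainder 01
```

Small k suits low-entropy data (short remainders), large k suits high-entropy data (short unary parts); the adaptive coder picks k per block to minimize total length.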
Improvements to SOIL: An Eulerian hydrodynamics code
Davis, C.G.
1988-04-01
Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs.
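To make the "second-order advection" suggestion concrete, here is one step of a classic second-order scheme (Lax-Wendroff) for linear advection on a periodic grid. This is an illustrative sketch of the class of method meant, not SOIL's actual advection routine.

```python
# One Lax-Wendroff step for u_t + c*u_x = 0; nu = c*dt/dx is the Courant
# number, and stability requires |nu| <= 1.  At nu = 1 the scheme shifts
# the profile exactly one cell, which the demo below exploits.

def lax_wendroff_step(u, nu):
    n = len(u)
    out = []
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]  # periodic neighbors
        out.append(u[i]
                   - 0.5 * nu * (up - um)
                   + 0.5 * nu * nu * (up - 2.0 * u[i] + um))
    return out

u = [0.0, 0.0, 1.0, 0.0, 0.0]
print(lax_wendroff_step(u, 1.0))  # profile advected one cell to the right
```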
Hybrid subband image coding scheme using DWT, DPCM, and ADPCM
NASA Astrophysics Data System (ADS)
Oh, Kyung-Seak; Kim, Sung-Jin; Joo, Chang-Bok
1998-07-01
Subband image coding techniques have received considerable attention as powerful source coding techniques. These techniques provide good compression results and can also be extended for progressive transmission and multiresolution analysis. In this paper, we propose a hybrid subband image coding scheme using DWT (discrete wavelet transform), DPCM (differential pulse code modulation), and ADPCM (adaptive DPCM). This scheme produces simple, but significant, image compression and transmission coding.
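A toy one-dimensional version of the hybrid idea: one Haar DWT level splits a signal into a low band (averages) and a high band (differences), and the slowly varying low band is then DPCM-coded as a first sample plus successive differences. The scheme names match the abstract, but the actual 2-D filter bank, ADPCM predictor adaptation, and quantizers are not reproduced here.

```python
def haar_level(x):
    """One Haar analysis level: low band (averages), high band (differences)."""
    low = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return low, high

def dpcm(band):
    """Differential coding: first value, then successive differences."""
    return [band[0]] + [band[i] - band[i - 1] for i in range(1, len(band))]

x = [10.0, 12.0, 14.0, 14.0, 20.0, 24.0, 24.0, 24.0]
low, high = haar_level(x)
print(dpcm(low), high)
```

The compression leverage comes from the high band and the DPCM residuals clustering near zero, which short variable-length codes then exploit.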
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
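The coupling "fold" described above is, at its core, a sum over energy groups (and coupling-surface cells) of forward fluence times adjoint dose importance. A minimal sketch with toy group values (illustrative numbers, not MASH data or its actual data layout):

```python
def fold(fluence, importance):
    """Dose response D = sum over groups g of fluence_g * importance_g."""
    return sum(f, ) if False else sum(f * i for f, i in zip(fluence, importance))

fluence    = [2.0e3, 5.0e2, 1.0e2]     # forward fluence per group (toy)
importance = [1.0e-6, 4.0e-6, 9.0e-6]  # adjoint dose importance per group (toy)
print(fold(fluence, importance))
```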
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
NASA Astrophysics Data System (ADS)
Aciksoz, Esra; Bayrak, Orhan; Soylu, Asim
2016-10-01
The behavior of a donor in a GaAs-Ga1-xAlxAs quantum well wire represented by the Morse potential is examined within the framework of the effective-mass approximation. The donor binding energies are numerically calculated with and without electric and magnetic fields in order to show their influence on the binding energies. Moreover, it is determined how the donor binding energies change for constant potential parameters (De, re, and a) as well as with different values of the electric and magnetic field strengths. It is found that the donor binding energy is highly dependent on the external electric and magnetic fields as well as on the parameters of the Morse potential. Project supported by the Turkish Science Research Council (TÜBİTAK) and financial support from Akdeniz and Nigde Universities.
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J. D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
Predictive coding of multisensory timing
Shi, Zhuanghua; Burr, David
2016-01-01
The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In the paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified in the framework of predictive coding, a framework rooted in Helmholtz’s ‘perception as inference’.
ERIC Educational Resources Information Center
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
Anderson, Jonas T.
2013-03-15
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes.
Highlights:
- We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs.
- We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs.
- We find and classify all 2D homological stabilizer codes.
- We find optimal codes among the homological stabilizer codes.
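The defining property of Kitaev's toric code, which the homological framework above generalizes, can be checked directly: every X-type star operator commutes with every Z-type plaquette operator because they overlap on an even number of edge qubits. The sketch below builds the stabilizers on an L x L torus with one common indexing convention (an illustration of the standard construction, not this paper's graph-theoretic formalism):

```python
# Qubits live on the edges of an L x L square lattice with periodic
# boundaries: 'h' edges run east from vertex (x, y), 'v' edges run north.
L = 3

def h(x, y):
    return ('h', x % L, y % L)

def v(x, y):
    return ('v', x % L, y % L)

def star(x, y):       # X on the four edges touching vertex (x, y)
    return {h(x, y), h(x - 1, y), v(x, y), v(x, y - 1)}

def plaquette(x, y):  # Z on the four edges bounding the face above-right of (x, y)
    return {h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)}

# X and Z stabilizers commute iff they share an even number of qubits.
overlaps = [len(star(a, b) & plaquette(c, d)) % 2
            for a in range(L) for b in range(L)
            for c in range(L) for d in range(L)]
print(all(o == 0 for o in overlaps))
```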
Is the Left Hemisphere Specialized for Speech, Language and/or Something Else?
ERIC Educational Resources Information Center
Papcun, George; And Others
1974-01-01
Morse code signals were presented dichotically to Morse code operators and to naive subjects with no knowledge of Morse code. The operators showed right ear superiority, indicating left hemisphere dominance for the perception of dichotically presented Morse code letters. Naive subjects showed the same right ear superiority when presented with a…
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
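The quad-tree hierarchy described above can be sketched in a few lines. This is a toy Python illustration of the block-refinement idea (PARAMESH itself is Fortran 90, and its API is not reproduced here): a block covering a square region splits into four children wherever a user-supplied refinement test fires, up to a maximum level.

```python
class Block:
    """A square sub-grid block; children form the quad-tree."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine_where(self, needs_refining, max_level):
        """Recursively split blocks for which the test returns True."""
        if self.level < max_level and needs_refining(self):
            half = self.size / 2.0
            self.children = [Block(self.x + dx * half, self.y + dy * half,
                                   half, self.level + 1)
                             for dx in (0, 1) for dy in (0, 1)]
            for c in self.children:
                c.refine_where(needs_refining, max_level)

    def leaves(self):
        """Leaf blocks are where the application's solution actually lives."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine only blocks touching the domain's origin corner, two levels deep.
root = Block(0.0, 0.0, 1.0)
root.refine_where(lambda b: b.x == 0.0 and b.y == 0.0, max_level=2)
print(len(root.leaves()))
```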
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
2015-12-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
ERIC Educational Resources Information Center
Parkinson, Brian; Sandhu, Parveen; Lacorte, Manel; Gourlay, Lesley
1998-01-01
This article considers arguments for and against the use of coding systems in classroom-based language research and touches on some relevant considerations from ethnographic and conversational analysis approaches. The four authors each explain and elaborate on their practical decision to code or not to code events or utterances at a specific point…
NASA Astrophysics Data System (ADS)
Clair, Jean J.
1980-05-01
The bar code system will be used in every market and supermarket. The code, which is standardized in the US and Europe (EAN code), gives information on price, storage, and nature, and allows real-time management of the shop.
Hunt, R.L.
1983-12-27
An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the damper frame.
Pittsburgh Adapts to Changing Times.
ERIC Educational Resources Information Center
States, Deidre
1985-01-01
The Samuel F. B. Morse School, built in 1874 and closed in 1980, is a historic landmark in Pittsburgh, Pennsylvania. Now the building serves as low-income housing for 70 elderly tenants and is praised as being an imaginative and creative use of an old school structure. (MLF)
Edge Equilibrium Code (EEC) For Tokamaks
Li, Xujling
2014-02-24
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
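The encoder structure described above, an accumulator used as a precoder in front of a repeat-interleave-accumulate chain, can be sketched as follows (a toy rate-1/q illustration with an invented pseudorandom interleaver; the paper's actual protographs, interleavers, and puncturing patterns differ):

```python
import random

def accumulate(bits):
    """Mod-2 running sum: the 1/(1+D) accumulator used in RA-family codes."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ara_encode(info, q=3, seed=0):
    """Toy Accumulate-Repeat-Accumulate encoder: precode with an accumulator,
    repeat each bit q times, interleave, then accumulate again."""
    pre = accumulate(info)                      # precoder (the extra accumulator)
    rep = [b for b in pre for _ in range(q)]    # repetition, rate 1/q
    rng = random.Random(seed)                   # fixed, invented interleaver
    perm = list(range(len(rep)))
    rng.shuffle(perm)
    inter = [rep[p] for p in perm]
    return accumulate(inter)                    # outer accumulator

codeword = ara_encode([1, 0, 1, 1], q=3)
```

Removing the `pre = accumulate(info)` line turns this into a plain RA encoder, which is exactly the structural relationship the abstract points out.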
A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
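The coupling ("folding") step the abstract describes reduces to a weighted sum: the forward calculation supplies a fluence on the coupling surface, the adjoint run supplies a dose importance for the same surface bins, and the coupling code multiplies and sums them. A minimal numeric sketch (the arrays, bin structure, and numbers here are invented for illustration, not MASH's actual file formats):

```python
# Dose response = sum over surface patches i and energy groups g of
#   fluence[i][g] * importance[i][g]
# i.e. folding the forward fluence with the adjoint "dose importance".

def fold_dose(fluence, importance):
    return sum(
        f * w
        for f_row, w_row in zip(fluence, importance)
        for f, w in zip(f_row, w_row)
    )

# two surface patches, three energy groups (made-up values)
fluence    = [[2.0, 1.0, 0.5], [1.5, 0.8, 0.2]]
importance = [[0.1, 0.3, 0.6], [0.2, 0.4, 0.9]]
dose = fold_dose(fluence, importance)
```

Repeating the fold with fluences computed for different source distances or geometry orientations gives the dose response as a function of those variables, as the abstract notes.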
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
ERIC Educational Resources Information Center
Harrell, William
1999-01-01
Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)
Anstis, Stuart
2013-01-01
It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only those contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique for studying the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.
Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Rodrigues, Renata Cristina Silveira; Ribeiro, Ricardo Faria; de Mattos, Maria da Gloria Chiarello
2012-11-01
This finite element analysis (FEA) compared stress distribution on different bony ridges rehabilitated with different lengths of morse taper implants, varying the dimensions of the metal-ceramic crowns to maintain the occlusal alignment. Three-dimensional FE models were designed representing a posterior left side segment of the mandible: control group, 3 implants of 11 mm length; group 1, implants of 13 mm, 11 mm and 5 mm length; group 2, 1 implant of 11 mm and 2 implants of 5 mm length; and group 3, 3 implants of 5 mm length. The abutment heights were 3.5 mm for the 13- and 11-mm implants (regular) and 0.8 mm for the 5-mm implants (short). Evaluation was performed in Ansys software, with oblique loads of 365 N for molars and 200 N for premolars. Stress was 50% higher on cortical bone and 80% higher on trabecular bone for the short implants than for the regular implants, with stress concentrated on the bone region at the necks of the short implants. These implants were nevertheless capable of dissipating the stress to the bone under the applied loads, although stresses approached the threshold between elastic and plastic deformation in the trabecular bone. Distal implants and/or those with the largest occlusal tables generated the greatest stress regions in the surrounding bone. It was concluded that patients requiring short implants associated with increased-proportion implant prostheses need careful evaluation and occlusal adjustment, since overload in these short implants, and even in regular ones, can generate stress beyond the physiological threshold of the surrounding bone, compromising the whole system.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and high-efficiency representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background, and to improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code), and shows algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text recognition method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
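The adaptive binarization this pre-processing builds on can be sketched with the standard Sauvola formula, where each pixel is thresholded against its local window statistics (plain Python on a toy grayscale image; the window size and parameters here are illustrative defaults, and the paper's modified variant differs):

```python
# Sauvola adaptive thresholding sketch:
#   T(x,y) = m(x,y) * (1 + k * (s(x,y)/R - 1))
# where m and s are the mean and standard deviation over a w-by-w window,
# k is a sensitivity parameter and R the dynamic range of the deviation.

def sauvola_binarize(img, w=3, k=0.2, R=128.0):
    h, wid = len(img), len(img[0])
    r = w // 2
    out = []
    for y in range(h):
        row = []
        for x in range(wid):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(wid, x + r + 1))]
            m = sum(vals) / len(vals)
            s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
            t = m * (1 + k * (s / R - 1))
            row.append(1 if img[y][x] > t else 0)  # 1 = background, 0 = module
        out.append(row)
    return out

# a dark QR module on a bright, unevenly lit background (made-up values)
img = [[200, 200, 200],
       [200,  30, 200],
       [200, 200, 200]]
binary = sauvola_binarize(img)
```

Because the threshold follows the local mean, the method separates dark modules from bright background even when illumination varies across the image, which is why it suits QR codes photographed against complex backgrounds.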
Papamichos, Spyros I.; Margaritis, Dimitrios; Kotsianidis, Ioannis
2015-01-01
The incidence of cancer in humans is high compared to chimpanzees. However, previous analyses have documented that numerous human cancer-related genes are highly conserved in chimpanzee. To date, whether the human genome includes species-specific cancer-related genes that could potentially contribute to a higher cancer susceptibility remains obscure. This study focuses on MYEOV, an oncogene encoding two protein isoforms, reported as causally involved in promoting cancer cell proliferation and metastasis in both haematological malignancies and solid tumours. First we document, via stringent in silico analysis, that MYEOV arose de novo in Catarrhini. We show that the MYEOV short-isoform start codon was evolutionarily acquired after the Catarrhini/Platyrrhini divergence. Throughout the course of Catarrhini evolution MYEOV acquired a gradually elongated translatable open reading frame (ORF), a gradually shortened translation-regulatory upstream ORF, and alternatively spliced mRNA variants. A point mutation introduced in human allowed for the acquisition of the MYEOV long-isoform start codon. Second, we demonstrate the crucial impact of exonized transposable elements on the creation of the MYEOV gene structure. Third, we highlight that the initial part of the MYEOV long-isoform coding DNA sequence was under positive selection pressure during Catarrhini evolution. MYEOV represents a primate orphan gene that acquired, via ORF expansion, a human-protein-specific coding potential. PMID:26568894
Manually operated coded switch
Barnette, Jon H.
1978-01-01
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
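The disclosed interlock, a code may be tried only after the wheels have been returned to zero, can be modeled as a small state machine (a toy software analogy of the mechanical patent, with invented names; not a description of the actual mechanism):

```python
# Toy model of the recodable coded switch: a code is inserted, tried, and on a
# match the lever actuates; after any attempt the code wheels must be reset to
# their zero positions before another try is allowed.

class CodedSwitch:
    def __init__(self, code):
        self.code = list(code)          # the inserted (recodable) code
        self.wheels = [0] * len(code)   # current wheel positions
        self.ready = True               # True only after a reset to zero

    def set_wheels(self, positions):
        self.wheels = list(positions)

    def try_code(self):
        """Attempt actuation; refuses if wheels were not reset since last try."""
        if not self.ready:
            return False
        self.ready = False
        return self.wheels == self.code  # actuate the lever on a match

    def reset(self):
        self.wheels = [0] * len(self.code)
        self.ready = True

sw = CodedSwitch([3, 1, 4])
sw.set_wheels([3, 1, 4])
first = sw.try_code()    # correct code: actuates
second = sw.try_code()   # refused: wheels not returned to zero first
sw.reset()               # now another try would be permitted
```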
NASA Astrophysics Data System (ADS)
Güngördü, Utkan; Nepal, Rabindra; Kovalev, Alexey A.
2014-10-01
We define and study parafermion stabilizer codes, which can be viewed as generalizations of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions. Parafermion stabilizer codes can protect against low-weight errors acting on a small subset of parafermion modes in analogy to qudit stabilizer codes. Examples of several smallest parafermion stabilizer codes are given. A locality-preserving embedding of qudit operators into parafermion operators is established that allows one to map known qudit stabilizer codes to parafermion codes. We also present a local 2D parafermion construction that combines topological protection of Kitaev's toric code with additional protection relying on parity conservation.
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2008-01-01
An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA codes). Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
Tanguy, A; Moraga, D
2001-07-25
Cases of heavy metal resistance acquisition have already been demonstrated in eukaryotes, which involve metallothionein (MT) gene duplication or amplification mechanisms. We characterized in a marine bivalve, Crassostrea gigas, a gene coding for an unusual MT, which has never been described in other species. Our results illustrate a unique case of exon duplication and rearrangement in the MT gene family. The particular organization of the third exon of this gene allows the synthesis of a MT that presents a higher metal ion binding capacity compared to previously described MTs. The formation of a supplementary third structural beta-domain is proposed to explain results obtained in in vitro experiments. Differences in the metal responsive element (MRE) copy number and MRE core sequence observed in the promoter of CgMT2 also suggest differential regulation of CgMT2 transcription and possible implication in the detoxification processes.
Sodhi, M; Mukesh, M; Kishore, A; Mishra, B P; Kataria, R S; Joshi, B K
2013-09-25
Due to evolutionary divergence, cattle (taurine and indicine) and buffalo are speculated to have different responses to heat stress conditions. Variation in candidate genes associated with a heat-shock response may provide an insight into this dissimilarity and suggest targets for intervention. The present work was undertaken to characterize the promoter and coding regions of one of the inducible heat shock protein genes in diverse breeds of Indian zebu cattle and buffaloes. Genomic DNA from a panel of 117 unrelated animals representing 14 diversified native cattle breeds and 6 buffalo breeds was utilized to determine the complete sequence and gene diversity of the HSP70.1 gene. The coding region of the HSP70.1 gene in Indian zebu cattle, Bos taurus and buffalo was similar in length (1,926 bp), encoding a HSP70 protein of 641 amino acids with a calculated molecular weight (Mw) of 70.26 kDa. However, buffalo had longer 5' and 3' untranslated regions (UTRs) of 204 and 293 nucleotides respectively, in comparison to Indian zebu cattle and Bos taurus, wherein the lengths of the 5'- and 3'-UTR were 172 and 286 nucleotides, respectively. The increased length of the buffalo HSP70.1 gene compared to the indicine and taurine genes was due to two insertions each in the 5'- and 3'-UTR. Comparative sequence analysis of cattle (taurine and indicine) and buffalo HSP70.1 genes revealed a total of 54 gene variations (50 SNPs and 4 INDELs) among the three species. The minor allele frequencies of these nucleotide variations varied from 0.03 to 0.5 with an average of 0.26. Among the 14 B. indicus cattle breeds studied, a total of 19 polymorphic sites were identified: 4 in the 5'-UTR and 15 in the coding region (of which 2 were non-synonymous). Analysis among buffalo breeds revealed 15 SNPs throughout the gene: 6 in the 5' flanking region and 9 in the coding region. In the bubaline 5'-UTR, 2 additional putative transcription factor binding sites (Elk-1 and C-Rel) were identified, other than three common sites
Nonbinary Quantum Convolutional Codes Derived from Negacyclic Codes
NASA Astrophysics Data System (ADS)
Chen, Jianzhang; Li, Jianping; Yang, Fan; Huang, Yuanyuan
2015-01-01
In this paper, some families of nonbinary quantum convolutional codes are constructed by using negacyclic codes. These nonbinary quantum convolutional codes are different from quantum convolutional codes in the literature. Moreover, we construct a family of optimal quantum convolutional codes.
LOT coding for arbitrarily shaped object regions.
Sohn, Y W; Park, R H
2001-01-01
Two coding methods for arbitrarily shaped objects in still images using the lapped orthogonal transform (LOT) are proposed. The LOT is applied to the projection onto convex sets (POCS) based algorithm and to the shape-adaptive discrete cosine transform (SA-DCT) with an even number of basis vectors. Simulation results show improved reconstruction quality compared with the conventional methods.
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-02-20
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-01-01
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
FEMHD: An adaptive finite element method for MHD and edge modelling
Strauss, H.R.
1995-07-01
This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to examine neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.
SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code
Hua, D; Fowler, T
2004-06-15
A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.
The Clawpack Community of Codes
NASA Astrophysics Data System (ADS)
Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.
2014-12-01
Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws but over the years has extended well beyond this role. Today a community of open-source codes has been developed that addresses a multitude of different needs including non-conservative balance laws, high-order accurate methods, and parallelism while remaining extensible and easy to use, largely by the judicious use of Python and the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows including storm surge and debris flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since been both a testing ground for new algorithmic advances in the Clawpack framework and an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python technologies, both packages have also benefited from the focus on reproducibility in the Python community, notably IPython notebooks.
Code System for Analysis of Piping Reliability Including Seismic Events.
1999-04-26
Version 00 PC-PRAISE is a probabilistic fracture mechanics computer code developed for IBM or IBM-compatible personal computers to estimate probabilities of leaks and breaks in nuclear power plant cooling piping. It was adapted from LLNL's PRAISE computer code.
Jones, T.
1993-11-01
This paper examines the results of previous wire code research to determine the relationships among childhood cancer, wire codes, and electromagnetic fields. The paper suggests that, in the original Savitz study, biases toward producing a false positive association between high wire codes and childhood cancer were created by the selection procedure.
Perceptual adaptation helps us identify faces.
Rhodes, Gillian; Watson, Tamara L; Jeffery, Linda; Clifford, Colin W G
2010-05-12
Adaptation is a fundamental property of perceptual processing. In low-level vision, it can calibrate perception to current inputs, increasing coding efficiency and enhancing discrimination around the adapted level. Adaptation also occurs in high-level vision, as illustrated by face aftereffects. However, the functional consequences of face adaptation remain uncertain. Here we investigated whether adaptation can enhance identification performance for faces from an adapted, relative to an unadapted, population. Five minutes of adaptation to an average Asian or Caucasian face reduced identification thresholds for faces from the adapted relative to the unadapted race. We replicated this interaction in two studies, using different participants, faces and adapting procedures. These results suggest that adaptation has a functional role in high-level, as well as low-level, visual processing. We suggest that adaptation to the average of a population may reduce responses to common properties shared by all members of the population, effectively orthogonalizing identity vectors in a multi-dimensional face space and freeing neural resources to code distinctive properties, which are useful for identification.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
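The abstract's middle step, mechanically checking code against a coding standard with static source code analyzers, can be illustrated with a deliberately tiny analyzer (an invented toy rule in Python; JPL's actual standards and commercial analyzers are far more extensive):

```python
# A minimal static-analysis illustration: walk the syntax tree of some source
# code and flag every function definition that violates a toy standard rule
# ("every function must have a docstring"). No code is executed; the check is
# purely static, which is the point of the technique.

import ast

RULE = "every function must have a docstring"

def check_source(source):
    """Return a list of (line, message) violations of the toy rule."""
    tree = ast.parse(source)
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            violations.append((node.lineno, f"{node.name}: {RULE}"))
    return violations

sample = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''
violations = check_source(sample)
```

Real analyzers apply hundreds of such rules (plus data-flow and interprocedural checks), but the workflow is the same: parse, inspect, report violations against the agreed standard.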
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
1993-11-01
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena, and their uncertainty, which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Greg Flach, Frank Smith
2014-05-14
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
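The input-file/run/read-back cycle described above can be sketched in a few lines of Python (the function name, command handling, and file formats here are hypothetical stand-ins for illustration, not the actual DLLExternalCode interface):

```python
import subprocess
import tempfile

def run_external_code(inputs, command, template):
    """Sketch of the DLLExternalCode pattern: write the code inputs to an
    input file, run the external application on it, then read the outputs
    back from the results file the application writes."""
    # 1. Create an input file from the list of code inputs
    with tempfile.NamedTemporaryFile("w", suffix=".inp", delete=False) as f:
        for name, value in inputs.items():
            f.write(template.format(name=name, value=value))
        in_path = f.name
    # 2. Run the external code on that input file
    subprocess.run(command + [in_path], check=True)
    # 3. Read the outputs the external application wrote (assumed .out file)
    with open(in_path.replace(".inp", ".out")) as f:
        return [float(line) for line in f]
```

In the real package the equivalent of `template` and the output-parsing rules live in the instructions file that the DLL reads and interprets.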
NASA Astrophysics Data System (ADS)
Gungordu, Utkan; Nepal, Rabindra; Kovalev, Alexey
2015-03-01
We define and study parafermion stabilizer codes [Phys. Rev. A 90, 042326 (2014)] which can be viewed as generalizations of Kitaev's one dimensional model of unpaired Majorana fermions. Parafermion stabilizer codes can protect against low-weight errors acting on a small subset of parafermion modes in analogy to qudit stabilizer codes. Examples of several smallest parafermion stabilizer codes are given. Our results show that parafermions can achieve a better encoding rate than Majorana fermions. A locality preserving embedding of qudit operators into parafermion operators is established which allows one to map known qudit stabilizer codes to parafermion codes. We also present a local 2D parafermion construction that combines topological protection of Kitaev's toric code with additional protection relying on parity conservation. This work was supported in part by the NSF under Grants No. Phy-1415600 and No. NSF-EPSCoR 1004094.
Contour inflections are adaptable features.
Bell, Jason; Sampasivam, Sinthujaa; McGovern, David P; Meso, Andrew Isaac; Kingdom, Frederick A A
2014-06-03
An object's shape is a strong cue for visual recognition. Most models of shape coding emphasize the role of oriented lines and curves for coding an object's shape. Yet inflection points, which occur at the junction of two oppositely signed curves, are ubiquitous features in natural scenes and carry important information about the shape of an object. Using a visual aftereffect in which the perceived shape of a contour is changed following prolonged viewing of a slightly different-shaped contour, we demonstrate a specific aftereffect for a contour inflection. Control conditions show that this aftereffect cannot be explained by adaptation to either the component curves or to the local orientation at the point of inflection. Further, we show that the aftereffect transfers weakly to a compound curve without an inflection, ruling out a general compound curvature detector as an explanation of our findings. We assume however that there are adaptable mechanisms for coding other specific forms of compound curves. Taken together, our findings provide evidence that the human visual system contains specific mechanisms for coding contour inflections, further highlighting their role in shape and object coding.
NASA Technical Reports Server (NTRS)
2005-01-01
The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next-generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid, and electric drive propulsion concepts).
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Three-dimensional subband coding of video.
Podilchuk, C I; Jayant, N S; Farvardin, N
1995-01-01
We describe and show the results of video coding based on a three-dimensional (3-D) spatio-temporal subband decomposition. The results include a 1-Mbps coder based on a new adaptive differential pulse code modulation scheme (ADPCM) and adaptive bit allocation. This rate is useful for video storage on CD-ROM. Coding results are also shown for a 384-kbps rate that are based on ADPCM for the lowest frequency band and a new form of vector quantization (geometric vector quantization (GVQ)) for the data in the higher frequency bands. GVQ takes advantage of the inherent structure and sparseness of the data in the higher bands. Results are also shown for a 128-kbps coder that is based on an unbalanced tree-structured vector quantizer (UTSVQ) for the lowest frequency band and GVQ for the higher frequency bands. The results are competitive with traditional video coding techniques and provide the motivation for investigating the 3-D subband framework for different coding schemes and various applications. PMID:18289965
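As a minimal illustration of the ADPCM idea underlying the 1-Mbps coder, the sketch below implements a generic first-order ADPCM with a simple step-size adaptation rule; it is not the paper's 3-D subband scheme, and the adaptation constants are invented for illustration:

```python
def adpcm_encode(samples, step=1.0, levels=8):
    """First-order ADPCM sketch: predict each sample as the previous
    reconstructed value, quantize the prediction error with a uniform
    quantizer, and adapt the step size from the transmitted code."""
    codes, pred = [], 0.0
    for x in samples:
        q = round((x - pred) / step)
        q = max(-(levels // 2), min(levels // 2 - 1, q))  # clamp to code range
        codes.append(q)
        pred += q * step                                  # track the decoder's reconstruction
        step *= 1.5 if abs(q) >= levels // 4 else 0.9     # simple step adaptation
    return codes

def adpcm_decode(codes, step=1.0, levels=8):
    out, pred = [], 0.0
    for q in codes:
        pred += q * step
        out.append(pred)
        step *= 1.5 if abs(q) >= levels // 4 else 0.9     # mirror the encoder's adaptation
    return out
```

Because the encoder tracks the decoder's reconstruction, quantization error does not accumulate across samples.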
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
The Case for an Error Minimizing Standard Genetic Code
NASA Astrophysics Data System (ADS)
Freeland, Stephen J.; Wu, Tao; Keulmann, Nick
2003-10-01
Since discovering the pattern by which amino acids are assigned to codons within the standard genetic code, investigators have explored the idea that natural selection placed biochemically similar amino acids near to one another in coding space so as to minimize the impact of mutations and/or mistranslations. The analytical evidence to support this theory has grown in sophistication and strength over the years, and counterclaims questioning its plausibility and quantitative support have yet to transcend some significant weaknesses in their approach. These weaknesses are illustrated here by means of a simple simulation model for adaptive genetic code evolution. There remain ill explored facets of the `error minimizing' code hypothesis, however, including the mechanism and pathway by which an adaptive pattern of codon assignments emerged, the extent to which natural selection created synonym redundancy, its role in shaping the amino acid and nucleotide languages, and even the correct interpretation of the adaptive codon assignment pattern: these represent fertile areas for future research.
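The core of such an analysis, comparing the mutation cost of a codon assignment against randomized alternatives, can be sketched on a toy two-letter code. The property values and block assignment below are invented for illustration; they are not the published model:

```python
import itertools
import random

def code_error_cost(assignment, values):
    """Mean squared change in an amino-acid property over all
    single-letter mutations of every codon, for a toy two-letter code."""
    bases = "ACGU"
    total, n = 0.0, 0
    for codon, aa in assignment.items():
        for pos in range(len(codon)):
            for b in bases:
                if b != codon[pos]:
                    mutant = codon[:pos] + b + codon[pos + 1:]
                    total += (values[aa] - values[assignment[mutant]]) ** 2
                    n += 1
    return total / n

bases = "ACGU"
codons = ["".join(p) for p in itertools.product(bases, repeat=2)]
# block code: the first letter alone decides the "amino acid"
blocked = {c: c[0] for c in codons}
# invented property values (a stand-in for something like polar requirement)
values = {"A": 1.0, "C": 2.0, "G": 3.0, "U": 4.0}

random.seed(0)
shuffled = dict(zip(codons, random.sample(list(blocked.values()), len(codons))))
```

The blocked assignment makes half of all single-letter mutations synonymous, so its cost sits well below that of a typical shuffled code; the debate the abstract refers to concerns how such comparisons are set up, not this arithmetic.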
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
One Hidden Object, Two Spatial Codes: Young Children's Use of Relational and Vector Coding
ERIC Educational Resources Information Center
Uttal, David H.; Sandstrom, Lisa B.; Newcombe, Nora S.
2006-01-01
An important characteristic of mature spatial cognition is the ability to encode spatial locations in terms of relations among landmarks as well as in terms of vectors that include distance and direction. In this study, we examined children's use of the relation "middle" to code the location of a hidden toy, using a procedure adapted from prior…
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. These algorithms are broadly applicable to practical problems because most real data sources can be transformed into this form by appropriate preprocessing. The algorithms have exhibited performance only slightly above the entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably below the measured average data entropy may be observed when data characteristics are changing over the measurement span.
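The adaptive techniques in this line of work build on Golomb-Rice codes. A minimal sketch of that primitive is below (the full adaptive coder additionally selects among code options, e.g. the parameter k, per block of data):

```python
def rice_encode(values, k):
    """Golomb-Rice code sketch for non-negative integers: each value n is
    split as q = n >> k (sent in unary) and r = n mod 2**k (sent as k
    binary bits), so small values get short codewords, matching sources
    whose symbol probabilities decay with magnitude."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")                 # unary quotient, 0-terminated
        bits.append(format(r, f"0{k}b") if k else "")
    return "".join(bits)

def rice_decode(bitstring, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":                 # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                                     # skip the terminating 0
        r = int(bitstring[i:i + k], 2) if k else 0 # read the k-bit remainder
        i += k
        out.append((q << k) | r)
    return out
```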
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1991-01-01
The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first codes to be completed, which are presently being incorporated into the KBS, are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.
Robinson, David; Comp, Dip; Schulz, Erich; Brown, Philip; Price, Colin
1997-01-01
The Read Codes are a hierarchically-arranged controlled clinical vocabulary introduced in the early 1980s and now consisting of three maintained versions of differing complexity. The code sets are dynamic, and are updated quarterly in response to requests from users including clinicians in both primary and secondary care, software suppliers, and advice from a network of specialist healthcare professionals. The codes' continual evolution of content, both across and within versions, highlights tensions between different users and uses of coded clinical data. Internal processes, external interactions and new structural features implemented by the NHS Centre for Coding and Classification (NHSCCC) for user interactive maintenance of the Read Codes are described, and over 2000 items of user feedback episodes received over a 15-month period are analysed. PMID:9391934
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
Phonological coding during reading
Leinenger, Mallorie
2014-01-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1996-01-01
This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife-to-knife) labyrinth seal code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals, and DYSEAL provides dynamics for the seal geometry.
NASA Astrophysics Data System (ADS)
Bravyi, Sergey
Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLG) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLG are highly desirable since they introduce no overhead and do not spread errors. It was known previously that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π/2 phase shift. The second code, which we call a doubled color code, provides a transversal T-gate, where T is the π/4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on a joint work with Andrew Cross.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically Cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically Cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
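The refinement idea itself can be illustrated with a toy 1-D sketch: a recursive cell-splitting criterion, not the PARAMESH API, which manages 2-D/3-D block-structured grids in parallel. The error indicator here (function variation across a cell) is an invented stand-in:

```python
def refine_mesh(cells, f, tol, max_level=5):
    """Toy 1-D adaptive mesh refinement: split any cell (a, b, level)
    where f varies by more than `tol` across it, up to max_level."""
    out = []
    for (a, b, level) in cells:
        if level < max_level and abs(f(b) - f(a)) > tol:
            mid = 0.5 * (a + b)
            # recursively refine the two child cells
            out += refine_mesh([(a, mid, level + 1), (mid, b, level + 1)],
                               f, tol, max_level)
        else:
            out.append((a, b, level))
    return out

# cells cluster where the function varies fastest (near x = 1 for x**8)
mesh = refine_mesh([(0.0, 1.0, 0)], lambda x: x ** 8, tol=0.1)
```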
Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
NASA Technical Reports Server (NTRS)
1988-01-01
American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed anodized-aluminum process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700-degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.
Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.
1985-03-01
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
NASA Astrophysics Data System (ADS)
Odriozola, Iñigo; Lazkano, Elena; Sierra, Basi
2011-10-01
This paper investigates the improvement of the Vector Field Histogram (VFH) local planning algorithm for mobile robot systems. The Adaptive Vector Field Histogram (AVFH) algorithm has been developed to improve the effectiveness of the traditional VFH path planning algorithm by overcoming the side effects of using static parameters. This new algorithm permits the adaptation of planning parameters for the different types of areas in an environment. Genetic Algorithms are used to fit the best VFH parameters to each type of sector and, afterwards, every section in the map is labelled with the sector type which best represents it. The Player/Stage simulation platform was chosen for carrying out all sorts of tests and for proving the new algorithm's adequacy. Even though there is still much work to be carried out, the developed algorithm showed good navigation properties and turned out to be smoother and more effective than the traditional VFH algorithm.
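A bare-bones version of the VFH steering step the paper adapts might look like this. It is a generic sketch of the classic algorithm; AVFH's contribution is tuning parameters such as the threshold per region type with a genetic algorithm:

```python
def vfh_steer(histogram, target_sector, threshold):
    """One VFH steering decision: sectors whose obstacle density falls
    below `threshold` are candidate headings; choose the free sector
    with the smallest circular distance to the target direction."""
    n = len(histogram)
    free = [i for i, density in enumerate(histogram) if density < threshold]
    if not free:
        return None                        # fully blocked: no safe heading
    return min(free, key=lambda i: min((i - target_sector) % n,
                                       (target_sector - i) % n))
```

A static `threshold` is exactly the kind of parameter whose ideal value differs between, say, cluttered rooms and open corridors, which motivates the adaptive variant.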
Research on universal combinatorial coding.
Lu, Jun; Zhang, Zhuo; Mo, Juan
2014-01-01
The conception of universal combinatorial coding is proposed. Many coding methods are related to one another, which suggests that a universal coding method objectively exists and can serve as a bridge connecting them. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability statistics of the information source, and it has characteristics spanning all three coding branches. The relationship between universal combinatorial coding and a variety of coding methods is analyzed, and several application technologies of this coding method are investigated. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. This combination of characteristics and applications is unique among existing coding methods. Universal combinatorial coding has both theoretical research and practical application value. PMID:24772019
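One concrete combinatorial primitive in this family is enumerative coding, which maps a fixed-weight binary word to its index among all words of that weight, independent of any source probabilities. The sketch below is the standard combinadic construction, not necessarily the authors' exact scheme:

```python
from math import comb

def rank_combination(positions):
    """Colexicographic index of a k-subset of {0, 1, ...} among all
    k-subsets: rank = sum over i of C(p_i, i+1) for sorted positions p_i."""
    return sum(comb(p, i + 1) for i, p in enumerate(sorted(positions)))

def unrank_combination(r, k):
    """Inverse mapping: greedily recover the largest element first."""
    positions = []
    for i in range(k, 0, -1):
        p = i - 1
        while comb(p + 1, i) <= r:         # find the largest p with C(p, i) <= r
            p += 1
        r -= comb(p, i)
        positions.append(p)
    return sorted(positions)
```

Because the rank/unrank pair is a bijection, the index is a lossless code for the word, achieving exactly log2 C(n, k) bits.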
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable-rate memory clock corresponding to the analyzed frequency content of the data in each frequency region and for clocking the data into the memory in response to the variable-rate memory clock.
Watson, Bobby L.; Aeby, Ian
1982-01-01
An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
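The variable-rate idea behind both claims can be sketched as follows, using a first-difference activity measure as a stand-in for the patent's digital filter bank and simple decimation as a stand-in for the variable memory clock; the window size and thresholds are invented for illustration:

```python
def adaptive_decimate(data, window=8):
    """Sketch of adaptive-rate storage: measure local high-frequency
    content per window (mean absolute first difference) and keep samples
    at a high rate where the signal is busy, a low rate where it is quiet."""
    kept = []
    for start in range(0, len(data) - window + 1, window):
        block = data[start:start + window]
        activity = sum(abs(block[i + 1] - block[i])
                       for i in range(window - 1)) / (window - 1)
        step = 1 if activity > 0.5 else 4   # variable "memory clock" rate
        kept.extend(block[::step])
    return kept
```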
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal-to-noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe cancellor technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
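The Wiener-Hopf / sample-matrix-inversion solution mentioned above can be illustrated for a two-element array; this is a textbook sketch with a hand-coded 2x2 complex inverse, not any particular system's implementation:

```python
def smi_weights(R, s):
    """Optimum (Wiener) weights for a 2-element adaptive array by direct
    sample-matrix inversion: w = R^-1 s, where R is the estimated
    noise-plus-interference covariance and s is the wanted-signal
    steering vector."""
    (a, b), (c, d) = R
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # rows of R^-1
    return [inv[0][0] * s[0] + inv[0][1] * s[1],
            inv[1][0] * s[0] + inv[1][1] * s[1]]

# white noise only: the weights reduce to the steering vector (matched filter)
w_quiet = smi_weights(((1 + 0j, 0j), (0j, 1 + 0j)), [1 + 0j, 1j])

# strong jammer with steering vector [1, 1]: R = I + 10 * j j^H
w_jam = smi_weights(((11 + 0j, 10 + 0j), (10 + 0j, 11 + 0j)), [1 + 0j, -1 + 0j])
```

In the jammer case the weights place a null on the interference direction while preserving gain toward the wanted signal, which is the behavior the Wiener-Hopf expression guarantees.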
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents innovations to the coding tree that further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. First, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Second, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed mechanism improves coding performance under various application conditions. PMID:26999741
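The probability-model-plus-update idea can be sketched generically: estimate P(split) per (depth, QP) context from past decisions and skip the costly split search when that probability is low, resetting when content changes. This is a simple Laplace-smoothed counting model invented for illustration; the paper's actual model and thresholds are more elaborate:

```python
def should_test_split(depth, qp, content_change, history):
    """Fast CU decision sketch: skip evaluating the split of a coding
    unit when the estimated split probability for this (depth, QP)
    context is low. A content change invalidates the learned model."""
    seen, split = history.get((depth, qp), (0, 0))
    p_split = (split + 1) / (seen + 2)      # Laplace-smoothed estimate
    if content_change:                      # model distortion: fall back to prior
        p_split = 0.5
    return p_split > 0.1

def record_decision(depth, qp, did_split, history):
    seen, split = history.get((depth, qp), (0, 0))
    history[(depth, qp)] = (seen + 1, split + int(did_split))
```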
ERIC Educational Resources Information Center
Division for Early Childhood, Council for Exceptional Children, 2009
2009-01-01
The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…
Lichenase and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2000-08-15
The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-.beta.-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach that simultaneously generates, from a high-level specification, both the code and all annotations required to certify it. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Efficient sensory cortical coding optimizes pursuit eye movements
Liu, Bing; Macellaio, Matthew V.; Osborne, Leslie C.
2016-01-01
In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214
Efficient sensory cortical coding optimizes pursuit eye movements.
Liu, Bing; Macellaio, Matthew V; Osborne, Leslie C
2016-01-01
In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214
Combustion chamber analysis code
NASA Astrophysics Data System (ADS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-05-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Energy Conservation Code Decoded
Cole, Pam C.; Taylor, Zachary T.
2006-09-01
Designing an energy-efficient, affordable, and comfortable home is a lot easier thanks to a slim, easier-to-read booklet, the 2006 International Energy Conservation Code (IECC), published in March 2006. States, counties, and cities have begun reviewing the new code as a potential upgrade to their existing codes. Maintained under the public consensus process of the International Code Council, the IECC is designed to do just what its title says: promote the design and construction of energy-efficient homes and commercial buildings. "Homes" in this case means traditional single-family homes, duplexes, condominiums, and apartment buildings having three or fewer stories. The U.S. Department of Energy, which played a key role in proposing the changes that resulted in the new code, is offering a free training course that covers the residential provisions of the 2006 IECC.
OHAMA, Takeshi; INAGAKI, Yuji; BESSHO, Yoshitaka; OSAWA, Syozo
2008-01-01
In 1985, we reported that a bacterium, Mycoplasma capricolum, used a deviant genetic code, namely UGA, a “universal” stop codon, was read as tryptophan. This finding, together with the deviant nuclear genetic codes in not a few organisms and a number of mitochondria, shows that the genetic code is not universal, and is in a state of evolution. To account for the changes in codon meanings, we proposed the codon capture theory stating that all the code changes are non-disruptive without accompanied changes of amino acid sequences of proteins. Supporting evidence for the theory is presented in this review. A possible evolutionary process from the ancient to the present-day genetic code is also discussed. PMID:18941287
Physics and numerics of the tensor code (incomplete preliminary documentation)
Burton, D.E.; Lettis, L.A. Jr.; Bryan, J.B.; Frary, N.R.
1982-07-15
The present TENSOR code is a descendant of a code originally conceived by Maenchen and Sack and later adapted by Cherry. Originally, the code was a two-dimensional Lagrangian explicit finite difference code which solved the equations of continuum mechanics. Since then, implicit and arbitrary Lagrange-Euler (ALE) algorithms have been added. The code has been used principally to solve problems involving the propagation of stress waves through earth materials, and considerable development of rock and soil constitutive relations has been done. The code has been applied extensively to the containment of underground nuclear tests, nuclear and high explosive surface and subsurface cratering, and energy and resource recovery. TENSOR is supported by a substantial array of ancillary routines. The initial conditions are set up by a generator code, TENGEN. ZON is a multipurpose code which can be used for zoning, rezoning, overlaying, and linking from other codes. Linking from some codes is facilitated by another code, RADTEN. TENPLT is a fixed-time graphics code which provides a wide variety of plotting options and output devices, and which is capable of producing computer movies by postprocessing problem dumps. Time history graphics are provided by the TIMPLT code from temporal dumps produced during production runs. While TENSOR can be run as a stand-alone controllee, a special controller code, TCON, is available to better interface the code with the LLNL computer system during production jobs. In order to standardize compilation procedures and provide quality control, a special compiler code, BC, is used. A number of equation-of-state generators are available, among them ROC and PMUGEN.
CFD code evaluation for internal flow modeling
NASA Technical Reports Server (NTRS)
Chung, T. J.
1990-01-01
Research on the computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, researchers include applications of supercomputing to reacting flow Navier-Stokes equations including shock waves and turbulence and combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations are not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications on rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
Nelson, R.N.
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute standard ANSI Z39.23-1983, Standard Technical Report Number (STRN): Format and Creation. The STRN provides one of the primary methods of identifying a specific technical report. It consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report-issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report code, followed by the issuing installation. Part II lists the issuing organization followed by its assigned report code(s). In both parts, the names of issuing organizations appear for the most part in the form used at the time the reports were issued; however, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
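The two-part STRN structure described above lends itself to a trivial parser. The sketch below assumes the report code and sequential number are joined by the last hyphen, as in "ORNL-4972"; real STRNs vary in separator conventions, so this is an illustration of the structure, not a complete Z39.23 implementation:

```python
def split_strn(strn: str) -> tuple[str, str]:
    """Split a Standard Technical Report Number into its two parts:
    the report code (issuing organization, program, or document type)
    and the sequential number assigned by the issuing entity.

    Assumption: the last hyphen separates the two parts.
    """
    head, sep, tail = strn.rpartition("-")
    if not sep:
        raise ValueError(f"no separator in STRN: {strn!r}")
    return head.rstrip("-"), tail

print(split_strn("ORNL-4972"))      # ('ORNL', '4972')
print(split_strn("LA-UR-83-1234"))  # ('LA-UR-83', '1234')
```

Note that the report code itself may contain hyphens (as in the second example), which is why the split is taken from the right.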
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
NASA Technical Reports Server (NTRS)
1991-01-01
In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Cramer, S.N.
1984-01-01
The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P/sub 1/ scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218 group set is distributed with the code) and has a general P/sub N/ scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k/sub eff/, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes.
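The quantity KENO-V estimates, the multiplication factor, can be illustrated in a deliberately minimal setting: a one-group, infinite homogeneous medium, where k_inf = nu*sigma_f/sigma_a has a closed form to check against. The cross sections below are invented; a real multigroup criticality code also tracks scattering, leakage, and fission-site generations, none of which this sketch does:

```python
import random

rng = random.Random(7)

# One-group, infinite homogeneous medium (made-up macroscopic cross sections).
sigma_f, sigma_c, nu = 0.05, 0.03, 2.5   # fission, capture, neutrons per fission
sigma_a = sigma_f + sigma_c              # total absorption

# Analytic k_inf for comparison.
k_exact = nu * sigma_f / sigma_a         # 2.5 * 0.05 / 0.08 = 1.5625

# Analog Monte Carlo: each absorbed neutron ends in fission with
# probability sigma_f/sigma_a, producing nu new neutrons on average.
n = 200_000
score = 0.0
for _ in range(n):
    if rng.random() < sigma_f / sigma_a:
        score += nu
k_mc = score / n
print(abs(k_mc - k_exact) < 0.05)   # True within Monte Carlo statistics
```

The statistical error shrinks as 1/sqrt(n), which is why variance-reduction (biasing) techniques matter for the harder geometries the abstract mentions.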
RBMK-LOCA-Analyses with the ATHLET-Code
Petry, A.; Domoradov, A.; Finjakin, A.
1995-09-01
The scientific technical cooperation between Germany and Russia includes the area of adaptation of several German codes for the Russian-designed RBMK reactor. One point of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) for RBMK-specific safety problems. This paper contains a short description of an RBMK-1000 reactor circuit. Furthermore, the main features of the thermal-hydraulic code ATHLET are presented. The main assumptions of the ATHLET RBMK model are discussed. As an example application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper concludes with an overview of some of the problems posed, and the experience gained, in applying Western best-estimate codes to RBMK calculations.
Coded aperture compressive temporal imaging.
Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J
2013-05-01
We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.
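The measurement model behind this scheme is simple: each video frame is modulated by a shifted copy of one binary code, and all modulated frames integrate into a single snapshot. The sketch below uses an invented toy scene and a one-pixel-per-frame vertical shift to mimic the mechanical translation; reconstruction of the frames from the snapshot is the (omitted) hard part:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy video: T frames of an 8x8 scene (values are hypothetical).
T, H, W = 4, 8, 8
video = rng.random((T, H, W))

# A single binary mask, shifted vertically one pixel per frame to mimic
# mechanical translation of the coded aperture.
mask = (rng.random((H, W)) > 0.5).astype(float)
codes = np.stack([np.roll(mask, t, axis=0) for t in range(T)])

# One coded snapshot integrates all T frames, each modulated by its code.
snapshot = (codes * video).sum(axis=0)
print(snapshot.shape)   # (8, 8): T frames compressed into one measurement
```

Because each frame sees a different code, the temporal information is multiplexed rather than simply averaged away, which is what makes the >10-frames-per-snapshot reconstructions reported above possible.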
Temporal Coding of Volumetric Imagery
NASA Astrophysics Data System (ADS)
Llull, Patrick Ryan
of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x, y, z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode
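The encoding steps described above (draw a degree from the Robust Soliton distribution, select that many information symbols, XOR them) can be sketched compactly. The priority restriction here is a crude stand-in: code symbols of degree at most 2 draw only from the high-priority pool. The block size, degree threshold, and distribution parameters are invented, not the paper's:

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton degree distribution over degrees 1..k."""
    s = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * k
    for d in range(1, k + 1):
        if d < int(k / s):
            tau[d - 1] = s / (k * d)
        elif d == int(k / s):
            tau[d - 1] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)
    return [(r + t) / z for r, t in zip(rho, tau)]

def lt_encode(symbols, n_out, n_high, rng):
    """Generate n_out LT code symbols from a message block.

    Prioritisation sketch: symbols of small degree draw their neighbours
    from the first n_high (high-priority) positions; larger degrees draw
    uniformly from the whole block, as the abstract describes.
    """
    k = len(symbols)
    dist = robust_soliton(k)
    out = []
    for _ in range(n_out):
        d = rng.choices(range(1, k + 1), weights=dist)[0]
        pool = range(n_high) if (d <= 2 and n_high >= d) else range(k)
        idx = rng.sample(list(pool), d)
        sym = 0
        for i in idx:          # XOR the chosen information symbols
            sym ^= symbols[i]
        out.append((idx, sym))
    return out

rng = random.Random(42)
msg = [rng.randrange(256) for _ in range(16)]
coded = lt_encode(msg, 40, n_high=4, rng=rng)
print(len(coded))   # 40 code symbols
```

Since the change is confined to neighbour selection, a standard LT decoder (peeling off degree-one symbols) remains unchanged, matching the abstract's claim of no decoder modification.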
Induction technology optimization code
Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.
1992-08-21
A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. The Induction Technology Optimization Study (ITOS) was undertaken to examine viable combinations of a linear induction accelerator and a relativistic klystron (RK) for high power microwave production. It is proposed that microwaves from the RK will power a high-gradient accelerator structure for linear collider development. Previous work indicates that the RK will require a nominal 3-MeV, 3-kA electron beam with a 100-ns flat top. The proposed accelerator-RK combination will be a high average power system capable of sustained microwave output at a 300-Hz pulse repetition frequency. The ITOS code models many combinations of injector, accelerator, and pulse power designs that will supply an RK with the beam parameters described above.
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
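The last step mentioned, computing the MTF from the line spread function, amounts to taking the magnitude of the LSF's Fourier transform and normalising it to unity at zero frequency. The Gaussian LSF and its width below are assumed purely for illustration; the abstract's LSF comes from a simulated tilted edge:

```python
import numpy as np

# Hypothetical line spread function: Gaussian blur of assumed width.
n = 256
x = np.arange(n) - n / 2
sigma = 2.0                     # assumed blur width in pixels
lsf = np.exp(-0.5 * (x / sigma) ** 2)
lsf /= lsf.sum()                # normalise area to 1

# MTF = |Fourier transform of the LSF|, normalised to 1 at DC.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq = np.fft.rfftfreq(n)       # spatial frequency, cycles per pixel

print(round(float(mtf[0]), 3))  # 1.0: unity at zero spatial frequency
```

Because the LSF is nonnegative, the MTF can never exceed its DC value, and a wider blur (larger sigma) makes it roll off faster, which is how hole diameter maps to resolution in the results above.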
Lossless coding using predictors and VLCs optimized for each image
NASA Astrophysics Data System (ADS)
Matsuda, Ichiro; Shirai, Noriyuki; Itoh, Susumu
2003-06-01
This paper proposes an efficient lossless coding scheme for still images. The scheme utilizes an adaptive prediction technique where a set of linear predictors are designed for a given image and an appropriate predictor is selected from the set block-by-block. The resulting prediction errors are encoded using context-adaptive variable-length codes (VLCs). Context modeling, or adaptive selection of VLCs, is carried out pel-by-pel and the VLC assigned to each context is designed on a probability distribution model of the prediction errors. In order to improve coding efficiency, a generalized Gaussian function is used as the model for each context. Moreover, not only the predictors but also the parameters of the probability distribution models are iteratively optimized for each image so that the coding rate of the prediction errors is minimized. Experimental results show that the proposed coding scheme attains comparable coding performance to the state-of-the-art TMW scheme with much lower complexity in the decoding process.
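The block-by-block predictor selection described above can be sketched in one dimension: try each candidate predictor on a block and keep the one with the smallest residual cost. The two fixed predictors and the toy signal below are invented stand-ins for the paper's per-image optimised linear predictors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D "image row" (values are hypothetical).
signal = np.cumsum(rng.integers(-2, 3, size=64)) + 128

# A small set of fixed linear predictors of previous samples, standing in
# for the per-image designed predictors of the paper.
predictors = {
    "prev":  lambda s, i: s[i - 1],              # previous-sample predictor
    "slope": lambda s, i: 2 * s[i - 1] - s[i - 2],  # linear extrapolation
}

def best_predictor(block_start, block_end, s):
    """Pick, per block, the predictor with the smallest absolute residual sum."""
    costs = {}
    for name, p in predictors.items():
        costs[name] = sum(abs(int(s[i]) - int(p(s, i)))
                          for i in range(block_start, block_end))
    return min(costs, key=costs.get)

choice = best_predictor(2, 64, signal)
print(choice in predictors)   # True: one predictor selected for the block
```

In the full scheme the resulting residuals would then be entropy-coded with the context-adaptive VLCs; only the selection index per block needs to be sent as side information.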
NASA Astrophysics Data System (ADS)
Vaucouleur, Sebastien
2011-02-01
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but rather domain experts. Hence, they require a simple language to express custom rules.
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
Seals Code Development Workshop
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)
1996-01-01
The 1995 Seals Workshop industrial code (INDSEAL) release includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and results compared. Current computational and experimental emphasis includes multiple connected cavity flows with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.
Autocatalysis, information and coding.
Wills, P R
2001-01-01
Autocatalytic self-construction in macromolecular systems requires the existence of a reflexive relationship between structural components and the functional operations they perform to synthesise themselves. The possibility of reflexivity depends on formal, semiotic features of the catalytic structure-function relationship, that is, the embedding of catalytic functions in the space of polymeric structures. Reflexivity is a semiotic property of some genetic sequences. Such sequences may serve as the basis for the evolution of coding as a result of autocatalytic self-organisation in a population of assignment catalysts. Autocatalytic selection is a mechanism whereby matter becomes differentiated in primitive biochemical systems. In the case of coding self-organisation, it corresponds to the creation of symbolic information. Prions are present-day entities whose replication through autocatalysis reflects aspects of biological semiotics less obvious than genetic coding.
Embedded multiple description coding of video.
Verdicchio, Fabio; Munteanu, Adrian; Gavrilescu, Augustin I; Cornelis, Jan; Schelkens, Peter
2006-10-01
Real-time delivery of video over best-effort error-prone packet networks requires scalable erasure-resilient compression systems in order to 1) meet the users' requirements in terms of quality, resolution, and frame-rate; 2) dynamically adapt the rate to the available channel capacity; and 3) provide robustness to data losses, as retransmission is often impractical. Furthermore, the employed erasure-resilience mechanisms should be scalable in order to adapt the degree of resiliency against transmission errors to the varying channel conditions. Driven by these constraints, we propose in this paper a novel design for scalable erasure-resilient video coding that couples the compression efficiency of the open-loop architecture with the robustness provided by multiple description coding. In our approach, scalability and packet-erasure resilience are jointly provided via embedded multiple description scalar quantization. Furthermore, a novel channel-aware rate-allocation technique is proposed that allows for shaping on-the-fly the output bit rate and the degree of resiliency without resorting to channel coding. As a result, robustness to data losses is traded for better visual quality when transmission occurs over reliable channels, while erasure resilience is introduced when noisy links are involved. Numerical results clearly demonstrate the advantages of the proposed approach over equivalent codec instantiations employing 1) no erasure-resilience mechanisms, 2) erasure-resilience with nonscalable redundancy, or 3) data-partitioning principles.
Code inspection instructional validation
NASA Technical Reports Server (NTRS)
Orr, Kay; Stancil, Shirley
1992-01-01
The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option
Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik
2004-10-01
If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in depth analysis of a technique called 'white-boxing'. We put forth some new attacks and improvements
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
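As a rough illustration of the idea (not code from the article), the sketch below quantizes a Gaussian source to fixed-length 3-bit codewords and compares the per-symbol entropy with the rate an arithmetic coder would achieve if it modeled each codeword bit position independently; the function name and parameters are invented for this sketch.

```python
import math
import random

def bit_rate_vs_entropy(samples, nbits=3):
    """Quantize samples uniformly to 2**nbits levels, then compare the
    per-symbol entropy with the rate of a coder that models each
    codeword bit position independently."""
    lo, hi = min(samples), max(samples)
    levels = 2 ** nbits
    step = (hi - lo) / levels or 1.0
    codes = [min(int((x - lo) / step), levels - 1) for x in samples]
    n = len(codes)

    # Empirical entropy of the codeword (joint distribution of its bits).
    freq = {}
    for c in codes:
        freq[c] = freq.get(c, 0) + 1
    h_joint = -sum(f / n * math.log2(f / n) for f in freq.values())

    # Achievable rate when each bit position is modeled independently.
    h_bits = 0.0
    for b in range(nbits):
        p1 = sum((c >> b) & 1 for c in codes) / n
        for p in (p1, 1.0 - p1):
            if p > 0:
                h_bits -= p * math.log2(p)
    return h_joint, h_bits

random.seed(1)
h_joint, h_bits = bit_rate_vs_entropy([random.gauss(0, 1) for _ in range(20000)])
```

Because the product of the bit marginals cannot have lower entropy than the joint codeword distribution, `h_bits` upper-bounds `h_joint`; the gap is the price paid for treating the bits as independent.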
Block adaptive rate controlled image data compression
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.
1979-01-01
A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
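The abstract does not name its coder, but Rice codes are a standard example of the practical universal noiseless coding techniques it mentions; the sketch below is a minimal Rice coder with an assumed parameter `k` (quotient in unary, remainder in `k` bits).

```python
def rice_encode(n, k):
    """Rice code of a nonnegative integer n with parameter k:
    quotient n >> k in unary (q ones, then a zero), remainder in k bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, '0%db' % k) if k else ''
    return '1' * q + '0' + rem

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = 0
    while bits[q] == '1':
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, `rice_encode(9, 2)` splits 9 into quotient 2 and remainder 1, giving the bit string `11001`; block-adaptive coders of this family pick `k` per block to track the local source statistics.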
A multi-grid code for 3-D transonic potential flow about axisymmetric inlets at angle of attack
NASA Technical Reports Server (NTRS)
Mccarthy, D. R.; Reyhner, T. A.
1980-01-01
In the present work, an existing transonic potential code is adapted to utilize the Multiple Level Adaptive technique proposed by A. Brandt. It is shown that order of magnitude improvements in speed and greatly improved accuracy over the unmodified code are achieved. Consideration is given to the difficulties of multi-grid programming, and possible future applications are surveyed.
Codes with Monotonic Codeword Lengths.
ERIC Educational Resources Information Center
Abrahams, Julia
1994-01-01
Discusses the minimum average codeword length coding under the constraint that the codewords are monotonically nondecreasing in length. Bounds on the average length of an optimal monotonic code are derived, and sufficient conditions are given such that algorithms for optimal alphabetic codes can be used to find the optimal monotonic code. (six…
Adaptive GOP structure based on motion coherence
NASA Astrophysics Data System (ADS)
Ma, Yanzhuo; Wan, Shuai; Chang, Yilin; Yang, Fuzheng; Wang, Xiaoyu
2009-08-01
An adaptive Group of Pictures (GOP) structure is helpful for increasing the efficiency of video encoding by taking into account the characteristics of the video content. This paper proposes a method for adaptive GOP structure selection for video encoding based on motion coherence, which extracts key frames according to motion acceleration and assigns a coding type to each key and non-key frame correspondingly. Motion deviation is then used instead of motion magnitude in selecting the number of B frames. Experimental results show that the proposed method for adaptive GOP structure selection achieves a performance gain of 0.2-1 dB over a fixed GOP structure and has the advantage of better transmission resilience. Moreover, this method can be used in real-time video coding due to its low complexity.
Parallel Adaptive Multi-Mechanics Simulations using Diablo
Parsons, D; Solberg, J
2004-12-03
Coupled multi-mechanics simulations (such as thermal-stress and fluidstructure interaction problems) are of substantial interest to engineering analysts. In addition, adaptive mesh refinement techniques present an attractive alternative to current mesh generation procedures and provide quantitative error bounds that can be used for model verification. This paper discusses spatially adaptive multi-mechanics implicit simulations using the Diablo computer code. (U)
Fast mode decision algorithm for scalable video coding based on luminance coded block pattern
NASA Astrophysics Data System (ADS)
Kim, Tae-Jung; Yoo, Jeong-Ju; Hong, Jin-Woo; Suh, Jae-Won
2013-01-01
A fast mode decision algorithm is proposed to reduce the computation complexity of the adaptive inter layer prediction method, a motion estimation algorithm for video compression in scalable video coding (SVC) encoder systems. SVC is standardized as an extension of H.264/AVC to provide multimedia services within variable transport environments and across various terminal systems. SVC supports adaptive inter mode prediction, which includes not only the temporal prediction modes with varying block sizes but also inter layer prediction modes based on correlation between the lower layer information and the current layer. To achieve high coding efficiency, a rate distortion optimization technique is employed to select the best coding mode and reference frame for each MB. As a result, the performance gains of SVC come with increased computational complexity. To overcome this problem, we propose a fast mode decision based on the coded block pattern (CBP) of the 16×16 mode and the reference block of the best CBP. The experimental results in SVC with a combined scalability structure show that the proposed algorithm achieves up to an average 61.65% speedup factor in the encoding time with a negligible bit increment and a minimal image quality loss. In addition, experimental results in spatial and quality scalability show that the computational complexity has been reduced by about 55.32% and 52.69%, respectively.
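A minimal sketch of a CBP-based early termination, with invented names and a deliberately simplified rule (the paper's actual decision also uses the reference block of the best CBP): if the luminance CBP of the 16×16 mode is zero, the residual carries no coded coefficients and the search over smaller partitions can be skipped.

```python
def fast_mode_decision(cbp_16x16, candidate_modes):
    """If the luminance coded block pattern of the 16x16 mode is zero,
    the motion-compensated residual carries no coded coefficients, so
    the costly rate-distortion search over smaller partitions is skipped."""
    if cbp_16x16 == 0:
        return ['16x16']          # early termination: keep the large partition
    return candidate_modes        # otherwise evaluate the full candidate set
```

The speedup comes from how often natural video yields zero-CBP macroblocks after 16×16 motion compensation; the risk is a small rate-distortion loss when a finer partition would in fact have been cheaper.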
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.
Electrical Circuit Simulation Code
Wix, Steven D.; Waters, Arlon J.; Shirley, David
2001-08-09
Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. Large-scale electronic circuit simulation. Shared memory, parallel processing, enhanced convergence. Sandia-specific device models.
NASA Astrophysics Data System (ADS)
Ninio, Jacques
1990-03-01
Recent findings on the genetic code are reviewed, including selenocysteine usage, deviations in the assignments of sense and nonsense codons, RNA editing, natural ribosomal frameshifts and non-orthodox codon-anticodon pairings. A multi-stage codon reading process is presented.
ERIC Educational Resources Information Center
Burton, John K.; Wildman, Terry M.
The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…
ERIC Educational Resources Information Center
Lumsden, Linda; Miller, Gabriel
2002-01-01
Students do not always make choices that adults agree with in their choice of school dress. Dress-code issues are explored in this Research Roundup, and guidance is offered to principals seeking to maintain a positive school climate. In "Do School Uniforms Fit?" Kerry White discusses arguments for and against school uniforms and summarizes the…
ERIC Educational Resources Information Center
Association of College Unions-International, Bloomington, IN.
The code of ethics for the college union and student activities professional is presented by the Association of College Unions-International. The preamble identifies the objectives of the college union as providing campus community centers and social programs that enhance the quality of life for members of the academic community. Ethics for…
NASA Astrophysics Data System (ADS)
Hayashi, Kenshi
Odor is one of the important sensing parameters for human life. However, odor has not been quantified by a measuring instrument because of its vagueness. In this paper, the measurement of odor based on odor coding, in which odors are represented as vector quantities of plural odor molecular information, and its applications are described.
ERIC Educational Resources Information Center
Olsen, Florence
2003-01-01
Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)
Building Codes and Regulations.
ERIC Educational Resources Information Center
Fisher, John L.
The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)
MAGEE,GLEN I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
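As an illustration of the encoding step (not the AURA project's optimized code), the sketch below implements a systematic Reed-Solomon encoder over GF(2^8) with the commonly used primitive polynomial 0x11d; the parity symbols are the remainder of the message polynomial times x^nsym divided by the generator polynomial.

```python
# GF(2^8) log/antilog tables for the primitive polynomial 0x11d.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
_x = 1
for _i in range(255):
    GF_EXP[_i] = _x
    GF_LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11d
for _i in range(255, 512):
    GF_EXP[_i] = GF_EXP[_i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator_poly(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)); '-' is XOR in GF(2^8)."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: parity = remainder of msg(x)*x^nsym mod g(x)."""
    gen = rs_generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = rem[i]
        if coef:                          # gen[0] == 1, so rem[i] cancels
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]

def gf_poly_eval(p, x):
    """Horner evaluation; a valid codeword is zero at every root of g(x)."""
    y = p[0]
    for c in p[1:]:
        y = gf_mul(y, x) ^ c
    return y
```

Table-driven field arithmetic like this is also the usual starting point for the optimizations the paper describes, since it replaces polynomial multiplication with two lookups and an addition of logarithms.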
ERIC Educational Resources Information Center
American Sociological Association, Washington, DC.
The American Sociological Association's code of ethics for sociologists is presented. For sociological research and practice, 10 requirements for ethical behavior are identified, including: maintaining objectivity and integrity; fully reporting findings and research methods, without omission of significant data; reporting fully all sources of…
Numerical radiative transfer for an SPH code (Transfert radiatif numérique pour un code SPH)
NASA Astrophysics Data System (ADS)
Viau, Joseph Edmour Serge
2001-03-01
The need to reproduce star formation by numerical simulation has become increasingly pressing over the past 30 years. Since Larson (1968), simulation codes have improved continuously. In 1977, Lucy introduced another computational method to compete with grid-based approaches. This new way of computing uses particles instead of grids, which is far better suited to calculations of gravitational collapse. The problem of adding radiative transfer to such a code remained, however. Despite the proposal of Brookshaw (1984), who gave a formula for adding radiative transfer in SPH form while avoiding the troublesome double summation it implies, no SPH code to date has contained a satisfactory radiative transfer treatment. This thesis presents, for the first time, an SPH code equipped with an adequate radiative transfer. All the difficulties were overcome to finally obtain the "true" radiative transfer that occurs in the collapse of a molecular cloud. To verify the integrity of our results, a comparison with the non-isothermal test case of Boss & Myhill (1993) yields a very satisfactory result. Besides faithfully following the curve of central temperature as a function of central density, our code is free of all the anomalies encountered by grid codes. The thermal conduction test case was also used to verify the reliability of our code; there too, the results are very satisfactory. Following these results, the code was used in two real research situations, which allowed us to demonstrate the many possibilities our new code offers. First, we studied the behavior of the temperature in an accretion disk during its evolution. We then partially repeated an experiment of Bonnell
Stereo image coding: a projection approach.
Aydinoğlu, H; Hayes, M H
1998-01-01
Recently, due to advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. This paper focuses on the stereo image coding problem. We begin with a description of the problem and a survey of current stereo coding techniques. A new stereo image coding algorithm that is based on disparity compensation and subspace projection is described. This algorithm, the subspace projection technique (SPT), is a transform domain approach with a space-varying transformation matrix and may be interpreted as a spatial-transform domain representation of the stereo data. The advantage of the proposed approach is that it can locally adapt to the changes in the cross-correlation characteristics of the stereo pairs. Several design issues and implementations of the algorithm are discussed. Finally, we present empirical results suggesting that the SPT approach outperforms current stereo coding techniques. PMID:18276269
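A toy sketch of the disparity-compensation step that precedes the projection (the names and the 1-D search are assumptions, not the paper's algorithm): for a block of the right image, search horizontal shifts into the left image minimizing the sum of absolute differences.

```python
def best_disparity(left, right, x, block=8, max_d=8):
    """For the block of `right` starting at column x, find the horizontal
    shift d into `left` that minimizes the sum of absolute differences."""
    best, best_sad = 0, float('inf')
    for d in range(max_d + 1):
        if x + d + block > len(left):
            break
        sad = sum(abs(left[x + d + k] - right[x + k]) for k in range(block))
        if sad < best_sad:
            best, best_sad = d, sad
    return best
```

Real stereo coders run this over 2-D blocks and then encode only the disparity field plus the compensation residual, which is where the inter-image redundancy the abstract mentions is exploited.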
The new Italian code of medical ethics.
Fineschi, V; Turillazzi, E; Cateni, C
1997-01-01
In June 1995, the Italian code of medical ethics was revised in order that its principles should reflect the ever-changing relationship between the medical profession and society and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often shows up a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline for the study of norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates. PMID:9279746
Point-Kernel Shielding Code System.
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
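For a single isotropic point source, the point-kernel method described above reduces to an exponentially attenuated inverse-square flux scaled by a buildup factor; the sketch below shows that formula with invented parameter names (the flux-to-dose conversion is omitted).

```python
import math

def point_kernel_flux(S, mu, r, buildup=1.0):
    """Photon flux at distance r (cm) from an isotropic point source of
    strength S (photons/s) behind a shield with attenuation coefficient
    mu (1/cm), scaled by an infinite-medium buildup factor."""
    return buildup * S * math.exp(-mu * r) / (4.0 * math.pi * r * r)
```

Multiple-source problems, which the code handles well, simply sum this kernel over all source points at each detector location.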
Adaptation and Adaptability: The Bellefaire Followup Study.
ERIC Educational Resources Information Center
Allerhand, Melvin E.; And Others
A research team studied influences, adaptation, and adaptability in 50 poorly adapting boys at Bellefaire, a regional child care center for emotionally disturbed children. The team attempted to gauge the success of the residential treatment center in terms of the psychological patterns and role performances of the boys during individual casework…
Visual Coding in Locust Photoreceptors
Faivre, Olivier; Juusola, Mikko
2008-01-01
Information capture by photoreceptors ultimately limits the quality of visual processing in the brain. Using conventional sharp microelectrodes, we studied how locust photoreceptors encode random (white-noise, WN) and naturalistic (1/f stimuli, NS) light patterns in vivo and how this coding changes with mean illumination and ambient temperature. We also examined the role of their plasma membrane in shaping voltage responses. We found that brightening or warming increase and accelerate voltage responses, but reduce noise, enabling photoreceptors to encode more information. For WN stimuli, this was accompanied by broadening of the linear frequency range. On the contrary, with NS the signaling took place within a constant bandwidth, possibly revealing a ‘preference’ for inputs with 1/f statistics. The faster signaling was caused by acceleration of the elementary phototransduction current - leading to bumps - and their distribution. The membrane linearly translated phototransduction currents into voltage responses without limiting the throughput of these messages. As the bumps reflected fast changes in membrane resistance, the data suggest that their shape is predominantly driven by fast changes in the light-gated conductance. On the other hand, the slower bump latency distribution is likely to represent slower enzymatic intracellular reactions. Furthermore, the Q10s of bump duration and latency distribution depended on light intensity. Altogether, this study suggests that biochemical constraints imposed upon signaling change continuously as locust photoreceptors adapt to environmental light and temperature conditions. PMID:18478123
Ma, Yong-Tao; Li, Hui; Zeng, Tao
2014-06-07
Four-dimensional ab initio intermolecular potential energy surfaces (PESs) for CH3F–He that explicitly incorporate dependence on the Q3 stretching normal mode of the CH3F molecule, and that are parametrically dependent on the other averaged intramolecular coordinates, have been calculated. Analytical three-dimensional PESs for v3(CH3F) = 0 and 1 are obtained by least-squares fitting the vibrationally averaged potentials to the Morse/Long-Range potential function form. With the 3D PESs, we employ the Lanczos algorithm to calculate rovibrational levels of the dimer system. Following some re-assignments, the predicted transition frequencies are in good agreement with experimental microwave data for ortho-CH3F, with a root-mean-square deviation of 0.042 cm^-1. We then provide the first prediction of the infrared and microwave spectra for the para-CH3F–He dimer. The calculated infrared band origin shifts associated with the nu3 fundamental of CH3F are 0.039 and 0.069 cm^-1 for para-CH3F–He and ortho-CH3F–He, respectively.
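For orientation, the short-range limit of the Morse/Long-Range form is the ordinary Morse potential; the sketch below shows it with arbitrary illustrative parameters (the actual MLR function adds a theoretically correct long-range tail and a radially varying exponent).

```python
import math

def morse(r, De, re, beta):
    """Morse potential V(r) = De * (1 - exp(-beta*(r - re)))**2,
    zero at the equilibrium separation re and approaching De as r grows."""
    return De * (1.0 - math.exp(-beta * (r - re))) ** 2
```

Fitting such an analytic form to vibrationally averaged ab initio points is what turns a grid of computed energies into the smooth 3D surfaces used in the rovibrational calculation.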
ACDOS2: an improved neutron-induced dose rate code
Lagache, J.C.
1981-06-01
To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown, a computer code, ACDOS2, was written which utilizes up-to-date libraries of cross sections and radioisotope decay data. ACDOS2 is written in ANSI FORTRAN IV in order to make it readily adaptable elsewhere.
Adaptive mode-dependent scan for H.264/AVC intracoding
NASA Astrophysics Data System (ADS)
Wei, Yung-Chiang; Yang, Jar-Ferr
2010-07-01
In image/video coding standards, the zigzag scan provides an effective encoding order of the quantized transform coefficients such that the quantized coefficients can be arranged statistically from large to small magnitudes. Generally, the optimal scan should transfer the 2-D transform coefficients into 1-D data in descending order of their average power levels. With the optimal scan order, we can achieve more efficient variable length coding. In H.264 advanced video coding (AVC), the residuals resulting from various intramode predictions have different statistical characteristics. After analyzing the transformed residuals, we propose an adaptive scan order scheme, which optimally matches up with intraprediction mode, to further improve the efficiency of intracoding. Simulation results show that the proposed adaptive scan scheme can improve the context-adaptive variable length coding to achieve better rate-distortion performance for the H.264/AVC video coder without the increase of computation.
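A small sketch of the two scan strategies compared above, with invented helper names: the classic zigzag order for a 4×4 block, and a mode-adaptive order that simply sorts coefficient positions by the average power observed for a given intraprediction mode.

```python
def zigzag_order(n=4):
    """Classic zigzag scan order for an n x n block of transform coefficients."""
    idx = [(i, j) for i in range(n) for j in range(n)]
    return sorted(idx, key=lambda ij: (ij[0] + ij[1],
                                       ij[0] if (ij[0] + ij[1]) % 2 else -ij[0]))

def adaptive_order(power, n=4):
    """Mode-dependent scan: positions sorted by decreasing average power
    of the quantized residual coefficients observed for that mode."""
    idx = [(i, j) for i in range(n) for j in range(n)]
    return sorted(idx, key=lambda ij: -power[ij[0]][ij[1]])
```

When the residual statistics of a prediction mode are not radially symmetric, the power-sorted order front-loads the significant coefficients better than the fixed zigzag, which is the effect the paper exploits for CAVLC.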
Binary coding for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Wang, Jing; Chang, Chein-I.; Chang, Chein-Chi; Lin, Chinsu
2004-10-01
Binary coding is one of the simplest ways to characterize spectral features. One commonly used method is a binary coding-based image software system, called Spectral Analysis Manager (SPAM), developed for remotely sensed imagery by Mazer et al. For a given spectral signature, SPAM calculates its spectral mean and inter-band spectral difference and uses them as thresholds to generate a binary code word for that particular spectral signature. Such a coding scheme is generally effective and also very simple to implement. This paper revisits SPAM and further develops three new SPAM-based binary coding methods, called equal probability partition (EPP) binary coding, halfway partition (HP) binary coding, and median partition (MP) binary coding. These three binary coding methods, along with SPAM, will be evaluated for spectral discrimination and identification. In doing so, a new criterion, called a posteriori discrimination probability (APDP), is also introduced as a performance measure.
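The mean-threshold half of the SPAM code word can be sketched in a few lines (the inter-band difference bits are omitted); the function name is an assumption.

```python
def spam_mean_code(signature):
    """Mean-threshold binary code word: bit b is 1 when band b is at or
    above the spectral mean of the signature, else 0."""
    mean = sum(signature) / len(signature)
    return [1 if v >= mean else 0 for v in signature]
```

Two signatures can then be compared by the Hamming distance between their code words, which is what makes binary coding cheap for spectral matching.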
West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1, the control module documentation; Volume 2, the functional module documentation; and Volume 3, documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.
NASA Astrophysics Data System (ADS)
Schnack, D. D.; Glasser, A. H.
1996-11-01
NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.
Sjaardema, G.; Wellman, G.; Gartling, D.
2006-03-08
MAPVAR-KD is designed to transfer solution results from one finite element mesh to another. MAPVAR-KD draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR-KD are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options. MAPVAR-KD is a modification of MAPVAR in which the search algorithm was replaced by a kd-tree-based search for better performance on large problems.
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
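The correlation-based reconstruction step can be illustrated with a 1-D toy: a point source casts a shadow of the aperture, and correlating the recorded shadow with the aperture pattern recovers the source position. The open/closed pattern below is arbitrary for illustration, not an actual coded mask or Fresnel zone plate from the patent:

```python
import numpy as np

# 1-D toy of coded-aperture imaging with an arbitrary binary aperture.
aperture = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1], dtype=float)
scene = np.zeros(9)
scene[3] = 1.0                                       # one point source

shadow = np.convolve(scene, aperture, mode="same")   # recorded shadowgram
recon = np.correlate(shadow, aperture, mode="same")  # correlation decode

print(int(np.argmax(recon)))  # 3 -- the peak sits at the source position
```

In 2-D (and with many point sources) the same matched-filter idea applies: each source contributes a shifted copy of the aperture shadow, and correlation turns each copy back into a localized peak.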
NASA Technical Reports Server (NTRS)
Mcaulay, Robert J.; Quatieri, Thomas F.
1988-01-01
It has been shown that an analysis/synthesis system based on a sinusoidal representation of speech leads to synthetic speech that is essentially perceptually indistinguishable from the original. Strategies for coding the amplitudes, frequencies and phases of the sine waves have been developed that have led to a multirate coder operating at rates from 2400 to 9600 bps. The encoded speech is highly intelligible at all rates with a uniformly improving quality as the data rate is increased. A real-time fixed-point implementation has been developed using two ADSP2100 DSP chips. The methods used for coding and quantizing the sine-wave parameters for operation at the various frame rates are described.
Adaptive Image Denoising by Mixture Adaptation
NASA Astrophysics Data System (ADS)
Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
Forsythe, C.; Smith, M.; Sjaardema, G.
2005-06-26
Exotxt is an analysis code that reads finite element results data stored in an exodusII file and generates a file in a structured text format. The text file can be edited or modified via a number of text formatting tools. Exotxt is used by analysts to translate data from the binary exodusII format into a structured text format, which can then be edited or modified and translated back to exodusII format or to another format.
N.V. Mokhov
2003-04-09
Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator, and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder, and a graphical user interface.
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
Bar coded retroreflective target
Vann, C.S.
2000-01-25
This small, inexpensive, non-contact laser sensor can detect the location of a retroreflective target in a relatively large volume and up to six degrees of position. The tracker's laser beam is formed into a plane of light which is swept across the space of interest. When the beam illuminates the retroreflector, some of the light returns to the tracker. The intensity, angle, and time of the return beam is measured to calculate the three dimensional location of the target. With three retroreflectors on the target, the locations of three points on the target are measured, enabling the calculation of all six degrees of target position. Until now, devices for three-dimensional tracking of objects in a large volume have been heavy, large, and very expensive. Because of the simplicity and unique characteristics of this tracker, it is capable of three-dimensional tracking of one to several objects in a large volume, yet it is compact, light-weight, and relatively inexpensive. Alternatively, a tracker produces a diverging laser beam which is directed towards a fixed position, and senses when a retroreflective target enters the fixed field of view. An optically bar coded target can be read by the tracker to provide information about the target. The target can be formed of a ball lens with a bar code on one end. As the target moves through the field, the ball lens causes the laser beam to scan across the bar code.
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Orthopedics coding and funding.
Baron, S; Duclos, C; Thoreux, P
2014-02-01
The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic.
Structural coding versus free-energy predictive coding.
van der Helm, Peter A
2016-06-01
Focusing on visual perceptual organization, this article contrasts the free-energy (FE) version of predictive coding (a recent Bayesian approach) to structural coding (a long-standing representational approach). Both use free-energy minimization as metaphor for processing in the brain, but their formal elaborations of this metaphor are fundamentally different. FE predictive coding formalizes it by minimization of prediction errors, whereas structural coding formalizes it by minimization of the descriptive complexity of predictions. Here, both sides are evaluated. A conclusion regarding competence is that FE predictive coding uses a powerful modeling technique, but that structural coding has more explanatory power. A conclusion regarding performance is that FE predictive coding-though more detailed in its account of neurophysiological data-provides a less compelling cognitive architecture than that of structural coding, which, for instance, supplies formal support for the computationally powerful role it attributes to neuronal synchronization.
Computer-Based Coding of Occupation Codes for Epidemiological Analyses.
Russ, Daniel E; Ho, Kwan-Yuet; Johnson, Calvin A; Friesen, Melissa C
2014-05-01
Mapping job titles to standardized occupation classification (SOC) codes is an important step in evaluating changes in health risks over time as measured in inspection databases. However, manual SOC coding is cost prohibitive for very large studies. Computer-based SOC coding systems can improve the efficiency of incorporating occupational risk factors into large-scale epidemiological studies. We present a novel method of mapping verbatim job titles to SOC codes using a large table of prior knowledge available in the public domain that includes detailed descriptions of the tasks and activities, and their synonyms, relevant to each SOC code. Job titles are compared to our knowledge base to find the closest matching SOC code. A soft Jaccard index is used to measure the similarity between a previously unseen job title and the knowledge base. Additional information such as standardized industrial codes can be incorporated to improve the SOC code determination by providing additional context to break ties in matches. PMID:25221787
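A soft Jaccard index of the kind described can be sketched as follows; the particular fuzzy-matching rule used here (difflib similarity with a 0.85 threshold) is an illustrative assumption, not the authors' exact formulation:

```python
from difflib import SequenceMatcher

def soft_jaccard(a, b, thresh=0.85):
    """Soft Jaccard similarity between two job titles: tokens that match
    only approximately (difflib ratio >= thresh) still count toward the
    intersection, so minor misspellings are tolerated. This matching
    rule is an assumption for illustration, not the paper's formula."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    matched, inter = set(), 0
    for x in ta:
        candidates = tb - matched
        best = max(candidates,
                   key=lambda y: SequenceMatcher(None, x, y).ratio(),
                   default=None)
        if best is not None and SequenceMatcher(None, x, best).ratio() >= thresh:
            inter += 1
            matched.add(best)
    union = len(ta) + len(tb) - inter
    return inter / union if union else 1.0

print(soft_jaccard("registered nurse", "registerd nurse"))  # 1.0
```

A hard Jaccard index would score the misspelled pair at 1/3; the soft version forgives the single-character typo and still finds a full match.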
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The following codes are considered: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section, with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published capabilities and validations of the codes; the codes have since been further developed to extend their capabilities.
Adaptation and perceptual norms in color vision.
Webster, Michael A; Leonard, Deanne
2008-11-01
Many perceptual dimensions are thought to be represented relative to an average value or norm. Models of norm-based coding assume that the norm appears psychologically neutral because it reflects a neutral response in the underlying neural code. We tested this assumption in human color vision by asking how judgments of "white" are affected as neural responses are altered by adaptation. The adapting color was varied to determine the stimulus level that did not bias the observer's subjective white point. This level represents a response norm at the stages at which sensitivity is regulated by the adaptation, and we show that these response norms correspond to the perceptually neutral stimulus and that they can account for how the perception of white varies both across different observers and within the same observer at different locations in the visual field. We also show that individual differences in perceived white are reduced when observers are exposed to a common white adapting stimulus, suggesting that the perceptual differences are due in part to differences in how neural responses are normalized. These results suggest a close link between the norms for appearance and coding in color vision and illustrate a general paradigm for exploring this link in other perceptual domains.
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
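The two-stage idea (a global pass over the space, then local refinement near the estimated limit surface) can be sketched with a toy 2-D parameter space; the `simulate` function and the 0.25 grid spacing are hypothetical stand-ins for an expensive nuclear simulation:

```python
import itertools

def simulate(x):
    """Hypothetical stand-in for an expensive simulation: the system
    'fails' when x0 + x1 exceeds 1 (so x0 + x1 = 1 is the limit surface)."""
    return x[0] + x[1] > 1.0

# Stage 1: a coarse grid of trials gives a global view of the space.
grid = [(i / 4, j / 4) for i, j in itertools.product(range(5), repeat=2)]
labels = {p: simulate(p) for p in grid}

# Stage 2: refine locally -- propose new samples at midpoints of adjacent
# grid points whose outcomes disagree, i.e. points straddling the limit
# surface.
refined = []
for (a, la), (b, lb) in itertools.combinations(labels.items(), 2):
    adjacent = abs(a[0] - b[0]) + abs(a[1] - b[1]) <= 0.25 + 1e-9
    if adjacent and la != lb:
        refined.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))

print(len(refined))  # 8 proposals, all within 0.125 of x0 + x1 = 1
```

All of the simulation budget in stage 2 is spent where the outcome actually changes, which is the point of limit-surface-targeted adaptive sampling.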
New quantum MDS-convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Li, Fengwei; Yue, Qin
2015-12-01
In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.
Foveation scalable video coding with automatic fixation selection.
Wang, Zhou; Lu, Ligang; Bovik, Alan Conrad
2003-01-01
Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream. Specifically, we propose a foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability. The key idea is to organize the encoded bitstream to provide the best decoded video at an arbitrary bit rate in terms of foveated visual quality measurement. A foveation-based HVS model plays an important role in the algorithm. The algorithm is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks. PMID:18237905
Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging
NASA Astrophysics Data System (ADS)
Kellogg, Robert L.; Escuti, Michael J.
2007-09-01
New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.
Code-Switching and Bilingual Schooling: An Examination of Jacobson's New Concurrent Approach.
ERIC Educational Resources Information Center
Faltis, Christian J.
1989-01-01
Describes Jacobson's New Concurrent Approach to bilingual instruction, which systematically incorporates intersentential code-switching to teach content to limited English proficient children raised in a bilingual environment, and how such incorporation and adaptation contributes to the balanced distribution of the two codes in question. (24…
Authorship Attribution of Source Code
ERIC Educational Resources Information Center
Tennyson, Matthew F.
2013-01-01
Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…
Energy Codes and Standards: Facilities
Bartlett, Rosemarie; Halverson, Mark A.; Shankle, Diana L.
2007-01-01
Energy codes and standards play a vital role in the marketplace by setting minimum requirements for energy-efficient design and construction. They outline uniform requirements for new buildings as well as additions and renovations. This article covers basic knowledge of codes and standards; development processes of each; adoption, implementation, and enforcement of energy codes and standards; and voluntary energy efficiency programs.
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
Sjaardema, G.; Forsythe, C.
2005-05-07
CONEX is a code for joining sequentially in time multiple exodusII database files which all represent the same base mesh topology and geometry. It is used to create a single results or restart file from multiple results or restart files, which typically arise as the result of multiple restarted analyses. CONEX is used to postprocess the results from a series of finite element analyses. It can sequentially join the data from multiple results databases into a single database, which makes it easier to postprocess the results data.
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Although direct to Earth return links are limited by the size and power of lander devices, using an additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses simultaneously two important issues: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization.
New quantum codes constructed from quaternary BCH codes
NASA Astrophysics Data System (ADS)
Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena
2016-10-01
In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than those given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which yields power and size benefits. They also have a large minimum distance, as much as dmin = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.
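Parity-check decoding, the principle underlying LDPC codes, can be illustrated with a toy (7,4) Hamming code written as a sparse parity-check matrix; this is not the EG-based flight code, and real LDPC decoders use iterative message passing over a much larger sparse matrix:

```python
import numpy as np

# Toy (7,4) Hamming code in parity-check form. Every column of H is
# distinct, so the syndrome of a single-bit error identifies the
# flipped position directly.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def correct_single_error(r):
    """Compute the syndrome s = H r (mod 2); a nonzero s equals the
    column of H at the flipped bit, which locates the error."""
    s = H @ r % 2
    if not s.any():
        return r                      # already a valid codeword
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], s):
            out = r.copy()
            out[j] ^= 1               # flip the located bit back
            return out
    raise ValueError("more than one bit in error")

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # satisfies H c = 0 (mod 2)
received = codeword.copy()
received[2] ^= 1                             # channel flips bit 2
print((correct_single_error(received) == codeword).all())  # True
```

The flight codes work on the same check-equation principle, but at n = 4096 and 8176 with soft-decision iterative decoding rather than syndrome table lookup.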
Adaptive changes in visual cortex following prolonged contrast reduction
Kwon, MiYoung; Legge, Gordon E.; Fang, Fang; Cheong, Allen M. Y.; He, Sheng
2009-01-01
How does prolonged reduction in retinal-image contrast affect visual-contrast coding? Recent evidence indicates that some forms of long-term visual deprivation result in compensatory perceptual and neural changes in the adult visual pathway. It has not been established whether changes due to contrast adaptation are best characterized as “contrast gain” or “response gain.” We present a theoretical rationale for predicting that adaptation to long-term contrast reduction should result in response gain. To test this hypothesis, normally sighted subjects adapted for four hours by viewing their environment through contrast-reducing goggles. During the adaptation period, the subjects went about their usual daily activities. Subjects' contrast-discrimination thresholds and fMRI BOLD responses in cortical areas V1 and V2 were obtained before and after adaptation. Following adaptation, we observed a significant decrease in contrast-discrimination thresholds, and significant increase in BOLD responses in V1 and V2. The observed interocular transfer of the adaptation effect suggests that the adaptation has a cortical origin. These results reveal a new kind of adaptability of the adult visual cortex, an adjustment in the gain of the contrast-response in the presence of a reduced range of stimulus contrasts, which is consistent with a response-gain mechanism. The adaptation appears to be compensatory, such that the precision of contrast coding is improved for low retinal-image contrasts. PMID:19271930
Modeling anomalous radial transport in kinetic transport codes
NASA Astrophysics Data System (ADS)
Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.
2009-11-01
Anomalous transport is typically the dominant component of radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
NASA Astrophysics Data System (ADS)
Abdullah, Alyasa Gan; Wah, Yap Bee
2015-02-01
The computation of the approximate values of the trigonometric sines was discovered by Bhaskara I (c. 600-c. 680), a seventh-century Indian mathematician, and is known as Bhaskara I's sine approximation formula. The formula is given in his treatise titled Mahabhaskariya. In the 14th century, Madhava of Sangamagrama, a Kerala mathematician-astronomer, constructed the table of trigonometric sines of various angles. Madhava's table gives the measure of angles in arcminutes, arcseconds and sixtieths of an arcsecond. The search for more accurate formulas led to the discovery of the power series expansion by Madhava of Sangamagrama (c. 1350-c. 1425), the founder of the Kerala school of astronomy and mathematics. In 1715, the Taylor series was introduced by Brook Taylor, an English mathematician. If the Taylor series is centered at zero, it is called a Maclaurin series, named after the Scottish mathematician Colin Maclaurin. Some of the important Maclaurin series expansions include trigonometric functions. This paper introduces the genetic code of the sine of an angle without using power series expansion. The genetic code using the square root approach reveals the pattern in the signs (plus, minus) and sequence of numbers in the sine of an angle. The square root approach complements the Pythagoras method, provides a better understanding of calculating an angle and will be useful for teaching the concepts of angles in trigonometry.
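Bhaskara I's sine approximation, named in the abstract, has a well-known closed form for angles in degrees: sin x ≈ 4x(180 − x) / (40500 − x(180 − x)). A direct sketch (the formula is standard; the function name is ours):

```python
def bhaskara_sine(x_deg):
    """Bhaskara I's sine approximation for 0 <= x_deg <= 180 (degrees).

    sin(x) ~ 4x(180 - x) / (40500 - x(180 - x))
    """
    p = x_deg * (180.0 - x_deg)
    return 4.0 * p / (40500.0 - p)
```

The approximation is exact at 0, 30, 90, 150 and 180 degrees and stays within about 0.002 of the true sine everywhere in between, which is why it remained useful for centuries before power series.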
Fleishman, Gregory D.; Kuznetsov, Alexey A.
2010-10-01
Radiation produced by charged particles gyrating in a magnetic field is highly significant in the astrophysics context. Persistently increasing resolution of astrophysical observations calls for corresponding three-dimensional modeling of the radiation. However, available exact equations are prohibitively slow in computing a comprehensive table of high-resolution models required for many practical applications. To remedy this situation, we develop approximate gyrosynchrotron (GS) codes capable of quickly calculating the GS emission (in non-quantum regime) from both isotropic and anisotropic electron distributions in non-relativistic, mildly relativistic, and ultrarelativistic energy domains applicable throughout a broad range of source parameters including dense or tenuous plasmas and weak or strong magnetic fields. The computation time is reduced by several orders of magnitude compared with the exact GS algorithm. The new algorithm performance can gradually be adjusted to the user's needs depending on whether precision or computation speed is to be optimized for a given model. The codes are made available for users as a supplement to this paper.
New optimal quantum convolutional codes
NASA Astrophysics Data System (ADS)
Zhu, Shixin; Wang, Liqi; Kai, Xiaoshan
2015-04-01
One of the greatest challenges in proving the feasibility of quantum computers is to protect the quantum nature of information. Quantum convolutional codes are aimed at protecting a stream of quantum information in long-distance communication; they are the correct generalization to the quantum domain of their classical analogs. In this paper, we construct some classes of quantum convolutional codes by employing classical constacyclic codes. These codes are optimal in the sense that they attain the Singleton bound for pure convolutional stabilizer codes.
Estimating the size of Huffman code preambles
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Palmatier, T. H.
1993-01-01
Data compression via block-adaptive Huffman coding is considered. The compressor consecutively processes blocks of N data symbols, estimates source statistics by computing the relative frequencies of each source symbol in the block, and then synthesizes a Huffman code based on these estimates. In order to let the decompressor know which Huffman code is being used, the compressor must begin the transmission of each compressed block with a short preamble or header file. This file is an encoding of the list n = (n_1, n_2, ..., n_m), where n_i is the length of the Huffman codeword associated with the ith source symbol. A simple method of doing this encoding is to individually encode each n_i into a fixed-length binary word of length log_2(l), where l is an a priori upper bound on the codeword length. This method produces a maximum preamble length of m log_2(l) bits. The object is to show that, in most cases, no substantially shorter header of any kind is possible.
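The naive preamble scheme the abstract describes is easy to reproduce: compute the Huffman codeword lengths n_i from a block's symbol frequencies, then budget a fixed ceil(log2 l)-bit field per length. A sketch using a standard heap-based length computation (illustrative code, not taken from the paper):

```python
import heapq
import math

def huffman_lengths(freqs):
    """Codeword lengths n_i of a Huffman code for the given symbol frequencies."""
    # Heap items: (subtree weight, tiebreaker, symbol indices in subtree).
    heap = [(w, i, [i]) for i, w in enumerate(freqs)]
    heapq.heapify(heap)
    depth = [0] * len(freqs)
    tie = len(freqs)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # merging deepens every leaf in both subtrees
            depth[s] += 1
        heapq.heappush(heap, (w1 + w2, tie, s1 + s2))
        tie += 1
    return depth

def fixed_header_bits(lengths, l):
    """Naive preamble size: each n_i stored in a fixed ceil(log2 l)-bit field."""
    return len(lengths) * math.ceil(math.log2(l))
```

On the classic frequency set {45, 13, 12, 16, 9, 5} this yields lengths (1, 3, 3, 3, 4, 4); with an upper bound l = 16 on codeword length, the fixed-field preamble costs 6 x 4 = 24 bits.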
Circular codes, symmetries and transformations.
Fimmel, Elena; Giannerini, Simone; Gonzalez, Diego Luis; Strüngmann, Lutz
2015-06-01
Circular codes, putative remnants of primeval comma-free codes, have gained considerable attention in recent years. In fact, they represent a second kind of genetic code potentially involved in detecting and maintaining the normal reading frame in protein-coding sequences. The discovery of a universal code across species has raised many theoretical and experimental questions. However, there is a key aspect relating circular codes to symmetries and transformations that remains to a large extent unexplored. In this article we aim at addressing the issue by studying the symmetries and transformations that connect different circular codes. The main result is that the class of 216 C3 maximal self-complementary codes can be partitioned into 27 equivalence classes defined by a particular set of transformations. We show that such transformations can be put in a group-theoretic framework with an intuitive geometric interpretation. More general mathematical results about symmetry transformations which are valid for any kind of circular code are also presented. Our results pave the way to the study of the biological consequences of the mathematical structure behind circular codes and contribute to shedding light on the evolutionary steps that led to the observed symmetries of present codes. PMID:25008961
Making your code citable with the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.
2016-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and, if formatted properly, are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feed for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
Easy Web Interfaces to IDL Code for NSTX Data Analysis
W.M. Davis
2011-08-16
Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser and because X-windows, accounts, passwords, etc., are not needed. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.
Expressing Adaptation Strategies Using Adaptation Patterns
ERIC Educational Resources Information Center
Zemirline, N.; Bourda, Y.; Reynaud, C.
2012-01-01
Today, there is a real challenge to enable personalized access to information. Several systems have been proposed to address this challenge including Adaptive Hypermedia Systems (AHSs). However, the specification of adaptation strategies remains a difficult task for creators of such systems. In this paper, we consider the problem of the definition…
Epigenetic Codes Programing Class Switch Recombination.
Vaidyanathan, Bharat; Chaudhuri, Jayanta
2015-01-01
Class switch recombination imparts B cells with a fitness-associated adaptive advantage during a humoral immune response by using a precision-tailored DNA excision and ligation process to swap the default constant region gene of the antibody with a new one that has unique effector functions. This secondary diversification of the antibody repertoire is a hallmark of the adaptability of B cells when confronted with environmental and pathogenic challenges. Given that the nucleotide sequence of genes during class switching remains unchanged (genetic constraints), it is logical and necessary therefore, to integrate the adaptability of B cells to an epigenetic state, which is dynamic and can be heritably modulated before, after, or even during an antibody-dependent immune response. Epigenetic regulation encompasses heritable changes that affect function (phenotype) without altering the sequence information embedded in a gene, and include histone, DNA and RNA modifications. Here, we review current literature on how B cells use an epigenetic code language as a means to ensure antibody plasticity in light of pathogenic insults. PMID:26441954
Tuning Complex Computer Codes to Data and Optimal Designs
NASA Astrophysics Data System (ADS)
Park, Jeong Soo
Modern scientific researchers often use complex computer simulation codes for theoretical investigations. We model the response of a computer simulation code as the realization of a stochastic process. This approach, design and analysis of computer experiments (DACE), provides a statistical basis for analysing computer data, for designing experiments for efficient prediction and for comparing computer-encoded theory to experiments. An objective of research in a large class of dynamic systems is to determine any unknown coefficients in a theory. The coefficients can be determined by "tuning" the computer model to the real data so that the tuned code gives a good match to the real experimental data. Three design strategies for computer experiments are considered: data-adaptive sequential A-optimal design, maximum entropy design and optimal Latin-hypercube design. The following "code tuning" methodologies are proposed: nonlinear least squares, joint MLE, "separated" joint MLE and Bayesian method. The performance of these methods has been studied in several toy models. In the application to nuclear fusion devices, a cheaper emulator of the simulation code (BALDUR) has been constructed, and the transport coefficients were estimated from data of two tokamaks (ASDEX and PDX). Tuning complex computer codes to data using some statistical estimation methods and a cheap emulator of the code along with careful designs of computer experiments, with applications to nuclear fusion devices, is the topic of this thesis.
Liman, Emily R.; Zhang, Yali V.; Montell, Craig
2014-01-01
Five canonical tastes, bitter, sweet, umami (amino acid), salty and sour (acid) are detected by animals as diverse as fruit flies and humans, consistent with a near universal drive to consume fundamental nutrients and to avoid toxins or other harmful compounds. Surprisingly, despite this strong conservation of basic taste qualities between vertebrates and invertebrates, the receptors and signaling mechanisms that mediate taste in each are highly divergent. The identification over the last two decades of receptors and other molecules that mediate taste has led to stunning advances in our understanding of the basic mechanisms of transduction and coding of information by the gustatory systems of vertebrates and invertebrates. In this review, we discuss recent advances in taste research, mainly from the fly and mammalian systems, and we highlight principles that are common across species, despite stark differences in receptor types. PMID:24607224
Electromagnetic particle simulation codes
NASA Technical Reports Server (NTRS)
Pritchett, P. L.
1985-01-01
Electromagnetic particle simulations solve the full set of Maxwell's equations. They thus include the effects of self-consistent electric and magnetic fields, magnetic induction, and electromagnetic radiation. The algorithms for an electromagnetic code which works directly with the electric and magnetic fields are described. The fields and current are separated into transverse and longitudinal components. The transverse E and B fields are integrated in time using a leapfrog scheme applied to the Fourier components. The particle pushing is performed via the relativistic Lorentz force equation for the particle momentum. As an example, simulation results are presented for the electron cyclotron maser instability which illustrate the importance of relativistic effects on the wave-particle resonance condition and on wave dispersion.
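The particle push the abstract describes, advancing the relativistic momentum with the Lorentz force and then the position with the updated velocity, can be sketched as follows. This is a simplified illustration with hypothetical names; production electromagnetic PIC codes use the Boris rotation to handle the magnetic term stably:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def push_particle(x, p, E, B, q, m, c, dt):
    """One simplified push step: update momentum p via the relativistic
    Lorentz force q(E + v x B), then advance position x with the new velocity."""
    gamma = math.sqrt(1.0 + sum(pi * pi for pi in p) / (m * c) ** 2)
    v = tuple(pi / (gamma * m) for pi in p)
    vxB = cross(v, B)
    p_new = tuple(p[i] + q * dt * (E[i] + vxB[i]) for i in range(3))
    g_new = math.sqrt(1.0 + sum(pi * pi for pi in p_new) / (m * c) ** 2)
    x_new = tuple(x[i] + dt * p_new[i] / (g_new * m) for i in range(3))
    return x_new, p_new
```

Note how the Lorentz factor gamma couples the momentum magnitude into the velocity, which is exactly the relativistic effect the abstract credits for shifting the wave-particle resonance condition.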
Surface acoustic wave coding for orthogonal frequency coded devices
NASA Technical Reports Server (NTRS)
Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)
2011-01-01
Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
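The matrix-based assignment idea can be illustrated with cyclic shifts of the chip (frequency-slot) sequence, so that each device receives a distinct chip ordering. This is a toy illustration of one way to populate such a matrix, not the patented construction:

```python
def assign_ofcs(n_devices, n_chips):
    """Toy OFC assignment matrix: row i is a cyclic shift of the chip
    (frequency-slot) sequence, giving each device a distinct chip order."""
    base = list(range(n_chips))
    return [base[i % n_chips:] + base[:i % n_chips] for i in range(n_devices)]
```

Because every row is a distinct permutation of the same frequency set, devices share the spectrum while their chip sequences remain separable at the receiver.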
Transionospheric Propagation Code (TIPC)
Roussel-Dupre, R.; Kelley, T.A.
1990-10-01
The Transionospheric Propagation Code is a computer program developed at Los Alamos National Laboratory to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in Fortran 77, runs interactively, and was designed to be as machine independent as possible. A menu format, in which the user is prompted to supply appropriate parameters for a given task, has been adopted for the input, while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, DTOA study, and DTOA uncertainty study. For the first task a specified signal is convolved against the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise or spectral noise to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is also available. In the second task, detection is simulated by convolving a given filter response against the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter. It is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of delta-times-of-arrival (DTOAs) vs TECs for a specified pair of receivers.
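The core of the filtering and time-tagging tasks, convolving a signal against a filter response and marking the arrival from the envelope, can be sketched generically. Function names and the threshold rule here are illustrative, not TIPC's actual interface:

```python
def convolve(a, b):
    """Direct (full) linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def detect_arrival(signal, filter_response, dt, threshold_frac=0.5):
    """Convolve signal with a filter response, then time-tag the arrival
    at the first sample whose envelope crosses a fraction of the peak."""
    env = [abs(v) for v in convolve(signal, filter_response)]
    thresh = threshold_frac * max(env)
    for i, v in enumerate(env):
        if v >= thresh:
            return i * dt
```

Usage: an impulse at sample 10 passed through a trivial one-tap filter is tagged at 10 sample intervals, which is the kind of arrival-time estimate the DTOA tasks would then difference between receivers.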
Seligmann, Hervé
2012-12-01
Mitochondrial genes code for additional proteins after +2 frameshifts by reassigning stops to code for amino acids, which defines overlapping genetic codes for overlapping genes. Turtles recode stops UAR → Trp and AGR → Lys (AGR → Gly in the marine Olive Ridley turtle, Lepidochelys olivacea). In Lepidochelys the +2 frameshifted mitochondrial Cytb gene lacks stops, open reading frames from other genes code for unknown proteins, and for regular mitochondrial proteins after frameshifts according to the overlapping genetic code. Lepidochelys' inversion between proteins coded by regular and overlapping genetic codes substantiates the existence of overlap coding. ND4 differs among Lepidochelys mitochondrial genomes: it is regular in DQ486893; in NC_011516, the open reading frame codes for another protein, and the regular ND4 protein is coded by the frameshifted sequence reassigning stops as in other turtles. These systematic patterns are incompatible with GenBank/sequencing errors and DNA decay. Random mixing of synonymous codons, conserving main frame coding properties, shows optimization of natural sequences for overlap coding; Ka/Ks analyses show high positive (directional) selection on overlapping genes. Tests based on circular genetic codes confirm programmed frameshifts in ND3 and ND4l genes, and predicted frameshift sites for overlap coding in Lepidochelys. Chelonian mitochondria adapt for overlapping gene expression: cloverleaf formation by antisense tRNAs with predicted anticodons matching stops coevolves with overlap coding; antisense tRNAs with predicted expanded anticodons (frameshift suppressor tRNAs) associate with frameshift coding in ND3 and ND4l, a potential regulation of frameshifted overlap coding. Anaerobiosis perhaps switched between regular and overlap coding genes in Lepidochelys.
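The mechanism the abstract describes, reading a gene in the +2 frame while reassigning the mitochondrial stops (UAA/UAG → Trp, AGA/AGG → Lys in turtles), can be sketched directly. The tiny codon table below covers only the demo sequence; a real implementation would use the full vertebrate mitochondrial code, and the sequence shown is invented for illustration:

```python
# DNA-alphabet codons (T for U); stops reassigned as in turtles per the abstract.
REASSIGNED_STOPS = {"TAA": "W", "TAG": "W", "AGA": "K", "AGG": "K"}
# Minimal stand-in for the full vertebrate mitochondrial codon table.
CODON_TABLE = {"ATG": "M", "GCC": "A", "TTT": "F", "CTG": "L"}

def translate_plus2(seq):
    """Translate seq in the +2 frame, treating reassigned stops as amino acids.
    Unknown codons render as 'X' (table is deliberately minimal)."""
    seq = seq[2:]  # +2 frameshift: skip the first two nucleotides
    protein = []
    for i in range(0, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        protein.append(REASSIGNED_STOPS.get(codon) or CODON_TABLE.get(codon, "X"))
    return "".join(protein)
```

Because the reassigned stops never terminate translation, the +2 frame yields an uninterrupted protein, which is the signature the paper looks for in the frameshifted Cytb gene.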
Some easily analyzable convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.
1989-01-01
Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes were discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths, these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.