Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
47 CFR 80.100 - Morse code requirement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Morse code requirement. 80.100 Section 80.100... MARITIME SERVICES Operating Requirements and Procedures Operating Procedures-General § 80.100 Morse code requirement. The code employed for telegraphy must be the Morse code specified in the Telegraph...
MORSE Monte Carlo radiation transport code system
Emmett, M.B.
1983-02-01
This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report, which was previously unpublished, and a new Table of Contents. The modifications include a Klein Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
Recent development and applications of the MORSE Code
Cramer, S.N.
1993-06-01
Several recent analyses using the multigroup MORSE Monte Carlo code are presented. In the calculation of a highly directional-dependent neutron streaming experiment it is shown that P7 cross section representation produces results virtually identical with those from an analog code. Use has been made here of a recently released ENDF/B-VI data set. In the analysis of neutron distributions inside the water-cooled ORELA accelerator target and positron source, an analytic hydrogen scattering model is incorporated into the otherwise multigroup treatment. The radiation from a nuclear weapon is analyzed in a large concrete building in Nagasaki by coupling MORSE and the DOT discrete ordinates code. The spatial variation of the DOT-generated free-field radiation is utilized, and the building is modeled with the array feature of the MORSE geometry package. An analytic directional biasing, applicable to the discrete scattering angle procedure in MORSE, is combined with the exponential transform. As in more general studies, it is shown that the combined biasing is more efficient than either biasing used separately. Other tracking improvements are included in a difficult streaming and penetration radiation analysis through a concrete structure. Proposals are given for the code generation of the required biasing parameters.
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Distress, urgency, safety, call and reply Morse... Distress, urgency, safety, call and reply Morse code frequencies. This section describes the distress, urgency, safety, call and reply carrier frequencies assignable to stations for Morse code...
Applications guide to the MORSE Monte Carlo code
Cramer, S.N.
1985-08-01
A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction types and nuclides from a multigroup Monte Carlo code are explained in terms of cross-section and geometry data manipulation. Examples of standard cross-section data input and output are shown. Many other features of the code system are also reviewed, including (1) the concept of primary and secondary particles, (2) fission neutron generation, (3) albedo data capability, (4) DOMINO coupling, (5) history file use for post-processing of results, (6) adjoint mode operation, (7) variance reduction, and (8) input/output. In addition, examples of the combinatorial geometry are given, and the new array of arrays geometry feature (MARS) and its three-dimensional plotting code (JUNEBUG) are presented. Realistic examples of user routines for source, estimation, path-length stretching, and cross-section data manipulation are given. A detailed explanation of the coupling between the random walk and estimation procedure is given in terms of both code parameters and physical analogies. The operation of the code in the adjoint mode is covered extensively. The basic concepts of adjoint theory and dimensionality are discussed and examples of adjoint source and estimator user routines are given for all common situations. Adjoint source normalization is explained, a few sample problems are given, and the concept of obtaining forward differential results from adjoint calculations is covered. Finally, the documentation of the standard MORSE-CG sample problem package is reviewed and on-going and future work is discussed.
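The random-walk/estimation coupling that this guide explains can be illustrated with a toy analog Monte Carlo calculation. The sketch below is a one-group, one-dimensional slab-transmission problem with isotropic scattering — a deliberately minimal stand-in for MORSE's multigroup, three-dimensional treatment, not code from the guide itself.

```python
import math
import random

def slab_transmission(thickness_mfp, scatter_prob, n_histories, seed=1):
    """Analog Monte Carlo: fraction of particles transmitted through a 1-D slab.

    thickness_mfp: slab thickness in mean free paths (total cross section = 1).
    scatter_prob:  probability that a collision is an (isotropic) scatter
                   rather than an absorption.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                   # start at the left face, moving right
        while True:
            d = -math.log(rng.random())    # exponential distance to next collision
            x += mu * d
            if x >= thickness_mfp:         # escaped through the far face
                transmitted += 1
                break
            if x < 0.0:                    # leaked back out the near face
                break
            if rng.random() > scatter_prob:
                break                      # absorbed
            mu = 2.0 * rng.random() - 1.0  # isotropic scatter: new direction cosine
    return transmitted / n_histories
```

For a pure absorber (`scatter_prob=0`) the estimate converges to the analytic attenuation `exp(-thickness)`, which gives a quick sanity check of the random walk.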
47 CFR 80.357 - Working frequencies for Morse code and data transmission.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Working frequencies for Morse code and data transmission. 80.357 Section 80.357 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Frequencies Radiotelegraphy § 80.357 Working frequencies for Morse code and...
47 CFR 80.357 - Working frequencies for Morse code and data transmission.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Working frequencies for Morse code and data transmission. 80.357 Section 80.357 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Frequencies Radiotelegraphy § 80.357 Working frequencies for Morse code and...
Morse code application for wireless environmental control systems for severely disabled individuals.
Yang, Cheng-Hong; Chuang, Li-Yeh; Yang, Cheng-Huei; Luo, Ching-Hsing
2003-12-01
Some physically-disabled people with neuromuscular diseases such as amyotrophic lateral sclerosis, multiple sclerosis, muscular dystrophy, or other conditions that hinder their ability to write, type, and speak, require an assistive tool for purposes of augmentative and alternative communication in their daily lives. In this paper, we designed and implemented a wireless environmental control system using Morse code as an adapted access communication tool. The proposed system includes four parts: input-control module; recognition module; wireless-control module; and electronic-equipment-control module. The signals are transmitted using adopted radio frequencies, which permits long distance transmission without space limitation. Experimental results revealed that three participants with physical handicaps were able to gain access to electronic facilities after two months' practice with the new system. PMID:14960124
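The recognition module in such a system ultimately maps dot/dash sequences to characters. A minimal sketch of that mapping step is shown below; classifying switch-press durations into dots and dashes is assumed to happen upstream and is not part of the paper's actual implementation.

```python
# Table-driven Morse decoding for an adapted-access input tool (sketch).
# Covers letters and digits; unrecognized sequences map to '?'.
MORSE_TO_CHAR = {
    '.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E',
    '..-.': 'F', '--.': 'G', '....': 'H', '..': 'I', '.---': 'J',
    '-.-': 'K', '.-..': 'L', '--': 'M', '-.': 'N', '---': 'O',
    '.--.': 'P', '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
    '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X', '-.--': 'Y',
    '--..': 'Z',
    '-----': '0', '.----': '1', '..---': '2', '...--': '3', '....-': '4',
    '.....': '5', '-....': '6', '--...': '7', '---..': '8', '----.': '9',
}

def decode(symbols):
    """Decode a list of dot/dash strings, e.g. ['---', '-.'] -> 'ON'."""
    return ''.join(MORSE_TO_CHAR.get(s, '?') for s in symbols)
```

A command such as "ON" for an appliance would then be entered as the two sequences `---` and `-.`.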
A STRUCTURAL THEORY FOR THE PERCEPTION OF MORSE CODE SIGNALS AND RELATED RHYTHMIC PATTERNS.
ERIC Educational Resources Information Center
WISH, MYRON
The primary purpose of this dissertation is to develop a structural theory, along facet-theoretic lines, for the perception of Morse code signals and related rhythmic patterns. As steps in the development of this theory, models for two sets of signals are proposed and tested. The first model is for a set comprised of all signals of the…
An Evaluation of Modality Preference Using a "Morse Code" Recall Task
ERIC Educational Resources Information Center
Hansen, Louise; Cottrell, David
2013-01-01
Advocates of modality preference posit that individuals have a dominant sense and that when new material is presented in this preferred modality, learning is enhanced. Despite the widespread belief in this position, there is little supporting evidence. In the present study, the authors implemented a Morse code-like recall task to examine whether…
Calculations of the giant-dipole-resonance photoneutrons using a coupled EGS4-morse code
Liu, J.C.; Nelson, W.R.; Kase, K.R.; Mao, X.S.
1995-10-01
The production and transport of the photoneutrons from the giant-dipole-resonance reaction have been implemented in a coupled EGS4-MORSE code. The total neutron yield (including both the direct neutron and evaporation neutron components) is calculated by folding the photoneutron yield cross sections with the photon track length distribution in the target. Empirical algorithms based on the measurements have been developed to estimate the fraction and energy of the direct neutron component for each photon. The statistical theory in the EVAP4 code, incorporated as a MORSE subroutine, is used to determine the energies of the evaporation neutrons. These represent major improvements over other calculations that assumed no direct neutrons, a constant fraction of direct neutrons, monoenergetic direct neutrons, or a constant nuclear temperature for the evaporation neutrons. It was also assumed that the slow neutrons (< 2.5 MeV) are emitted isotropically and the fast neutrons are emitted anisotropically in the form 1 + C sin²θ, which has a peak emission at 90°. Comparisons between the calculated and the measured photoneutron results (spectra of the direct, evaporation and total neutrons; nuclear temperatures; direct neutron fractions) for materials of lead, tungsten, tantalum and copper have been made. The results show that the empirical algorithms, albeit simple, can produce reasonable results over the photon energy range of interest.
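The 1 + C sin²θ fast-neutron angular distribution quoted above is easy to sample. Written in terms of the direction cosine μ = cos θ it becomes p(μ) ∝ 1 + C(1 − μ²), which peaks at θ = 90°. The rejection scheme below is a generic illustration, not the sampling routine used in the coupled code.

```python
import random

def sample_mu(C, rng):
    """Sample mu = cos(theta) from p(mu) ∝ 1 + C*(1 - mu**2), mu in [-1, 1].

    This is the 1 + C*sin^2(theta) angular distribution in terms of the
    direction cosine. Rejection sampling against a uniform envelope whose
    height is the density maximum (1 + C at mu = 0); C >= 0 assumed.
    """
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() * (1.0 + C) <= 1.0 + C * (1.0 - mu * mu):
            return mu
```

By symmetry the sampled μ values average to zero, and for large C they cluster near μ = 0, i.e. emission near 90°.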
Towards a Morse Code-Based Non-invasive Thought-to-Speech Converter
NASA Astrophysics Data System (ADS)
Nicolaou, Nicoletta; Georgiou, Julius
This paper presents our investigations towards a non-invasive custom-built thought-to-speech converter that decodes mental tasks into Morse code, then text, and finally speech. The proposed system is aimed primarily at people who have lost their ability to communicate via conventional means. The investigations presented here are part of our greater search for an appropriate set of features, classifiers and mental tasks that would maximise classification accuracy in such a system. Here, Autoregressive (AR) coefficients and Power Spectral Density (PSD) features have been classified using a Support Vector Machine (SVM). The classification accuracy was higher with AR features than with PSD features. In addition, the use of an SVM to classify the AR coefficients increased the classification rate by up to 16.3% compared to previously reported results obtained with other classifiers. It was also observed that the combination of mental tasks for which the highest classification was obtained varied from subject to subject; hence the mental tasks to be used should be carefully chosen to match each subject.
1991-08-01
Version: 00 The original MORSE code was a multipurpose neutron and gamma-ray transport Monte Carlo code. It was designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems could be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry could be used with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution was allowed. MORSE-CG incorporated the Mathematical Applications, Inc. (MAGI) combinatorial geometry routines. MORSE-B modifies the Monte Carlo neutron and photon transport computer code MORSE-CG by adding routines which allow various flexible options.
1991-05-01
Version 00 MORSE-CGA was developed to add the capability of modelling rectangular lattices for nuclear reactor cores or for multipartitioned structures. It thus enhances the capability of the MORSE code system. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. It has been designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems may be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution is allowed.
Telescope Adaptive Optics Code
Phillion, D.
2005-07-28
The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen
2015-11-01
Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping us to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, limiting the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI task was detected from EEG signals and mapped to special commands. According to permutation theory, an sMI task with N-length allows 2 × (2(N)-1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the averaged accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. PMID:26340647
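The command count 2 × (2^N − 1) follows from summing the sequences of length 1 through N over the two imagery classes: Σ 2^k = 2^(N+1) − 2. A short enumeration makes this concrete; treating "no motion" as the sequence delimiter (so it never appears inside a code) is our reading of the abstract, not a detail confirmed by it.

```python
from itertools import product

def smi_commands(max_len):
    """Enumerate all left/right motor-imagery sequences of length 1..max_len.

    Each sequence maps to one output command; the count is 2*(2**max_len - 1).
    """
    seqs = []
    for k in range(1, max_len + 1):
        seqs.extend(''.join(s) for s in product('LR', repeat=k))
    return seqs
```

With `max_len=2` this yields the six commands L, R, LL, LR, RL, RR, matching the six-class system described above.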
Adaptive entropy coded subband coding of images.
Kim, Y H; Modestino, J W
1992-01-01
The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138
Driver Code for Adaptive Optics
NASA Technical Reports Server (NTRS)
Rao, Shanti
2007-01-01
A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
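The Viterbi baseline that the syndrome-based decoder is compared against can be sketched compactly. Below is a hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 code with the common generators 7 and 5 (octal); these particular generators are an assumed example, not the code studied in the paper, and the sketch omits trellis termination.

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2, constraint-length-3 convolutional encoder (generators 7,5 octal)."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                 # current bit + two previous bits
        for g in gens:
            out.append(bin(reg & g).count('1') % 2)   # parity of tapped bits
        state = reg >> 1
    return out

def viterbi_decode(received, gens=(0b111, 0b101)):
    """Hard-decision Viterbi decoding for the encoder above (4-state trellis)."""
    n_states, INF = 4, float('inf')
    metrics = [0] + [INF] * (n_states - 1)     # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        pair = received[i:i + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for b in (0, 1):                   # hypothesize each input bit
                reg = (b << 2) | s
                expected = [bin(reg & g).count('1') % 2 for g in gens]
                cost = metrics[s] + sum(x != y for x, y in zip(expected, pair))
                ns = reg >> 1
                if cost < new_metrics[ns]:     # keep the survivor path
                    new_metrics[ns] = cost
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best]
```

Note that the trellis loop runs the same number of operations whether the received sequence is clean or corrupted — exactly the complexity property the syndrome-based approach above seeks to relax under good channel conditions.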
AEST: Adaptive Eigenvalue Stability Code
NASA Astrophysics Data System (ADS)
Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.
2002-11-01
An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include the non-ideal MHD effects in the global MHD stability calculation for both low and high n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Because the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as the plasma rotation effect, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, the transport barrier physics in tokamak discharges.
Local intensity adaptive image coding
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1989-01-01
The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed. Its performance is characterized in a simulated space application, and the research and development activities are described.
1983-04-13
Version 00 MORSE-C is based on the original ORNL versions of CCC-127/MORSE and CCC-261/MORSE-L but is restricted to criticality problems. Continued efforts in criticality safety calculations led to the development of techniques which resulted in improvements in energy resolution of cross sections, upscatter in the thermal region, and a better cross section library. Only time-independent problems are treated in the packaged version.
Adaptive differential pulse-code modulation with adaptive bit allocation
NASA Astrophysics Data System (ADS)
Frangoulis, E. D.; Yoshida, K.; Turner, L. F.
1984-08-01
Studies have been conducted regarding the possibility of obtaining good-quality speech at data rates in the range from 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM). At 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve the quality of the speech over that which can be obtained with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.
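The core ADPCM idea — predict each sample from the previous reconstruction, quantize only the prediction error, and adapt the quantizer step — can be shown in a few lines. The sketch below uses a first-order predictor as in the paper, but the 2-bit quantizer and the 1.5×/0.75× step-adaptation rule are illustrative choices of ours, not the paper's design.

```python
def adpcm_encode(samples, step=1.0, a=0.9):
    """Toy ADPCM: first-order predictor plus a 2-bit adaptive quantizer.

    Codes are (sign, magnitude-level) pairs; the step grows after large
    errors and shrinks after small ones (illustrative adaptation rule).
    """
    codes, recon = [], 0.0
    for x in samples:
        e = x - a * recon                      # prediction error
        sign = 1 if e >= 0 else -1
        mag = 1 if abs(e) > step else 0        # inner/outer quantizer level
        codes.append((sign, mag))
        recon = a * recon + sign * (1.5 if mag else 0.5) * step
        step = max(0.01, step * (1.5 if mag else 0.75))
    return codes

def adpcm_decode(codes, step=1.0, a=0.9):
    """Mirror of the encoder: tracks the same predictor and step state."""
    recon, out = 0.0, []
    for sign, mag in codes:
        recon = a * recon + sign * (1.5 if mag else 0.5) * step
        step = max(0.01, step * (1.5 if mag else 0.75))
        out.append(recon)
    return out
```

Because the decoder replays the encoder's state updates exactly, only the 2-bit codes need to be transmitted; the adaptive bit allocation studied in the paper would additionally vary the bits per code.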
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
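The per-block multiplier scheme described above reduces, mechanically, to scaling the quantization matrix before rounding. The sketch below shows just that quantize/dequantize step on an 8 x 8 block of DCT coefficients; the perceptual optimization that chooses the multipliers is the paper's contribution and is not reproduced here.

```python
def quantize_block(coeffs, qmatrix, multiplier):
    """Quantize an 8x8 block of DCT coefficients with a per-block multiplier:
    the multiplier scales the whole quantization matrix for this block."""
    return [[round(c / (multiplier * q)) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

def dequantize_block(levels, qmatrix, multiplier):
    """Inverse step: reconstruct coefficients from quantizer levels."""
    return [[l * multiplier * q for l, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, qmatrix)]
```

A larger multiplier zeroes more coefficients (coarser quantization, fewer bits), which is why blocks where masking hides the error can be assigned larger multipliers.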
Motion-adaptive compressive coded apertures
NASA Astrophysics Data System (ADS)
Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca
2011-09-01
This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.
ERIC Educational Resources Information Center
Bruce, Guy V.
1985-01-01
Mechanically-minded middle school students who have been studying electromagnetism can construct inexpensive telegraphs resembling Samuel Morse's 1844 invention. Instructions (with diagrams), list of materials needed, and suggestions are given for a simple telegraph and for a two-way system. (DH)
Gerber, Samuel; Rübel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-01
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression approaches typically introduce a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse-Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this paper introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to over-fitting. The Morse-Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse-Smale regression. Supplementary materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse-Smale complex approximation and additional tables for the climate-simulation study. PMID:23687424
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
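The one-dimensional sub-problem at the heart of this approach — moving grid points toward strong-gradient regions along a single coordinate line — can be illustrated with plain equidistribution. The sketch below redistributes points so each interval carries an equal share of a gradient-based weight; SAGE's spring-force formulation with smoothness and orthogonality constraints and its tridiagonal solve accomplish a smoothed, multi-directional version of this idea, which we do not attempt here.

```python
def equidistribute(x, f, alpha=1.0):
    """One pass of 1-D solution-adaptive gridding.

    Redistributes the points x (strictly increasing) so that each interval
    carries an equal share of the weight w = 1 + alpha*|df/dx|, computed
    from the discrete solution values f. Endpoints are held fixed.
    """
    n = len(x)
    # piecewise-constant weight on each interval
    w = [1.0 + alpha * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i]))
         for i in range(n - 1)]
    # cumulative weight integral W at each original grid node
    W = [0.0]
    for i in range(n - 1):
        W.append(W[-1] + w[i] * (x[i + 1] - x[i]))
    total = W[-1]
    # invert W at equal increments to obtain the adapted grid
    new_x, j = [x[0]], 0
    for k in range(1, n - 1):
        target = total * k / (n - 1)
        while W[j + 1] < target:
            j += 1
        frac = (target - W[j]) / (W[j + 1] - W[j])
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    new_x.append(x[-1])
    return new_x
```

Applied to a solution with a sharp internal layer (e.g. a steep tanh profile), the adapted grid clusters points around the layer, which is exactly the behavior wanted near shocks and shear layers.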
MORSE: current status of the two Oak Ridge versions
Emmett, M.B.; West, J.T.
1980-01-01
There are two versions of the MORSE Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CG is the most well-known and has undergone extensive use for many years. Development of MORSE-SGC was originally begun in order to restructure the cross section handling and thereby save storage, but the more recent goal has been to incorporate some of the KENO ability to handle multiple arrays in the geometry and to improve on 3-D plotting capabilities. New capabilities recently added to MORSE-CG include a generalized form for a Klein Nishina estimator, a new version of BREESE, the albedo package, which now allows multiple albedo materials and a revised DOMINO which handles DOT-IV tapes.
Adaptive Dynamic Event Tree in RAVEN code
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur
2014-11-01
RAVEN is a software tool focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive, and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.
ICAN Computer Code Adapted for Building Materials
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
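The abstract does not disclose ICAN/PART's actual micromechanics, but the kind of prediction a particulate-composite model makes can be illustrated with the elementary Voigt (uniform-strain) and Reuss (uniform-stress) bounds on composite stiffness; the moduli below are illustrative numbers, not Master Builders data.

```python
def voigt_reuss(Em, Ep, vp):
    """Elementary bounds on the Young's modulus of a particle-reinforced
    composite: Em = matrix modulus, Ep = particle modulus, vp = particle
    volume fraction. Any micromechanics prediction must fall between them."""
    upper = (1.0 - vp) * Em + vp * Ep        # Voigt: uniform strain
    lower = 1.0 / ((1.0 - vp) / Em + vp / Ep)  # Reuss: uniform stress
    return lower, upper

# e.g. a cement-paste matrix (~30 GPa) with stiff aggregate (~70 GPa)
lo, hi = voigt_reuss(Em=30.0, Ep=70.0, vp=0.4)
```

For these inputs the composite modulus is bracketed between about 38.9 and 46 GPa; a real code such as ICAN/PART refines this bracket with particle shape and interface effects.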
Efficient morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets. PMID:18467759
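The core idea of a tau-map Morse decomposition can be sketched very crudely: map each cell of a spatial decomposition forward under the flow for time tau, build a directed graph on the cells, and take its recurrent strongly connected components as the Morse sets. The sketch below uses centre-point sampling on a square grid and a toy field with an attracting limit cycle; the paper's actual method uses conservative outer approximations on triangle meshes, which this is not.

```python
import numpy as np

def v(p):
    """Toy planar field: unstable focus at the origin inside an
    attracting limit cycle at radius 1 (not from the paper's data)."""
    x, y = p
    r2 = x * x + y * y
    return np.array([-y + x * (1.0 - r2), x + y * (1.0 - r2)])

N, lo, hi = 24, -2.0, 2.0
h = (hi - lo) / N

def cell_of(p):
    i, j = int((p[0] - lo) / h), int((p[1] - lo) / h)
    return i * N + j if 0 <= i < N and 0 <= j < N else None

# tau-map: follow each cell centre for time tau, record the landing cell
adj = [[] for _ in range(N * N)]
for i in range(N):
    for j in range(N):
        p = np.array([lo + (i + 0.5) * h, lo + (j + 0.5) * h])
        for _ in range(60):              # crude Euler flow, tau = 1.2
            p = p + 0.02 * v(p)
        c = cell_of(p)
        if c is not None:
            adj[i * N + j].append(c)

def sccs(adj):
    """Iterative Tarjan SCC; nontrivial components approximate Morse sets."""
    n = len(adj)
    index, low, onstk = [None] * n, [0] * n, [False] * n
    stack, comps, count = [], [], [0]
    for s in range(n):
        if index[s] is not None:
            continue
        work = [(s, 0)]
        while work:
            u, pi = work[-1]
            if pi == 0:
                index[u] = low[u] = count[0]; count[0] += 1
                stack.append(u); onstk[u] = True
            advanced = False
            for t in range(pi, len(adj[u])):
                w = adj[u][t]
                if index[w] is None:
                    work[-1] = (u, t + 1); work.append((w, 0))
                    advanced = True
                    break
                if onstk[w]:
                    low[u] = min(low[u], index[w])
            if advanced:
                continue
            work.pop()
            if low[u] == index[u]:
                comp = []
                while True:
                    w = stack.pop(); onstk[w] = False; comp.append(w)
                    if w == u:
                        break
                comps.append(comp)
            if work:
                pu = work[-1][0]
                low[pu] = min(low[pu], low[u])
    return comps

morse = max(sccs(adj), key=len)          # largest recurrent cell set
```

For this field the largest recurrent component lands on the ring of cells around the limit cycle at radius 1; the finer the grid and the better the cell-image approximation, the finer the resulting MCG.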
Design of Pel Adaptive DPCM coding based upon image partition
NASA Astrophysics Data System (ADS)
Saitoh, T.; Harashima, H.; Miyakawa, H.
1982-01-01
A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performances of the Pel Adaptive DPCM coding method differ depending on the subject images. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
Adaptation of gasdynamical codes to the modern supercomputers
NASA Astrophysics Data System (ADS)
Kaygorodov, P. V.
2016-02-01
During the last decades, supercomputer architecture has changed significantly, and it is now impossible to achieve peak performance without adapting numerical codes to modern supercomputer architectures. In this paper, I want to share my experience in adapting astrophysical gasdynamical numerical codes to multi-node computing clusters with multi-CPU and multi-GPU nodes.
Adaptation of bit error rate by coding
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Sorton, G.
1984-07-01
The use of coding in spacecraft wideband communication to reduce transmission power, save bandwidth, and lower antenna specifications was studied. The feasibility of a codec operating at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 10^-3 and an output BER of 10^-9 is demonstrated. Single-level block-code protection and two-level coding protection are examined. A single-level BCH code with a five-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement but falls short with BER = 7x10^-9. A single-level BCH code with a seven-error correction capacity and 12% redundancy meets the specification but is more difficult to implement. Two-level protection with a 9% BCH outer code and a 10% BCH inner code, both levels with a three-error correction capacity and 8% redundancy for a coded block of 7050 bits, is the most complex option but offers performance advantages.
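The quoted redundancies and block sizes can be sanity-checked. The abstract does not name the code lengths, but primitive BCH codes reproduce the numbers if one assumes the usual parameters n = 2^m - 1 with about m*t parity bits: BCH(255,215) for the t = 5 code and BCH(511,448) for the t = 7 code are consistent guesses, not stated facts.

```python
def bch_params(m, t):
    """Primitive BCH code of length n = 2**m - 1 with about m*t parity
    bits (the bound is exact for the two codes checked here)."""
    n = 2 ** m - 1
    parity = m * t
    return n, n - parity, parity / n

n5, k5, r5 = bch_params(8, 5)   # plausibly the t=5 code: BCH(255,215)
n7, k7, r7 = bch_params(9, 7)   # plausibly the t=7 code: BCH(511,448)

print(n5, k5, round(100 * r5, 1), 4 * n5)   # 255 215 15.7 1020
print(n7, k7, round(100 * r7, 1))           # 511 448 12.3
```

The first code gives 15.7% redundancy (the quoted "16%") and, with interleaving depth 4, exactly the 1020-bit coded block; the second gives 12.3% redundancy (the quoted "12%").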
An Adaptive Code for Radial Stellar Model Pulsations
NASA Astrophysics Data System (ADS)
Buchler, J. Robert; Kolláth, Zoltán; Marom, Ariel
1997-09-01
We describe an implicit 1-D adaptive-mesh hydrodynamics code that is specially tailored for radial stellar pulsations. In the Lagrangian limit the code reduces to the well-tested Fraley scheme. The code has the useful feature that unwanted, long-lasting transients can be avoided by smoothly switching on the adaptive mesh features starting from the Lagrangian code. Thus, a limit-cycle pulsation that can readily be computed with the relaxation method of Stellingwerf will converge in a few tens of pulsation cycles when put into the adaptive mesh code. The code has been checked on two shock problems, viz. Noh and Sedov, for which analytical solutions are known, and it has been found to be both accurate and stable. Superior results were obtained by solving the total energy (gravitational + kinetic + internal) equation rather than the internal energy equation alone.
Results of investigation of adaptive speech codes
NASA Astrophysics Data System (ADS)
Nekhayev, A. L.; Pertseva, V. A.; Sitnyakovskiy, I. V.
1984-06-01
The search for ways of increasing the effectiveness of speech signals in digital form led to the appearance of various encoding methods that reduce the redundancy arising from specific properties of the speech signal. It is customary to divide speech codes into two large classes: codes of signal parameters (or vocoders), and codes of the signal form (CSF). In telephony, preference is given to the second class of systems, which maintains naturalness of sound. The class of CSF has expanded considerably because of the development of codes based on the frequency representation of a signal. The greatest interest attaches to such encoding methods as pulse-code modulation (PCM), differential PCM (DPCM), and delta modulation (DM). However, developers of digital transmission systems find it difficult to compile a complete picture of the applicability of specific types of codes. The best-known versions of the codes are evaluated by means of subjective-statistical measurements of their characteristics. The results obtained help developers to draw conclusions regarding the applicability of the codes considered in various communication systems.
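Of the waveform-coding methods surveyed here, DPCM is the easiest to sketch: transmit the quantized prediction error rather than the sample itself, and let encoder and decoder track the same reconstruction. The fragment below is a minimal first-order illustration; the predictor coefficient and quantizer step are arbitrary demo values, not from the study.

```python
import numpy as np

def dpcm(signal, step=0.05, a=0.9):
    """First-order predictive DPCM: quantize the prediction error and
    rebuild the signal from the quantized residuals, exactly as the
    decoder would. Reconstruction error is bounded by step/2."""
    pred = 0.0
    recon = []
    for s in signal:
        e = s - a * pred                  # prediction error
        q = step * round(e / step)        # uniform quantizer
        pred = a * pred + q               # decoder-tracked reconstruction
        recon.append(pred)
    return np.array(recon)

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 5 * t)            # toy "speech" waveform
y = dpcm(x)
```

Because the encoder quantizes the error of the *reconstructed* predictor state, quantization noise does not accumulate: every reconstructed sample stays within half a quantizer step of the input.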
Framed Morse functions on surfaces
Kudryavtseva, Elena A; Permyakov, Dmitrii A
2010-06-09
Let M be a smooth, compact, not necessarily orientable surface with (possibly empty) boundary, and let F be the space of Morse functions on M that are constant on each component of the boundary and have no critical points on the boundary. The notion of a framing is defined for a Morse function f ∈ F. In the case of an orientable surface M this is a closed 1-form α on M with punctures at the critical points of local minimum and maximum of f, such that in a neighbourhood of each critical point the pair (f, α) has a canonical form in a suitable local coordinate chart, and the 2-form df ∧ α does not vanish on M punctured at the critical points and defines a positive orientation there. Each Morse function on M is shown to have a framing, and the space F endowed with the C^∞-topology is homotopy equivalent to the space of framed Morse functions. The results obtained make it possible to reduce the problem of describing the homotopy type of F to the simpler problem of finding the homotopy type of the space of framed Morse functions. As a solution of the latter, an analogue of the parametric h-principle is stated for that space. Bibliography: 41 titles.
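For readers unfamiliar with the canonical forms invoked here: by the Morse lemma, near a nondegenerate critical point p of index k a Morse function can be written in suitable local coordinates as

```latex
f(x_1,\dots,x_n) \;=\; f(p) \;-\; x_1^2 - \dots - x_k^2 \;+\; x_{k+1}^2 + \dots + x_n^2 .
```

On a surface (n = 2) the cases k = 0, 1, 2 are local minima, saddles, and local maxima; the framing α is required to take a compatible canonical form in the same chart.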
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
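The abstract does not specify the estimator, so as a generic illustration of "a probability estimate which depends on previously encoded bits" the sketch below uses a Laplace-style counting estimate and measures the ideal arithmetic-coding cost it would induce; this is the standard sequential-estimation idea, not the paper's specific technique.

```python
import math, random

random.seed(0)
p_true = 0.2
bits = [1 if random.random() < p_true else 0 for _ in range(20000)]

# sequential (Laplace-style) estimate built only from already-coded bits
ones = zeros = 1
cost = 0.0                      # ideal arithmetic-coding cost in bits
for b in bits:
    p1 = ones / (ones + zeros)  # probability estimate for the next bit
    cost += -math.log2(p1 if b else 1.0 - p1)
    if b:
        ones += 1
    else:
        zeros += 1

entropy = -(p_true * math.log2(p_true)
            + (1 - p_true) * math.log2(1 - p_true))
rate = cost / len(bits)         # bits spent per source bit
```

Because the estimate sharpens as bits are coded, the per-bit cost approaches the source entropy (about 0.72 bits here) with only a vanishing redundancy; the decoder can maintain the identical counts, so no side information is needed.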
JPEG 2000 coding of image data over adaptive refinement grids
NASA Astrophysics Data System (ADS)
Gamito, Manuel N.; Dias, Miguel S.
2003-06-01
An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples generated through adaptive subdivision can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in computational physics, where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the entropy-coded and quantized transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.
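The lossless signalling of a quadtree subdivision structure takes one bit per node: 1 means "split into four children", 0 means "leaf". The toy sketch below shows the round trip; it illustrates the principle only and is not the actual syntax of the proposed JPEG 2000 extension.

```python
def encode(node, out):
    """Depth-first subdivision flags: 1 = split into four, 0 = leaf."""
    if isinstance(node, list):
        out.append(1)
        for child in node:
            encode(child, out)
    else:
        out.append(0)

def decode(bit_iter):
    """Rebuild the tree shape; leaves come back as the placeholder 0."""
    if next(bit_iter):
        return [decode(bit_iter) for _ in range(4)]
    return 0

# a sample adaptive-subdivision layout: two regions refined once more
tree = [0, [0, 0, 0, 0], 0, [0, [0, 0, 0, 0], 0, 0]]
bits = []
encode(tree, bits)
```

The tree above has 4 internal nodes and 13 leaves, so the structure costs exactly 17 bits; the quantized transform coefficients of each leaf would then follow in the bitstream.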
Adaptive Coding and Modulation Scheme for Ka Band Space Communications
NASA Astrophysics Data System (ADS)
Lee, Jaeyoon; Yoon, Dongweon; Lee, Wooju
2010-06-01
Rain attenuation can seriously degrade the availability of a Ka-band space communication link. To reduce the effect of rain attenuation on the error performance of space communications in Ka band, an adaptive coding and modulation (ACM) scheme is required. In this paper, to achieve reliable telemetry data transmission, we propose adaptive coding and modulation levels using the turbo code recommended by the Consultative Committee for Space Data Systems (CCSDS) and various modulation methods (QPSK, 8PSK, 4+12 APSK, and 4+12+16 APSK) adopted in Digital Video Broadcasting - Satellite - Second Generation (DVB-S2).
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
MORSE Monte Carlo shielding calculations for the zirconium hydride reference reactor
NASA Technical Reports Server (NTRS)
Burgart, C. E.
1972-01-01
Verification of DOT-SPACETRAN transport calculations of a lithium hydride and tungsten shield for a SNAP reactor was performed using the MORSE (Monte Carlo) code. Transport of both neutrons and gamma rays was considered. Importance sampling was utilized in the MORSE calculations. Several quantities internal to the shield, as well as the dose at several points outside the configuration, were in satisfactory agreement with the DOT calculations of the same quantities.
Adaptive Modulation and Coding for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Hadi, S. S.; Tiong, T. C.
2015-04-01
Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeting to become the first global mobile phone standard, despite the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for the 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose modulation types and the forward error correction (FEC) coding rate.
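The AMC decision itself is a table lookup: given the fed-back channel quality, pick the highest-throughput modulation-and-code-rate pair the channel supports. The sketch below makes that concrete; the SNR thresholds are invented for the illustration and are not the 3GPP CQI tables.

```python
# illustrative MCS table: (name, bits/symbol, code rate, min SNR in dB)
# thresholds are made up for the sketch, not 3GPP values
MCS = [
    ("QPSK 1/2",  2, 1 / 2,  2.0),
    ("QPSK 3/4",  2, 3 / 4,  5.0),
    ("16QAM 1/2", 4, 1 / 2,  8.0),
    ("16QAM 3/4", 4, 3 / 4, 11.0),
    ("64QAM 2/3", 6, 2 / 3, 15.0),
    ("64QAM 5/6", 6, 5 / 6, 18.0),
]

def select_mcs(snr_db):
    """Pick the highest-throughput scheme the reported channel supports
    (throughput proxy: bits/symbol times code rate)."""
    usable = [m for m in MCS if m[3] <= snr_db]
    if not usable:
        return None
    return max(usable, key=lambda m: m[1] * m[2])

choice = select_mcs(12.0)
```

At a reported 12 dB this selects 16QAM rate 3/4 (3 information bits per symbol); as the channel degrades the selection falls back toward QPSK 1/2, trading rate for robustness exactly as AMC intends.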
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses, or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition performance.
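The statistical matching step can be caricatured in a few lines: choose a per-block correction capacity t large enough to absorb typical same-identity bit errors, while refusing capacities that would bridge the gap to different identities. The rule and numbers below are a hypothetical illustration of the idea, not the paper's actual selection procedure.

```python
def choose_t(intra_mean, intra_sd, inter_mean):
    """Pick a per-block BCH correction capacity t from Hamming-distance
    statistics: cover roughly 95% of genuine errors (mean + 2 SD of the
    intra-class distance), but refuse if that reaches halfway to the
    typical inter-class distance. Illustrative rule, not the paper's."""
    t = int(round(intra_mean + 2.0 * intra_sd))
    return t if t < inter_mean / 2.0 else None

ok = choose_t(6.0, 2.0, 40.0)     # well-separated block: usable t
bad = choose_t(12.0, 5.0, 30.0)   # overlapping block: no safe capacity
```

Blocks whose intra- and inter-class distributions overlap get no code (or a recording-condition-specific one), which is the per-block, per-condition adaptivity the abstract describes.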
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
The multidimensional Self-Adaptive Grid code, SAGE, version 2
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1995-01-01
This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.
A trellis-searched APC (adaptive predictive coding) speech coder
Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)
1990-01-01
In this paper we formulate a speech coding system that incorporates trellis-coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented, and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.
The multidimensional self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1992-01-01
This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.
Diederen, Kelly M J; Spencer, Tom; Vestergaard, Martin D; Fletcher, Paul C; Schultz, Wolfram
2016-06-01
Effective error-driven learning benefits from scaling of prediction errors to reward variability. Such behavioral adaptation may be facilitated by neurons coding prediction errors relative to the standard deviation (SD) of reward distributions. To investigate this hypothesis, we required participants to predict the magnitude of upcoming reward drawn from distributions with different SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. In line with the notion of adaptive coding, BOLD response slopes in the Substantia Nigra/Ventral Tegmental Area (SN/VTA) and ventral striatum were steeper for prediction errors occurring in distributions with smaller SDs. SN/VTA adaptation was not instantaneous but developed across trials. Adaptive prediction error coding was paralleled by behavioral adaptation, as reflected by SD-dependent changes in learning rate. Crucially, increased SN/VTA and ventral striatal adaptation was related to improved task performance. These results suggest that adaptive coding facilitates behavioral adaptation and supports efficient learning. PMID:27181060
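Why scaling prediction errors by the SD helps a system with a fixed dynamic range can be shown with a toy delta rule: raw prediction errors spread out with the reward SD, while SD-normalized errors occupy the same range in every condition. This is a caricature of the adaptive-coding hypothesis, not a model of the fMRI data.

```python
import random
random.seed(1)

def pes(sd, n=5000):
    """Trial-by-trial prediction errors while tracking a reward mean
    drawn from N(10, sd) with a simple delta rule."""
    mean, v, alpha, out = 10.0, 10.0, 0.1, []
    for _ in range(n):
        r = random.gauss(mean, sd)
        out.append(r - v)            # prediction error on this trial
        v += alpha * (r - v)
    return out

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

raw = [spread(pes(sd)) for sd in (2.0, 8.0)]                # grows with SD
scaled = [spread([e / sd for e in pes(sd)]) for sd in (2.0, 8.0)]  # ~constant
```

A neuron whose response slope steepens for small-SD distributions is effectively computing the scaled quantity, keeping its coding range matched to the reward variability, which is the efficiency argument the abstract makes.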
The Rotating Morse-Pekeris Oscillator Revisited
ERIC Educational Resources Information Center
Zuniga, Jose; Bastida, Adolfo; Requena, Alberto
2008-01-01
The Morse-Pekeris oscillator model for the calculation of the vibration-rotation energy levels of diatomic molecules is revisited. This model is based on the realization of a second-order exponential expansion of the centrifugal term about the minimum of the vibrational Morse oscillator and the subsequent analytical resolution of the resulting…
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.
Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian
2015-03-01
We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling. PMID:26353267
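The "strategy informed by persistent homology" can be illustrated in one dimension, where 0-dimensional sublevel-set persistence reduces to a union-find sweep: each local minimum is born when first reached, and the younger of two merging basins dies at the joining value (the elder rule). The sketch below assumes distinct values and is a 1-D analogue, not the paper's 3-D cubical-complex code.

```python
def persistence_pairs(vals):
    """0-dimensional sublevel-set persistence of a 1-D sequence via
    union-find: sweep values low to high; each component's root holds
    its minimum, and the younger of two merging components dies."""
    order = sorted(range(len(vals)), key=lambda i: vals[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i
        for j in (i - 1, i + 1):
            if j not in parent:
                continue                 # neighbour not yet in sublevel set
            a, b = find(i), find(j)
            if a == b:
                continue
            if vals[b] < vals[a]:        # keep the older (deeper) root
                a, b = b, a
            if vals[b] < vals[i]:        # a genuine local minimum dies here
                pairs.append((vals[b], vals[i]))
            parent[b] = a
    return pairs

print(persistence_pairs([5, 2, 4, 1, 3, 0, 6]))
```

Low-persistence pairs (small death minus birth) mark the critical-cell cancellations that simplify the skeleton while the high-persistence basins survive; the global minimum never dies and anchors the final partition.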
Adaptation improves neural coding efficiency despite increasing correlations in variability.
Adibi, Mehdi; McDonald, James S; Clifford, Colin W G; Arabzadeh, Ehsan
2013-01-30
Exposure of cortical cells to sustained sensory stimuli results in changes in the neuronal response function. This phenomenon, known as adaptation, is a common feature across sensory modalities. Here, we quantified the functional effect of adaptation on the ensemble activity of cortical neurons in the rat whisker-barrel system. A multishank array of electrodes was used to allow simultaneous sampling of neuronal activity. We characterized the response of neurons to sinusoidal whisker vibrations of varying amplitude in three states of adaptation. The adaptors produced a systematic rightward shift in the neuronal response function. Consistently, mutual information revealed that peak discrimination performance was not aligned to the adaptor but to test amplitudes 3-9 μm higher. Stimulus presentation reduced single neuron trial-to-trial response variability (captured by Fano factor) and correlations in the population response variability (noise correlation). We found that these two types of variability were inversely proportional to the average firing rate regardless of the adaptation state. Adaptation transferred the neuronal operating regime to lower rates with higher Fano factor and noise correlations. Noise correlations were positive and in the direction of signal, and thus detrimental to coding efficiency. Interestingly, across all population sizes, the net effect of adaptation was to increase the total information despite increasing the noise correlation between neurons. PMID:23365247
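The two variability measures used throughout this study are easy to state precisely. The toy common-input model below (two Poisson neurons sharing part of their drive, not the rat data) shows a Fano factor near 1 together with a positive noise correlation of about 0.4.

```python
import numpy as np
rng = np.random.default_rng(0)

def fano(counts):
    """Trial-to-trial variability: spike-count variance over mean."""
    return counts.var(ddof=1) / counts.mean()

def noise_corr(a, b):
    """Correlation of two neurons' trial-by-trial count fluctuations."""
    return np.corrcoef(a, b)[0, 1]

# toy common-input model: two Poisson neurons share part of their drive
trials = 2000
common = rng.poisson(2.0, trials)            # shared input, variance 2
n1 = common + rng.poisson(3.0, trials)       # each neuron: Poisson(5) total
n2 = common + rng.poisson(3.0, trials)
```

Here the shared drive contributes covariance 2 out of a total count variance of 5, so the expected noise correlation is 2/5; in the study both quantities rose as adaptation moved neurons to lower firing rates, yet total information still increased.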
Adaptive norm-based coding of facial identity.
Rhodes, Gillian; Jeffery, Linda
2006-09-01
Identification of a face is facilitated by adapting to its computationally opposite identity, suggesting that the average face functions as a norm for coding identity [Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89-94; Leopold, D. A., Rhodes, G., Müller, K. -M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London, Series B, 272, 897-904]. Crucially, this interpretation requires that the aftereffect is selective for the opposite identity, but this has not been convincingly demonstrated. We demonstrate such selectivity, observing a larger aftereffect for opposite than non-opposite adapt-test pairs that are matched on perceptual contrast (dissimilarity). Component identities were also harder to detect in morphs of opposite than non-opposite face pairs. We propose an adaptive norm-based coding model of face identity. PMID:16647736
Application of MORSE to radiation analysis of nuclear flight propulsion modules
NASA Technical Reports Server (NTRS)
Woolson, W. A.
1972-01-01
Several modifications and additions were made to the multigroup Monte Carlo code (MORSE) to implement its use in a computational procedure for performing radiation analyses of NERVA nuclear flight propulsion modules. These changes include the incorporation of a new general geometry module; the inclusion of an expectation tracklength estimator; and the option to obtain source information from two-dimensional discrete ordinates calculations. Computations comparing MORSE and a point cross section Monte Carlo code, COHORT, were made in which a coupled discrete ordinates/Monte Carlo procedure was used to calculate the gamma dose rate at tank top locations of a typical propulsion module. The dose rates obtained from the MORSE computation agreed with the dose rates obtained from the COHORT computation to within the limits of the statistical accuracy of the calculations.
Adaptive coded aperture imaging: progress and potential future applications
NASA Astrophysics Data System (ADS)
Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.
2011-09-01
Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results will be presented, based on last year's work, of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).
Adaptive shape coding for perceptual decisions in the human brain
Kourtzi, Zoe; Welchman, Andrew E.
2015-01-01
In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511
Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code
Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.
1985-01-01
In an effort to increase spatial resolution without adding mesh points, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.
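The flux-corrected averaging of a low-order and a high-order flux can be sketched in 1D (a hedged illustration only: we assume an upwind low-order flux, a Lax-Wendroff high-order flux, and a Boris-Book-style limiter; the function name is ours, and the actual code is a 2D remapper):

```python
import numpy as np

def fct_step(u, c):
    """One FCT step for 1D periodic advection (speed > 0, Courant number c).

    Blends a low-order (upwind) flux with a high-order (Lax-Wendroff) flux;
    the antidiffusive correction is limited so the result stays conservative
    and introduces no new extrema (positivity is maintained).
    """
    up1 = np.roll(u, -1)                                     # u_{i+1}
    f_low = c * u                                            # upwind flux at i+1/2
    f_high = 0.5 * c * (u + up1) - 0.5 * c * c * (up1 - u)   # Lax-Wendroff flux
    utd = u - (f_low - np.roll(f_low, 1))                    # transported-diffused field
    a = f_high - f_low                                       # raw antidiffusive flux
    d = np.roll(utd, -1) - utd                               # utd_{i+1} - utd_i
    s = np.sign(a)
    a_lim = s * np.maximum(0.0, np.minimum.reduce(
        [s * np.roll(d, -1), np.abs(a), s * np.roll(d, 1)]))  # limited correction
    return utd - (a_lim - np.roll(a_lim, 1))

u0 = np.zeros(64)
u0[20:30] = 1.0            # square wave: a hard test for nondiffusive advection
u1 = fct_step(u0, 0.5)
```

Because both stages are written in flux form, the total of `u` is conserved exactly, and the limiter clips only the antidiffusive correction, which is the weighted-averaging idea the abstract describes.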
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
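The equal-error-distribution idea can be sketched in 1D (our illustration with an assumed gradient-based weight function, not SAGE's actual multi-dimensional algorithm): points are moved so that each cell carries an equal share of the integrated weight, which clusters them where gradients are high.

```python
import numpy as np

def equidistribute(x, f, n_new):
    """Redistribute n_new points so the integral of w = 1 + |df/dx|
    is the same over every new cell (equidistribution principle)."""
    w = 1.0 + np.abs(np.gradient(f, x))      # weight peaks at high gradients
    # cumulative integral of w via trapezoid rule
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n_new)
    return np.interp(targets, cum, x)        # invert the cumulative weight

x = np.linspace(0.0, 1.0, 101)
f = np.tanh(40.0 * (x - 0.5))                # steep layer at x = 0.5
x_new = equidistribute(x, f, 41)             # points cluster near the layer
```

The endpoints are preserved, and cell sizes shrink near x = 0.5 where the solution gradient (and hence the estimated error) is largest.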
Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination.
Thomas, Blake T; Blalock, Davis W; Levy, William B
2015-07-01
Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744
Conforming Morse-Smale Complexes
Gyulassy, Attila; Gunther, David; Levine, Joshua A.; Tierny, Julien; Pascucci, Valerio
2014-08-11
Morse-Smale (MS) complexes have been gaining popularity as a tool for feature-driven data analysis and visualization. However, the quality of their geometric embedding and the sole dependence on the input scalar field data can limit their applicability when expressing application-dependent features. In this paper we introduce a new combinatorial technique to compute an MS complex that conforms to both an input scalar field and an additional, prior segmentation of the domain. The segmentation constrains the MS complex computation guaranteeing that boundaries in the segmentation are captured as separatrices of the MS complex. We demonstrate the utility and versatility of our approach with two applications. First, we use streamline integration to determine numerically computed basins/mountains and use the resulting segmentation as an input to our algorithm. This strategy enables the incorporation of prior flow path knowledge, effectively resulting in an MS complex that is as geometrically accurate as the employed numerical integration. Our second use case is motivated by the observation that often the data itself does not explicitly contain features known to be present by a domain expert. We introduce edit operations for MS complexes so that a user can directly modify their features while maintaining all the advantages of a robust topology-based representation.
An Adaptive Motion Estimation Scheme for Video Coding
Gao, Yuan; Jia, Kebin
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, a MV distribution prediction method is designed, covering both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised. PMID:24672313
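For context, the baseline that fast patterns such as UMHexagonS accelerate is the exhaustive SAD block search; a minimal sketch (function name and parameters are illustrative, not from the JM reference software):

```python
import numpy as np

def full_search_sad(ref, cur, top, left, bsize=8, srange=4):
    """Exhaustive block-matching ME: return the motion vector (dy, dx)
    minimizing the sum of absolute differences (SAD) within +/- srange."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(block.astype(int)
                         - ref[y:y + bsize, x:x + bsize].astype(int)).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)   # frame shifted by (2, -1)
mv = full_search_sad(ref, cur, 12, 12)               # recovers the shift
```

Fast algorithms replace the (2·srange+1)² candidate grid with adaptive search patterns, which is exactly where the paper's >50% search-point reduction comes from.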
MORSE/STORM: A generalized albedo option for Monte Carlo calculations
Gomes, I.C.; Stevens, P.N.
1991-09-01
The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs.
Cooperative solutions coupling a geometry engine and adaptive solver codes
NASA Technical Reports Server (NTRS)
Dickens, Thomas P.
1995-01-01
Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
3D Finite Element Trajectory Code with Adaptive Meshing
NASA Astrophysics Data System (ADS)
Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien
2004-11-01
Beam Optics Analysis, a new 3D charged-particle program, is available and in use for the design of complex 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and a robust, fully automatic mesh generator. Complex problems can be set up, and analysis initiated, in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun; MacFadyen, Andrew I. (Princeton, Institute for Advanced Study)
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
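The third-order TVD Runge-Kutta integrator mentioned above is commonly written in the Shu-Osher form; a minimal method-of-lines sketch (the helper name is ours):

```python
def tvd_rk3_step(u, dt, rhs):
    """One step of the Shu-Osher third-order TVD (SSP) Runge-Kutta scheme:
    each stage is a convex combination of forward-Euler substeps, which is
    what preserves the TVD property of the underlying spatial scheme."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

# integrate du/dt = -u from u(0) = 1 to t = 1
u, dt = 1.0, 0.001
for _ in range(1000):
    u = tvd_rk3_step(u, dt, lambda v: -v)
# u is now very close to exp(-1)
```

In a hydrodynamics code, `rhs` would be the spatial discretization (e.g., the WENO flux divergence) applied to the full state array.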
Estimation of Some Parameters from Morse-Morse-Spline-Van Der Waals Intermolecular Potential
Coroiu, I.
2007-04-23
Some parameters such as transport cross-sections and isotopic thermal diffusion factor have been calculated from an improved intermolecular potential, Morse-Morse-Spline-van der Waals (MMSV) potential proposed by R.A. Aziz et al. The treatment was completely classical and no corrections for quantum effects were made. The results would be employed for isotope separations of different spherical and quasi-spherical molecules.
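For reference, the Morse piece of the MMSV potential is the standard Morse function; in one common normalization (well depth $D_e$, range parameter $\alpha$, equilibrium separation $r_e$ — the exact reduced form and matching conditions used by Aziz et al. are given in their paper):

```latex
V_{\mathrm{Morse}}(r) \;=\; D_e\!\left[e^{-2\alpha\,(r-r_e)} \;-\; 2\,e^{-\alpha\,(r-r_e)}\right],
\qquad V_{\mathrm{Morse}}(r_e) = -D_e .
```

The MMSV form splices two such Morse segments to a spline section and a long-range van der Waals tail, as the name indicates.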
Adaptive distributed video coding with correlation estimation using expectation propagation
NASA Astrophysics Data System (ADS)
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-01
Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity than sampling methods.
Adaptive lifting scheme with sparse criteria for image coding
NASA Astrophysics Data System (ADS)
Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe
2012-12-01
Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
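The prediction/update structure itself can be made concrete with the fixed integer LeGall 5/3 lifting step used in JPEG 2000 (a non-adaptive example for illustration only; the article's contribution is optimizing such filters under an ℓ1 criterion):

```python
def lift_53(x):
    """One level of the integer 5/3 lifting transform (even-length input).
    Predict: detail = odd sample minus the average of its even neighbors.
    Update: smooth = even sample plus a rounded fraction of nearby details."""
    even, odd = x[0::2], x[1::2]
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(len(odd))]                      # predict step
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
         for i in range(len(even))]                     # update step
    return s, d

def unlift_53(s, d):
    """Invert the lifting steps in reverse order -- exact integer inversion."""
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(len(d))]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [3, 7, 1, 8, 2, 9, 4, 6]
s, d = lift_53(x)         # perfect reconstruction: unlift_53(s, d) == x
```

Because each lifting step only adds or subtracts a quantity computed from the *other* channel, inverting the steps in reverse order reconstructs the input exactly; an adaptive scheme replaces the fixed predict weights with optimized ones while keeping this invertibility.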
Adaptive phase-coded reconstruction for cardiac CT
NASA Astrophysics Data System (ADS)
Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su
2000-04-01
Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by the cardiac motion. Most of the algorithms make use of the projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike the previously proposed schemes where the projection sector size is identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.
Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets
NASA Technical Reports Server (NTRS)
Cheung, K-M.; Smyth, P.
1993-01-01
The Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets is revisited, and it is shown that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
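A Rice subcode (a Golomb code with parameter m = 2^k) of the kind referred to above can be sketched directly; each non-negative integer is split into a unary-coded quotient and k binary remainder bits (helper names are illustrative):

```python
def rice_encode(n, k):
    """Rice code of n >= 0: q ones and a terminating zero for the quotient
    n >> k, followed by the k low-order bits of n as the remainder."""
    q = n >> k
    rem = format(n & ((1 << k) - 1), "0{}b".format(k)) if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Decode one codeword from a bit string; returns (value, bits_consumed)."""
    q = bits.index("0")                      # length of the unary run of ones
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k
```

For geometrically distributed integers, choosing k near log2 of the mean makes short (likely) values get short codewords; the Rice algorithm adaptively picks the best k per block, and the Gallager-van Voorhis result says each such subcode is optimal for some range of the geometric parameter.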
22. Photocopy of photograph (Original print, Anacortes Museum.) Morse of ...
22. Photocopy of photograph (Original print, Anacortes Museum.) Morse of San Francisco, Photographer. Date unknown. Portrait of Melville Curtis. - Curtis Wharf, O & Second Streets, Anacortes, Skagit County, WA
Barut-Girardello coherent states for the Morse potential
NASA Astrophysics Data System (ADS)
Fakhri, H.; Chenaghlou, A.
2003-04-01
Using the shape invariance idea, it is shown that the quantum states of the Morse potential furnish a representation of an infinite-dimensional Lie algebra, the so-called Morse algebra. Then, we derive a representation of the Lie algebra u(1,1) by means of the generators of the Morse algebra. Meanwhile, we obtain the Barut-Girardello coherent states, which are constructed as linear combinations of the quantum states corresponding to the Morse potential. Finally, we realise the resolution of the identity condition for the coherent states.
Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B
2016-08-01
A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK. PMID:27505775
Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.
Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura
2016-02-01
Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413
BUGLE-96 validation with MORSE-SGC/S using water and iron experiments from SINBAD 97
Blanchard, A.
1999-12-03
This document summarizes the validation of MORSE-SGC/S with the BUGLE-96 cross-section library. SINBAD Benchmark Experiment 2.004, the Winfrith Water Benchmark Experiment, and SBE 6.001, the Karlsruhe Iron Sphere Benchmark Experiment, were utilized for this validation. The MORSE-SGC/S code with the BUGLE-96 cross-section library was used to model the experimental configurations as given in SINBAD 97. SINBAD is a shielding integral benchmark archive and database developed at the Oak Ridge National Laboratory (ORNL). For means of comparison, the experimental models were also executed with MORSE-SGC/S using the BUGLE-80 cross-section library. BUGLE-96 cross sections will be used for shielding applications only, as recommended by ORNL.
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite
Wavelet based ECG compression with adaptive thresholding and efficient coding.
Alshamali, A
2010-01-01
This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding for their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms others obtained by previously published schemes. PMID:20608811
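The threshold-then-encode idea can be illustrated with a one-level Haar transform (a simplified stand-in of our own; the paper uses optimized thresholds, a different wavelet, an efficient coding of coefficient positions, and Huffman entropy coding):

```python
import math

def haar_forward(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    s = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, d

def haar_inverse(s, d):
    out = []
    for a, b in zip(s, d):
        out += [(a + b) / math.sqrt(2), (a - b) / math.sqrt(2)]
    return out

def compress(x, thr):
    """Zero the insignificant detail coefficients; a larger threshold gives
    more zeros (higher compressibility) at the cost of reconstruction error."""
    s, d = haar_forward(x)
    d_t = [c if abs(c) >= thr else 0.0 for c in d]
    return s, d_t

# slowly varying "signal" with a small high-frequency component
signal = [math.sin(0.3 * i) + 0.01 * ((-1) ** i) for i in range(64)]
s, d_t = compress(signal, 0.05)
recon = haar_inverse(s, d_t)
```

Only coefficients below the threshold are discarded, so the per-sample reconstruction error is bounded by the threshold; the runs of zeros in `d_t` are what an entropy coder such as Huffman exploits.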
Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda
2011-12-01
People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects but only when the task directly taps the use of face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition. PMID:21986295
Deficits in context-dependent adaptive coding of reward in schizophrenia
Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan
2016-01-01
Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM
Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
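The local fit at the heart of the scheme, a least-squares polynomial through neighboring sample values that yields both the field value and its spatial derivative, can be sketched in one dimension. This is an unweighted stand-in (true moving-least-squares fits weight neighbors by distance from the evaluation point), and `mls_fit` is a hypothetical helper name, not Phurbas code:

```python
import numpy as np

def mls_fit(xs, fs, x0, degree=3):
    """Local least-squares polynomial fit around x0: a 1-D, unweighted
    stand-in for the moving-least-squares interpolation over neighbor
    particles. Returns the fitted field value and derivative at x0."""
    # Fit in coordinates centered on x0 so evaluation at x0 is evaluation at 0.
    c = np.polyfit(np.asarray(xs, dtype=float) - x0, fs, degree)
    value = np.polyval(c, 0.0)             # field value at the particle position
    deriv = np.polyval(np.polyder(c), 0.0) # spatial derivative at x0
    return value, deriv
```

Fitting a quadratic sample set exactly recovers both the value and the slope, which is the property the scheme relies on for third-order accuracy.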
On a Class of Thue-Morse Type Sequences
NASA Astrophysics Data System (ADS)
Astudillo, Ricardo
2003-12-01
We consider a class of binary sequences that generalize the Thue-Morse sequence. In particular, we investigate the occurrences of palindromes in such sequences. We also introduce the notion of the first difference of a binary sequence and characterize first differences of our class of Thue-Morse type sequences. Finally, we define the concept of a "change sequence" of a given binary sequence, a sequence which encodes the positions at which a binary sequence changes values. We characterize the change sequences corresponding to our class of Thue-Morse type sequences.
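A minimal sketch of these constructions, assuming the usual bit-count definition of the Thue-Morse sequence and encoding the "change sequence" as the list of positions where the value flips (one plausible reading of the definition above; the helper names are illustrative):

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence: t[k] = parity of 1-bits in k."""
    return [bin(k).count("1") % 2 for k in range(n)]

def first_difference(seq):
    """First difference of a binary sequence, taken mod 2 (XOR of neighbors)."""
    return [a ^ b for a, b in zip(seq, seq[1:])]

def change_positions(seq):
    """Indices at which the sequence changes value: one way to encode a
    'change sequence' as described in the abstract."""
    return [i for i, (a, b) in enumerate(zip(seq, seq[1:]), start=1) if a != b]

t = thue_morse(12)
print(t)  # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
print(first_difference(t))
print(change_positions(t))
```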
Morse Potential in the Momentum Representation
NASA Astrophysics Data System (ADS)
Sun, Guo-Hua; Dong, Shi-Hai
2012-12-01
The momentum representation of the Morse potential is presented analytically in terms of a hypergeometric function. Its properties with respect to the momentum p and the potential parameter β are studied. Note that |Ψ(p)| is a nodeless function, and the mutual orthogonality of the functions is ensured by the phase functions arg[Ψ(p)]. It is interesting to see that |Ψ(p)| is symmetric with respect to the axis p = 0 and that the number of wave crests of |Ψ(p)| is equal to n + 1. We also study the variation of |Ψ(p)| with respect to β. The amplitude of |Ψ(p)| first increases with the quantum number n and then decreases. Finally, we note that discontinuities in phase occur at some points of the momentum p, and that the position and momentum probability densities are symmetric with respect to their arguments.
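For reference, the Morse potential and its bound-state energies in the standard textbook form (stated here for orientation, not taken from the paper itself) are:

```latex
V(r) = D_e \left(1 - e^{-\beta (r - r_e)}\right)^2,
\qquad
E_n = \hbar\omega_0\left(n + \tfrac{1}{2}\right)
      - \frac{\left[\hbar\omega_0\left(n + \tfrac{1}{2}\right)\right]^2}{4 D_e},
```

where D_e is the well depth, r_e the equilibrium separation, β sets the width of the well, and ω_0 is the harmonic frequency at the well bottom.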
Adaptation of a neutron diffraction detector to coded aperture imaging
Vanier, P.E.; Forman, L.
1997-02-01
A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from whence they came. This realization of an idea which defied the conventional wisdom has provided a device which has never before been available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of coded apertures must take saturation into account. Saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements of up to 10 dB in image reconstruction with the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
Correctable noise of quantum-error-correcting codes under adaptive concatenation
NASA Astrophysics Data System (ADS)
Fern, Jesse
2008-01-01
We examine the transformation of noise under a quantum-error-correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum-fault-tolerant thresholds.
Nanoparticle-dispersed metamaterial sensors for adaptive coded aperture imaging applications
NASA Astrophysics Data System (ADS)
Nehmetallah, Georges; Banerjee, Partha; Aylo, Rola; Rogers, Stanley
2011-09-01
We propose tunable single-layer and multi-layer (periodic and with defect) structures comprising nanoparticle dispersed metamaterials in suitable hosts, including adaptive coded aperture constructs, for possible Adaptive Coded Aperture Imaging (ACAI) applications such as in microbolometry, pressure/temperature sensors, and directed energy transfer, over a wide frequency range, from visible to terahertz. These structures are easy to fabricate, are low-cost and tunable, and offer enhanced functionality, such as perfect absorption (in the case of bolometry) and low cross-talk (for sensors). Properties of the nanoparticle dispersed metamaterial are determined using effective medium theory.
Application of adaptive subband coding for noisy bandlimited ECG signal processing
NASA Astrophysics Data System (ADS)
Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.
1996-03-01
An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm performs well in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using the Daubechies D4 wavelet without the bandlimited adaptive wavelet transform.
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable-length coding system is presented. Although the system was developed primarily for the proposed Grand Tour missions, many of its features indicate a much wider applicability. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the need to store any code words. Performance improvements of 0.5 bit/pixel can be achieved simply by utilizing previous-line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
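The Basic Compressor's per-block code selection can be illustrated with Golomb-Rice codes over prediction residuals; this is a schematic stand-in, not the actual flight coder, and the function names are hypothetical:

```python
def rice_encode(values, k):
    """Golomb-Rice code: quotient in unary ('1'*q + '0'), then k-bit remainder."""
    out = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        out.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
    return "".join(out)

def adaptive_encode(samples, block=21, ks=(0, 1, 2)):
    """Per block of samples, pick the parameter k giving the shortest output,
    mimicking the Basic Compressor's choice of one of three codes per
    21-pixel block. Returns (k, bitstring) per block."""
    out = []
    for i in range(0, len(samples), block):
        blk = samples[i:i + block]
        best_k = min(ks, key=lambda k: len(rice_encode(blk, k)))
        out.append((best_k, rice_encode(blk, best_k)))
    return out
```

A block of small residuals selects a small k (short unary parts), while a block of large residuals selects a larger k, which is the essence of adapting to rapid changes in source statistics.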
Code division controlled-MAC in wireless sensor network by adaptive binary signature design
NASA Astrophysics Data System (ADS)
Wei, Lili; Batalama, Stella N.; Pados, Dimitris A.; Suter, Bruce
2007-04-01
We consider the problem of signature waveform design for code-division medium access control (MAC) of wireless sensor networks (WSNs). In contrast to conventional randomly chosen orthogonal codes, an adaptive signature design strategy is developed under the maximum pre-detection SINR (signal-to-interference-plus-noise ratio) criterion. The proposed algorithm utilizes slowest-descent cords of the optimization surface to move toward the optimum solution and exhibits, upon eigenvector decomposition, linear computational complexity with respect to signature length. Numerical and simulation studies demonstrate the performance of the proposed method and offer comparisons with conventional signature code sets.
NASA Astrophysics Data System (ADS)
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) scheme is presented. The idea behind the improvement is to use a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm. The authors' proposal for such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate savings compared to the original CABAC algorithm. The proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool yields a 5% to 7.5% reduction in decoding time while still maintaining high data-compression efficiency.
Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons.
Ralston, Bridget N; Flagg, Lucas Q; Faggin, Eric; Birmingham, John T
2016-06-01
For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the "history"). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates "history" into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106
QOS-aware error recovery in wireless body sensor networks using adaptive network coding.
Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah
2015-01-01
Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
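The gain-adaptation idea, normalize each input vector by an estimated gain, quantize, and multiply the decoded vector back by the same gain, can be sketched as follows. The backward estimator shown (RMS of previously decoded vectors, which the decoder can mirror without side information) is one simple choice, not the paper's optimized estimator, and the function names are illustrative:

```python
import math

def backward_gain_estimate(prev_outputs, floor=1e-3):
    """Backward-adaptive gain: RMS of previously decoded vectors.
    The decoder computes the same quantity, so no gain is transmitted."""
    if not prev_outputs:
        return 1.0
    energy = sum(x * x for v in prev_outputs for x in v)
    count = sum(len(v) for v in prev_outputs)
    return max(math.sqrt(energy / count), floor)

def vq_encode(vec, codebook, gain):
    """Normalize by the gain, then pick the nearest codevector index."""
    norm = [x / gain for x in vec]
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(norm, codebook[i])))

def vq_decode(idx, codebook, gain):
    """Scale the gain-normalized codevector back up by the estimated gain."""
    return [gain * x for x in codebook[idx]]
```

Because normalization shrinks the dynamic range of the vectors, a small gain-normalized codebook covers the signal much more efficiently than a fixed codebook spanning all input levels.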
Microlensing observations rapid search for exoplanets: MORSE code for GPUs
NASA Astrophysics Data System (ADS)
McDougall, Alistair; Albrow, Michael D.
2016-02-01
The rapid analysis of ongoing gravitational microlensing events has been integral to the successful detection and characterization of cool planets orbiting low-mass stars in the Galaxy. In this paper, we present an implementation of search and fit techniques on graphical processing unit (GPU) hardware. The method allows for the rapid identification of candidate planetary microlensing events and their subsequent follow-up for detailed characterization.
Object-adaptive depth compensated inter prediction for depth video coding in 3D video system
NASA Astrophysics Data System (ADS)
Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung
2011-01-01
Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block, to be coded, and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
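The core compensation step, subtracting the mean-depth difference between the current and reference blocks before forming the residual, might look like this on flattened blocks. This is an illustrative sketch under that reading of the abstract, not the JMVC 8.2 implementation:

```python
def mean_depth_compensated_residual(cur_block, ref_block):
    """Compensate the mean-depth difference between the current block and the
    reference block, then return (residual, mean_depth_difference).
    Blocks are flat lists of equal length."""
    n = len(cur_block)
    d = (sum(cur_block) - sum(ref_block)) / n  # mean-depth difference
    residual = [c - (r + d) for c, r in zip(cur_block, ref_block)]
    return residual, d
```

When an object merely shifts in depth between frames, the compensated residual collapses toward zero even though the plain inter-prediction residual would be large.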
Performance of Adaptive Trellis Coded Modulation Applied to MC-CDMA with Bi-orthogonal Keying
NASA Astrophysics Data System (ADS)
Tanaka, Hirokazu; Yamasaki, Shoichiro; Haseyama, Miki
The performance of a Generalized Symbol-Rate-Increased (GSRI) Pragmatic Adaptive Trellis Coded Modulation (ATCM) scheme applied to a Multi-carrier CDMA (MC-CDMA) system with bi-orthogonal keying is analyzed. In the MC-CDMA system considered in this paper, the input sequence of the bi-orthogonal modulator consists of a code-selection bit sequence and a sign bit sequence. In [9], an efficient error correction code using a Reed-Solomon (RS) code for the code-selection bit sequence was proposed. However, since BPSK is employed for the sign bit modulation, no error correction code is applied to it. To realize a high-speed wireless system, a multi-level modulation scheme (e.g., MPSK, MQAM) is desired. In this paper, we investigate the performance of the MC-CDMA system with bi-orthogonal keying employing GSRI ATCM. GSRI TC-MPSK can set the bandwidth expansion ratio arbitrarily while keeping a higher coding gain than the conventional pragmatic TCM scheme. By changing the modulation scheme and the bandwidth expansion ratio (coding rate), the scheme can optimize performance according to channel conditions. Performance evaluations by simulation on an AWGN channel and multi-path fading channels are presented. It is shown that the proposed scheme achieves better throughput performance than the conventional scheme.
A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding
Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan
2015-01-01
The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
The development and application of the self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.
1993-01-01
The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.
Asynchrony adaptation reveals neural population code for audio-visual timing
Roach, Neil W.; Heron, James; Whitaker, David; McGraw, Paul V.
2011-01-01
The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible—adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects. PMID:20961905
Volumetric data analysis using Morse-Smale complexes
Natarajan, V; Pascucci, V
2005-10-13
The 3D Morse-Smale complex is a fundamental topological construct that partitions the domain of a real-valued function into regions having uniform gradient flow behavior. In this paper, we consider the construction and selective presentation of cells of the Morse-Smale complex and their use in the analysis and visualization of scientific datasets. We take advantage of the fact that cells of different dimension often characterize different types of features present in the data. For example, critical points pinpoint changes in topology by showing where components of the level sets are created, destroyed or modified in genus. Edges of the Morse-Smale complex extract filament-like features that are not explicitly modeled in the original data. Interactive selection and rendering of portions of the Morse-Smale complex introduces fundamental data management challenges due to the unstructured nature of the complex even for structured inputs. We describe a data structure that stores the Morse-Smale complex and allows efficient selective traversal of regions of interest. Finally, we illustrate the practical use of this approach by applying it to cryo-electron microscopy data of protein molecules.
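The role of critical points can be illustrated with a coarse vertex classifier on a 2-D grid; this 4-neighbor test is only a simplified stand-in for the full combinatorial Morse-Smale construction, and the function name is hypothetical:

```python
def classify_critical(grid, i, j):
    """Classify an interior grid vertex of a scalar field as 'min', 'max',
    'saddle', or 'regular' by comparing it with its 4-neighbors: a coarse
    stand-in for identifying the critical points that anchor the complex."""
    v = grid[i][j]
    nbrs = [grid[i - 1][j], grid[i + 1][j], grid[i][j - 1], grid[i][j + 1]]
    lower = sum(n < v for n in nbrs)
    if lower == 0:
        return "min"
    if lower == 4:
        return "max"
    # Walk the neighbors in cyclic order; four sign changes in "below v"
    # around the cycle indicate a saddle-like configuration.
    cycle = [grid[i - 1][j], grid[i][j + 1], grid[i + 1][j], grid[i][j - 1]]
    below = [n < v for n in cycle]
    changes = sum(a != b for a, b in zip(below, below[1:] + below[:1]))
    return "saddle" if changes == 4 else "regular"
```

In the full construction these critical points are joined by gradient-flow paths (the edges of the complex), which is what extracts the filament-like features mentioned above.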
Rate-adaptive modulation and coding for optical fiber transmission systems
NASA Astrophysics Data System (ADS)
Gho, Gwang-Hyun; Kahn, Joseph M.
2011-01-01
Rate-adaptive optical transmission techniques adjust information bit rate based on transmission distance and other factors affecting signal quality. These techniques enable increased bit rates over shorter links, while enabling transmission over longer links when regeneration is not available. They are likely to become more important with increasing network traffic and a continuing evolution toward optically switched mesh networks, which make signal quality more variable. We propose a rate-adaptive scheme using variable-rate forward error correction (FEC) codes and variable constellations with a fixed symbol rate, quantifying how achievable bit rates vary with distance. The scheme uses serially concatenated Reed-Solomon codes and an inner repetition code to vary the code rate, combined with single-carrier polarization-multiplexed M-ary quadrature amplitude modulation (PM-M-QAM) with variable M and digital coherent detection. A rate adaptation algorithm uses the signal-to-noise ratio (SNR) or the FEC decoder input bit-error ratio (BER) estimated by a receiver to determine the FEC code rate and constellation size that maximizes the information bit rate while satisfying a target FEC decoder output BER and an SNR margin, yielding a peak rate of 200 Gbit/s in a nominal 50-GHz channel bandwidth. We simulate single-channel transmission through a long-haul fiber system incorporating numerous optical switches, evaluating the impact of fiber nonlinearity and bandwidth narrowing. With zero SNR margin, we achieve bit rates of 200/100/50 Gbit/s over distances of 650/2000/3000 km. Compared to an ideal coding scheme, the proposed scheme exhibits a performance gap ranging from about 6.4 dB at 650 km to 7.5 dB at 5000 km.
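The rate-adaptation step, picking the highest-rate combination of constellation and FEC code rate whose SNR threshold is met with the required margin, reduces to a table lookup. The threshold values below are made-up placeholders for illustration, not the operating points measured in the study:

```python
# Illustrative SNR thresholds (dB) -> (modulation, FEC code rate).
# The numbers are placeholders, not values from the paper.
PROFILES = [
    (17.0, ("PM-16QAM", 0.93)),
    (10.0, ("PM-QPSK", 0.93)),
    (7.0,  ("PM-QPSK", 0.50)),
]

def select_profile(snr_db, margin_db=0.0):
    """Return the highest-rate (modulation, code rate) pair whose SNR
    threshold is satisfied after subtracting the required margin, or None
    if even the most robust profile cannot be supported."""
    for threshold, profile in PROFILES:  # ordered highest rate first
        if snr_db - margin_db >= threshold:
            return profile
    return None
```

Requiring a nonzero margin simply shifts every decision point upward, trading peak rate for robustness against signal-quality variation on switched mesh paths.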
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids
Burton, D.E.; Miller, D.S.; Palmer, T.
1995-07-01
The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons, which move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.
Adapting a Navier-Stokes code to the ICL-DAP
NASA Technical Reports Server (NTRS)
Grosch, C. E.
1985-01-01
The results of an experiment are reported, i.e., adapting a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP). The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications made to the algorithm to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.
Morse bifurcations of transition states in bimolecular reactions
NASA Astrophysics Data System (ADS)
MacKay, R. S.; Strub, D. C.
2015-12-01
The transition states and dividing surfaces used to find rate constants for bimolecular reactions are shown to undergo Morse bifurcations, in which they change diffeomorphism class, and to exist for a large range of energies, not just immediately above the critical energy for first connection between reactants and products. Specifically, we consider capture between two molecules and the associated transition states for the case of non-zero angular momentum and general attitudes. The capture between an atom and a diatom, and then a general molecule are presented, providing concrete examples of Morse bifurcations of transition states and dividing surfaces.
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.
2011-06-01
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
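The three-substep operator splitting named in this abstract can be sketched generically; the stand-in operators below are placeholders that show only the composition order, not CRASH's actual solvers.

```python
# Operator-splitting sketch in the spirit of the three substeps above:
# each operator is a hypothetical stand-in, not CRASH physics.
def advance(state, dt, hydro, advect, implicit_diffuse):
    state = hydro(state, dt)             # (1) explicit shock-capturing step
    state = advect(state, dt)            # (2) advection in frequency space
    state = implicit_diffuse(state, dt)  # (3) stiff diffusion/exchange
    return state

# Trivial integer stand-ins just to show the composition order.
s = advance(1, 1,
            lambda u, dt: u + dt,   # 1 -> 2
            lambda u, dt: u * 2,    # 2 -> 4
            lambda u, dt: u - dt)   # 4 -> 3
print(s)  # 3
```

The virtue of splitting is that each substep can use the discretization best suited to it: explicit for hydrodynamics, implicit for the stiff diffusion terms.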
CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics
NASA Astrophysics Data System (ADS)
van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.
2011-01-01
We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux-limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit solve of the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
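The idea that parity length grows with RoI proximity and with estimated channel noise can be sketched as below; the mapping, constants, and saturation point are illustrative assumptions, not the paper's actual rule.

```python
# Sketch: more parity for blocks near the RoI and for noisier channels.
# The functional form below is a made-up illustration.
def parity_length(dist_to_roi, est_ber, base=8, max_parity=32):
    proximity = 1.0 / (1.0 + dist_to_roi)  # 1.0 at the RoI, decays outward
    noise = min(est_ber / 0.01, 1.0)       # saturate at an assumed BER of 1e-2
    p = base + (max_parity - base) * max(proximity, noise)
    return int(round(p))

print(parity_length(0, 1e-3))   # RoI block: maximum protection (32)
print(parity_length(10, 1e-3))  # far block on a clean channel: near base (10)
```

Taking the max of the two factors guarantees that a noisy channel lifts protection everywhere, while a clean channel still protects the RoI strongly.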
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high-precision simulation capabilities for ALEGRA, without the computational cost of using a globally highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
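H-adaptivity as defined in this abstract — subdivide elements where an error indicator is large, halving the characteristic size — can be sketched on a 1D mesh; a real ALEGRA-style refiner works on 3D unstructured meshes, and the error indicator here is a made-up example.

```python
# Minimal h-adaptivity sketch on a 1D interval mesh.
def refine(elements, error, tol):
    """elements: list of (left, right) intervals; error: callable indicator."""
    out = []
    for (a, b) in elements:
        if error(a, b) > tol:
            mid = 0.5 * (a + b)
            out += [(a, mid), (mid, b)]   # subdivide: h -> h/2
        else:
            out.append((a, b))
    return out

# Toy error indicator: large where the (hypothetical) solution varies fast.
err = lambda a, b: (b - a) * (1.0 if a < 0.5 else 0.1)
mesh = [(0.0, 0.5), (0.5, 1.0)]
mesh = refine(mesh, err, tol=0.3)
print(mesh)  # only the left element is split
```

Calling `refine` repeatedly concentrates resolution where the error lives, which is exactly the "avoid a globally resolved mesh" economy the abstract describes.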
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
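For context, the textbook greedy bit-allocation procedure (not the dissertation's exact algorithm) grants one bit at a time to the coefficient whose modeled distortion sigma^2 * 2^(-2b) would drop the most:

```python
# Standard greedy marginal-returns bit allocation under the high-rate
# distortion model D_i = var_i * 2^(-2*b_i). Illustrative, not the
# dissertation's algorithm.
def allocate_bits(variances, total_bits):
    bits = [0] * len(variances)
    dist = list(variances)          # distortion at 0 bits
    for _ in range(total_bits):
        # one more bit cuts D to D/4, so the benefit 0.75*D is largest
        # for the coefficient with the largest current distortion
        i = max(range(len(dist)), key=lambda k: dist[k])
        bits[i] += 1
        dist[i] /= 4.0
    return bits

print(allocate_bits([16.0, 4.0, 1.0], 4))  # [3, 1, 0]
```

High-variance coefficients absorb most of the budget, matching the intuition that difficult regions deserve more rate.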
Less can be more: RNA-adapters may enhance coding capacity of replicators.
de Boer, Folkert K; Hogeweg, Paulien
2012-01-01
It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to
The Morse Oscillator and Second-Order Perturbation Theory
NASA Astrophysics Data System (ADS)
Pettitt, B. A.
1998-09-01
This article shows how the energies of the Morse oscillator are obtained exactly from a second-order perturbation expansion in a harmonic oscillator basis. This exercise is recommended for its instructional value in intermediate quantum chemistry, in that the second-order term is entirely tractable, it arises within an important context (anharmonicity of vibrations), and it gives the right answer.
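The Morse levels have the closed form E_v = we*(v + 1/2) − wexe*(v + 1/2)^2 (in cm^-1); the article's point is that second-order perturbation theory in a harmonic basis reproduces the anharmonic term exactly. A short numerical check, using constants roughly like those of H2 (we ≈ 4401, wexe ≈ 121 cm^-1):

```python
# Exact Morse energy levels from the closed-form expression (cm^-1).
def morse_levels(we, wexe, nmax):
    return [we * (v + 0.5) - wexe * (v + 0.5) ** 2 for v in range(nmax + 1)]

# Illustrative constants roughly like H2.
E = morse_levels(4401.0, 121.0, 2)
gaps = [E[1] - E[0], E[2] - E[1]]
print(gaps)  # successive gaps shrink by exactly 2*wexe = 242 cm^-1
```

The constant decrease of successive gaps by 2*wexe is the signature of the quadratic anharmonic term that the second-order perturbation sum recovers.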
The PLUTO Code for Adaptive Mesh Computations in Astrophysical Fluid Dynamics
NASA Astrophysics Data System (ADS)
Mignone, A.; Zanni, C.; Tzeferacos, P.; van Straalen, B.; Colella, P.; Bodo, G.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Coupling of MASH-MORSE Adjoint Leakages with Space- and Time-Dependent Plume Radiation Sources
Slater, C.O.
2001-04-20
In the past, forward-adjoint coupling procedures in air-over-ground geometry have typically involved forward fluences arising from a point source a great distance from a target or vehicle system. Various processing codes were used to create localized forward fluence files that could be used to couple with the MASH-MORSE adjoint leakages. In recent years, radiation plumes that result from reactor accidents or similar incidents have been modeled by others, and the source space and energy distributions as a function of time have been calculated. Additionally, with the point kernel method, they were able to calculate in relatively quick fashion free-field radiation doses for targets moving within the fluence field or for stationary targets within the field, the time dependence for the latter case coming from the changes in position, shape, source strength, and spectra of the plume with time. The work described herein applies the plume source to the MASH-MORSE coupling procedure. The plume source replaces the point source for generating the forward fluences that are folded with MASH-MORSE adjoint leakages. Two types of source calculations are described. The first is a ''rigorous'' calculation using the TORT code and a spatially large air-over-ground geometry. For each time step desired, directional fluences are calculated and are saved over a predetermined region that encompasses a structure within which it is desired to calculate dose rates. Processing codes then create the surface fluences (which may include contributions from radiation sources that deposit on the roof or plateout) that will be coupled with the MASH-MORSE adjoint leakages. Unlike the point kernel calculations of the free-field dose rates, the TORT calculations in practice include the effects of ground scatter on dose rates and directional fluences, although the effects may be underestimated or overestimated because of the use of necessarily coarse mesh and quadrature in order to reduce computational
Adaptive inter color residual prediction for efficient red-green-blue intra coding
NASA Astrophysics Data System (ADS)
Jeong, Jinwoo; Choe, Yoonsik; Kim, Yong-Goo
2011-07-01
Intra coding of an RGB video is important to many high-fidelity multimedia applications because video acquisition is mostly done in RGB space, and the coding of decorrelated color video loses its virtue in high-quality ranges. In order to improve the compression performance of an RGB video, this paper proposes an inter-color prediction using adaptive weights. To make full use of spatial as well as inter-color correlation of an RGB video, the proposed scheme is based on a residual prediction approach, and thus the incorporated prediction is performed on the transformed frequency components of the spatially predicted residual data of each color plane. With the aid of efficient prediction employing frequency-domain inter-color residual correlation, the proposed scheme achieves up to 24.3% bitrate reduction compared to the common mode of the H.264/AVC High 4:4:4 Intra profile.
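Inter-color residual prediction with an adaptive weight can be sketched as follows: predict one plane's spatial-prediction residual from the green plane's residual using the least-squares weight, and code only the remainder. This is an illustration of the general idea, not the exact H.264/AVC extension.

```python
# Least-squares adaptive weight w = <res_g, res_c> / <res_g, res_g>,
# then the coded remainder is res_c - w * res_g. Illustrative sketch.
def predict_residual(res_g, res_c):
    num = sum(g * c for g, c in zip(res_g, res_c))
    den = sum(g * g for g in res_g) or 1.0
    w = num / den                              # adaptive weight
    remainder = [c - w * g for g, c in zip(res_g, res_c)]
    return w, remainder

res_g = [4.0, -2.0, 1.0, 0.0]
res_b = [2.0, -1.0, 0.5, 0.0]   # perfectly correlated with res_g
w, rem = predict_residual(res_g, res_b)
print(w, rem)  # w = 0.5, remainder is all zeros
```

When the color planes' residuals are strongly correlated, as in RGB intra frames, the remainder carries far less energy than the raw residual, which is where the bitrate saving comes from.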
AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2011-04-01
AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries. AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation-of-state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell [2]. Current work is being done to develop a more advanced real-gas equation of state.
Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation
NASA Astrophysics Data System (ADS)
Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki
One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.
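The decision-feedback idea — seed the channel estimate from pilots, then keep tracking it from re-modulated data decisions — can be sketched for a single subcarrier. The LMS-style update, step size, and BPSK symbols below are illustrative assumptions, not the paper's ADFCE algorithm.

```python
# Decision-feedback channel estimation sketch (one subcarrier).
def dfce(h0, rx, decisions, mu=0.3):
    """h0: pilot-based initial estimate; rx: received samples;
    decisions: re-modulated data decisions; mu: LMS step size."""
    h = h0
    for r, d in zip(rx, decisions):
        err = r - h * d                # residual under current estimate
        h += mu * err * d.conjugate()  # LMS update toward the true channel
    return h

h_true = 0.8 + 0.3j
syms = [1, -1, 1, 1, -1, -1, 1, -1]    # BPSK decisions, assumed correct
rx = [h_true * s for s in syms]        # noiseless toy channel
h = dfce(1.0 + 0.0j, rx, syms)
print(abs(h - h_true))  # shrinks toward 0 as more decisions are fed back
```

With noise and occasional wrong decisions, the step size trades tracking speed against error propagation, which is the practical question the paper's BER evaluation addresses.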
An experimental infrared sensor using adaptive coded apertures for enhanced resolution
NASA Astrophysics Data System (ADS)
Gordon, Neil T.; de Villiers, Geoffrey D.; Ridley, Kevin D.; Bennett, Charlotte R.; McNie, Mark E.; Proudler, Ian K.; Russell, Lee; Slinger, Christopher W.; Gilholm, Kevin
2010-08-01
Adaptive coded aperture imaging (ACAI) has the potential to greatly enhance the performance of sensing systems by allowing sub-detector-pixel imaging and tracking resolution. A small experimental system has been set up to allow practical demonstration of these benefits in the mid-infrared, as well as investigation of the calibration and stability of the system. The system can also be used to test modeling of similar ACAI systems in the infrared. The demonstrator can use either a set of fixed masks or a novel MOEMS adaptive transmissive spatial light modulator. This paper discusses the design and testing of the system, including the development of novel decoding algorithms, and some initial imaging results are presented.
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS
McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L.
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^-6 amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.
Effects of Selective Adaptation on Coding Sugar and Salt Tastes in Mixtures
Goyert, Holly F.; Formaker, Bradley K.; Hettinger, Thomas P.
2012-01-01
Little is known about coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it did component odors. Seventeen human subjects identified taste components of “salt + sugar” mixtures. In 4 sessions, 16 adapt–test stimulus pairs were presented as atomized, 150-μL “taste puffs” to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, “NaCl + sucrose,” and water. The sugar was 98% identified but the suppressed salt 65% identified in unadapted mixtures of 2 concentrations of NaCl, 0.1 or 0.05 M, and sucrose at 3 times those concentrations, 0.3 or 0.15 M. Rapid selective adaptation decreased identification of sugar and salt preadapted ambient components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). The 96% identification of sugar and salt extra mixture components was as certain as identification of single compounds. The results revealed that salt–sugar mixture suppression, dependent on relative mixture-component concentration, was mutual. Furthermore, like odors, stronger and recent tastes are emphasized in dynamic experimental conditions replicating natural situations. PMID:22562765
Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging
NASA Astrophysics Data System (ADS)
Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong
2015-02-01
Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical design, meeting combined high requirements on field of view, optical speed, environmental adaptation, and imaging quality can be achieved only by introducing increasingly complex aberration correctors. In recent computational imaging work, adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, are particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume, and moments of inertia; potentially lower costs; graceful failure modes; and steerable fields of regard with no macroscopic moving parts. An example conformal imaging system design in which the elements of a set of binary coded-aperture masks are optimized is presented in this paper. Simulation results show that the optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations as the field of regard changes, while allowing sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in the conformal optical system.
NASA Astrophysics Data System (ADS)
Slinger, Christopher W.; Bennett, Charlotte R.; Dyer, Gavin; Gilholm, Kevin; Gordon, Neil; Huckridge, David; McNie, Mark; Penney, Richard W.; Proudler, Ian K.; Rice, Kevin; Ridley, Kevin D.; Russell, Lee; de Villiers, Geoffrey D.; Watson, Philip J.
2011-09-01
There is an increasingly important requirement for day-and-night, wide field of view imaging and tracking for both imaging and sensing applications. Applications include military, security and remote sensing. We describe the development of a proof-of-concept demonstrator of an adaptive coded-aperture imager (ACAI) operating in the mid-wave infrared to address these requirements. This consists of a coded-aperture mask, a set of optics and a 4k × 4k focal plane array (FPA). This system can produce images with a resolution better than that achieved by the detector pixels themselves (i.e., superresolution) by combining multiple frames of data recorded with different coded-aperture mask patterns. This superresolution capability has been demonstrated both in the laboratory and in imaging of real-world scenes, the highest resolution achieved being ½ the FPA pixel pitch. The resolution for this configuration is currently limited by vibration; theoretically ¼ pixel pitch should be possible. Comparisons between conventional and ACAI solutions to these requirements show significant advantages in size, weight and cost for the ACAI approach.
Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex
Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo
2015-01-01
The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
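The per-region predictor choice described above is a Lagrangian rate-distortion decision: for each region, pick the candidate minimizing J = D + λR. A minimal sketch of that decision rule (the per-region candidate costs below are hypothetical, not values from the paper):

```python
def select_predictor(costs, lam):
    """Lagrangian rate-distortion choice: given (distortion, rate) per candidate
    predictor, return the index minimizing J = D + lambda * R."""
    return min(range(len(costs)), key=lambda i: costs[i][0] + lam * costs[i][1])

# Hypothetical candidates for one mesh region: (distortion, rate in bits).
region = [(4.0, 10), (2.5, 40), (1.0, 120)]
assert select_predictor(region, lam=0.0) == 2   # rate is free: minimize distortion
assert select_predictor(region, lam=1.0) == 0   # bits are costly: cheapest wins
```

Sweeping λ from 0 upward traces out the rate-distortion trade-off, which is how a target bit budget is met.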
Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.
Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo
2015-08-01
The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537
Zou, Ding; Djordjevic, Ivan B
2016-09-01
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides coding gains ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^{-15} for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared to conventional LDPC-coded modulation at the same code rate. PMID:27607718
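The rate adaptation by shortening described above can be illustrated with a small sketch. Shortening fixes some information bits of a mother code to zero so they need not be transmitted, which raises the effective overhead; the mother-code dimensions below are hypothetical (chosen only so the base overhead is 25%), not the codes used in the paper:

```python
def shortened_rate(n, k, s):
    """Effective code rate after shortening s information bits of an (n, k)
    mother code: the shortened bits are fixed to zero and not transmitted,
    so the code becomes (n - s, k - s) with the same parity budget."""
    return (k - s) / (n - s)

def overhead(n, k, s):
    """FEC overhead = parity bits per transmitted information bit."""
    return (n - k) / (k - s)

# Hypothetical mother code with 25% base overhead (illustrative dimensions).
n, k = 50000, 40000
for s in (0, 16690):
    print(f"s={s:5d}  rate={shortened_rate(n, k, s):.3f}  OH={overhead(n, k, s):.1%}")
```

With a fixed parity budget, increasing the shortening depth s lowers the rate and raises the overhead, which is how one family of codes spans a range of channel conditions.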
Takahasi Nearest-Neighbour Gas Revisited II: Morse Gases
NASA Astrophysics Data System (ADS)
Matsumoto, Akira
2011-12-01
Some thermodynamic quantities for the Morse potential are analytically evaluated for an isobaric process. The parameters of Morse gases for 21 substances are obtained from second virial coefficient data and from the spectroscopic data of diatomic molecules. Some thermodynamic quantities for water are also calculated numerically and plotted. The inflexion point of the length L, which depends on temperature T and pressure P, corresponds physically to a boiling point: L indicates the liquid phase from lower temperatures up to the inflexion point and the gaseous phase from the inflexion point to higher temperatures. The calculated boiling temperatures are reasonable compared with experimental data. The behaviour of L suggests the possibility of a first-order phase transition in one dimension.
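As background to fitting Morse-gas parameters against second virial coefficient data, a minimal numerical sketch of the classical second virial coefficient for a Morse pair potential might look like the following (reduced, illustrative parameters, not the paper's fitted values; per-particle units with Avogadro's number omitted):

```python
import math

def morse(r, De, a, re):
    """Morse pair potential with well depth De, range parameter a, minimum at re:
    V(r) = De * [(1 - exp(-a(r - re)))^2 - 1]."""
    x = math.exp(-a * (r - re))
    return De * (x * x - 2.0 * x)

def b2(T, De, a, re, kB=1.0, rmax=20.0, n=4000):
    """Classical second virial coefficient
    B2(T) = -2*pi * Integral_0^inf (exp(-V/kT) - 1) r^2 dr,
    evaluated by the trapezoidal rule."""
    h = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * h
        f = (math.exp(-morse(r, De, a, re) / (kB * T)) - 1.0) * r * r
        total += (0.5 if i == n else 1.0) * f
    return -2.0 * math.pi * h * total

# At low T attraction dominates (B2 < 0); B2 rises as T grows.
print(b2(0.7, De=1.0, a=6.0, re=1.0), b2(5.0, De=1.0, a=6.0, re=1.0))
```

Fitting De, a, re to measured B2(T) curves is conceptually the inverse of this forward calculation.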
Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes
2016-01-01
Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program applying the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and a randomized selection method (RSM), with respect to item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
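A minimal sketch of the CAT idea follows, using a dichotomous Rasch model rather than the partial credit model of the study, a deterministic mock respondent, and a simple Newton update of the ability estimate (all names and parameters here are hypothetical illustrations, not the authors' program):

```python
import math

def p_endorse(theta, b):
    """Dichotomous Rasch model: probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cat_session(responder, difficulties, max_items=10, tol=0.3):
    """Minimal CAT loop: administer the item nearest the current ability estimate,
    update theta by one Newton-Raphson step on the Rasch log-likelihood, and stop
    when the standard error falls below tol or the item budget is exhausted."""
    theta, info_sum, used = 0.0, 0.0, 0
    pool = list(difficulties)
    while pool and used < max_items:
        b = min(pool, key=lambda d: abs(d - theta))   # adaptive item selection
        pool.remove(b)
        x = responder(b)                              # 1 = endorsed, 0 = not
        p = p_endorse(theta, b)
        info_sum += p * (1 - p)                       # accumulated Fisher information
        theta += (x - p) / max(info_sum, 1e-9)        # Newton step
        used += 1
        if 1.0 / math.sqrt(info_sum) < tol:           # precision reached: stop early
            break
    return theta, used

# Deterministic mock respondent with true ability 1.0; 21-item pool from -2 to 2.
theta_hat, n_used = cat_session(lambda b: 1 if 1.0 > b else 0,
                                [i / 5 - 2 for i in range(21)])
print(theta_hat, n_used)   # estimate near 1.0 using far fewer than 21 items
```

The efficiency claim in the abstract corresponds to `n_used` being much smaller than the full pool for comparable precision.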
Amino acids and our genetic code: a highly adaptive and interacting defense system.
Verheesen, R H; Schweitzer, C M
2012-04-01
Since the discovery of the genetic code, Mendel's heredity theory, and Darwin's theory of evolution, science has held that adaptation to the environment is a process in which the adaptation of the genes is a matter of probability, and that the species that finally survives is the one that evolved by chance. We hypothesize that evolution and the adaptation of the genes form a well-organized, fully adaptive system in which there is no rigidity of the genes. The division of the genes takes place in line with the environment to be expected, sensed through the mother. The encoding triplets can encode more than one amino acid depending on the availability of the amino acids and the needed micronutrients. Those nutrients can cause disease but also prevent diseases, even cancer and autoimmunity. In fact, we hypothesize that autoimmunity is an effective process by which the organism clears suboptimal proteins formed due to amino acid and micronutrient deficiencies. Only when deficiencies persist will disease develop; otherwise the autoantibodies function as all antibodies do, in a protective way. Furthermore, we hypothesize that essential amino acids are less important than nonessential amino acids (NEA). Species developed the ability to produce the nonessential amino acids themselves because they were not provided by food sufficiently; in contrast, essential amino acids are widely available, without any evolutionary pressure. Since we can produce only small amounts of NEA, and their availability in food can be reasoned to be too low, they are still our main concern in amino acid availability. In conclusion, we hypothesize that improving health will only be possible by improving our natural environment and living circumstances, not by changing the genes, since they are our last line of defense in surviving our environmental changes. PMID:22289341
Adaptive coded spreading OFDM signal for dynamic-λ optical access network
NASA Astrophysics Data System (ADS)
Liu, Bo; Zhang, Lijia; Xin, Xiangjun
2015-12-01
This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network units (ONUs). The ACS can provide a dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km of transmission in the experiment. The demonstrated method may be viewed as a promising candidate for future optical metro-access networks.
Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R
1987-07-01
A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363
White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification
NASA Astrophysics Data System (ADS)
Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun
2016-03-01
The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
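The classical backbone of the scheme is threshold secret sharing via Lagrange interpolation polynomials. A minimal Shamir-style (t, n) sketch of that classical layer follows (the m-bonacci OAM pump, the entanglement-based eavesdropping detection, and the Huffman-Fibonacci coding are not modeled; the field modulus is illustrative):

```python
import random

P = 2**61 - 1   # a Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, t, n):
    """Shamir-style (t, n) sharing: secret = f(0) for a random degree t-1
    polynomial f over GF(P); share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[2:]) == 123456789
```

Any subset of fewer than t shares reveals nothing about f(0), which is the threshold property the hybrid scheme builds on.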
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
EMMA: an adaptive mesh refinement cosmological simulation code with radiative transfer
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre
2015-11-01
EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive three-dimensional mesh, and the spatial resolution can be dynamically modified based on physically motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics has been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests is presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between ×3.9 and ×16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performance, and we expect that future optimizations and hardware evolution will lead to greater accelerations.
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes
NASA Astrophysics Data System (ADS)
Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun
The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA), named the Networked Genetic Algorithm (NGA), that aims to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques for finding multiple optima, such as niching, account for big-valley landscapes or non-deceptive globally multimodal landscapes, but not for deceptive ones, called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem: it does not find all the optima simultaneously in many cases. NGA overcomes this problem through an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of optima detected in a single run on Fletcher and Powell functions, benchmark problems known to have multiple optima, ill-scaledness, and UV-landscapes.
Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding
NASA Astrophysics Data System (ADS)
Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan
2011-01-01
This paper proposes a low-complexity forward adaptive lossy compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, and estimates and quantizes the gain within the frames in order to enable quantization by a forward adaptive piecewise-linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm not only provides a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the algorithm completely satisfies the G.712 standard, since it exceeds the curve defined by the G.712 standard over the whole variance range. Accordingly, we can reasonably expect that our algorithm will find practical implementation in the high-quality coding of signals represented with fewer than 8 bits/sample, which, like speech signals, follow a Laplacian distribution and have time-varying variances.
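For context, the G.711 baseline mentioned above is logarithmic PCM. A minimal sketch of μ-law companding with an 8-bit uniform quantizer applied in the companded domain, evaluated on Laplacian-distributed samples (speech-like amplitude statistics), is shown below; this illustrates the baseline only, not the paper's optimal compandor:

```python
import math, random

MU = 255.0  # mu-law parameter of G.711 (North American variant)

def compress(x):
    """mu-law compressor for x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """mu-law expander (exact inverse of compress)."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(x, bits=8):
    """Uniform quantization in the companded domain: the core of log-PCM."""
    levels = 2 ** bits
    q = round((compress(x) + 1) / 2 * (levels - 1))
    return expand(q / (levels - 1) * 2 - 1)

# SQNR over Laplacian samples (exponential magnitude with a random sign).
random.seed(0)
xs = [max(-1.0, min(1.0, random.expovariate(4) * random.choice([-1, 1])))
      for _ in range(20000)]
sig = sum(x * x for x in xs)
err = sum((x - quantize(x)) ** 2 for x in xs)
sqnr = 10 * math.log10(sig / err)
print("SQNR:", sqnr, "dB")
```

The roughly constant SQNR of log-PCM across a wide variance range is exactly the G.712-style requirement the paper's forward adaptive compandor aims to beat.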
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-15
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code are demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
Analytical expressions for vibrational matrix elements of Morse oscillators
Zuniga, J.; Hidalgo, A.; Frances, J.M.; Requena, A.; Lopez Pineiro, A.; Olivares del Valle, F.J.
1988-10-15
Several exact recursion relations connecting different Morse oscillator matrix elements associated with the operators q^α e^{-βaq} and q^α e^{-βaq} (d/dr) are derived. Matrix elements of other useful operators may then be obtained easily. In particular, analytical expressions for ⟨y^k (d/dr)⟩ and ⟨y^k (d/dr) + (d/dr) y^k⟩, matrix elements of interest in the study of the internuclear motion in polyatomic molecules, are obtained.
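While the recursion relations themselves are specific to the paper, the underlying Morse oscillator spectrum is standard. A small sketch of its bound-state energies (illustrative parameters, ħ = 1) shows the anharmonic level structure that makes such matrix elements relevant:

```python
import math

def morse_levels(De, a, m, hbar=1.0):
    """Bound-state energies of the Morse oscillator, measured from the well
    minimum: E_n = hbar*w*(n + 1/2) - [hbar*w*(n + 1/2)]^2 / (4*De),
    with w = a*sqrt(2*De/m). Bound states require n + 1/2 < sqrt(2*m*De)/(a*hbar)."""
    w = a * math.sqrt(2.0 * De / m)
    lam = math.sqrt(2.0 * m * De) / (a * hbar)
    levels = []
    n = 0
    while n + 0.5 < lam:
        v = hbar * w * (n + 0.5)
        levels.append(v - v * v / (4.0 * De))
        n += 1
    return levels

E = morse_levels(De=10.0, a=1.0, m=1.0)
gaps = [e2 - e1 for e1, e2 in zip(E, E[1:])]
# Anharmonicity: level spacings shrink as n grows, unlike the harmonic oscillator.
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
print(E)
```

The finite number of bound levels and the shrinking spacings are the features that distinguish Morse matrix elements from their harmonic counterparts.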
Entropy, local order, and the freezing transition in Morse liquids.
Chakraborty, Somendra Nath; Chakravarty, Charusita
2007-07-01
The behavior of the excess entropy of Morse and Lennard-Jones liquids is examined as a function of temperature, density, and the structural order metrics. The dominant pair correlation contribution to the excess entropy is estimated from simulation data for the radial distribution function. The pair correlation entropy (S2) of these simple liquids is shown to have a threshold value of (−3.5 ± 0.3) k_B at freezing. Moreover, S2 shows a T^{-2/5} temperature dependence. The temperature dependence of the pair correlation entropy as well as the behavior at freezing closely correspond to earlier predictions, based on density functional theory, for the excess entropy of repulsive inverse power and Yukawa potentials [Rosenfeld, Phys. Rev. E 62, 7524 (2000)]. The correlation between the pair correlation entropy and the local translational and bond orientational order parameters is examined and, in the case of the bond orientational order, is shown to be sensitive to the definition of the nearest neighbors. The order map between translational and bond orientational order for Morse liquids and solids shows a very similar pattern to that seen in Lennard-Jones-type systems. PMID:17677432
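The pair correlation entropy used above has the standard form S2/(N k_B) = −2πρ ∫ [g ln g − g + 1] r² dr. A minimal numerical sketch with a schematic, hand-built g(r) (not simulation data) illustrates how it is evaluated from a radial distribution function:

```python
import math

def s2_excess(g, rho, rmax=10.0, n=2000):
    """Pair correlation entropy per particle (units of kB):
    S2 = -2*pi*rho * Integral_0^inf [g ln g - g + 1] r^2 dr  (trapezoidal rule).
    The integrand's limit as g -> 0 is 1, handled explicitly."""
    h = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * h
        gr = g(r)
        term = (gr * math.log(gr) - gr + 1.0) if gr > 0 else 1.0
        total += (0.5 if i == n else 1.0) * term * r * r
    return -2.0 * math.pi * rho * h * total

def g_model(r, sigma=1.0):
    """Schematic dense-liquid g(r): excluded core plus damped oscillations
    (illustrative only, not a Morse or Lennard-Jones result)."""
    if r < sigma:
        return 0.0
    return 1.0 + 1.5 * math.exp(-(r - sigma)) * math.cos(2 * math.pi * (r - sigma))

s2 = s2_excess(g_model, rho=0.85)
print(s2)   # negative: pair correlations always reduce the entropy
```

Since x ln x − x + 1 ≥ 0 for all x ≥ 0, S2 is never positive; the freezing criterion in the abstract is a threshold on how negative it becomes.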
NASA Astrophysics Data System (ADS)
Ledet, Mary M.; Starman, LaVern A.; Coutu, Ronald A., Jr.; Rogers, Stanley
2009-08-01
Coded aperture imaging (CAI) has been used in both the astronomical and medical communities for years due to its ability to image light at short wavelengths, thereby replacing conventional lenses. Where CAI is limited, adaptive coded aperture imaging (ACAI) can recover what is lost. The use of photonic micro-electro-mechanical systems (MEMS) for creating adaptive coded apertures has been gaining momentum since 2007. Successful implementation of micro-shutter technologies would potentially enable the use of adaptive coded aperture imaging and non-imaging systems in current and future military surveillance and intelligence programs. In this effort, a prototype of MEMS micro-shutters has been designed and fabricated onto a 3 mm × 3 mm square of silicon substrate using the PolyMUMPs™ process. This prototype is a line-drivable array using thin flaps of polysilicon to cover and uncover an 8 × 8 array of 20 μm apertures. A characterization of the micro-shutters, including mechanical, electrical and optical properties, is provided. This prototype, its actuation scheme, and other designs for individual micro-shutters have been modeled and studied for feasibility purposes. In addition, micro-shutters fabricated from an Al-Au alloy on a quartz wafer were optically tested and characterized with a 632 nm HeNe laser.
NASA Astrophysics Data System (ADS)
Shin, Frances B.; Kil, David H.
1998-09-01
One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is the transmission of compressed raw imagery data to a central processing station via a limited-bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using embedded zerotree coding with scalar quantization (SQ), we investigate a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving a high compression ratio. Our analysis, based on high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter, indicates that we can achieve over a 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm, called set partitioning in hierarchical trees (SPIHT), reveals
Exciton photoluminescence in resonant quasi-periodic Thue-Morse quantum wells.
Hsueh, W J; Chang, C H; Lin, C T
2014-02-01
This Letter investigates exciton photoluminescence (PL) in resonant quasi-periodic Thue-Morse quantum wells (QWs). The results show that the PL properties of quasi-periodic Thue-Morse QWs are quite different from those of resonant Fibonacci QWs. The maximum and minimum PL intensities occur under the anti-Bragg and Bragg conditions, respectively. The maxima of the PL intensity gradually decline when the filling factor increases from 0.25 to 0.5. Accordingly, the squared electric field at the QWs decreases as the Thue-Morse QW deviates from the anti-Bragg condition. PMID:24487847
Analytical solutions of the Bohr Hamiltonian with the Morse potential
Boztosun, I.; Inci, I.; Bonatsos, D.
2008-04-15
Analytical solutions of the Bohr Hamiltonian are obtained in the γ-unstable case, as well as in an exactly separable rotational case with γ ≈ 0, called the exactly separable Morse (ES-M) solution. Closed expressions for the energy eigenvalues are obtained through the asymptotic iteration method (AIM), the effectiveness of which is demonstrated by solving the relevant Bohr equations for the Davidson and Kratzer potentials. All medium-mass and heavy nuclei with known β₁ and γ₁ bandheads have been fitted by using the two-parameter γ-unstable solution for transitional nuclei and the three-parameter ES-M for rotational ones. It is shown that bandheads and energy spacings within the bands are well reproduced for more than 50 nuclei in each case.
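For reference, the bound-state spectrum of the one-dimensional Morse oscillator, which underlies the ES-M construction, has a closed form; the short sketch below evaluates it numerically. Units with ħ = 1 and the sample parameters in the test are arbitrary choices for illustration.

```python
import math

def morse_levels(D_e, a, mu, hbar=1.0):
    """Bound-state energies (measured from the well bottom) of the 1-D Morse
    potential V(r) = D_e * (1 - exp(-a*(r - r_e)))**2:
        E_n = hbar*w*(n + 1/2) - [hbar*w*(n + 1/2)]**2 / (4*D_e),
    with w = a*sqrt(2*D_e/mu).  Levels exist while hbar*w*(n + 1/2) < 2*D_e."""
    w = a * math.sqrt(2.0 * D_e / mu)
    levels = []
    n = 0
    while True:
        x = hbar * w * (n + 0.5)
        if x >= 2.0 * D_e:        # dE/dn <= 0 here: no further bound states
            break
        levels.append(x - x * x / (4.0 * D_e))
        n += 1
    return levels
```

Unlike the harmonic oscillator, the spectrum is anharmonic (level spacing shrinks with n) and supports only finitely many bound states.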
Comparison between the Morse eigenfunctions and deformed oscillator wavefunctions
Recamier, J.; Mochan, W. L.; Gorayeb, M.; Paz, J. L.
2008-04-15
In this work we introduce deformed creation and annihilation operators which differ from the usual harmonic oscillator operators a, a† by a number-operator function: Â = â f(n̂), Â† = f(n̂) â†. We construct the deformed coordinate and momentum in terms of the deformed operators, retaining only terms up to first order in the deformed operators. By applying the deformed annihilation operator to the vacuum state we obtain the ground-state wavefunction in configuration space, and the wavefunctions for excited states are obtained by repeated application of the deformed creation operator. Finally, we compare the wavefunctions obtained with the deformed operators to the corresponding Morse eigenfunctions.
Robust Computation of Morse-Smale Complexes of Bilinear Functions
Norgard, G; Bremer, P T
2010-11-30
The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
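The interior critical point of a bilinear cell can be located in closed form, which is what makes a purely combinatorial treatment possible. A minimal sketch on the unit cell (corner ordering f00, f10, f01, f11 is an assumed convention, not the paper's data layout):

```python
def bilinear_critical_point(f00, f10, f01, f11):
    """Critical point of f(x,y) = f00(1-x)(1-y) + f10*x(1-y) + f01(1-x)y + f11*xy
    on the unit cell.  Writing f = c0 + c1*x + c2*y + c3*xy, the gradient
    (c1 + c3*y, c2 + c3*x) vanishes at (x, y) = (-c2/c3, -c1/c3); an interior
    critical point of a bilinear function is always a saddle."""
    c1 = f10 - f00
    c2 = f01 - f00
    c3 = f00 - f10 - f01 + f11
    if c3 == 0:
        return None  # f is linear-like: gradient never vanishes in the interior
    x, y = -c2 / c3, -c1 / c3
    inside = 0.0 < x < 1.0 and 0.0 < y < 1.0
    return (x, y, 'saddle' if inside else 'outside cell')
```

The paper's contribution is handling exactly these cell-based saddles (their asymptotes and edge intersections) combinatorially, so the classification never depends on floating-point tolerances.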
NASA Astrophysics Data System (ADS)
Muta, Osamu; Akaiwa, Yoshihiko
In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of a codeword in a BCH-coded OFDM system. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying it with weighting factors (WFs) so as to minimize the PAPR of the OFDM signal, symbol by symbol. At the receiver, these WFs are estimated based on the properties of BCH decoding. When a primitive BCH code with single-error correction, such as the (31,26) code, is used, the proposed method estimates the WFs with a significant-bit protection scheme that assigns a significant bit to the best subcarrier selected among all possible subcarriers. Computer simulations show that when (31,26), (31,21), and (32,21) BCH codes are employed, the PAPR of the OFDM signal at a CCDF (complementary cumulative distribution function) of 10^-4 is reduced by about 1.9, 2.5, and 2.5 dB by the PPR method, while achieving BER performance comparable to perfect WF estimation in an exponentially decaying 12-path Rayleigh fading channel.
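The core idea, choosing between the original and the inverted parity block by the resulting PAPR, can be sketched as follows. The naive oversampled DFT and plain BPSK mapping are simplifications; the paper's BCH structure, per-symbol weighting factors, and receiver-side WF estimation are not modeled here.

```python
import cmath
import math

def papr_db(symbols, oversample=4):
    """Peak-to-average power ratio (dB) of an OFDM symbol, via a naive
    oversampled inverse DFT (zero-padding in the frequency domain)."""
    n = len(symbols)
    N = n * oversample
    power = []
    for k in range(N):
        s = sum(symbols[m] * cmath.exp(2j * cmath.pi * m * k / N)
                for m in range(n))
        power.append(abs(s) ** 2)
    return 10.0 * math.log10(max(power) / (sum(power) / N))

def select_parity_inversion(data_bits, parity_bits):
    """Keep the parity-check block as-is or bit-inverted, whichever yields
    the lower PAPR after BPSK mapping (0 -> +1, 1 -> -1)."""
    bpsk = lambda bits: [1.0 if b == 0 else -1.0 for b in bits]
    best = None
    for invert in (False, True):
        p = [b ^ 1 for b in parity_bits] if invert else list(parity_bits)
        val = papr_db(bpsk(list(data_bits) + p))
        if best is None or val < best[1]:
            best = (invert, val)
    return best  # (inverted?, PAPR in dB)
```

Because the search includes the unmodified codeword, the selected PAPR can never exceed that of transmitting the parity block as-is.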
Jedidiah Morse and the Bavarian Illuminati: An Essay in the Rhetoric of Conspiracy.
ERIC Educational Resources Information Center
Griffin, Charles J. G.
1989-01-01
Focuses on three widely publicized sermons given by the Reverend Jedidiah Morse to examine the role of the jeremiad (or political sermon) in shaping public attitudes toward political dissent during the Franco-American Crisis of 1798-1799. (MM)
On the homotopy type of spaces of Morse functions on surfaces
Kudryavtseva, Elena A
2013-01-31
Let M be a smooth closed orientable surface. Let F be the space of Morse functions on M with a fixed number of critical points of each index, such that at least χ(M)+1 critical points are labelled by different labels (numbered). The notion of a skew cylindric-polyhedral complex is introduced, which generalizes the notion of a polyhedral complex. The skew cylindric-polyhedral complex K̃ ('the complex of framed Morse functions') associated with the space F is defined. In the case M = S² the polytope K̃ is finite; its Euler characteristic χ(K̃) is calculated and the Morse inequalities for its Betti numbers βⱼ(K̃) are obtained. The relation between the homotopy types of the polytope K̃ and the space F of Morse functions equipped with the C∞-topology is indicated. Bibliography: 51 titles.
Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity
Latinus, Marianne; Belin, Pascal
2011-01-01
We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384
Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems
NASA Technical Reports Server (NTRS)
Liang, Pak-Yan
1993-01-01
A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.
Morse taper dental implants and platform switching: The new paradigm in oral implantology
Macedo, José Paulo; Pereira, Jorge; Vahey, Brendan R.; Henriques, Bruno; Benfatti, Cesar A. M.; Magini, Ricardo S.; López-López, José; Souza, Júlio C. M.
2016-01-01
The aim of this study was to conduct a literature review on the potential benefits of Morse taper dental implant connections associated with small-diameter platform switching abutments. A Medline bibliographical search (from 1961 to 2014) was carried out. The following search items were explored: “Bone loss and platform switching,” “bone loss and implant-abutment joint,” “bone resorption and platform switching,” “bone resorption and implant-abutment joint,” “Morse taper and platform switching,” “Morse taper and implant-abutment joint,” “Morse taper and bone resorption,” “crestal bone remodeling and implant-abutment joint,” “crestal bone remodeling and platform switching.” The selection criteria used for the articles were: meta-analyses; randomized controlled trials; prospective cohort studies; as well as reviews written in English, Portuguese, or Spanish. Within the 287 studies identified, 81 relevant and recent studies were selected. Results indicated a reduced occurrence of peri-implantitis and bone loss at the abutment/implant level associated with Morse taper implants and a reduced-diameter platform switching abutment. Extrapolation of data from previous studies indicates that Morse taper connections associated with platform switching have shown less inflammation and possible bone loss with the peri-implant soft tissues. However, more long-term studies are needed to confirm these trends. PMID:27011755
2012-06-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210
Petty, I T; Carter, S C; Morra, M R; Jeffrey, J L; Olivey, H E
2000-11-25
Bipartite geminiviruses are small, plant-infecting viruses with genomes composed of circular, single-stranded DNA molecules, designated A and B. Although they are closely related genetically, individual bipartite geminiviruses frequently exhibit host-specific adaptation. Two such viruses are bean golden mosaic virus (BGMV) and tomato golden mosaic virus (TGMV), which are well adapted to common bean (Phaseolus vulgaris) and Nicotiana benthamiana, respectively. In previous studies, partial host adaptation was conferred on BGMV-based or TGMV-based hybrid viruses by separately exchanging open reading frames (ORFs) on DNA A or DNA B. Here we analyzed hybrid viruses in which all of the ORFs on both DNAs were exchanged except for AL1, which encodes a protein with strictly virus-specific activity. These hybrid viruses exhibited partial transfer of host-adapted phenotypes. In contrast, exchange of noncoding regions (NCRs) upstream from the AR1 and BR1 ORFs did not confer any host-specific gain of function on hybrid viruses. However, when the exchangeable ORFs and NCRs from TGMV were combined in a single BGMV-based hybrid virus, complete transfer of TGMV-like adaptation to N. benthamiana was achieved. Interestingly, the reciprocal TGMV-based hybrid virus displayed only partial gain of function in bean. This may be, in part, the result of defective virus-specific interactions between TGMV and BGMV sequences present in the hybrid, although a potential role in adaptation to bean for additional regions of the BGMV genome cannot be ruled out. PMID:11080490
Fine-Granularity Loading Schemes Using Adaptive Reed-Solomon Coding for xDSL-DMT Systems
NASA Astrophysics Data System (ADS)
Panigrahi, Saswat; Le-Ngoc, Tho
2006-12-01
While most existing loading algorithms for xDSL-DMT systems strive for the optimal energy distribution to maximize their rate, the amounts of bits loaded to subcarriers are constrained to be integers, and the associated granularity losses can represent a significant percentage of the achievable data rate, especially in the presence of the peak-power constraint. To recover these losses, we propose a fine-granularity loading scheme using joint optimization of adaptive modulation and flexible coding parameters based on programmable Reed-Solomon (RS) codes and a bit-error probability criterion. Illustrative examples of applications to VDSL-DMT systems indicate that the proposed scheme can offer a rate increase in most cases as compared to various existing integer-bit-loading algorithms. This improvement is in good agreement with the theoretical estimates developed to quantify the granularity loss.
Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science; Technology Team
We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.
DEMOCRITUS: An adaptive particle in cell (PIC) code for object-plasma interactions
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni
2011-06-01
A new method for the simulation of plasma materials interactions is presented. The method is based on the particle in cell technique for the description of the plasma and on the immersed boundary method for the description of the interactions between materials and plasma particles. A technique to adapt the local number of particles and grid adaptation are used to reduce the truncation error and the noise of the simulations, to increase the accuracy per unit cost. In the present work, the computational method is verified against known results. Finally, the simulation method is applied to a number of specific examples of practical scientific and engineering interest.
Sato, Marc; Vilain, Coriandre; Lamalle, Laurent; Grabski, Krystyna
2015-02-01
Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor coding common with those for motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon that repeated stimuli or motor acts lead to decreased activity in specific neural populations and are associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions in the achievement of both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the SMA. Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces with and without motor behavior and sensory feedback. PMID:25203272
NASA Technical Reports Server (NTRS)
Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)
2001-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
Lee, Dongyul; Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
Simplified APC for Space Shuttle applications. [Adaptive Predictive Coding for speech transmission
NASA Technical Reports Server (NTRS)
Hutchins, S. E.; Batson, B. H.
1975-01-01
This paper describes an 8 kbps adaptive predictive digital speech transmission system which was designed for potential use in the Space Shuttle Program. The system was designed to provide good voice quality in the presence of both cabin noise on board the Shuttle and the anticipated bursty channel. Minimal increase in size, weight, and power over the current high data rate system was also a design objective.
Vasserman, Genadiy; Schneidman, Elad; Segev, Ronen
2013-01-01
The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretical optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level. PMID:24205373
Convexity of momentum map, Morse index, and quantum entanglement
NASA Astrophysics Data System (ADS)
Sawicki, Adam; Oszmaniec, Michał; Kuś, Marek
2014-03-01
We analyze from the topological perspective the space of all SLOCC (Stochastic Local Operations with Classical Communication) classes of pure states for composite quantum systems. We do it for both distinguishable and indistinguishable particles. In general, the topology of this space is rather complicated as it is a non-Hausdorff space. Using geometric invariant theory (GIT) and momentum map geometry, we propose a way to divide the space of all SLOCC classes into mathematically and physically meaningful families. Each family consists of possibly many "asymptotically" equivalent SLOCC classes. Moreover, each contains exactly one distinguished SLOCC class on which the total variance (a well-defined measure of entanglement) of the state Var[v] attains maximum. We provide an algorithm for finding critical sets of Var[v], which makes use of the convexity of the momentum map and allows classification of such defined families of SLOCC classes. The number of families is in general infinite. We introduce an additional refinement into finitely many groups of families using some developments in the momentum map geometry known as the Kirwan-Ness stratification. We also discuss how to define it equivalently using the convexity of the momentum map applied to SLOCC classes. Moreover, we note that the Morse index at the critical set of the total variance of state has an interpretation of number of non-SLOCC directions in which entanglement increases and calculate it for several exemplary systems. Finally, we introduce the SLOCC-invariant measure of entanglement as a square root of the total variance of state at the critical point and explain its geometric meaning.
A video coding scheme based on joint spatiotemporal and adaptive prediction.
Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken
2009-05-01
We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and it is less computationally intensive. Because of the advantage of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed. PMID:19342337
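An image-dependent color space transformation means computing the KLT basis from the frame's own pixel statistics rather than applying a fixed matrix such as RGB→YCbCr. A minimal sketch, recovering just the principal axis by power iteration (a simplification of the full 3×3 eigendecomposition a real coder would use):

```python
import math

def principal_component(pixels, iters=200):
    """Principal KLT axis of a list of (r, g, b) samples: the dominant
    eigenvector of the 3x3 covariance matrix, found by power iteration."""
    n = len(pixels)
    mean = [sum(p[i] for p in pixels) / n for i in range(3)]
    centered = [[p[i] - mean[i] for i in range(3)] for p in pixels]
    cov = [[sum(row[i] * row[j] for row in centered) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v
```

Projecting pixels onto this axis concentrates most of the color energy in one channel, which is exactly the decorrelation property the abstract attributes to the KLT.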
A 2-D orientation-adaptive prediction filter in lifting structures for image coding.
Gerek, Omer N; Cetin, A Enis
2006-01-01
Lifting-style implementations of wavelets are widely used in image coders. A two-dimensional (2-D) edge-adaptive lifting structure, similar to the Daubechies 5/3 wavelet, is presented. The 2-D prediction filter predicts the value of the next polyphase component according to an edge-orientation estimator of the image. Consequently, the prediction domain is allowed to rotate ±45° in regions with a diagonal gradient. The gradient estimator is computationally inexpensive, with an additional cost of only six subtractions per lifting instruction, and no multiplications are required. PMID:16435541
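The orientation decision can be illustrated with a toy predictor: the direction with the smallest absolute neighbor difference (a subtraction-only gradient estimate) selects which pixel pair is averaged. This is a schematic stand-in for the paper's polyphase lifting predictor, not its exact filter.

```python
def adaptive_predict(img, r, c):
    """Predict img[r][c] from its neighbors along the direction of least
    gradient: horizontal, +45-degree diagonal, or -45-degree diagonal.
    Gradient estimates use subtractions only, as in edge-adaptive lifting."""
    candidates = {
        'h':  (img[r][c - 1] + img[r][c + 1]) / 2.0,
        'd+': (img[r - 1][c + 1] + img[r + 1][c - 1]) / 2.0,
        'd-': (img[r - 1][c - 1] + img[r + 1][c + 1]) / 2.0,
    }
    grads = {
        'h':  abs(img[r][c - 1] - img[r][c + 1]),
        'd+': abs(img[r - 1][c + 1] - img[r + 1][c - 1]),
        'd-': abs(img[r - 1][c - 1] - img[r + 1][c + 1]),
    }
    best = min(grads, key=grads.get)
    return candidates[best]
```

On a ramp image with a diagonal edge, the diagonal pair is constant, so the predictor rotates to it and predicts the pixel exactly, which is why the detail (prediction-error) coefficient vanishes there.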
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Application of a Morse filter in the processing of brain angiograms
NASA Astrophysics Data System (ADS)
Venegas Bayona, Santiago
2014-06-01
Angiograms are frequently used to find anomalies in blood vessels. Hence, to improve image quality, a Morse filter (based on the Morse potential model) is applied to a brain-vessel angiogram using the software packages Maple® and ImageJ®. The results of applying a Morse filter to an angiogram of the brain vessels are presented. First, the image was processed in ImageJ with the Anisotropic Diffusion 2D plug-in; then the filter was applied. As the results illustrate, the edges of stringy elements are emphasized. This is particularly useful in medical image processing of blood vessels, such as angiograms, where narrowing or obstruction may be caused by conditions such as aneurysms or thrombosis.
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.
Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng
2016-10-01
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359
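Bitwise query-adaptive weighting replaces plain Hamming distance with a weighted mismatch count; a minimal sketch (the step that learns weights from hash-function quality is omitted, and the weights are assumed inputs):

```python
def weighted_hamming(query_code, code, weights):
    """Sum the query-specific weight of every mismatched bit instead of
    counting mismatches equally; all-ones weights recover plain Hamming."""
    return sum(w for qb, b, w in zip(query_code, code, weights) if qb != b)

def rank(query_code, database, weights):
    """Rank database codes by ascending weighted Hamming distance."""
    return sorted(database,
                  key=lambda c: weighted_hamming(query_code, c, weights))
```

The fine-grained ranking comes from the fact that weighted distances are real-valued, so codes that tie under integer Hamming distance can now be ordered.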
A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms
Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A
2013-01-01
Recent development of large databases, especially in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g., neural networks) and optimization techniques (e.g., genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
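The abstract does not give the operator's exact form; a minimal sketch of the idea is to raise the mutation rate as the discrete "pseudoderivative" of the best fitness flattens. All names and constants below are hypothetical:

```python
def adaptive_mutation_rate(prev_best, curr_best,
                           base_rate=0.01, max_rate=0.25, eps=1e-12):
    """Raise the mutation rate when the best fitness stops improving.

    The 'pseudoderivative' is the absolute change in best fitness between
    successive generations; a vanishing slope suggests a local optimum,
    so mutation is boosted to help the population escape.
    """
    slope = abs(curr_best - prev_best)
    return min(max_rate, base_rate * (1.0 + 1.0 / (slope + eps)))
```

With a large fitness slope the rate stays near `base_rate`; on a plateau it saturates at `max_rate`, encouraging exploration away from the local optimum.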
An adaptive algorithm for removing the blocking artifacts in block-transform coded images
NASA Astrophysics Data System (ADS)
Yang, Jingzhong; Ma, Zheng
2005-11-01
JPEG and MPEG compression standards adopt the macroblock encoding approach, but this method can lead to annoying blocking effects: artificial rectangular discontinuities in the decoded images. Many powerful postprocessing algorithms have been developed to remove blocking effects; however, all but the simplest can be too complex for real-time applications such as video decoding. We propose an adaptive, easy-to-implement algorithm that removes the artificial discontinuities. The algorithm has two steps: first, a fast linear smoothing of the block-edge pixels using an average-value replacement strategy; second, comparing the variance of the difference between the processed and previous images against a reasonable threshold to decide whether smoothing should stop. Experiments show that this algorithm quickly removes the artificial discontinuities without destroying the key information of the decoded images, and it is robust to different images and transform strategies.
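The two steps can be sketched as follows. This is a pure-Python illustration with hypothetical parameter values; it smooths only vertical block boundaries, and the paper's exact smoothing kernel and threshold are not given in the abstract:

```python
def smooth_block_edges(img, block=8):
    """Average-value replacement across vertical block boundaries."""
    out = [row[:] for row in img]
    for row in out:
        for x in range(block, len(row), block):
            # Replace the two pixels straddling the boundary by their mean.
            mean = (row[x - 1] + row[x]) / 2.0
            row[x - 1] = mean
            row[x] = mean
    return out

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def deblock(img, block=8, threshold=1e-3, max_iter=10):
    """Repeat edge smoothing until the variance of the change is below threshold."""
    prev = [list(map(float, row)) for row in img]
    cur = prev
    for _ in range(max_iter):
        cur = smooth_block_edges(prev, block)
        diff = [c - p for cr, pr in zip(cur, prev) for c, p in zip(cr, pr)]
        if variance(diff) < threshold:  # stopping rule: change has died out
            break
        prev = cur
    return cur
```

The variance-based stopping rule keeps the smoothing adaptive: strongly blocked images get more passes, while images with little block-edge energy stop almost immediately.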
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating tsunami waves near shore from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and Fukushima nuclear power plants, in which a finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-01-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by the WSNs which are deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed a NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
Motion-vector-based adaptive quantization in MPEG-4 fine granular scalable coding
NASA Astrophysics Data System (ADS)
Yang, Shuping; Lin, Xinggang; Wang, Guijin
2003-05-01
The selective enhancement mechanism of Fine-Granular-Scalability (FGS) in MPEG-4 can enhance specific objects under bandwidth variation. A novel technique is proposed for self-adaptive enhancement of regions of interest based on the Motion Vectors (MVs) of the base layer; it suits video sequences that have a still background and in which only the moving objects in the scene are of interest, such as news broadcasting, video surveillance, and Internet education. Motion vectors generated during base-layer encoding are obtained and analyzed. A Gaussian model is introduced to describe non-moving macroblocks, which may have non-zero MVs caused by random noise or luminance variation; the MVs of these macroblocks are set to zero to prevent them from being enhanced. A region-growth segmentation algorithm based on MV values is exploited to separate foreground from background. Post-processing is needed to reduce the influence of burst noise so that only the moving regions of interest remain. Applying the result to selective enhancement during enhancement-layer encoding significantly improves the visual quality of the regions of interest in such videos transmitted at different bit rates in our experiments.
NASA Astrophysics Data System (ADS)
McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley
2007-09-01
Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma ray bands. More recent applications have emerged in the visible and infrared bands for low cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures
NASA Astrophysics Data System (ADS)
Vijayakumaran, Vineeth
Massive levels of integration following Moore's Law have ushered in a paradigm shift in the way on-chip interconnections are designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative that needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It is shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
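The abstract does not detail the spreading codes used; a common choice for CDMA is a set of mutually orthogonal Walsh codes, sketched below in pure Python (helper names are hypothetical):

```python
def walsh_codes(n):
    """Sylvester-Hadamard construction of n orthogonal ±1 spreading codes.

    n must be a power of two; row k of the matrix is code k.
    """
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def cdma_transmit(bits, codes):
    """Each transmitter spreads its ±1 bit over its code; the channel sums the chips."""
    n = len(codes[0])
    return [sum(b * code[i] for b, code in zip(bits, codes)) for i in range(n)]

def cdma_receive(channel, code):
    """Correlating the channel with one code recovers that transmitter's bit."""
    return 1 if sum(c * x for c, x in zip(channel, code)) > 0 else -1
```

Because the codes are mutually orthogonal, each receiver's correlation isolates its own transmitter's bit even though all pairs share the wireless channel simultaneously, which is the property the MAC protocol exploits.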
Convergence of the Approximation Scheme to American Option Pricing via the Discrete Morse Semiflow
Ishii, Katsuyuki; Omata, Seiro
2011-12-15
We consider the approximation scheme to the American call option via the discrete Morse semiflow, which is a minimizing scheme of a time semi-discretized variational functional. In this paper we obtain a rate of convergence of approximate solutions and the convergence of approximate free boundaries. We mainly apply the theory of variational inequalities and that of viscosity solutions to prove our results.
Application of DOT-MORSE coupling to the analysis of three-dimensional SNAP shielding problems
NASA Technical Reports Server (NTRS)
Straker, E. A.; Childs, R. L.; Emmett, M. B.
1972-01-01
The use of discrete ordinates and Monte Carlo techniques to solve radiation transport problems is discussed. A general discussion of two possible coupling schemes is given for the two methods. The calculation of the reactor radiation scattered from a docked service and command module is used as an example of coupling discrete ordinates (DOT) and Monte Carlo (MORSE) calculations.
A Mechanical Apparatus for Hands-On Experience with the Morse Potential
ERIC Educational Resources Information Center
Everest, Michael A.
2010-01-01
A simple pulley apparatus is described that gives the student hands-on experience with the Morse potential. Students develop an internalized sense of what a covalent bond would feel like if atoms in a molecule could be manipulated by hand. This kinesthetic learning enhances the student's understanding and intuition of several chemical phenomena.
Application of Morse Theory to Analysis of Rayleigh-Taylor Topology
Miller, P L; Bremer, P T; Cabot, W H; Cook, A W; Laney, D E; Mascarenhas, A A; Pascucci, V
2007-01-24
We present a novel Morse Theory approach for the analysis of the complex topology of the Rayleigh-Taylor mixing layer. We automatically extract bubble structures at multiple scales and identify the resolution of interest. Quantitative analysis of bubble counts over time highlights distinct mixing trends for a high-resolution Direct Numerical Simulation (DNS) [1].
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency through symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs than its prior-art binary counterpart: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN. PMID:20174010
NASA Astrophysics Data System (ADS)
Gersho, Allen
1990-05-01
Recent advances in algorithms and techniques for speech coding now permit high-quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
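As a concrete illustration of the first idea, predictive quantization (DPCM), here is a minimal first-order sketch; the predictor coefficient and quantizer step are hypothetical, not taken from the review:

```python
def dpcm_encode(samples, alpha=0.9, step=0.1):
    """First-order DPCM: quantize the prediction error, not the sample."""
    pred = 0.0
    residuals = []
    for s in samples:
        e = s - alpha * pred          # prediction error (the redundancy-removed signal)
        q = round(e / step) * step    # uniform scalar quantization of the error
        residuals.append(q)
        pred = alpha * pred + q       # track the decoder's reconstruction exactly
    return residuals

def dpcm_decode(residuals, alpha=0.9):
    """Rebuild the signal by running the same predictor on quantized errors."""
    pred, out = 0.0, []
    for q in residuals:
        pred = alpha * pred + q
        out.append(pred)
    return out
```

Because the encoder predicts from the decoder's reconstruction rather than the original samples, quantization error does not accumulate: the per-sample reconstruction error stays bounded by half the quantizer step.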
47 CFR 80.357 - Working frequencies for Morse code and data transmission.
Code of Federal Regulations, 2012 CFR
2012-10-01
... transmission. 80.357 Section 80.357 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Frequencies Radiotelegraphy § 80.357 Working... narrow-band direct-printing frequencies listed in § 80.361(b) of this part for A1A or J2A...
47 CFR 80.357 - Working frequencies for Morse code and data transmission.
Code of Federal Regulations, 2014 CFR
2014-10-01
... transmission. 80.357 Section 80.357 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Frequencies Radiotelegraphy § 80.357 Working... narrow-band direct-printing frequencies listed in § 80.361(b) of this part for A1A or J2A...
47 CFR 80.357 - Working frequencies for Morse code and data transmission.
Code of Federal Regulations, 2013 CFR
2013-10-01
... transmission. 80.357 Section 80.357 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Frequencies Radiotelegraphy § 80.357 Working... narrow-band direct-printing frequencies listed in § 80.361(b) of this part for A1A or J2A...
Keightley, Peter D; Campos, José L; Booker, Tom R; Charlesworth, Brian
2016-06-01
Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
Fu, C.Y.; Gabriel, T.A.
1997-05-01
The 1996 version of HETC has a pre-equilibrium reaction model to bridge the gap between the existing intranuclear-cascade and evaporation models. This code was used to calculate proton-induced activations, to calculate neutron fluxes for neutron energies above 19.6 MeV, and to write the neutron source for lower energies to be transported further by MORSE. For MORSE, the HILO cross section library was used for neutron transport for all detectors. Additionally, for the ¹⁹⁷Au(n,γ) detector, the BUGLE96 library was used to study the effects of the low-lying ⁵⁷Fe inelastic levels and the resonance self-shielding in iron. Neutron fluxes were obtained from the track-length estimator for detectors inside the beam stop and from the boundary-crossing estimator for detectors attached to the surfaces of the concrete walls. Activation cross sections given in JAERI-Data/Code are combined with the calculated neutron fluxes to get the saturated activities induced by neutrons. C/E values are too low (0.5) for Fe(n,x)⁵⁴Mn, close to unity for Cu(n,x)⁵⁸Co, and too high (6.0) for ¹⁹⁷Au(n,γ)¹⁹⁸Au. It is difficult to interpret the disagreements because most of the activation cross sections are also calculated and their uncertainties are not known. However, the calculated results are in good agreement with those calculated by others using different codes. Calculated results for four of the ten activations reported here have not been obtained previously, and among the four, ¹⁹⁷Au(n,γ) is the most bothersome because its cross section is the best known while the calculated activations for most detector locations are in largest disagreement with experiments.
NASA Astrophysics Data System (ADS)
Fakhri, H.; Dehghani, A.
2008-05-01
In a recently published paper in this journal [A. Cheaghlou and O. Faizy, J. Math. Phys. 49, 022104 (2008)], the authors introduce the Gazeau-Klauder coherent states for the trigonometric Rosen-Morse potential as an infinite superposition of the wavefunctions. It is shown that their proposed measure to realize the resolution of the identity condition is not positive definite. Consequently, the claimed coherencies for the trigonometric Rosen-Morse wavefunctions cannot actually exist.
A model of phase transitions in double-well Morse potential: Application to hydrogen bond
NASA Astrophysics Data System (ADS)
Goryainov, S. V.
2012-11-01
A model of phase transitions in a double-well Morse potential is developed. Application of this model to the hydrogen bond is based on ab initio electron density calculations, which showed that the predominant contribution to the hydrogen bond energy originates from the interaction of the proton with the electron shells of the hydrogen-bonded atoms. The model uses a double-well Morse potential for the proton. Analytical expressions for the hydrogen bond energy and the frequency of O-H stretching vibrations were obtained. Experimental data on the dependence of the O-H vibration frequency on the bond length were successfully fitted with model-predicted dependences in both classical and quantum mechanical approaches. Unlike the empirical exponential function often used previously for the dependence of O-H vibration frequency on the hydrogen bond length (Libowitzky, Mon. Chem., 1999, vol. 130, 1047), the dependence reported here is theoretically substantiated.
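The abstract gives no explicit functional form; one plausible construction of a double-well proton potential is two opposing Morse branches, sketched below. All parameters are hypothetical and in arbitrary units, not fitted to any hydrogen-bond data:

```python
import math

def double_well(r, D=5.0, a=3.0, r1=1.0, r2=2.0):
    """Double-well potential built from two opposing Morse branches.

    `left` is a Morse well with its minimum at r1 (donor side); `right`
    is its mirror image with a minimum at r2 (acceptor side). Parameters
    are illustrative only.
    """
    left = D * (1.0 - math.exp(-a * (r - r1))) ** 2
    right = D * (1.0 - math.exp(a * (r - r2))) ** 2
    return left + right
```

For sufficiently separated minima this yields two wells divided by a central barrier, the qualitative shape required for a proton-transfer coordinate; phase-transition behavior in such a model comes from how the barrier height varies with the O···O distance (r2 − r1).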
Kirk, B.L.; Sartori, E.
1997-06-01
Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.
Ho, Choon-Lin
2009-05-15
The four exactly solvable models related to non-sinusoidal coordinates, namely the Coulomb, Eckart, and Rosen-Morse type I and II models, are normally treated separately, despite the similarity of the functional forms of the potentials, their eigenvalues, and their eigenfunctions. Based on an extension of the prepotential approach to exactly and quasi-exactly solvable models proposed previously, we show how these models can be derived and solved in a simple and unified way.
Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential
Inci, I.; Bonatsos, D.; Boztosun, I.
2011-08-15
Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.
A Multi-Resolution Data Structure for Two-Dimensional Morse Functions
Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V
2003-07-30
The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.
NASA Astrophysics Data System (ADS)
Sánchez-Castellanos, M.; Amezcua-Eccius, C. A.; Álvarez-Bajo, O.; Lemus, R.
2008-02-01
A general description of vibrational excitations of pyramidal molecules in both local and normal representations is presented. This study is restricted to the case when no tunneling motion is allowed. The Hamiltonian is first written in terms of curvilinear internal coordinates. The Wilson's G matrix as well as the potential are expanded in terms of Morse variables, which allows the identification of a set of six Morse oscillators as zeroth-order Hamiltonian. An algebraic realization of the Hamiltonian is obtained by introducing a linear expansion of the coordinates and momenta in terms of creation and annihilation operators of Morse functions. This algebraic realization provides in natural form the representation of the Hamiltonian in terms of local interactions. The normal interactions are constructed by successive couplings of tensors defined as linear combinations of the ladder operators. The matrix transformation between the local and normal interactions is obtained for the complete Hamiltonian. This analysis provides the spectroscopic parameters in both local and normal schemes in explicit form as functions of the force constants and structure parameters. To exemplify, the analysis of the vibrational excitations of stibine and arsine is presented. Force constants as well as the corresponding x,K relations are given. A comparison with the results obtained using the U(ν+1) unitary group approach is included.
NASA Astrophysics Data System (ADS)
Fabian, Dedecker; Peter, Cundall; Daniel, Billaux; Torsten, Groeger
Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transportation way, such as a motorway or railway tunnel, or a storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. Itasca has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up can cause heavy table memory access and hence high table power consumption. To reduce the heavy memory access of current methods and the resulting power consumption, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce memory access during table look-up and thereby cut table power consumption. Specifically, our scheme uses index search to reduce memory access by cutting the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that the proposed index-search table look-up algorithm lowers memory access consumption by about 60% compared with table look-up by sequential search, saving substantial power for CAVLD in H.264/AVC.
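The idea can be illustrated with a toy codeword table. The entries below are hypothetical, not the actual H.264 CAVLC tables: sequential search touches every entry until a match, while an index keyed on the zero-run length and suffix reaches the entry in one access:

```python
# Toy codeword table: (zero_run_in_code_prefix, code_suffix) -> decoded symbol.
# The real H.264 CAVLC tables are far larger; these entries are hypothetical.
CODE_TABLE = {
    (0, 0b1): "coeff_token_A",
    (1, 0b1): "coeff_token_B",
    (2, 0b1): "coeff_token_C",
}

def decode_sequential(zero_run, suffix):
    """Baseline: scan entries one by one (one memory access per entry tried)."""
    for (zr, sfx), symbol in CODE_TABLE.items():
        if zr == zero_run and sfx == suffix:
            return symbol
    return None

def decode_indexed(zero_run, suffix):
    """Index search: the (zero_run, suffix) pair addresses the entry directly."""
    return CODE_TABLE.get((zero_run, suffix))
```

The saving grows with table size: sequential matching costs O(n) accesses per codeword, while the index costs O(1), which is the source of the memory-access (and hence power) reduction the paper reports.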
2012-01-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210
Solutions of the Dirac equation with the Morse potential energy model in higher spatial dimensions
NASA Astrophysics Data System (ADS)
Zhang, Peng; Long, Hui-Cheng; Jia, Chun-Sheng
2016-04-01
Analytical solutions of the Dirac equation with the Morse potential energy model in higher spatial dimensions have been explored. We present the bound-state energy equation and the corresponding upper and lower radial wave functions. We find that the behavior of the higher-dimensional relativistic vibrational energies remains similar to that of the three-dimensional molecular system for the X ²Σ⁺ state of the CP molecule. This symmetry phenomenon will break at the critical point through which the system undergoes a phase transition from a stable to an unstable state.
NASA Astrophysics Data System (ADS)
Grigoriev, Victor; Biancalana, Fabio
2010-05-01
The nonlinear properties of quasi-periodic photonic crystals based on the Thue-Morse sequence are investigated. The intrinsic spatial asymmetry of these one-dimensional structures for odd generation numbers results in bistability thresholds, which are sensitive to the propagation direction. Along with resonances of perfect transmission, this feature allows us to achieve strongly non-reciprocal propagation and to create an all-optical diode. The salient qualitative features of such optical diode action are readily explained through a simple coupled resonator model. The efficiency of a passive scheme that does not necessitate an additional short pump signal is compared to an active scheme where such a signal is required.
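The generation-parity symmetry behind this direction dependence can be illustrated with a short sketch (my own illustration; for the small generations checked here, even-generation Thue-Morse words are palindromic while odd-generation words are not):

```python
# Build Thue-Morse generations by the substitution A -> AB, B -> BA and check
# the spatial symmetry: even generations read the same in both directions,
# odd generations do not, which is why odd-generation structures can show
# direction-sensitive bistability thresholds.

def thue_morse(generation):
    """Return the Thue-Morse word of the given generation (length 2**generation)."""
    word = "A"
    for _ in range(generation):
        word = "".join("AB" if c == "A" else "BA" for c in word)
    return word

def is_palindrome(word):
    return word == word[::-1]
```

For example, generation 2 is "ABBA" (symmetric), while generation 3 is "ABBABAAB" (asymmetric under reversal).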
NASA Astrophysics Data System (ADS)
Grigoriev, V. V.; Biancalana, F.
2010-09-01
The nonlinear properties of quasiperiodic photonic crystals based on the Thue-Morse sequence are investigated. The intrinsic asymmetry of these one-dimensional structures for odd generation numbers results in bistability thresholds which are sensitive to the propagation direction. Along with resonances of perfect transmission, this feature allows us to obtain strongly nonreciprocal propagation and to create an all-optical diode. The efficiency of two schemes is compared: a passive scheme and an active scheme in which an additional short pump signal is applied to the system. The existence of stationary gap solitons in quasiperiodic photonic crystals is shown numerically, and their difference from the Bragg case is emphasized.
NASA Astrophysics Data System (ADS)
Grigoriev, Victor; Biancalana, Fabio
2009-10-01
The nonlinear properties of quasiperiodic photonic crystals based on the Thue-Morse sequence are investigated. The intrinsic asymmetry of these 1D structures for odd generation numbers results in bistability thresholds which are sensitive to the propagation direction. Along with resonances of perfect transmission, this feature allows us to obtain strongly nonreciprocal propagation and to create an all-optical diode (AOD). The efficiency of two schemes is compared: a passive scheme and an active scheme in which an additional short-term pump signal is applied. The existence of stationary gap solitons in quasiperiodic photonic crystals is shown numerically, and their difference from the Bragg case is emphasized.
Exact solution to laser rate equations: three-level laser as a Morse-like oscillator
NASA Astrophysics Data System (ADS)
León-Montiel, R. de J.; Moya-Cessa, Héctor M.
2016-08-01
It is shown how the rate equations that model a three-level laser can be cast into a single second-order differential equation, whose form describes a time-dependent harmonic oscillator. Using this result, we demonstrate that the resulting equation can be identified as a Schrödinger equation for a Morse-like potential, thus allowing us to derive exact closed-form expressions for the dynamics of the number of photons inside the laser cavity, as well as the atomic population inversion.
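As a rough numerical companion (a toy sketch using the standard textbook single-mode rate equations, not necessarily the exact three-level system solved in the paper; all parameter values are placeholders), one can integrate the coupled equations for the inversion N and photon number n and watch them relax to the analytic steady state:

```python
# Toy single-mode laser rate equations, integrated with plain Euler steps:
#   dN/dt = Rp - N/tau - B*N*n   (pumping, decay, stimulated emission)
#   dn/dt = B*N*n - n/tau_c      (gain, cavity loss)
def integrate_laser(Rp=2.0, B=1.0, tau=1.0, tau_c=1.0,
                    N0=0.0, n0=1e-6, dt=1e-3, steps=60000):
    """Return (N, n) after integrating from a tiny photon seed n0."""
    N, n = N0, n0
    for _ in range(steps):
        dN = Rp - N / tau - B * N * n
        dn = B * N * n - n / tau_c
        N += dt * dN
        n += dt * dn
    return N, n
```

Above threshold the steady state is N* = 1/(B·tau_c) and n* = Rp·tau_c − 1/(B·tau); with the default parameters both equal 1, which the integration reproduces after the relaxation oscillations damp out.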
NASA Astrophysics Data System (ADS)
Sierra-Suarez, Jonatan A.; Majumdar, Shubhaditya; McGaughey, Alan J. H.; Malen, Jonathan A.; Higgs, C. Fred
2016-04-01
This work formulates a rough surface contact model that accounts for adhesion through a Morse potential and plasticity through the Kogut-Etsion finite element-based approximation. Compared to the commonly used Lennard-Jones (LJ) potential, the Morse potential provides a more accurate and generalized description for modeling covalent materials and surface interactions. An extension of this contact model to describe composite layered surfaces is presented and implemented to study a self-assembled monolayer (SAM) grown on a gold substrate placed in contact with a second gold substrate. Based on a comparison with prior experimental measurements of the thermal conductance of this SAM junction [Majumdar et al., Nano Lett. 15, 2985-2991 (2015)], the more general Morse potential-based contact model provides a better prediction of the percentage contact area than an equivalent LJ potential-based model.
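The comparison the abstract draws can be sketched numerically. Below is a minimal illustration (my own parameter choices, not the Kogut-Etsion contact model): a Morse potential matched to a 12-6 Lennard-Jones potential in depth, minimum position and curvature, which then lets one compare the tails that govern adhesion.

```python
import numpy as np

def lj(r, eps=1.0, rm=1.0):
    """12-6 Lennard-Jones potential written so the minimum -eps sits at r = rm."""
    s = (rm / r) ** 6
    return eps * (s * s - 2.0 * s)

def morse(r, De=1.0, a=6.0, re=1.0):
    """Morse potential with the same depth and minimum position; choosing
    a = 6/re matches the LJ curvature at the minimum, since
    V''_LJ(rm) = 72*eps/rm**2 and V''_Morse(re) = 2*De*a**2."""
    x = np.exp(-a * (r - re))
    return De * (x * x - 2.0 * x)
```

Even with depth and stiffness matched, the exponential Morse tail and the r^-6 LJ tail differ at large separation, and the extra Morse range parameter is what allows the tail to be tuned independently of the well curvature when fitting covalent interactions.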
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k-node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
Alemgadmi, Khaled I. K.; Suparmi; Cari; Deta, U. A.
2015-09-30
The approximate analytical solution of the Schrödinger equation for the q-deformed Rosen-Morse potential was investigated using the supersymmetric quantum mechanics (SUSY QM) method. The approximate bound-state energy is given in closed form, and the corresponding approximate wave function for an arbitrary l-state is given for the ground state. The first excited state is obtained by applying the raising operator to the ground-state wave function. The special case of the ground state is given for various values of q. The q-deformation of the Rosen-Morse potential reduces the energy spectrum of the system: the larger the value of q, the smaller the energy spectrum.
Klempova, Bibiana; Liepelt, Roman
2016-07-01
Recent findings suggest that a Simon effect (SE) can be induced in individual go/nogo tasks when responding next to an event-producing object salient enough to provide a reference for the spatial coding of one's own action. However, proponents of task co-representation remain skeptical of referential coding as an account of the joint Simon effect (JSE). In the present study, we tested assumptions of task co-representation and referential coding by introducing unexpected double response events in a joint go/nogo and a joint independent go/nogo task. In Experiment 1b, we tested whether task representations are functionally similar in joint and standard Simon tasks. In Experiment 2, we tested sequential updating of task co-representation after unexpected single response events in the joint independent go/nogo task. Results showed increased JSEs following unexpected events in the joint go/nogo and joint independent go/nogo task (Experiment 1a). While the former finding is in line with the assumptions made by both accounts (task co-representation and referential coding), the latter finding supports referential coding. In contrast to Experiment 1a, we found a decreased SE after unexpected events in the standard Simon task (Experiment 1b), providing evidence against the functional equivalence assumption between joint and two-choice Simon tasks of the task co-representation account. Finally, we found an increased JSE also following unexpected single response events (Experiment 2), ruling out that the findings of the joint independent go/nogo task in Experiment 1a were due to a re-conceptualization of the task situation. In conclusion, our findings support referential coding also for the joint Simon effect. PMID:25833374
Parameterizing the Morse potential for coarse-grained modeling of blood plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-15
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including the density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
Modeling of diatomic molecule using the Morse potential and the Verlet algorithm
NASA Astrophysics Data System (ADS)
Fidiani, Elok
2016-03-01
Molecular modeling is usually performed with dedicated molecular dynamics (MD) software such as GROMACS, NAMD or JMOL. Molecular dynamics is a computational method for calculating the time-dependent behavior of a molecular system. In this work, MATLAB was used as the numerical tool for a simple model of some diatomic molecules: HCl, H2 and O2. MATLAB is matrix-based numerical software, so to perform the numerical analysis, all the functions and equations describing the properties of the atoms and molecules must be implemented manually. A Morse potential was generated to describe the bond interaction between the two atoms. To analyze the simultaneous motion of the molecules, the Verlet algorithm, derived from Newton's equations of motion (classical mechanics), was applied. Both the Morse potential and the Verlet algorithm were implemented in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in the form of a matrix; to visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale, and can be very helpful for illustrating basic principles of molecular interaction for educational purposes.
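A compact Python transcription of the same recipe (the original work used MATLAB; De, a, re and the mass below are toy values of mine, not parameters fitted to HCl, H2 or O2): a single reduced-mass coordinate r(t) bound by a Morse potential, advanced with the velocity-Verlet algorithm.

```python
import math

def potential(r, De=1.0, a=1.0, re=1.0):
    """Morse potential V(r) = De*(1 - exp(-a*(r - re)))**2, zero at r = re."""
    return De * (1.0 - math.exp(-a * (r - re))) ** 2

def force(r, De=1.0, a=1.0, re=1.0):
    """F = -dV/dr for the Morse potential above."""
    x = math.exp(-a * (r - re))
    return -2.0 * De * a * (1.0 - x) * x

def velocity_verlet(r0=1.2, v0=0.0, m=1.0, dt=0.01, steps=10000):
    """Integrate the bond coordinate; return the r trajectory and energy history."""
    r, v = r0, v0
    acc = force(r) / m
    traj, energy = [r], [potential(r) + 0.5 * m * v * v]
    for _ in range(steps):
        r += v * dt + 0.5 * acc * dt * dt      # position update
        new_acc = force(r) / m
        v += 0.5 * (acc + new_acc) * dt        # velocity update with averaged force
        acc = new_acc
        traj.append(r)
        energy.append(potential(r) + 0.5 * m * v * v)
    return traj, energy
```

Velocity Verlet is symplectic, so the total energy stays bounded over the run instead of drifting, which the energy history can be used to verify; the bond oscillates anharmonically about the equilibrium length re.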
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, only a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
Ganapol, Barry; Maldonado, Ivan
2014-01-23
The generation of multigroup cross sections lies at the heart of very high temperature reactor (VHTR) core design, whether of the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved, and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will provide a detailed outline of the entire processing procedure for applying CENTRM in a final report, complete with a demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: thoroughly test the panel algorithm for neutron slowing down; develop the panel algorithm for multi-materials; establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; verify CENTRM in 1D plane geometry; create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing the effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.
NASA Astrophysics Data System (ADS)
Prialnik, Dina; Merk, Rainer
2008-09-01
We present a new one-dimensional thermal evolution code suited for small icy bodies of the Solar System, based on modern adaptive-grid numerical techniques and capable of treating multiphase flow through a porous medium. The code is used for evolutionary calculations spanning 4.6×10⁹ yr of a growing body made of ice and rock, starting with a 10 km radius seed and ending with an object 250 km in radius. Initial conditions are chosen to match two different classes of objects: a Kuiper belt object, and Saturn's moon Enceladus. Heating by the decay of ²⁶Al, as well as of long-lived radionuclides, is taken into account. Several values of the thermal conductivity and accretion laws are tested. We find that in all cases the melting point of ice is reached in a central core. Evaporation and flow of water and vapor gradually remove the water from the core, and the final (present) structure is differentiated, with a rocky, highly porous core of 80 km radius (and up to 160 km for very low conductivities). Outside the core, due to refreezing of water and vapor, a compact, ice-rich layer forms, a few tens of km thick (except in the case of very high conductivity). If the ice is initially amorphous, as expected in the Kuiper belt, the amorphous ice is preserved in an outer layer about 20 km thick. We conclude by suggesting various ways in which the code may be extended.
NASA Astrophysics Data System (ADS)
Rodrigues, Clóves G.
2016-06-01
In this work we investigate the interatomic correlation moments in a two-dimensional model of a weakly anharmonic crystal (i.e., at not very high temperatures) with a hexagonal lattice, using the Correlative Method of Unsymmetrized Self-Consistent Field (CUSF). The numerical results are obtained (and compared) using the Morse and Lennard-Jones potentials.
Analytical Solutions of the Fokker-Planck Equation for Generalized Morse and Hulthén Potentials
NASA Astrophysics Data System (ADS)
Anjos, R. C.; Freitas, G. B.; Coimbra-Araújo, C. H.
2016-01-01
In the present contribution we analytically calculate solutions of the transition probability of the Fokker-Planck equation (FPE) for both the generalized Morse potential and the Hulthén potential. The method is based on the formal analogy of the FPE with the Schrödinger equation using techniques from supersymmetric quantum mechanics.
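The analogy mentioned above is the standard reduction of the FPE to an imaginary-time Schrödinger problem; with a generic drift potential U(x) standing in for the generalized Morse or Hulthén forms, it reads:

```latex
\begin{align}
  \partial_t P(x,t) &= \partial_x\!\left[U'(x)\,P\right] + D\,\partial_x^2 P ,\\
  P(x,t) &= e^{-U(x)/(2D)}\,\psi(x,t)
  \quad\Longrightarrow\quad
  \partial_t \psi = D\,\partial_x^2 \psi - V_S(x)\,\psi ,\\
  V_S(x) &= \frac{U'(x)^2}{4D} - \frac{U''(x)}{2} .
\end{align}
```

The SUSY QM techniques then apply because V_S is built from the superpotential W = U'/2, so the spectrum, and hence the transition probability, follows in closed form for shape-invariant choices of U.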
Inci, I.; Boztosun, I.; Bonatsos, D.
2008-11-11
Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the Asymptotic Iteration Method (AIM). The resulting energy eigenvalue equations have been used to reproduce the experimental excitation spectra of Xe and Yb isotopes. The results are in good agreement with experimental data.
Implementation of a Morse potential to model hydroxyl behavior in phyllosilicates.
Greathouse, Jeffery A; Durkin, Justin S; Larentzos, James P; Cygan, Randall T
2009-04-01
The accurate molecular simulation of many hydrated chemical systems, including clay minerals and other phyllosilicates and their interfaces with aqueous solutions, requires improved classical force field potentials to better describe structure and vibrational behavior. Classical and ab initio molecular dynamics simulations of the bulk structure of pyrophyllite, talc, and Na-montmorillonite clay phases exhibit dissimilar behavior in the hydroxyl stretch region of power spectra derived from atomic trajectories. The classical simulations, using the CLAYFF force field, include either a standard harmonic potential or a new Morse potential parametrized for both dioctahedral and trioctahedral phases for the O-H bond stretch. Comparisons of classical results with experimental values and with ab initio molecular dynamics simulations indicate improvements in the simulation of hydroxyl orientation relative to the clay octahedral sheet and in the O-H bond stretch in the high frequency region of the power spectrum. PMID:19355770
Electronic band gaps and transport in aperiodic graphene-based superlattices of Thue-Morse sequence
NASA Astrophysics Data System (ADS)
Wang, Ligang; Ma, Tianxing
2014-03-01
We investigate the electronic band structure and transport properties of aperiodic graphene-based superlattices following the Thue-Morse (TM) sequence. The robustness of the zero-k gap is demonstrated in both monolayer and bilayer graphene TM sequences. Extra Dirac points may emerge at ky ≠ 0, and electronic transport behaviors such as the conductance and the Fano factor are discussed in detail. Our results provide a flexible and effective way to control the transport properties of graphene-based superlattices. This work is supported by NSFCs (Nos. 11274275, 11104014 and 61078021), Research Fund for the Doctoral Program of Higher Education 20110003120007, SRF for ROCS (SEM), and the National Basic Research Program of China (No. 2011CBA00108, and 2012CB921602).
Influence of the potential range on the heat capacity of 13-atom Morse clusters
NASA Astrophysics Data System (ADS)
Moseler, Michael; Nordiek, Johannes
1999-10-01
Heat capacity curves as a function of temperature were studied for 13-atom clusters bound by Morse potentials with different range parameters ρ0 ∈ {3,4,5,6,14} using J-walking Monte Carlo. Decreasing the range of the pair potential (i.e., increasing ρ0) increases the peak of the heat capacity in the melting transition region and decreases the boiling temperature. For ρ0=14 the melting and boiling peaks merge. The short-range potential favors a transition from the catchment region of the icosahedral ground state to the basins of higher minima. On the other hand, clusters bound by the long-range potential (ρ0=3) remain in the ground-state basin even for elevated temperatures, which can be explained by the destabilization of important higher minima for ρ0<4.
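The scaled pair potential used in such range-parameter studies can be sketched as follows (a minimal illustration of mine; the grid-based width measure is my own, not the paper's J-walking procedure):

```python
import numpy as np

def morse_scaled(r, rho0):
    """Scaled Morse pair potential V(r) = x*(x - 2) with x = exp(rho0*(1 - r)):
    depth -1 at r = 1; larger rho0 means a shorter-ranged, narrower well."""
    x = np.exp(rho0 * (1.0 - r))
    return x * (x - 2.0)

def well_width(rho0, depth_frac=0.5):
    """Width of the region where the well is deeper than -depth_frac,
    measured on a fine grid around the minimum."""
    r = np.linspace(0.5, 3.0, 20001)
    inside = r[morse_scaled(r, rho0) < -depth_frac]
    return inside[-1] - inside[0]
```

Because the potential depends on r only through rho0*(1 - r), the well width scales as 1/rho0: raising rho0 from 3 to 14 sharply narrows the basin, which is the mechanism behind the stronger melting peak and lower boiling temperature reported above.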
A Morse manipulator molecule for the modulation of metallic shockley surface states
NASA Astrophysics Data System (ADS)
Ample, F.; Ami, S.; Joachim, C.; Thiemann, F.; Rapenne, G.
2007-02-01
A Morse-manipulator-like molecule able to modulate the electronic standing wave pattern of metallic Shockley surface states is presented. Its design is based on a molecular arm holding a phenyl group whose distance to the metal surface is controlled by the tip apex of an STM. The standing wave patterns are calculated using an extension of the N-ESQC technique. The corrugation of the surface-state modulation is proposed to be detected by a small 127 kΩ atomic-scale tunnel junction assumed to be positioned very close to the surface and a few nanometers away from the molecule. A variation of 150 Ω in this junction resistance is detected for a phenyl-surface distance variation from 0.4 to 0.24 nm.
Principal Poincaré Pontryagin function associated to some families of Morse real polynomials
NASA Astrophysics Data System (ADS)
Pelletier, M.; Uribe, M.
2014-02-01
It is known that the principal Poincaré Pontryagin function is generically an Abelian integral. We give a sufficient condition on monodromy to ensure that it is also an Abelian integral in non-generic cases. In non-generic cases it is an iterated integral. Uribe (2006 J. Dyn. Control. Syst. 12 109-34, 2009 J. Diff. Eqns 246 1313-41) gives in a special case a precise description of the principal Poincaré Pontryagin function, an iterated integral of length at most 2, involving logarithmic functions with only 1 ramification at a point at infinity. We extend this result to some non-isomonodromic families of real Morse polynomials.
Non-Perturbative and Moments Methods Applied to the Morse Potential
NASA Astrophysics Data System (ADS)
Walsh, Nathan; Ashendorf, Eric; Toland, John; Fessatidis, Vassilios; Mancini, Jay D.; Bowen, Samuel P.
2012-02-01
The Morse potential has been well known to both physicists and quantum chemists for a number of years and has been used to model the behavior of diatomic molecules. Explicitly it may be written as V(r) = De(e^(-2a(r-re)) - 2e^(-a(r-re))) + De, where r is the inter-atomic separation, re is the (equilibrium) bond length and De is the depth of the potential well. The width of the well is governed by a² = ke/(2De), with ke the effective spring constant. Here we study both the ground-state energy (using both the Connected Moments Expansion and the Generalized Moments Expansion) and the entire spectrum using a Lanczos scheme. Our results are compared with other well-established results.
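For context, the exact bound-state spectrum of this potential, measured from the bottom of the well, is known in closed form; the sketch below (assuming ħ = m = 1, my normalization, not the paper's) gives the reference values against which moment-method and Lanczos results are typically checked.

```python
import math

def morse_levels(De, a, m=1.0, hbar=1.0):
    """Exact bound-state energies, from the well bottom, of the Morse well:
        E_n = hbar*w*(n + 1/2) - [hbar*w*(n + 1/2)]**2 / (4*De),
    with w = a*sqrt(2*De/m); bound states need n + 1/2 < sqrt(2*m*De)/(a*hbar)."""
    w = a * math.sqrt(2.0 * De / m)
    lam = math.sqrt(2.0 * m * De) / (a * hbar)
    levels = []
    n = 0
    while n + 0.5 < lam:
        x = hbar * w * (n + 0.5)
        levels.append(x - x * x / (4.0 * De))
        n += 1
    return levels
```

The quadratic correction makes successive level spacings shrink linearly with n, the hallmark anharmonicity that distinguishes the Morse spectrum from the equally spaced harmonic ladder.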
NASA Astrophysics Data System (ADS)
Key, K.
2013-12-01
This work announces the public release of an open-source inversion code named MARE2DEM (Modeling with Adaptively Refined Elements for 2D Electromagnetics). Although initially designed for the rapid inversion of marine electromagnetic data, MARE2DEM now supports a wide variety of acquisition configurations for both offshore and onshore surveys that utilize electric and magnetic dipole transmitters or magnetotelluric plane waves. The model domain is flexibly parameterized using a grid of arbitrarily shaped polygonal regions, allowing complicated structures such as topography or seismically imaged horizons to be easily assimilated. MARE2DEM solves the forward problem efficiently in parallel by dividing the input data parameters into smaller subsets using a parallel data decomposition algorithm. The data subsets are then solved in parallel using an automatic adaptive finite element method that iteratively solves the forward problem on successively refined finite element meshes until a specified accuracy tolerance is met, thus freeing the end user from the burden of designing an accurate numerical modeling grid. Regularized non-linear inversion for isotropic or anisotropic conductivity is accomplished with a new implementation of Occam's method referred to as fast-Occam, which is able to minimize the objective function in far fewer forward evaluations than required by the original method. This presentation will review the theoretical considerations behind MARE2DEM and use a few recent offshore EM data sets to demonstrate its capabilities and to showcase the software interface tools that streamline model building and data inversion.
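The Occam idea can be sketched on a toy linear problem (everything below, including G, R and the noise level, is my own stand-in, not MARE2DEM's nonlinear EM operator or its fast-Occam implementation): among all models fitting the data to the expected noise level, keep the smoothest, i.e. the largest regularization weight that still meets the target misfit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 40, 20
G = rng.normal(size=(n_data, n_model))            # toy linear forward operator
m_true = np.sin(np.linspace(0.0, 3.0, n_model))   # smooth "true" model
sigma = 0.05
d = G @ m_true + sigma * rng.normal(size=n_data)  # noisy synthetic data

R = np.diff(np.eye(n_model), axis=0)              # first-difference roughening matrix

def occam_solve(mu):
    """Minimize ||d - G m||^2 + mu*||R m||^2 via the normal equations."""
    return np.linalg.solve(G.T @ G + mu * R.T @ R, G.T @ d)

def misfit(m):
    return float(np.sum((d - G @ m) ** 2))

# Misfit grows monotonically with mu, so scan mu upward and keep the largest
# value whose model still fits the data to the expected chi^2 level.
target = n_data * sigma ** 2
best_mu = None
for mu in np.logspace(-4, 4, 81):
    if misfit(occam_solve(mu)) <= target:
        best_mu = mu
    else:
        break
m_best = occam_solve(best_mu)
```

The resulting model is deliberately no rougher than the data require; real Occam implementations wrap this one-dimensional mu search inside a Gauss-Newton linearization of the nonlinear forward problem.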
Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion
2013-08-01
The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642
ERIC Educational Resources Information Center
Pattavina, Paul
1980-01-01
Excerpts from an interview with William C. Morse on teacher burnout concern special educators' sense of failure and impotence, the issues connected with individualized educational programs, and the importance of the first year of teaching. (CL)
Andari, Elissar; Richard, Nathalie; Leboyer, Marion; Sirigu, Angela
2016-03-01
The neuropeptide oxytocin (OT) is one of the major targets of research in neuroscience, with respect to social functioning. Oxytocin promotes social skills and improves the quality of face processing in individuals with social dysfunctions such as autism spectrum disorder (ASD). Although one of OT's key functions is to promote social behavior during dynamic social interactions, the neural correlates of this function remain unknown. Here, we combined acute intranasal OT (IN-OT) administration (24 IU) and fMRI with an interactive ball game and a face-matching task in individuals with ASD (N = 20). We found that IN-OT selectively enhanced the brain activity of early visual areas in response to faces as compared to non-social stimuli. OT inhalation modulated the BOLD activity of amygdala and hippocampus in a context-dependent manner. Interestingly, IN-OT intake enhanced the activity of mid-orbitofrontal cortex in response to a fair partner, and insula region in response to an unfair partner. These OT-induced neural responses were accompanied by behavioral improvements in terms of allocating appropriate feelings of trust toward different partners' profiles. Our findings suggest that OT impacts the brain activity of key areas implicated in attention and emotion regulation in an adaptive manner, based on the value of social cues. PMID:26872344
Rimpela, R.J.G.
1984-02-01
The engine was installed in a dynamometer test cell at US Army Tank-Automotive Command (TACOM) and conventional dynamometer testing procedures were used to determine basic engine characteristics. The characteristics determined were full-load performance, fuel economy at full load and part load, engine oil consumption, and engine heat rejection. During pre-endurance testing, the Code E-436 engine produced 378 observed kW (506.4 BHP) at full load, at the rated speed of 2,600 RPM. The maximum torque during full-load operation was 1439 Nm (1061 lb-ft) at 2,400 RPM. Minimum brake specific fuel consumption at full load occurred at 2,200 RPM and was 217 g/kWh (0.356 lb/BHP-hr). After the NATO endurance test the engine produced 375.1 observed kW (503.0 BHP) at full load and rated speed. The maximum torque was 1423.8 Nm (1050 lb-ft) at 2,400 RPM. The total lube oil consumption during the 400-hour NATO endurance test was 19.7 kg (43.4 lbs). Following the endurance test, visual and dimensional inspection of the engine revealed all major engine parts to be in excellent condition except for the pistons. Five out of eight pistons developed cracks in the pin bores. Though the engine completed the endurance test (400 hours) and was operated for a total of 582 hours, it is considered to have failed the 400-hour NATO test due to piston failure.
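As a sanity check, the SI and English figures quoted above are related by standard unit conversions; a quick sketch (the constants are textbook conversion factors, not data from the report, and small discrepancies reflect rounding in the reported values):

```python
G_PER_LB = 453.592       # grams per pound
KWH_PER_BHP_HR = 0.7457  # kilowatt-hours per brake-horsepower-hour
NM_PER_LB_FT = 1.35582   # newton-metres per pound-foot

def bsfc_to_english(bsfc_g_per_kwh):
    """Brake specific fuel consumption, g/kWh -> lb/BHP-hr."""
    return bsfc_g_per_kwh * KWH_PER_BHP_HR / G_PER_LB

print(round(bsfc_to_english(217), 3))   # 0.357 lb/BHP-hr (report: 0.356)
print(round(378 / KWH_PER_BHP_HR, 1))   # 506.9 BHP from 378 kW (report: 506.4)
print(round(1439 / NM_PER_LB_FT))       # 1061 lb-ft from 1439 Nm
```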
Interaction of a Two-Level Atom with the Morse Potential in the Framework of Jaynes-Cummings Model
NASA Astrophysics Data System (ADS)
Setare R., M.; Sh., Barzanjeh
2009-09-01
A theoretical study of the dynamical behavior of the interaction between a two-level atom and a Morse potential in the framework of the Jaynes-Cummings model (JCM) is presented. We show that this system is equivalent to an intensity-dependent coupling between the two-level atom and the non-deformed single-mode radiation field in the presence of an additional nonlinear interaction. We study dynamical properties of the system such as the atomic population inversion, the probability distribution of the cavity field, the Mandel parameter, and atomic dipole squeezing. It is shown how the depth of the Morse potential affects the non-classical properties of the system. Moreover, the temporal evolution of the Husimi distribution function is explored.
Suparmi, A. Cari, C.; Angraini, L. M.
2014-09-30
The bound-state solutions of the Dirac equation for the Hulthen and trigonometric Rosen-Morse non-central potential are obtained using finite Romanovski polynomials. The approximate relativistic energy spectrum and the radial wave functions, which are given in terms of Romanovski polynomials, are obtained from the solution of the radial Dirac equation. The angular wave functions and the orbital quantum number are found from the solution of the angular Dirac equation. In the non-relativistic limit, the relativistic energy spectrum reduces to the non-relativistic energy.
Evaluation of seepage from Chester Morse Lake and Masonry Pool, King County, Washington
Hidaka, F.T.; Garrett, Arthur Angus
1967-01-01
Hydrologic data collected in the Cedar and Snoqualmie River basins on the west slope of the Cascade Range have been analyzed to determine the amount of water lost by seepage from Chester Morse Lake and Masonry Pool and the consequent gain by seepage to the Cedar and South Fork Snoqualmie Rivers. For water years 1957-64, average losses were about 220 cfs (cubic feet per second), while average gains were about 180 cfs in the Cedar River and 50 cfs in the South Fork Snoqualmie River. Streamflow and precipitation data for water years 1908-26 and 1930-32 indicate that a change in runoff regime occurred in the Cedar and South Fork Snoqualmie Rivers after the Boxley Creek washout in December 1918. For water years 1919-26 and 1930-32, the flow of the Cedar River near Landsburg averaged about 80 cfs less than it would have if the washout had not occurred. In contrast, the flow of the South Fork Snoqualmie River at North Bend averaged about 60 cfs more than it would have.
Landau levels as a limiting case of a model with the morse-like magnetic field
NASA Astrophysics Data System (ADS)
Fakhri, H.; Mojaveri, B.; Nobary, M. A. Gomshi
2010-12-01
We consider the quantum mechanics of an electron trapped on an infinite band along the x-axis in the presence of a Morse-like perpendicular magnetic field of constant strength B0 > 0, with a0 the width of the band. It is shown that the square-integrable pure states realize representations of the su(1, 1) algebra via the quantum number corresponding to the linear momentum in the y-direction. The energy of the states increases with decreasing width a0, while it is not changed by B0. It is quadratic in terms of two quantum numbers, and the linear spectrum of the Landau levels is obtained in the limiting case a0 → ∞. All of the lowest states of the su(1, 1) representations minimize the uncertainty relation, and the minimization of their second and third states transforms into that of the Landau levels in the limit a0 → ∞. The compact forms of the Barut-Girardello coherent states corresponding to the l-representation of the su(1, 1) algebra and their positive definite measures on the complex plane are also calculated.
Stress on external hexagon and Morse taper implants submitted to immediate loading
Odo, Caroline H.; Pimentel, Marcele J.; Consani, Rafael L.X.; Mesquita, Marcelo F.; Nóbilo, Mauro A.A.
2015-01-01
Background/Aims This study aimed to evaluate the stress distribution around external hexagon (EH) and Morse taper (MT) implants with different prosthetic systems for immediate loading (distal bar (DB), casting technique (CT), and laser welding (LW)) by using the photoelastic method. Methods Three infrastructures were manufactured on a model simulating an edentulous lower jaw. All models were composed of five implants (4.1 mm × 13.0 mm) simulating a conventional lower protocol. The samples were divided into six groups. G1: EH implants with DB and acrylic resin; G2: EH implants with titanium infrastructure by CT; G3: EH implants with titanium infrastructure attached using LW; G4: MT implants with DB and acrylic resin; G5: MT implants with titanium infrastructure by CT; G6: MT implants with titanium infrastructure attached using LW. After construction of the infrastructures, the photoelastic models were manufactured and a loading of 4.9 N was applied to the cantilever. Five pre-determined points were analyzed with the Fringes software. Results Data showed significant differences between the connection types (p < 0.0001), and there was no significant difference among the techniques used for the infrastructure. Conclusion The reduction of stress levels was influenced more by the MT connection (except for CT). Different bar types submitted to immediate loading did not influence stress concentration. PMID:26605142
Construction of the Barut–Girardello quasi coherent states for the Morse potential
Popov, Dušan; Dong, Shi-Hai; Pop, Nicolina; Sajfert, Vjekoslav; Şimon, Simona
2013-12-15
The Morse oscillator (MO) potential occupies a privileged place among the anharmonic oscillator potentials due to its applications in the quantum mechanics of diatomic and polyatomic molecules, spectroscopy, and so on. For this potential some kinds of coherent states (especially of the Klauder–Perelomov and Gazeau–Klauder kinds) have been constructed previously. In this paper we construct the coherent states of the Barut–Girardello kind (BG-CSs) for the MO potential, which have received less attention in the scientific literature. We obtain these CSs and demonstrate that they fulfil all conditions required of a coherent state. The Mandel parameter for the pure BG-CSs and Husimi's and P-quasi distribution functions (for the mixed-thermal states) are also presented. Finally, we show that all obtained results for the BG-CSs of the MO tend, in the harmonic limit, to the corresponding results for the coherent states of the one-dimensional harmonic oscillator (CSs for the HO-1D). -- Highlights: •Construct the coherent states of the Barut–Girardello kind (BG-CSs) for the MO potential. •They fulfil all the conditions required of a coherent state. •Present the Mandel parameter and Husimi's and P-quasi distribution functions. •All results tend, in the harmonic limit, to those for the one-dimensional harmonic oscillator.
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
Cohen, Michael R; Smetzer, Judy L
2014-05-01
These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:24958950
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model-building strategy that searches for the best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
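The compactly supported univariate B-spline basis functions mentioned above can be evaluated with the standard Cox-de Boor recursion; a minimal sketch (the knot vector and degree here are illustrative assumptions, not values from the paper):

```python
def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at u (Cox-de Boor)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

knots = [0, 0, 0, 1, 2, 3, 4, 4, 4]   # clamped knot vector (illustrative)
vals = [bspline_basis(i, 2, 1.5, knots) for i in range(6)]
print(sum(vals))  # quadratic basis functions form a partition of unity: 1.0
```

A tensor-product spatial basis, as used for regional VTEC, is then just the product of such univariate functions in latitude and longitude.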
Beauvais, Z S; Thompson, K H; Kearfott, K J
2009-07-01
Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. The residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to inhalation of, ingestion of, and exposure to radioactive materials following placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM-concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv/y. A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq^-1 y^-1 for soil and 0.00596 microSv m^3 Bq^-1 y^-1 for water (assuming a 1:1 234U:238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq^-1 y^-1 in soil and 13.0 microSv m^3 Bq^-1 y^-1 in water. PMID:19509509
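Because the dose factors scale linearly with activity concentration, a screening estimate is a simple product-and-sum; a toy sketch using the adult industrial uranium-series factors quoted above (the input concentrations are made-up illustrative values, not data from the study):

```python
# Adult, industrial setting, uranium series (values from the abstract above)
F_SOIL = 0.00476    # microSv per (Bq/kg of soil) per year
F_WATER = 0.00596   # microSv per (Bq/m^3 of water) per year

def annual_dose_uSv(c_soil_bq_per_kg, c_water_bq_per_m3):
    """Screening estimate: annual dose scales linearly with concentration."""
    return F_SOIL * c_soil_bq_per_kg + F_WATER * c_water_bq_per_m3

# hypothetical site: 40 Bq/kg in soil, 10 Bq/m^3 in groundwater
print(round(annual_dose_uSv(40.0, 10.0), 3))  # 0.25 microSv/y
```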
Scattering States of l-Wave Schrödinger Equation with Modified Rosen-Morse Potential
NASA Astrophysics Data System (ADS)
Chen, Wen-Li; Shi, Yan-Wei; Wei, Gao-Feng
2016-08-01
Within a Pekeris-type approximation to the centrifugal term, we examine the approximately analytical scattering state solutions of the l-wave Schrödinger equation with the modified Rosen-Morse potential. The calculation formula of phase shifts is derived, and the corresponding bound state energy levels are also obtained from the poles of the scattering amplitude. Supported by the National Natural Science Foundation of China under Grant No. 11405128, and Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 15JK2093
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
Compressible Astrophysics Simulation Code
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees
Bremer, P; Pascucci, V; Hamann, B
2008-12-08
We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported; in particular, it is guaranteed to be of logarithmic height.
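As a much-simplified illustration of the cancellation bookkeeping involved (not the authors' data structure), one-dimensional sublevel-set persistence pairs each non-global minimum with the merge value that cancels it, which is the 1-D analogue of cancelling a minimum-saddle pair in a Morse-Smale hierarchy:

```python
def persistence_pairs(values):
    """Pair each non-global local minimum with the value that cancels it."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent = {}

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:                      # sweep samples from lowest to highest
        parent[i] = i
        roots = {find(n) for n in (i - 1, i + 1) if n in parent}
        if len(roots) == 2:              # i merges two components: cancel the younger minimum
            a, b = sorted(roots, key=lambda r: values[r])
            pairs.append((values[b], values[i]))
            parent[b] = a
            parent[i] = a
        elif len(roots) == 1:
            parent[i] = roots.pop()
    return pairs

print(persistence_pairs([0.0, 2.0, 1.0, 3.0, 0.5, 2.5]))  # [(1.0, 2.0), (0.5, 3.0)]
```

The global minimum is never cancelled; ordering the returned pairs by persistence (cancellation value minus birth value) gives the simplification sequence a hierarchy would encode.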
NASA Astrophysics Data System (ADS)
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree. It is then suitable for parallel computing. We show that the screening for far UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect.
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case (e = 0), but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β; we then compute the Morse index of this orbit and of its iterates, and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu, and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
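In the same spirit as the elliptic grid routine described above, interior node coordinates can be relaxed as solutions of a Laplace system, which tends to even out cell quality; this is a deliberately simplified sketch (plain Laplacian smoothing of a toy grid), not the EAGLE algorithm or its quality measures:

```python
def smooth_grid(x, y, iters=200):
    """Gauss-Seidel relaxation of interior grid nodes toward a Laplace solution."""
    ni, nj = len(x), len(x[0])
    for _ in range(iters):
        for i in range(1, ni - 1):
            for j in range(1, nj - 1):
                x[i][j] = 0.25 * (x[i-1][j] + x[i+1][j] + x[i][j-1] + x[i][j+1])
                y[i][j] = 0.25 * (y[i-1][j] + y[i+1][j] + y[i][j-1] + y[i][j+1])
    return x, y

# 5x5 unit-spaced grid with one deliberately displaced interior node
n = 5
x = [[float(j) for j in range(n)] for i in range(n)]
y = [[float(i) for _ in range(n)] for i in range(n)]
x[2][2], y[2][2] = 3.7, 0.9           # perturb the center node
x, y = smooth_grid(x, y)
print(round(x[2][2], 3), round(y[2][2], 3))  # relaxes back to 2.0 2.0
```

A production elliptic generator solves the Winslow/Thompson equations with source terms for clustering control; the fixed-boundary Laplace relaxation above is only the simplest member of that family.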
NASA Astrophysics Data System (ADS)
Sameer, M. Ikhdair; Majid, Hamzavi
2013-04-01
Approximate analytical bound-state solutions of the Dirac particle in the fields of attractive and repulsive Rosen-Morse (RM) potentials including the Coulomb-like tensor (CLT) potential are obtained for arbitrary spin-orbit quantum number κ. The Pekeris approximation is used to deal with the spin-orbit coupling terms κ(κ ± 1)r^-2. In the presence of exact spin and pseudospin (p-spin) symmetries, the energy eigenvalues and the corresponding normalized two-component wave functions are found by using the parametric generalization of the Nikiforov-Uvarov (NU) method. The numerical results show that the CLT interaction removes degeneracies between the spin and p-spin state doublets.
NASA Astrophysics Data System (ADS)
Li, Yuanqiao; Zhang, Hongmei; Liu, De
2016-06-01
In this paper, we evaluate the transport properties of a Thue-Morse AB-stacked bilayer graphene superlattice with different interlayer potential biases. Based on the transfer matrix method, the transmission coefficient, the conductance, and the Fano factor are numerically calculated and discussed. We find that the symmetry of the transmission coefficient with respect to normal incidence depends on the structural symmetry of the system, and that a new transmission peak appears in the region where the energy band gap opens. The conductance and the Fano factor can be greatly modulated not only by the Fermi energy and the interlayer potential bias but also by the generation number. Interestingly, for large interlayer potential bias the conductance exhibits a plateau of almost zero conductance, and the Fano factor exhibits plateaus at the Poisson value, in the region where the energy band gap opens.
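The Thue-Morse layer ordering referred to by the generation number can be produced by the standard substitution A → AB, B → BA; a short sketch (the labels A and B are illustrative stand-ins for the two building blocks of the superlattice):

```python
def thue_morse(generations):
    """Thue-Morse word after the given number of substitution steps."""
    s = "A"
    for _ in range(generations):
        s = "".join("AB" if c == "A" else "BA" for c in s)
    return s

print(thue_morse(3))  # ABBABAAB
```

Each generation doubles the sequence length, which is why the transport quantities in the abstract depend so strongly on the generation number.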
Noiseless Coding Of Magnetometer Signals
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Lee, Jun-Ji
1989-01-01
Report discusses application of noiseless data-compression coding to digitized readings of spaceborne magnetometers for transmission back to Earth. Objective of such coding is to increase efficiency by decreasing rate of transmission without sacrificing integrity of data. Adaptive coding compresses data by factors ranging from 2 to 6.
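As a hedged illustration of the flavor of coding involved, Rice codes, which underlie adaptive noiseless coders of this lineage, split each nonnegative sample into a unary quotient and a k-bit binary remainder; small residuals then take few bits. The sample values and the parameter k below are invented for illustration (an adaptive coder would pick k per block):

```python
def rice_encode(n, k):
    """Rice code of nonnegative integer n with parameter k >= 1 (as a bit string)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

samples = [3, 0, 5, 2, 9]                              # small residuals compress well
bits = "".join(rice_encode(s, 2) for s in samples)
print(bits, len(bits))  # 18 bits vs 20 for fixed 4-bit samples
```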
Clinical coding. Code breakers.
Mathieson, Steve
2005-02-24
--The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716
NASA Astrophysics Data System (ADS)
Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.
2010-09-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
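One member of the analytical solution family that CXTFIT wraps as spreadsheet functions, the 1-D equilibrium convection-dispersion equation with a continuous input at a first-type inlet boundary (the van Genuchten-Alves solution), can be sketched outside Excel as well; the parameter values below are illustrative, not from the article:

```python
import math

def cde_conc(x, t, v, D, c0=1.0):
    """Relative concentration C/C0 at position x and time t for the 1-D
    equilibrium CDE with continuous injection at a first-type boundary."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return c0 * 0.5 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# breakthrough at x = 10 cm for v = 1 cm/h, D = 0.5 cm^2/h:
for t in (5.0, 10.0, 20.0):
    print(t, round(cde_conc(10.0, t, 1.0, 0.5), 4))  # rises from near 0 toward 1
```

Parameter estimation then amounts to minimizing the (weighted) squared misfit between such forward predictions and observed breakthrough data, which is what the optimization macros automate.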
NASA Astrophysics Data System (ADS)
Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van
2013-12-01
The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
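The inner kernel that variable block-size motion estimation repeats for every partition is block matching under a sum-of-absolute-differences (SAD) cost; a toy full-search sketch on list-of-lists "frames" (this is only the generic algorithm, not the GPU implementation described in the article):

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between a bs x bs block of the current
    frame at (bx, by) and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + j + dy][bx + i + dx])
               for j in range(bs) for i in range(bs))

def best_motion_vector(cur, ref, bx, by, bs, search):
    """Full search over a +/-search window; returns (cost, dx, dy)."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if 0 <= bx + dx and bx + dx + bs <= w and 0 <= by + dy and by + dy + bs <= h:
                cost = sad(cur, ref, bx, by, dx, dy, bs)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best

# reference frame has a bright 2x2 patch at (2, 2); in the current frame it moved to (3, 3)
ref = [[0] * 8 for _ in range(8)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 255
cur = [[0] * 8 for _ in range(8)]
cur[3][3] = cur[3][4] = cur[4][3] = cur[4][4] = 255
print(best_motion_vector(cur, ref, 3, 3, 2, 2))  # (0, -1, -1): patch came from (2, 2)
```

Because the encoder evaluates this search for many partition sizes and, with hierarchical B-pictures, in both temporal directions, the cost is massively data-parallel, which is what makes it a natural fit for a GPU.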
Xantheas, Sotiris S.; Werhahn, Jasper C.
2014-08-14
Based on the formulation of the analytical expression of the potential V(r) describing intermolecular interactions in terms of the dimensionless variables r* = r/rm and ε* = V/ε, where rm is the separation at the minimum and ε the well depth, we propose more generalized scalable forms for the commonly used Lennard-Jones, Mie, Morse and Buckingham exponential-6 potential energy functions (PEFs). These new generalized forms have an additional parameter and revert to the original ones for a particular choice of that parameter. In this respect, the original forms can be considered special cases of the more general forms that are introduced. We also propose a scalable, but not revertible to the original one, 4-parameter extended Morse potential.
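In these dimensionless variables the standard Morse PEF collapses to V*(r*) = e^(-2a(r* - 1)) - 2e^(-a(r* - 1)), so V*(1) = -1 at the minimum and every Morse curve has one reduced shape per stiffness parameter; a short sketch (the stiffness value a = 6 is an illustrative assumption, not from the paper):

```python
import math

def morse_reduced(rstar, a=6.0):
    """Dimensionless Morse potential V* = V/eps as a function of r* = r/rm."""
    e = math.exp(-a * (rstar - 1.0))
    return e * e - 2.0 * e

print(morse_reduced(1.0))            # -1.0: well depth -eps at r = rm
print(round(morse_reduced(5.0), 6))  # ~0: dissociation limit at large separation
```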
Application of three-dimensional transport code to the analysis of the neutron streaming experiment
Chatani, K.; Slater, C.O.
1990-01-01
This paper summarizes the calculational results of neutron streaming through a Clinch River Breeder Reactor (CRBR) prototype coolant pipe chaseway. Particular emphasis is placed on results at bends in the chaseway. Calculations were performed with three three-dimensional codes: the discrete ordinates radiation transport code TORT and the Monte Carlo radiation transport code MORSE, both developed by Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, which was developed in Japan. The purpose of the calculations is not only to compare the calculational results with the experimental results, but also to compare the results of TORT and MORSE with those of ENSEMBLE. Two types of difference methods were used in the TORT calculations, while the weighted-difference method was applied in the ENSEMBLE calculation. Both TORT and ENSEMBLE produced nearly the same calculational results, but differed in the number of iterations required to converge each neutron group. The two types of difference methods in the TORT calculations also showed no appreciable variance in the number of iterations required; however, a noticeable disparity in the computer times and some variation in the calculational results did occur. Comparisons of the calculational results with the experimental results showed, for the epithermal neutron flux, generally good agreement in the first and second legs and at the first bend, where two-dimensional modeling might be difficult. Results were fair to poor along the centerline of the first leg near the opening to the second leg because of discrete ordinates ray effects. Additionally, the agreement was good throughout the first and second legs for the thermal neutron region. Calculations with MORSE were also made; those results and comparisons are described as well. 8 refs., 4 figs.
Onishi, Yasuo
2013-03-29
Four JAEA researchers visited PNNL for two weeks in February 2013 to learn the PNNL-developed, unsteady, one-dimensional river model TODAM and the PNNL-developed, time-dependent, three-dimensional coastal water model FLESCOT. These codes predict sediment and contaminant concentrations by accounting for sediment-radionuclide interactions, e.g., adsorption/desorption and transport-deposition-resuspension of sediment-sorbed radionuclides. The objective of the river and coastal water modeling is to simulate • 134Cs and 137Cs migration in Fukushima rivers and the coastal water, and • their accumulation in the river and ocean bed along the Fukushima coast. Forecasting the future cesium behavior in the river and coastal water under various scenarios would enable JAEA to assess the effectiveness of various on-land remediation activities and, if required, possible river and coastal water clean-up operations to reduce the contamination of the river and coastal water, agricultural products, fish and other aquatic biota. PNNL presented the following during the JAEA visit to PNNL: • TODAM and FLESCOT theories and mathematical formulations • TODAM and FLESCOT model structures • Past TODAM and FLESCOT applications • Demonstration of these two codes' capabilities by applying them to simple hypothetical river and coastal water cases • Initial application of TODAM to the Ukedo River in Fukushima and JAEA researchers' participation in its modeling. PNNL also presented topics relevant to Fukushima environmental assessment and remediation, including • PNNL molecular modeling and EMSL computer facilities • Cesium adsorption/desorption characteristics • Experiences of connecting molecular science research results to macro model applications to the environment • EMSL tour • Hanford Site road tour. PNNL and JAEA also developed a future course of action for joint research projects on the Fukushima environmental and remediation assessments.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires only 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding with those of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique thus bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
Mangano, Francesco Guido; Zecca, Piero; Luongo, Fabrizia; Iezzi, Giovanna; Mangano, Carlo
2014-01-01
The aim of this study was to achieve aesthetically pleasing soft tissue contours in a severely compromised tooth in the anterior region of the maxilla. For a right maxillary central incisor with localized advanced chronic periodontitis, tooth extraction followed by reconstructive procedures and delayed implant placement was proposed and accepted by the patient. A guided bone regeneration (GBR) technique was employed, with a biphasic calcium-phosphate (BCP) block graft placed in the extraction socket in conjunction with granules of the same material and a resorbable barrier membrane. After 6 months of healing, an implant was installed. The acrylic provisional restoration remained in situ for 3 months and was then replaced with the definitive crown. This ridge reconstruction technique enabled preserving both hard and soft tissues, counteracted vertical and horizontal bone resorption after tooth extraction, and allowed for an ideal three-dimensional implant placement. Localized severe alveolar bone resorption of the anterior maxilla associated with chronic periodontal disease can be successfully treated by means of ridge reconstruction with GBR and delayed implant insertion; the placement of an early-loaded, Morse taper connection implant in the grafted site was effective in creating an excellent clinical aesthetic result and in maintaining it over time. PMID:25431687
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Also, response data can be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
NASA Technical Reports Server (NTRS)
Badinger, Michael A.; Drouant, George J.
1991-01-01
Proposed hand-held tool applies indelible bar code to small parts. Possible to identify parts for management of inventory without tags or labels. Microprocessor supplies bar-code data to impact-printer-like device. Device drives replaceable scribe, which cuts bar code on surface of part. Used to mark serially controlled parts for military and aerospace equipment. Also adapts for discrete marking of bulk items used in food and pharmaceutical processing.
Chatani, K. )
1992-08-01
This report summarizes the calculational results from analyses of a Clinch River Breeder Reactor (CRBR) prototypic coolant pipe chaseway neutron streaming experiment. Comparisons of calculated and measured results are presented, major emphasis being placed on results at bends in the chaseway. Calculations were performed with three three-dimensional radiation transport codes: the discrete ordinates code TORT and the Monte Carlo code MORSE, both developed by the Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, developed in Japan. The calculated results from the three codes are compared (1) with previously calculated DOT3.5 two-dimensional results, (2) among themselves, and (3) with measured results. Calculations with TORT used both the weighted-difference and nodal methods. Only the weighted-difference method was used in ENSEMBLE. When the calculated results were compared to measured results, it was found that calculation-to-experiment (C/E) ratios were good in the regions of the chaseway where two-dimensional modeling might be difficult and where there were no significant discrete ordinates ray effects. Excellent agreement was observed for responses dominated by thermal neutron contributions. MORSE-calculated results and comparisons are described also, and detailed results are presented in an appendix.
NASA Astrophysics Data System (ADS)
Tritzant-Martinez, Yalina; Zeng, Tao; Broom, Aron; Meiering, Elizabeth; Le Roy, Robert J.; Roy, Pierre-Nicholas
2013-06-01
We investigate the analytical representation of potentials of mean force (pmf) using the Morse/long-range (MLR) potential approach. The MLR method had previously been used to represent potential energy surfaces, and we assess its validity for representing free energies. The advantage of the approach is that the potential of mean force data only needs to be calculated in the short- to medium-range region of the reaction coordinate, while the long range can be handled analytically. This can result in significant savings in terms of computational effort, since one does not need to cover the whole range of the reaction coordinate during simulations. The water dimer with rigid monomers, whose interactions are described by the commonly used TIP4P model [W. Jorgensen and J. Madura, Mol. Phys. 56, 1381 (1985); doi: 10.1080/00268978500103111], is used as a test case. We first calculate an "exact" pmf using direct Monte Carlo (MC) integration and term such a calculation as our gold standard (GS). Second, we compare this GS with several MLR fits to the GS to test the validity of the fitting procedure. We then obtain the water dimer pmf using metadynamics simulations in a limited range of the reaction coordinate and show how the MLR treatment allows the accurate generation of the full pmf. We finally calculate the transition state theory rate constant for the water dimer dissociation process using the GS, the GS MLR fits, and the metadynamics MLR fits. Our approach can yield a compact, smooth, and accurate analytical representation of pmf data with reduced computational cost.
Heuristic dynamic complexity coding
NASA Astrophysics Data System (ADS)
Škorupa, Jozef; Slowack, Jürgen; Mys, Stefaan; Lambert, Peter; Van de Walle, Rik
2008-04-01
Distributed video coding is a new video coding paradigm that shifts the computationally intensive motion estimation from encoder to decoder. This results in a lightweight encoder and a complex decoder, as opposed to the predictive video coding scheme (e.g., MPEG-X and H.26X) with a complex encoder and a lightweight decoder. Both schemes, however, do not have the ability to adapt to varying complexity constraints imposed by encoder and decoder, which is an essential ability for applications targeting a wide range of devices with different complexity constraints or applications with temporarily variable complexity constraints. Moreover, the effect of complexity adaptation on the overall compression performance is of great importance and has not yet been investigated. To address this need, we have developed a video coding system with the possibility to adapt itself to complexity constraints by dynamically sharing the motion estimation computations between both components. On this system we have studied the effect of the complexity distribution on the compression performance. This paper describes how motion estimation can be shared using heuristic dynamic complexity and how distribution of complexity affects the overall compression performance of the system. The results show that the complexity can indeed be shared between encoder and decoder in an efficient way at acceptable rate-distortion performance.
Wilson, J.T.; Morlock, S.E.; Baker, N.T.
1997-01-01
Acoustic Doppler current profiler, global positioning system, and geographic information system technology were used to map the bathymetry of Morse and Geist Reservoirs, two artificial lakes used for public water supply in central Indiana. The project was a pilot study to evaluate the use of the technologies for bathymetric surveys. Bathymetric surveys were last conducted in 1978 on Morse Reservoir and in 1980 on Geist Reservoir; those surveys were done with conventional methods using networks of fathometer transects. The 1996 bathymetric surveys produced updated estimates of reservoir volumes that will serve as base-line data for future estimates of storage capacity and sedimentation rates. An acoustic Doppler current profiler and global positioning system receiver were used to collect water-depth and position data from April 1996 through October 1996. All water-depth and position data were imported to a geographic information system to create a data base. The geographic information system then was used to generate water-depth contour maps and to compute the volumes for each reservoir. The computed volume of Morse Reservoir was 22,820 acre-feet (7.44 billion gallons), with a surface area of 1,484 acres. The computed volume of Geist Reservoir was 19,280 acre-feet (6.29 billion gallons), with a surface area of 1,848 acres. The computed 1996 reservoir volumes are less than the design volumes and indicate that sedimentation has occurred in both reservoirs. Cross sections were constructed from the computer-generated surfaces for 1996 and compared to the fathometer profiles from the 1978 and 1980 surveys; analysis of these cross sections also indicates that some sedimentation has occurred in both reservoirs. The acoustic Doppler current profiler, global positioning system, and geographic information system technologies described in this report produced bathymetric maps and volume estimates more efficiently and with comparable or greater resolution than conventional methods.
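The acre-foot and gallon figures quoted above can be cross-checked with a few lines of arithmetic; the conversion factor of 325,851 gallons per acre-foot is the standard U.S. value, and the reservoir volumes are taken from the survey results:

```python
# Cross-check the reported reservoir volumes (acre-feet) against the
# billion-gallon figures quoted in the survey summary.
GALLONS_PER_ACRE_FOOT = 325_851  # standard U.S. conversion, to the nearest gallon

reservoirs = {
    "Morse": 22_820,  # acre-feet, 1996 survey
    "Geist": 19_280,  # acre-feet, 1996 survey
}

for name, acre_feet in reservoirs.items():
    billions = acre_feet * GALLONS_PER_ACRE_FOOT / 1e9
    print(f"{name} Reservoir: {billions:.2f} billion gallons")
```

The computed values agree with the quoted figures to within rounding of the acre-foot totals.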
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).
Hess, Peter
2014-08-07
An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.
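For a Morse-type pair interaction V(r) = De(1 - e^(-a(r-re)))^2, the cleavage strength in such microscopic models is set by the maximum restoring force dV/dr, which has the closed form De*a/2 at r = re + ln(2)/a. A minimal numerical check, with illustrative parameters rather than the paper's fitted values:

```python
import math

def morse_force(r, De=1.0, a=2.0, re=1.0):
    """dV/dr for the Morse potential V(r) = De*(1 - exp(-a*(r - re)))**2."""
    x = math.exp(-a * (r - re))
    return 2.0 * De * a * x * (1.0 - x)

# Scan outward from the equilibrium separation for the maximum force.
rs = [1.0 + i * 1e-4 for i in range(20_000)]  # re .. re + 2
f_max = max(morse_force(r) for r in rs)

# Compare with the analytic maximum De*a/2, reached at r = re + ln(2)/a.
print(f_max, 1.0 * 2.0 / 2.0)  # both approximately 1.0
```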
NASA Astrophysics Data System (ADS)
Ghoumaid, A.; Benamira, F.; Guechi, L.
2016-02-01
It is shown that the application of the Nikiforov-Uvarov method by Ikhdair for solving the Dirac equation with the radial Rosen-Morse potential plus the spin-orbit centrifugal term is inadequate because the required conditions are not satisfied. The energy spectra given are incorrect, and the wave functions are not physically acceptable. We clarify the problem and prove that the spinor wave functions are expressed in terms of the generalized hypergeometric functions 2F1(a, b; c; z). The energy eigenvalues for the bound states are given by the solution of a transcendental equation involving the hypergeometric function.
High Order Modulation Protograph Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
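The circulant (second-stage) lifting can be sketched as follows: every nonzero entry of a small protograph base matrix is replaced by a cyclically shifted Z-by-Z identity block, every zero by a zero block. The base matrix and shift values below are illustrative toys, not the codes claimed in the patent:

```python
import numpy as np

def circulant_lift(base, shifts, Z):
    """Expand a binary protograph matrix by factor Z: entry (i, j) = 1 becomes
    the Z-by-Z identity cyclically shifted by shifts[i][j]; 0 becomes a zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(
                    np.eye(Z, dtype=int), shifts[i][j], axis=1)
    return H

base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])       # toy protograph, for illustration only
shifts = [[1, 0, 3, 0],
          [0, 2, 4, 1]]               # illustrative circulant shift values
H = circulant_lift(base, shifts, Z=5) # 10 x 20 lifted parity-check matrix
```

Because each block is a permutation, the lifting preserves the row and column degrees of the protograph, which is what lets the small graph's properties carry over to the full code.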
Automated detection of semagram-laden images using adaptive neural networks
NASA Astrophysics Data System (ADS)
Cerkez, Paul S.; Cannady, James D.
2010-04-01
Digital steganography has been used extensively for electronic copyright stamping, but also for criminal or covert activities. While a variety of techniques exist for detecting steganography, the identification of semagrams, messages transmitted visually in a non-textual format, remains elusive. The work that will be presented describes the creation of a novel application which uses hierarchical neural network architectures to detect the likely presence of a semagram message in an image. The application was used to detect semagrams containing Morse code messages with over 80% accuracy. These preliminary results indicate a significant advance in the detection of complex semagram patterns.
Adaptation and perceptual norms
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
2007-02-01
We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
NASA Technical Reports Server (NTRS)
1985-01-01
COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods which can be compared for accuracy. MINIVER saved Marquardt three months in computer time and $15,000.
Torney, D. C.
2001-01-01
We have begun to characterize a variety of codes, motivated by potential implementation as (quaternary) DNA n-sequences, with letters denoted A, C, G, and T. The first codes we studied are the most reminiscent of conventional group codes. For these codes, Hamming similarity was generalized so that the score for matched letters takes more than one value, depending upon which letters are matched [2]. These codes consist of n-sequences satisfying an upper bound on the similarities, summed over the letter positions, of distinct codewords. We chose similarity 2 for matches of the letters A and T and 3 for matches of the letters C and G, providing a rough approximation to double-strand bond energies in DNA. An inherent novelty of DNA codes is 'reverse complementation'. The latter may be defined, as follows, not only for alphabets of size four, but, more generally, for any even-size alphabet. All that is required is a matching of the letters of the alphabet: a partition into pairs. Then, the reverse complement of a codeword is obtained by reversing the order of its letters and replacing each letter by its match. For DNA, the matching is AT/CG because these are the Watson-Crick bonding pairs. Reversal arises because two DNA sequences form a double strand with opposite relative orientations. Thus, as will be described in detail, because in vitro decoding involves the formation of double-stranded DNA from two codewords, it is reasonable to assume, for universal applicability, that the reverse complement of any codeword is also a codeword. In particular, self-reverse complementary codewords are expressly forbidden in reverse-complement codes. Thus, an appropriate distance between all pairs of codewords must, when large, effectively prohibit binding between the respective codewords to form a double strand. Only reverse-complement pairs of codewords should be able to bind. For most applications, a DNA code is to be bi-partitioned, such that the reverse-complementary pairs are separated
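The reverse-complement operation and the weighted similarity described above are easy to state in code; the weights 2 for A/T and 3 for C/G follow the text, and everything else is an illustrative sketch:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}  # Watson-Crick pairs
MATCH_WEIGHT = {"A": 2, "T": 2, "C": 3, "G": 3}        # rough bond-energy proxy

def reverse_complement(word):
    """Reverse the letter order, then replace each letter by its match."""
    return "".join(COMPLEMENT[c] for c in reversed(word))

def similarity(u, v):
    """Weighted Hamming similarity, summed over matched letter positions."""
    return sum(MATCH_WEIGHT[a] for a, b in zip(u, v) if a == b)

print(reverse_complement("AACG"))   # CGTT
print(similarity("ACGT", "ACCT"))   # 2 + 3 + 0 + 2 = 7
print(reverse_complement("ACGT"))   # ACGT is self-reverse-complementary,
                                    # exactly the kind of word such codes forbid
```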
Samuel, A G; Kat, D
1998-04-01
Two experiments were used to test whether selective adaptation for speech occurs automatically or instead requires attentional resources. A control condition demonstrated the usual large identification shifts caused by repeatedly presenting an adapting sound (/wa/, with listeners identifying members of a /ba/-/wa/ test series). Two types of distractor tasks were used: (1) Subjects did a rapid series of arithmetic problems during the adaptation periods (Experiments 1 and 2), or (2) they made a series of rhyming judgments, requiring phonetic coding (Experiment 2). A control experiment (Experiment 3) demonstrated that these tasks normally impose a heavy attentional cost on phonetic processing. Despite this, for both experimental conditions, the observed adaptation effect was just as large as in the control condition. This result indicates that adaptation is automatic, operating at an early, preattentive level. The implications of these results for current models of speech perception are discussed. PMID:9599999
Edge equilibrium code for tokamaks
Li, Xujing; Drozdov, Vladimir V.
2014-01-15
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
Generalization of Prism Adaptation
ERIC Educational Resources Information Center
Redding, Gordon M.; Wallace, Benjamin
2006-01-01
Prism exposure produces 2 kinds of adaptive response. Recalibration is ordinary strategic remapping of spatially coded movement commands to rapidly reduce performance error. Realignment is the extraordinary process of transforming spatial maps to bring the origins of coordinate systems into correspondence. Realignment occurs when spatial…
NASA Astrophysics Data System (ADS)
Rahimi, H.
2016-07-01
The present paper attempts to determine the properties of the photonic spectra of Thue-Morse, double-period, and Rudin-Shapiro one-dimensional quasiperiodic multilayers. The supposed structures are constituted by high-temperature HgBa2Ca2Cu3O10 and YBa2Cu3O7 superconductors. Our investigation is restricted to the visible wavelength domain. The results are demonstrated by the calculation of transmittance using the transfer matrix method together with the Gorter-Casimir two-fluid model. It is found that by manipulating parameters such as the incident angle, polarization, the thickness of each layer, and the operating temperature of the superconductors, the transmission spectra exhibit some interesting features. This paper provides a pathway to design tunable total reflectors, optical filters, and optical switching based on superconductor quasiregular photonic crystals.
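Two of the aperiodic layer sequences named above are generated by simple two-letter substitution rules; a minimal sketch, with A standing for one superconductor layer and B for the other (an assumption of this illustration):

```python
# Substitution rules for two of the quasiperiodic stacking sequences.
RULES = {
    "thue-morse":    {"A": "AB", "B": "BA"},
    "double-period": {"A": "AB", "B": "AA"},
}

def generate(rule, generations, seed="A"):
    """Apply a two-letter substitution rule repeatedly to a seed word."""
    word = seed
    for _ in range(generations):
        word = "".join(rule[c] for c in word)
    return word

print(generate(RULES["thue-morse"], 3))     # ABBABAAB
print(generate(RULES["double-period"], 3))  # ABAAABAB
```

The Rudin-Shapiro sequence requires a four-letter substitution on letter pairs and is omitted from this sketch.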
NASA Astrophysics Data System (ADS)
Deta, U. A.; Suparmi, Cari, Husein, A. S.; Yuliani, H.; Khaled, I. K. A.; Luqman, H.; Supriyanto
2014-09-01
The energy spectra and wave function of the Schrodinger equation in D dimensions for the trigonometric Rosen-Morse potential were investigated analytically using the Nikiforov-Uvarov method. This potential captures the essential traits of the quark-gluon dynamics of Quantum Chromodynamics. The approximate energy spectra are given in closed form, and the corresponding approximate wave functions for arbitrary l-state (l ≠ 0) in D dimensions are formulated in the form of differential polynomials. The wave function of this potential is unnormalizable in the general case. The existence of extra dimensions (centrifugal factor) and this potential increase the energy spectra of the system.
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence, end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques, and that is often used interchangeably with speech coding, is the term voice coding. This term is more generic in the sense that the
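A concrete instance of waveform coding is the mu-law companding used in G.711 digital telephony, which maps each sample through a logarithmic curve before quantization so that quiet samples keep more resolution. A sketch of the continuous-form curve with the standard mu = 255, omitting the actual 8-bit quantization step:

```python
import math

MU = 255.0  # standard value in North American / Japanese telephony

def mulaw_compress(x):
    """Map a sample in [-1, 1] through the continuous mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

y = mulaw_compress(0.5)
x = mulaw_expand(y)  # round trip is lossless before quantization
```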
Khorshidi, Hooman; Raoofi, Saeed; Moattari, Afagh; Bagheri, Atoosa; Kalantari, Mohammad Hassan
2016-01-01
Background and Aim. The geometry of the implant-abutment interface (IAI) affects the risk of bacterial leakage and invasion into the internal parts of the implant. The aim of this study was to compare the bacterial leakage of an 11-degree Morse taper IAI with that of a butt joint connection. Materials and Methods. Two implant systems were tested (n = 10 per group): CSM (submerged) and TBR (connect). The deepest inner parts of the implants were inoculated with 2 μL of Streptococcus mutans suspension with a concentration of 10^8 CFU/mL. The abutments were tightened on the implants. The specimens were stored in the incubator at a temperature of 37°C for 14 days and the penetration of the bacterium in the surrounding area was determined by the observation of the solution turbidity and comparison with control specimens. Kaplan-Meier survival curve was traced for the estimation of bacterial leakage and the results between two groups of implants were statistically analyzed by chi-square test. Results. No case of the implant system with the internal conical connection design revealed bacterial leakage in 14 days and no turbidity of the solution was reported for it. In the system with butt joint implant-abutment connection, 1 case showed leakage on the third day, 1 case on the eighth day, and 5 cases on the 13th day. In total, 7 (70%) cases showed bacterial leakage in this system. Significant differences were found between the two groups of implants based on the incidence of bacterial leakage (p < 0.05). Conclusion. The 11-degree Morse taper demonstrated better resistance to microbial leakage than butt joint connection. PMID:27242903
Jones, Dean P.
2015-01-01
Abstract Significance: The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O2 and H2O2 contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Recent Advances: Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Critical Issues: Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Future Directions: Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine. Antioxid. Redox Signal. 23, 734–746. PMID:25891126
AEDS Property Classification Code Manual.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
Video coding with dynamic background
NASA Astrophysics Data System (ADS)
Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung
2013-12-01
Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, the computational time in ME & MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as a reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance compared to the MRF techniques. It also has an inherent capability of scene change detection (SCD) for adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames and two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
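The abstract does not detail the authors' background model, but a simple exponential running average, a common baseline for dynamic background modeling, conveys the idea behind a McFIS-style reference frame; the update rule and learning rate here are illustrative only:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average: static pixels converge to the scene
    background while brief foreground motion is averaged away."""
    return (1.0 - alpha) * background + alpha * frame

# Feed a sequence of identical "frames": the model converges to the scene.
frame = np.full((4, 4), 100.0)   # stand-in for a grayscale frame
background = np.zeros((4, 4))
for _ in range(200):
    background = update_background(background, frame)
```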
Optimality Of Variable-Length Codes
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.
1994-01-01
Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
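The Rice coder described above maps preprocessed nonnegative integers to variable-length codewords and adapts by selecting among code options. A minimal sketch, assuming the simple "unary quotient plus k-bit remainder" Rice/Golomb form; `best_k` stands in for the coder's per-block option selection, and its search range is an arbitrary choice for illustration.

```python
def rice_encode(n, k):
    """Encode a nonnegative integer n with Rice parameter k (k >= 1):
    unary-coded quotient, '0' separator, then k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def rice_decode(bits, k):
    """Decode one Rice codeword from the front of a bit string;
    returns (value, remaining bits)."""
    q = 0
    while bits[q] == '1':
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, bits[q + 1 + k:]

def best_k(samples, k_range=range(1, 8)):
    """Adaptive flavour: pick the k giving the shortest total length,
    mimicking the coder's selection among optional codes per block."""
    return min(k_range,
               key=lambda k: sum(len(rice_encode(s, k)) for s in samples))
```

Low-entropy blocks favour small k (short remainders); high-entropy blocks favour larger k (short unary parts), which is exactly the trade-off the option selection exploits.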
Compressed image transmission based on fountain codes
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Wu, Xinhong; Jiao, L. C.
2011-11-01
In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over wireless channels. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes used for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, are described in the paper, with the UEP scheme performing better than the EEP scheme. The proposed scheme not only adaptively adjusts the length of the fountain code according to the channel loss rate but also reconstructs images even over bad channels.
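Fountain codes generate encoded symbols on the fly as XORs of random subsets of the source symbols, and a receiver can recover the source from any sufficiently large collection of them. The sketch below is a naive LT-style encoder and peeling decoder, not the paper's scheme: the degree is drawn uniformly rather than from a robust-soliton distribution, and each symbol carries its subset indices explicitly.

```python
import random

def lt_encode(data, n_symbols, seed=1):
    """Rateless encoding: each output symbol is the XOR of a random
    subset of source bytes, tagged with that subset's indices."""
    rng = random.Random(seed)
    k = len(data)
    out = []
    for _ in range(n_symbols):
        d = rng.randint(1, k)                     # naive uniform degree
        idx = frozenset(rng.sample(range(k), d))  # (not robust-soliton)
        val = 0
        for i in idx:
            val ^= data[i]
        out.append((idx, val))
    return out

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly resolve degree-1 symbols and
    subtract recovered sources from the remaining symbols."""
    pending = [[set(idx), val] for idx, val in symbols]
    known = {}
    changed = True
    while changed and len(known) < k:
        changed = False
        for sym in pending:
            idx = sym[0]
            for i in [j for j in idx if j in known]:
                idx.discard(i)
                sym[1] ^= known[i]
            if len(idx) == 1:
                (i,) = idx
                if i not in known:
                    known[i] = sym[1]
                    changed = True
    if len(known) == k:
        return bytes(known[i] for i in range(k))
    return None   # not enough symbols received to peel everything

out = lt_encode(b'ride', 12)
decoded = lt_decode(out, 4)   # succeeds or returns None, never wrong data
```

When the XOR algebra closes, the decoded bytes are exactly the source; a UEP variant would simply sample important bit-planes into more symbol subsets than unimportant ones.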
Garrity, George M
2014-01-01
A recent review of the nomenclatural history of Rhodococcus equi and its heterotypic synonyms reveals a situation in which the strict application of the Rules of the International Code of Nomenclature of Prokaryotes has resulted in the renaming of this known zoonotic pathogen, which may reasonably be viewed as a perilous name. This situation can be remedied only by the Judicial Commission rendering an opinion to conserve the name Rhodococcus equi and to reject its earlier heterotypic synonym, Corynebacterium hoagii. PMID:24408953
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
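The sub-grid hierarchy described above can be illustrated in two dimensions by a quad-tree whose blocks split wherever the application demands resolution. This is an illustrative sketch of the data structure only, not PARAMESH's Fortran 90 API; the class and method names are invented.

```python
class Block:
    """A node in the AMR quad-tree: a logically Cartesian patch that can
    split into four half-size children (2-D analogue of the oct-tree
    blocks a library like PARAMESH manages)."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refinement, max_level=3):
        """Recursively split blocks flagged by the refinement criterion."""
        if self.level < max_level and needs_refinement(self):
            half = self.size / 2
            self.children = [Block(self.x + dx * half, self.y + dy * half,
                                   half, self.level + 1)
                             for dx in (0, 1) for dy in (0, 1)]
            for child in self.children:
                child.refine(needs_refinement, max_level)

    def leaves(self):
        """The leaf blocks form the active computational mesh."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine only blocks containing the point (0.1, 0.1): spatial resolution
# varies to satisfy the (here trivial) demands of the application.
root = Block(0.0, 0.0, 1.0)
root.refine(lambda b: b.x <= 0.1 <= b.x + b.size
                  and b.y <= 0.1 <= b.y + b.size)
```

Distributing the leaf blocks across processors is then a tree-partitioning problem, which is how such a package doubles as a domain-decomposition tool.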
A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
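The coupling ("folding") step described above combines the forward fluence on the coupling surface with the adjoint dose importance to give the detector response. A toy sketch with made-up numbers; the real coupling code sums over surface cells, directions, and energy groups, not the three-group single-cell example used here.

```python
def fold(fluence, importance):
    """Dose response = sum over (cell, group) of fluence x importance.
    This is the forward/adjoint folding identity in its simplest form."""
    return sum(f * i for f, i in zip(fluence, importance))

# Three energy groups at a single surface cell (hypothetical values):
fluence    = [2.0e4, 5.0e3, 1.0e3]     # forward fluence, particles/cm^2
importance = [1.0e-9, 4.0e-9, 9.0e-9]  # adjoint dose importance per unit fluence
dose = fold(fluence, importance)
```

Because the adjoint run depends only on the detector and geometry, one Monte Carlo calculation can be folded against many forward sources (orientations, distances), which is the efficiency argument the abstract makes.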
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
Pittsburgh Adapts to Changing Times.
ERIC Educational Resources Information Center
States, Deidre
1985-01-01
The Samuel F. B. Morse School, built in 1874 and closed in 1980, is a historic landmark in Pittsburgh, Pennsylvania. Now the building serves as low-income housing for 70 elderly tenants and is praised as being an imaginative and creative use of an old school structure. (MLF)
PANEL CODE FOR PLANAR CASCADES
NASA Technical Reports Server (NTRS)
Mcfarland, E. R.
1994-01-01
The Panel Code for Planar Cascades was developed as an aid for the designer of turbomachinery blade rows. The effective design of turbomachinery blade rows relies on the use of computer codes to model the flow on blade-to-blade surfaces. Most of the currently used codes model the flow as inviscid, irrotational, and compressible with solutions being obtained by finite difference or finite element numerical techniques. While these codes can yield very accurate solutions, they usually require an experienced user to manipulate input data and control parameters. Also, they often limit a designer in the types of blade geometries, cascade configurations, and flow conditions that can be considered. The Panel Code for Planar Cascades accelerates the design process and gives the designer more freedom in developing blade shapes by offering a simple blade-to-blade flow code. Panel, or integral equation, solution techniques have been used for several years by external aerodynamicists who have developed and refined them into a primary design tool of the aircraft industry. The Panel Code for Planar Cascades adapts these same techniques to provide a versatile, stable, and efficient calculation scheme for internal flow. The code calculates the compressible, inviscid, irrotational flow through a planar cascade of arbitrary blade shapes. Since the panel solution technique is for incompressible flow, a compressibility correction is introduced to account for compressible flow effects. The analysis is limited to flow conditions in the subsonic and shock-free transonic range. Input to the code consists of inlet flow conditions, blade geometry data, and simple control parameters. Output includes flow parameters at selected control points. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 590K of 8 bit bytes. This program was developed in 1982.
Codes with special correlation.
NASA Technical Reports Server (NTRS)
Baumert, L. D.
1964-01-01
Uniform binary codes with special correlation, including transorthogonal and simplex codes, Hadamard matrices, and difference sets.
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
ERIC Educational Resources Information Center
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
Is the Left Hemisphere Specialized for Speech, Language and/or Something Else?
ERIC Educational Resources Information Center
Papcun, George; And Others
1974-01-01
Morse code signals were presented dichotically to Morse code operators and to naive subjects with no knowledge of Morse code. The operators showed right ear superiority, indicating left hemisphere dominance for the perception of dichotically presented Morse code letters. Naive subjects showed the same right ear superiority when presented with a…
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Anderson, Jonas T.
2013-03-15
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: (1) Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs; (2) toric codes and color codes correspond to homological stabilizer codes on distinct graphs; (3) all 2D homological stabilizer codes are found and classified; (4) optimal codes are identified among the homological stabilizer codes.
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
2015-12-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Edge Equilibrium Code (EEC) For Tokamaks
Li, Xujling
2014-02-24
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
Papamichos, Spyros I; Margaritis, Dimitrios; Kotsianidis, Ioannis
2015-01-01
The incidence of cancer in humans is high compared to that in chimpanzees, yet previous analyses have documented that numerous human cancer-related genes are highly conserved in chimpanzee. To date, whether the human genome includes species-specific cancer-related genes that could potentially contribute to higher cancer susceptibility remains obscure. This study focuses on MYEOV, an oncogene encoding two protein isoforms, reported as causally involved in promoting cancer cell proliferation and metastasis in both haematological malignancies and solid tumours. First, we document, via stringent in silico analysis, that MYEOV arose de novo in Catarrhini. We show that the MYEOV short-isoform start codon was evolutionarily acquired after the Catarrhini/Platyrrhini divergence. Throughout the course of Catarrhini evolution, MYEOV acquired a gradually elongated translatable open reading frame (ORF), a gradually shortened translation-regulatory upstream ORF, and alternatively spliced mRNA variants. A point mutation introduced in human allowed the acquisition of the MYEOV long-isoform start codon. Second, we demonstrate the substantial impact of exonized transposable elements on the creation of the MYEOV gene structure. Third, we highlight that the initial part of the MYEOV long-isoform coding DNA sequence was under positive selection pressure during Catarrhini evolution. MYEOV represents a primate orphan gene that acquired, via ORF expansion, a human-protein-specific coding potential. PMID:26568894
Aurilia, Vincenzo; Parracino, Antonietta; Saviano, Michele; Rossi, Mose'; D'Auria, Sabato
2007-08-01
The complete genome of the psychrophilic bacterium Pseudoalteromonas haloplanktis TAC 125, recently published, contains a gene coding for a putative esterase activity corresponding to the ORF PSHAa1385, also classified in the Carbohydrate Active Enzymes database (CAZY) as belonging to family 1 of the carbohydrate esterase proteins. This ORF is 843 bp in length and codes for a protein of 280 amino acid residues. In this study we characterized and cloned the PSHAa1385 gene in Escherichia coli. We also characterized the recombinant protein by biochemical and biophysical methodologies. The PSHAa1385 gene sequence showed significant homology with several carboxyl-esterase and acetyl-esterase genes from gamma-proteobacteria genera and yeast. The recombinant protein exhibited significant activity towards pNP-acetate and alpha- and beta-naphthyl acetate as generic substrates, and 4-methylumbelliferyl p-trimethylammonio cinnamate chloride (MUTMAC) as a specific substrate, indicating that the protein exhibits a feruloyl esterase activity like that displayed by similar enzymes in other organisms. Finally, a three-dimensional model of the protein was built and the amino acid residues involved in the catalytic function of the protein were identified. PMID:17543477
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background, and to improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text-recognition method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image-correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
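Sauvola's adaptive thresholding, the method the paper builds on, sets a per-pixel threshold from the local mean m and standard deviation s as T = m(1 + k(s/R - 1)). A minimal pure-Python sketch; the window size, k, and R below are conventional defaults for illustration, not the paper's tuned values.

```python
def sauvola_threshold(image, w=3, k=0.2, R=128.0):
    """Binarize a grayscale image (list of rows of intensities) with the
    Sauvola rule T = m * (1 + k * (s/R - 1)), where m and s are the local
    mean and standard deviation in a w x w window (clipped at borders)."""
    h, wd = len(image), len(image[0])
    r = w // 2
    out = []
    for y in range(h):
        row = []
        for x in range(wd):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(wd, x + r + 1))]
            m = sum(vals) / len(vals)
            s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
            t = m * (1 + k * (s / R - 1))
            row.append(1 if image[y][x] > t else 0)  # 1 = foreground
        out.append(row)
    return out

# A bright module on a dark background survives the local threshold:
result = sauvola_threshold([[50, 50, 50],
                            [50, 200, 50],
                            [50, 50, 50]])
```

Because the threshold follows the local statistics, uneven illumination across a QR code shifts m locally instead of flipping whole regions, which is why adaptive binarization beats a single global threshold here.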
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
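The plain Repeat-Accumulate core that ARA extends (by placing one more accumulator in front as a precoder) can be sketched in a few lines. The interleaver below defaults to the identity permutation purely for clarity; a real code depends on a well-chosen pseudo-random interleaver.

```python
def ra_encode(bits, q=3, perm=None):
    """Toy Repeat-Accumulate encoder: repeat each bit q times,
    interleave, then accumulate (running XOR).  ARA would additionally
    pass the input through an accumulator precoder first."""
    repeated = [b for b in bits for _ in range(q)]
    if perm is None:
        perm = list(range(len(repeated)))   # identity, for illustration only
    interleaved = [repeated[p] for p in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b              # the accumulator: a rate-1 1/(1+D) filter
        out.append(acc)
    return out
```

Repetition gives the variable-node degree, the accumulator gives cheap degree-2 check structure, and the whole encoder runs in linear time, which is the "simple and very fast encoder" claim in the abstract.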
NASA Technical Reports Server (NTRS)
2005-01-01
The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g. distributed vectored, hybrid and electric drive propulsion concepts).
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK), or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use a convolutional code. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
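Viewing a block code as a terminated convolutional code hinges on tail bits that drive the encoder state back to zero, so the trellis starts and ends in the all-zero state. The sketch below terminates the classic rate-1/2, constraint-length-3 (7,5) encoder; that standard textbook code is chosen for illustration and is not a code taken from the paper.

```python
def conv_encode_terminated(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder (constraint length 3, generators
    7 and 5 octal) with trellis termination: two zero tail bits flush
    the 2-bit memory, so the finite block is a terminated code."""
    state = 0
    out = []
    for b in list(bits) + [0, 0]:          # tail bits terminate the trellis
        reg = (b << 2) | state             # current bit + two previous bits
        for gen in g:
            out.append(bin(reg & gen).count('1') % 2)  # mod-2 inner product
        state = reg >> 1                   # shift the register
    # state is guaranteed to be 0 here: the trellis has terminated
    return out

codeword = conv_encode_terminated([1, 0, 1])   # 2*(3 info + 2 tail) = 10 bits
```

The termination costs two info positions per block (the tail), which is the rate loss a coset-code view of the same trellis makes explicit.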
FEMHD: An adaptive finite element method for MHD and edge modelling
Strauss, H.R.
1995-07-01
This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different manners to model MHD behavior and edge plasma phenomena on a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to examine neutral and charged particle dynamics in the plasma scrape-off region and has been extended into a full MHD-particle code.
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high-quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
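The kind of routine such a synthesis system emits can be illustrated by a hand-written scalar Kalman filter. This sketch assumes a random-walk state model and invented noise parameters; it is an illustration of the filter structure, not AUTOFILTER output.

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise.
    q = process noise variance, r = measurement noise variance,
    (x0, p0) = initial state estimate and its variance (all hypothetical)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                      # predict: random-walk state model
        k_gain = p / (p + r)        # update: Kalman gain
        x += k_gain * (z - x)       # correct estimate toward measurement
        p *= (1 - k_gain)           # shrink the error variance
        estimates.append(x)
    return estimates

est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
# the estimate settles near the underlying constant level
```

User-specified fragments in the synthesis approach would interleave with exactly these predict/update steps, e.g., replacing the measurement model or inserting gating logic between the gain and correction lines.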
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress made by the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
Manually operated coded switch
Barnette, Jon H.
1978-01-01
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
Binary primitive alternant codes
NASA Technical Reports Server (NTRS)
Helgert, H. J.
1975-01-01
In this note we investigate the properties of two classes of binary primitive alternant codes that are generalizations of the primitive BCH codes. For these codes we establish certain equivalence and invariance relations and obtain values of d and d*, the minimum distances of the prime and dual codes.
The Makah Language Program Curricular Code.
ERIC Educational Resources Information Center
Renker, Ann M.
The Makah Language Program Curricular Code (MLPCC) facilitates the systematic storage of Makah curricular information, provides a method of cataloging Makah language materials, is available to all Makah Language Program staff members, and is readily adaptable to any information processing system. The MLPCC consists of a series of symbols…
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1991-01-01
The performance characteristics of certain algebraic geometric codes are discussed. Algebraic geometric codes have good minimum distance properties. On many channels they outperform other comparable block codes; therefore, one would expect them eventually to replace some of the block codes used in communications systems. It is suggested that it is unlikely that they will become useful substitutes for the Reed-Solomon codes used by the Deep Space Network in the near future. However, they may be applicable to systems where the signal-to-noise ratio is sufficiently high so that block codes would be more suitable than convolutional or concatenated codes.
ERIC Educational Resources Information Center
Wedman, John; Wedman, Judy
1985-01-01
The "Animals" program found on the Apple II and IIe system master disk can be adapted for use in the mathematics classroom. Instructions for making the necessary changes and suggestions for using it in lessons related to geometric shapes are provided. (JN)
Davies, Kelvin J A
2016-06-01
Homeostasis is a central pillar of modern Physiology. The term homeostasis was invented by Walter Bradford Cannon in an attempt to extend and codify the principle of 'milieu intérieur,' or a constant interior bodily environment, that had previously been postulated by Claude Bernard. Clearly, 'milieu intérieur' and homeostasis have served us well for over a century. Nevertheless, research on signal transduction systems that regulate gene expression, or that cause biochemical alterations to existing enzymes, in response to external and internal stimuli, makes it clear that biological systems are continuously making short-term adaptations both to set-points, and to the range of 'normal' capacity. These transient adaptations typically occur in response to relatively mild changes in conditions, to programs of exercise training, or to sub-toxic, non-damaging levels of chemical agents; thus, the terms hormesis, heterostasis, and allostasis are not accurate descriptors. Therefore, an operational adjustment to our understanding of homeostasis suggests that the modified term, Adaptive Homeostasis, may be useful especially in studies of stress, toxicology, disease, and aging. Adaptive Homeostasis may be defined as follows: 'The transient expansion or contraction of the homeostatic range in response to exposure to sub-toxic, non-damaging, signaling molecules or events, or the removal or cessation of such molecules or events.' PMID:27112802
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2008-01-01
An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA codes). Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.
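The repeat-accumulate building blocks named in the abstract can be illustrated with a toy sketch of the plain RA chain (repeat, interleave, accumulate). This is an illustrative simplification only: the patented ARA construction adds a precoder and specific protograph structures that are not modeled here, and the repetition factor and interleaver below are arbitrary choices.

```python
def ra_encode(bits, q=3, interleaver=None):
    """Toy repeat-accumulate encoder: repeat each bit q times,
    permute, then accumulate (a running XOR over the sequence)."""
    repeated = [b for b in bits for _ in range(q)]
    if interleaver is None:                 # identity permutation by default
        interleaver = list(range(len(repeated)))
    permuted = [repeated[i] for i in interleaver]
    out, acc = [], 0
    for b in permuted:
        acc ^= b                            # accumulator = mod-2 running sum
        out.append(acc)
    return out

# Two info bits, repetition factor 3, identity interleaver:
print(ra_encode([1, 0]))  # running XOR of 1,1,1,0,0,0 -> [1, 0, 1, 1, 1, 1]
```

In a real system the interleaver is a carefully designed pseudo-random permutation; it is the combination of repetition, interleaving, and accumulation that yields the LDPC-like graph structure.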
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
A local adaptive image descriptor
NASA Astrophysics Data System (ADS)
Zahid Ishraque, S. M.; Shoyaib, Mohammad; Abdullah-Al-Wadud, M.; Monirul Hoque, Md; Chae, Oksam
2013-12-01
The local binary pattern (LBP) is a robust but computationally simple approach to texture analysis. However, LBP performs poorly in the presence of noise and large illumination variation. Thus, a local adaptive image descriptor, termed LAID, is introduced in this proposal. It is a ternary pattern and is able to generate persistent codes to represent microtextures in a given image, especially in noisy conditions. It can also generate stable texture codes if the pixel intensities change abruptly due to illumination changes. Experimental results also show the superiority of the proposed method over other state-of-the-art methods.
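The ternary-pattern idea can be sketched in a few lines. What follows is a generic local ternary pattern with a tolerance band around the center pixel, an illustrative simplification of the noise-robustness mechanism rather than the actual LAID definition (the threshold `t` and neighbor ordering are assumptions of the example):

```python
def local_ternary_pattern(neighbors, center, t=5):
    """Ternary code per pixel: +1, 0, or -1 for each neighbor depending
    on whether it clearly exceeds, matches (within tolerance t), or
    falls clearly below the center intensity."""
    code = []
    for n in neighbors:
        if n >= center + t:
            code.append(1)
        elif n <= center - t:
            code.append(-1)
        else:
            code.append(0)   # small fluctuations map to 0 -> noise-stable
    return code

# Neighbors within +/-t of the center map to 0, so small noise
# leaves the code unchanged:
print(local_ternary_pattern([100, 98, 120, 80], center=100, t=5))
# -> [0, 0, 1, -1]
```

The binary LBP is the special case t = 0 with only two symbols, which is why it flips codes under slight noise while a ternary scheme does not.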
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
Watson, Bobby L.; Aeby, Ian
1982-01-01
An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Code System for Analysis of Piping Reliability Including Seismic Events.
1999-04-26
Version 00 PC-PRAISE is a probabilistic fracture mechanics computer code developed for IBM or IBM compatible personal computers to estimate probabilities of leaks and breaks in nuclear power plant cooling piping. It was adapted from LLNL's PRAISE computer code.
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
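The direct-solution approach, sample matrix inversion (SMI), can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: the covariance is estimated from interference-plus-noise snapshots, a small diagonal loading term is added for numerical stability, and the weights are normalized to unit response in the steering direction (the loading value is an arbitrary choice of the example):

```python
import numpy as np

def smi_weights(snapshots, steering):
    """Sample-matrix-inversion beamformer weights.
    snapshots: (elements x samples) array of interference-plus-noise data.
    steering:  (elements,) steering vector toward the wanted signal."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    R += 1e-3 * np.eye(R.shape[0])          # diagonal loading for stability
    u = np.linalg.solve(R, steering)        # R^-1 s, the Wiener-Hopf direction
    return u / (steering.conj() @ u)        # unit gain toward the steering vector
```

With white noise only, the weights reduce to a scaled copy of the steering vector; with a strong interferer in the snapshots, the solution places a null on it while preserving gain on the wanted direction.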
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code
Hua, D; Fowler, T
2004-06-15
A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.
Comparison of translation loads for standard and alternative genetic codes
2010-01-01
Background The (almost) universality of the genetic code is one of the most intriguing properties of cellular life. Nevertheless, several variants of the standard genetic code have been observed, which differ in one or several of the 64 codon assignments and occur mainly in mitochondrial genomes and in nuclear genomes of some bacterial and eukaryotic parasites. These variants are usually considered to be the result of non-adaptive evolution. It has been shown that the standard genetic code is preferable to randomly assembled codes for its ability to reduce the effects of errors in protein translation. Results Using a genotype-to-phenotype mapping based on a quantitative model of protein folding, we compare the standard genetic code to seven of its naturally occurring variants with respect to the fitness loss associated with mistranslation and mutation. These fitness losses are computed through computer simulations of protein evolution with mutations that are either neutral or lethal, and different mutation biases, which influence the balance between unfolding and misfolding stability. We show that the alternative codes may produce significantly different mutation and translation loads, particularly for genomes evolving with a rather large mutation bias. Most of the alternative genetic codes are found to be at a disadvantage relative to the standard code, in agreement with the view that the change of genetic code is a mutationally driven event. Nevertheless, one of the studied alternative genetic codes is predicted to be preferable to the standard code for a broad range of mutation biases. Conclusions Our results show that, with one exception, the standard genetic code is generally better able to reduce the translation load than the naturally occurring variants studied here. Besides this exception, some of the other alternative genetic codes are predicted to be better adapted for extreme mutation biases. Hence, the fixation of alternative genetic codes might be a neutral or nearly neutral process.
Characterizing the effects of multidirectional motion adaptation
McGovern, David P.; Roach, Neil W.; Webb, Ben S.
2014-01-01
Recent sensory experience can alter our perception and change the response characteristics of sensory neurons. These effects of sensory adaptation are a ubiquitous property of perceptual systems and are believed to be of fundamental importance to sensory coding. Yet we know little about how adaptation to stimulus ensembles affects our perception of the environment as most psychophysical experiments employ adaptation protocols that focus on prolonged exposure to a single visual attribute. Here, we investigate how concurrent adaptation to multiple directions of motion affects perception of subsequently presented motion using the direction aftereffect. In different conditions, observers adapted to a stimulus ensemble comprised of dot directions sampled from different distributions or to bidirectional motion. Increasing the variance of normally distributed directions reduced the magnitude of the peak direction aftereffect and broadened its tuning profile. Sampling of asymmetric Gaussian and uniform distributions resulted in shifts of direction aftereffect tuning profiles consistent with changes in the perceived global direction of the adapting stimulus. Adding dots in a direction opposite or orthogonal to a unidirectional adapting stimulus led to a pronounced reduction in the direction aftereffect. A simple population-coding model, in which adaptation selectively alters the responsivity of direction-selective neurons, can accommodate the effects of multidirectional adaptation on the perceived direction of motion. PMID:25368339
The Clawpack Community of Codes
NASA Astrophysics Data System (ADS)
Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.
2014-12-01
Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws but over the years has extended well beyond this role. Today a community of open-source codes has been developed that addresses a multitude of different needs including non-conservative balance laws, high-order accurate methods, and parallelism while remaining extensible and easy to use, largely by the judicious use of Python and the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows including storm surge and debris-flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since become both a testing ground for new algorithmic advances in the Clawpack framework and an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python technologies, both packages have also benefited from the focus on reproducibility in the Python community, notably IPython notebooks.
A co-designed equalization, modulation, and coding scheme
NASA Technical Reports Server (NTRS)
Peile, Robert E.
1992-01-01
The commercial impact and technical success of Trellis Coded Modulation seems to illustrate that, if Shannon's capacity is going to be neared, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise and has tried to combine the coding/modulation with adaptive equalization. The motive is to gain similar advances on less perfect or idealized channels.
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-01-01
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-02-20
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
A description is given of multiple turbo codes and a suitable decoder structure derived from an approximation to the maximum a posteriori probability (MAP) decision rule, which is substantially different from the decoder for two-code-based encoders.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
NASA Technical Reports Server (NTRS)
Goerke, W. S.
1972-01-01
A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.
Color code identification in coded structured light.
Zhang, Xu; Li, Youfu; Zhu, Limin
2012-08-01
Color code is widely employed in coded structured light to reconstruct the three-dimensional shape of objects. Before determining the correspondence, a very important step is to identify the color code. Until now, the lack of an effective evaluation standard has hindered the progress in this unsupervised classification. In this paper, we propose a framework based on the benchmark to explore the new frontier. Two basic facets of the color code identification are discussed, including color feature selection and clustering algorithm design. First, we adopt analysis methods to evaluate the performance of different color features, and the order of these color features in the discriminating power is concluded after a large number of experiments. Second, in order to overcome the drawback of K-means, a decision-directed method is introduced to find the initial centroids. Quantitative comparisons affirm that our method is robust with high accuracy, and it can find or closely approach the global peak. PMID:22859022
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
ERIC Educational Resources Information Center
Losee, Robert M.
1997-01-01
Proposes a model for digital library and hypermedia organizations that is adaptive, providing different conceptual orderings to support browsing for different individuals' or groups' needs. Highlights include types of links, document ordering and the Gray code (a binary programming code), adaptive classification, and an economic model for document…
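The Gray code mentioned in the abstract has a compact conversion in code. This is a standard binary-reflected Gray code sketch, included only to make the ordering concrete; how the cited model applies it to document ordering is not reproduced here:

```python
def to_gray(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by XOR-folding the shifted value back in."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([to_gray(i) for i in range(4)])  # -> [0, 1, 3, 2]
```

The single-bit-change property is what makes Gray orderings attractive for arranging similar items adjacently, as in the browsing model described above.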
NASA Astrophysics Data System (ADS)
Yu, Ya-Huei; Ho, Chien-Peng; Tsai, Chun-Jen
2007-12-01
Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a preencoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.
One Hidden Object, Two Spatial Codes: Young Children's Use of Relational and Vector Coding
ERIC Educational Resources Information Center
Uttal, David H.; Sandstrom, Lisa B.; Newcombe, Nora S.
2006-01-01
An important characteristic of mature spatial cognition is the ability to encode spatial locations in terms of relations among landmarks as well as in terms of vectors that include distance and direction. In this study, we examined children's use of the relation "middle" to code the location of a hidden toy, using a procedure adapted from prior…
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
1993-11-01
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named XSOR. The purpose of XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Greg Flach, Frank Smith
2014-05-14
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
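The write-inputs / run / read-outputs workflow described above can be illustrated with a short Python sketch. All names here are hypothetical, it is not the actual DLLExternalCode interface, and the one-value-per-line file format is an assumption of the example:

```python
import os
import subprocess
import tempfile

def run_external(inputs, cmd, workdir=None):
    """Illustrative wrapper in the DLLExternalCode style: write the input
    values to a file, invoke the external code on it, then read the
    external code's output file back as a list of floats."""
    workdir = workdir or tempfile.mkdtemp()
    in_path = os.path.join(workdir, "inputs.txt")
    out_path = os.path.join(workdir, "outputs.txt")
    with open(in_path, "w") as f:
        f.write("\n".join(str(x) for x in inputs))   # one value per line
    subprocess.run(cmd + [in_path, out_path], check=True)
    with open(out_path) as f:
        return [float(line) for line in f]
```

In the real DLL, the file format and the command line are driven by the instructions file rather than hard-coded as they are in this sketch.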
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
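Backward adaptation of the kind described, where the predictor is driven only by already-coded samples so the decoder can mirror it with essentially no side information, can be sketched with a first-order LMS predictor. This is an illustrative simplification, not the paper's coder (which exploits temporal, spatial, and spectral redundancy); the step size `mu` is an arbitrary choice:

```python
def adaptive_predict_residuals(samples, mu=0.01):
    """Backward-adaptive first-order predictor for lossless coding:
    emit integer residuals and update the weight from coded data only."""
    w, prev = 0.0, 0
    residuals = []
    for s in samples:
        pred = round(w * prev)      # integer prediction -> lossless residual
        e = s - pred
        residuals.append(e)
        w += mu * prev * e          # LMS update, visible to the decoder too
        prev = s
    return residuals

def reconstruct(residuals, mu=0.01):
    """Decoder mirror: identical predictor updates, so reconstruction
    is exact with no transmitted predictor coefficients."""
    w, prev = 0.0, 0
    out = []
    for e in residuals:
        s = round(w * prev) + e
        out.append(s)
        w += mu * prev * e
        prev = s
    return out
```

Because encoder and decoder perform the identical update sequence, the round trip is exact; the residuals are then handed to an entropy coder.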
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.
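The code family behind these techniques can be illustrated with a minimal Golomb-Rice codec: a unary quotient followed by a k-bit binary remainder. The parameter k is fixed here for clarity; an adaptive coder of the kind the abstract describes chooses k per block from the data statistics (the string-of-bits representation is a convenience of this sketch):

```python
def rice_encode(n, k):
    """Rice code of nonnegative integer n with parameter k:
    quotient n >> k in unary (1s then a 0), then the low k bits of n."""
    q = n >> k
    rem = format(n & ((1 << k) - 1), "0{}b".format(k)) if k > 0 else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Invert rice_encode: count leading 1s, skip the 0, read k bits."""
    q = 0
    i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1                                   # skip the terminating 0
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r

print(rice_encode(9, k=2))  # q=2, r=1 -> "110" + "01" = "11001"
```

Small values get short codewords, so the code is efficient whenever the symbol probabilities are ordered and roughly geometric, the situation the preprocessing described above is designed to produce.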
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
Efficient sensory cortical coding optimizes pursuit eye movements.
Liu, Bing; Macellaio, Matthew V; Osborne, Leslie C
2016-01-01
In the natural world, the statistics of sensory stimuli fluctuate across a wide range. In theory, the brain could maximize information recovery if sensory neurons adaptively rescale their sensitivity to the current range of inputs. Such adaptive coding has been observed in a variety of systems, but the premise that adaptation optimizes behaviour has not been tested. Here we show that adaptation in cortical sensory neurons maximizes information about visual motion in pursuit eye movements guided by that cortical activity. We find that gain adaptation drives a rapid (<100 ms) recovery of information after shifts in motion variance, because the neurons and behaviour rescale their sensitivity to motion fluctuations. Both neurons and pursuit rapidly adopt a response gain that maximizes motion information and minimizes tracking errors. Thus, efficient sensory coding is not simply an ideal standard but a description of real sensory computation that manifests in improved behavioural performance. PMID:27611214
Hardware-based JPEG 2000 video coding system
NASA Astrophysics Data System (ADS)
Schuchter, Arthur R.; Uhl, Andreas
2007-02-01
In this paper, we discuss a hardware-based low-complexity JPEG 2000 video coding system. The hardware system is based on a software simulation system in which temporal redundancy is exploited by coding differential frames arranged in an adaptive GOP structure, where the GOP structure itself is determined by statistical analysis of the differential frames. We present a hardware video coding architecture which applies this inter-frame coding system to a Digital Signal Processor (DSP). The system consists mainly of a microprocessor (ADSP-BF533 Blackfin Processor) and a JPEG 2000 chip (ADV202).
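The adaptive GOP decision can be sketched as follows; the mean-absolute-difference threshold is a hypothetical stand-in for the statistical analysis used in the actual system. A frame is coded as a differential frame ('D') against its predecessor unless the statistics indicate a large content change, in which case a new GOP starts with an intra-coded frame ('I'):

```python
def plan_gop(frames, thresh=8.0):
    # Each frame is a flat list of pixel values.  'I' = code the frame
    # itself (new GOP), 'D' = code the differential frame vs. predecessor.
    modes = ["I"]                       # the first frame always starts a GOP
    for prev, cur in zip(frames, frames[1:]):
        mad = sum(abs(p - q) for p, q in zip(cur, prev)) / len(cur)
        modes.append("I" if mad > thresh else "D")
    return modes
```

A scene cut produces a large mean absolute difference and therefore restarts the GOP, keeping the differential frames sparse and cheap to code.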
Parallelized tree-code for clusters of personal computers
NASA Astrophysics Data System (ADS)
Viturro, H. R.; Carpintero, D. D.
2000-02-01
We present a tree-code for integrating the equations of motion of collisionless systems, which has been fully parallelized and adapted to run on several PC-based processors simultaneously, using the well-known PVM message-passing library. SPH algorithms, not yet included, may be easily incorporated into the code. The code is written in ANSI C; it can be freely downloaded from a public ftp site. Simulations of collisions of galaxies are presented, with which the performance of the code is tested.
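The abstract does not name the integrator, but N-body tree-codes of this kind typically advance the equations of motion with a symplectic kick-drift-kick (leapfrog) scheme; here is a minimal sketch in which any acceleration function can be plugged in (the tree-based force evaluation itself is not shown):

```python
def leapfrog(pos, vel, accel_fn, dt, steps):
    # Kick-drift-kick leapfrog: second-order accurate and symplectic,
    # which keeps the energy error bounded over long integrations.
    a = accel_fn(pos)
    for _ in range(steps):
        vel = [v + 0.5 * dt * ai for v, ai in zip(vel, a)]   # half kick
        pos = [p + dt * v for p, v in zip(pos, vel)]         # drift
        a = accel_fn(pos)
        vel = [v + 0.5 * dt * ai for v, ai in zip(vel, a)]   # half kick
    return pos, vel
```

For a harmonic test force, the total energy stays within O(dt^2) of its initial value, the property that makes leapfrog the standard choice for collisionless-system simulations.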
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
Elder, D
1984-06-01
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward. PMID:6748695
Robinson, David; Comp, Dip; Schulz, Erich; Brown, Philip; Price, Colin
1997-01-01
Abstract The Read Codes are a hierarchically-arranged controlled clinical vocabulary introduced in the early 1980s and now consisting of three maintained versions of differing complexity. The code sets are dynamic, and are updated quarterly in response to requests from users including clinicians in both primary and secondary care, software suppliers, and advice from a network of specialist healthcare professionals. The codes' continual evolution of content, both across and within versions, highlights tensions between different users and uses of coded clinical data. Internal processes, external interactions and new structural features implemented by the NHS Centre for Coding and Classification (NHSCCC) for user interactive maintenance of the Read Codes are described, and over 2000 user feedback episodes received over a 15-month period are analysed. PMID:9391934
NASA Astrophysics Data System (ADS)
Bravyi, Sergey
Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise, but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLG) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLG are highly desirable since they introduce no overhead and do not spread errors. It was previously known that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π / 2 phase shift. The second code, which we call a doubled color code, provides a transversal T-gate, where T is the π / 4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on joint work with Andrew Cross.
Phonological coding during reading.
Leinenger, Mallorie
2014-11-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. PMID:25150679
Adaptive Precoded MIMO for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Nabilla, A. F.; Tiong, T. C.
2015-04-01
Long-Term Evolution (LTE) and Long-Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates in high spectrum bandwidth and high spectral efficiencies. Technically, pre-coding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-and-noise ratio (SINR) across the multiple receiver terminals. However, fixed pre-coding is not reliable enough to fully utilize the information transfer rate as channel conditions vary with bandwidth. Thus, adaptive pre-coding is proposed. It applies pre-coding matrix indicator (PMI) channel-state feedback, making it possible to change the pre-coding codebook accordingly, thus achieving a higher data rate than fixed pre-coding.
NASA Technical Reports Server (NTRS)
1988-01-01
American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed aluminum anodizing process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700 degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.
Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.
1985-03-01
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.
Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
NASA Astrophysics Data System (ADS)
Tang, Bin; Yang, Shenghao; Ye, Baoliu; Yin, Yitong; Lu, Sanglu
2015-12-01
Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During the network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes that have an analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91% to 97% of the optimum and outperform the state-of-the-art overlapped chunked codes significantly.
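To make the RLNC-within-a-chunk idea concrete, here is a minimal sketch over GF(2) (an illustrative simplification; practical systems typically work over GF(2^8)). Packets are bit vectors stored as integers, a coded packet is a random 0/1 combination of the chunk's packets, and the chunk becomes decodable once the received coefficient vectors reach full rank:

```python
def encode_chunk(chunk, coeffs):
    # One coded packet: the GF(2) linear combination (XOR) of the chunk's
    # packets selected by a 0/1 coefficient vector (drawn at random in practice).
    payload = 0
    for c, pkt in zip(coeffs, chunk):
        if c:
            payload ^= pkt
    return payload

def gf2_rank(rows):
    # Gaussian elimination over GF(2) on coefficient vectors stored as
    # integers; the chunk is decodable once rank == chunk size.
    rows = list(rows)
    rank = 0
    for i in range(len(rows)):
        if rows[i] == 0:
            continue
        rank += 1
        msb = 1 << (rows[i].bit_length() - 1)
        for j in range(i + 1, len(rows)):
            if rows[j] & msb:
                rows[j] ^= rows[i]
    return rank
```

Because each coded packet only combines packets within one small chunk, encoding cost stays negligible regardless of the total number of input packets.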
The multiple codes of nucleotide sequences.
Trifonov, E N
1989-01-01
Nucleotide sequences carry genetic information of many different kinds, not just instructions for protein synthesis (triplet code). Several codes of nucleotide sequences are discussed including: (1) the translation framing code, responsible for correct triplet counting by the ribosome during protein synthesis; (2) the chromatin code, which provides instructions on appropriate placement of nucleosomes along the DNA molecules and their spatial arrangement; (3) a putative loop code for single-stranded RNA-protein interactions. The codes are degenerate and corresponding messages are not only interspersed but actually overlap, so that some nucleotides belong to several messages simultaneously. Tandemly repeated sequences frequently considered as functionless "junk" are found to be grouped into certain classes of repeat unit lengths. This indicates some functional involvement of these sequences. A hypothesis is formulated according to which the tandem repeats are given the role of weak enhancer-silencers that modulate, in a copy number-dependent way, the expression of proximal genes. Fast amplification and elimination of the repeats provides an attractive mechanism of species adaptation to a rapidly changing environment. PMID:2673451
Research on Universal Combinatorial Coding
Lu, Jun; Zhang, Zhuo; Mo, Juan
2014-01-01
The conception of universal combinatorial coding is proposed. Relations exist, to a greater or lesser degree, among many coding methods, which suggests that a universal coding method is objectively existent; it can be a bridge connecting many coding methods. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability statistics of the information source, and it has characteristics spanning all three coding branches. The relationship between universal combinatorial coding and a variety of coding methods is analyzed, and many application technologies of this coding method are investigated. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. The multicharacteristic and multiapplication nature of universal combinatorial coding is unique among existing coding methods. Universal combinatorial coding has theoretical research and practical application value. PMID:24772019
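One classical, concrete instance of lossless coding built directly on combinatorics (shown here to illustrate the flavor, not as the paper's scheme) is enumerative coding: a length-n bit string of known weight k is represented by its lexicographic index among all C(n, k) such strings, so the index needs only ceil(log2 C(n, k)) bits:

```python
from math import comb

def rank(bits):
    # Lexicographic index of a fixed-weight bit string among all strings
    # of the same length and weight.
    n, k, r = len(bits), sum(bits), 0
    for i, b in enumerate(bits):
        if b:
            r += comb(n - i - 1, k)   # strings with a 0 at this position come first
            k -= 1
    return r

def unrank(n, k, r):
    # Inverse mapping: index -> bit string of length n and weight k.
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)
        if r >= c:
            bits.append(1)
            r -= c
            k -= 1
        else:
            bits.append(0)
    return bits
```

Like the universal scheme described above, this code uses no probability model of the source; the combinatorial structure (length and weight) alone determines the codeword.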
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic-model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
CFD code evaluation for internal flow modeling
NASA Technical Reports Server (NTRS)
Chung, T. J.
1990-01-01
Research on the computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, researchers include applications of supercomputing to reacting flow Navier-Stokes equations including shock waves and turbulence and combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications on rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.
Robust speech coding using microphone arrays
NASA Astrophysics Data System (ADS)
Li, Zhao
1998-09-01
To achieve robustness and efficiency for voice communication in noise, the noise suppression and bandwidth compression processes are combined to form a joint process using input from an array of microphones. An adaptive beamforming technique with a set of robust linear constraints and a single quadratic inequality constraint is used to preserve the desired signal and to cancel directional plus ambient noise in a small room environment. This robustly constrained array processor is found to be effective in limiting signal cancelation over a wide range of input SNRs (-10 dB to +10 dB). The resulting intelligibility gains (8-10 dB) provide significant improvement to subsequent CELP coding. In addition, the desired speech activity is detected by estimating Target-to-Jammer Ratios (TJR) using subband correlations between different microphone inputs or using signals within the Generalized Sidelobe Canceler directly. These two novel techniques of speech activity detection for coding are studied thoroughly in this dissertation. Each is subsequently incorporated with the adaptive array and a 4.8 kbps CELP coder to form a Variable Bit Rate (VBR) coder with noise canceling and Spatial Voice Activity Detection (SVAD) capabilities. This joint noise suppression and bandwidth compression system demonstrates large improvements in desired speech quality after coding, accurate desired speech activity detection in various types of interference, and a reduction in the information bits required to code the speech.
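The principle behind correlation-based speech activity detection can be sketched in a toy full-band form (the dissertation's estimators operate on subbands and on Generalized Sidelobe Canceler signals, which are not modeled here): desired speech arrives coherently at the microphones, while diffuse noise is largely uncorrelated between them:

```python
def normalized_correlation(x, y):
    # Zero-lag normalized cross-correlation of two microphone frames.
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den if den else 0.0

def speech_active(mic1, mic2, thresh=0.7):
    # Coherent desired speech correlates strongly across microphones;
    # diffuse ambient noise does not.  The threshold is a hypothetical value.
    return normalized_correlation(mic1, mic2) > thresh
```

A VBR coder can then spend full rate only on frames flagged as active, which is the source of the bit-rate reduction described above.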
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Lichenase and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2000-08-15
The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-.beta.-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.
ERIC Educational Resources Information Center
Million, June
2004-01-01
Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision of instilling codes of conduct in students based on work quality, respect, safety and courtesy. She suggests that communicating the code…
ERIC Educational Resources Information Center
Division for Early Childhood, Council for Exceptional Children, 2009
2009-01-01
The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
Lakhani, Gopal
2003-01-01
It is a well-observed characteristic that when a DCT block is traversed in the zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, which all move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results are presented to compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. One of our methods reduces the total image code size by an average of 4%. Our methods can also be used for progressive image transmission, and hence experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method. PMID:18237897
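To make the band idea concrete, here is a small sketch that generates the zigzag traversal order of a block and splits the zigzag-ordered coefficients into bands; the band boundaries below are hypothetical placeholders, since the article determines its boundaries empirically:

```python
def zigzag_order(n=8):
    # Positions of an n x n block in zigzag order: walk the anti-diagonals,
    # alternating direction so the path zigzags from DC to the far corner.
    return sorted(((y, x) for y in range(n) for x in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def split_bands(coeffs, boundaries=(2, 5)):
    # Split zigzag-ordered coefficients into bands, each of which could be
    # Huffman-coded with its own code table.
    bands, start = [], 0
    for b in list(boundaries) + [len(coeffs)]:
        bands.append(coeffs[start:b])
        start = b
    return bands
```

Because coefficient statistics differ systematically along the zigzag path, a code table tuned per band can assign shorter codewords than a single table covering the whole block.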
Binary concatenated coding system
NASA Technical Reports Server (NTRS)
Monford, L. G., Jr.
1973-01-01
Coding, using 3-bit binary words, is applicable to any measurement having integer scale up to 100. A system using 6-bit data words can be expanded to read from 1 to 10,000, and 9-bit data words can increase the range to 1,000,000. The code may be "read" directly by observation after memorizing a simple listing of 9's and 10's.
Computerized mega code recording.
Burt, T W; Bock, H C
1988-04-01
A system has been developed to facilitate recording of advanced cardiac life support mega code testing scenarios. By scanning a paper "keyboard" using a bar code wand attached to a portable microcomputer, the person assigned to record the scenario can easily generate an accurate, complete, timed, and typewritten record of the given situations and the obtained responses. PMID:3354937
NASA Technical Reports Server (NTRS)
Baumert, L. D.; Mceliece, R. J.; Rumsey, H., Jr.
1979-01-01
In a previous paper Pierce considered the problem of optical communication from a novel viewpoint, and concluded that performance will likely be limited by issues of coding complexity rather than by thermal noise. This paper reviews the model proposed by Pierce and presents some results on the analysis and design of codes for this application.
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Energy Conservation Code Decoded
Cole, Pam C.; Taylor, Zachary T.
2006-09-01
Designing an energy-efficient, affordable, and comfortable home is a lot easier thanks to a slim, easier-to-read booklet, the 2006 International Energy Conservation Code (IECC), published in March 2006. States, counties, and cities have begun reviewing the new code as a potential upgrade to their existing codes. Maintained under the public consensus process of the International Code Council, the IECC is designed to do just what its title says: promote the design and construction of energy-efficient homes and commercial buildings. "Homes" in this case means traditional single-family homes, duplexes, condominiums, and apartment buildings having three or fewer stories. The U.S. Department of Energy, which played a key role in proposing the changes that resulted in the new code, is offering a free training course that covers the residential provisions of the 2006 IECC.
Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.
2013-10-01
The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.
Adaptive prediction trees for image compression.
Robinson, John A
2006-08-01
This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive runlength/Huffman coders. Color palettization and order statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671
Adaptation and Adaptability: The Bellefaire Followup Study.
ERIC Educational Resources Information Center
Allerhand, Melvin E.; And Others
A research team studied influences, adaptation, and adaptability in 50 poorly adapting boys at Bellefaire, a regional child care center for emotionally disturbed children. The team attempted to gauge the success of the residential treatment center in terms of the psychological patterns and role performances of the boys during individual casework…
Parallel Adaptive Multi-Mechanics Simulations using Diablo
Parsons, D; Solberg, J
2004-12-03
Coupled multi-mechanics simulations (such as thermal-stress and fluid-structure interaction problems) are of substantial interest to engineering analysts. In addition, adaptive mesh refinement techniques present an attractive alternative to current mesh generation procedures and provide quantitative error bounds that can be used for model verification. This paper discusses spatially adaptive multi-mechanics implicit simulations using the Diablo computer code. (U)
The Minnesota Adaptive Instructional System: An Intelligent CBI System.
ERIC Educational Resources Information Center
Tennyson, Robert D.; And Others
1984-01-01
Briefly reviews theoretical developments in adaptive instructional systems, defines six characteristics of intelligent computer-based management systems, and presents the theory and research of the Minnesota Adaptive Instructional System (MAIS). Generic programming codes for amount and sequence of instruction, instructional display time, and advisement…
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations, and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
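As a minimal software illustration of the noiseless coding stage described above (a generic Huffman table builder, not the paper's AAC hardware architecture; function names are ours):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from an iterable of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap items: (weight, tiebreaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        # Merging prepends one more bit to every code in each subtree.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(symbols, table):
    """Concatenate the codewords for a symbol sequence."""
    return "".join(table[s] for s in symbols)
```

The resulting code is prefix-free, and more frequent symbols receive codewords no longer than rarer ones, which is what shortens the output bitstream.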
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video material.
Coded aperture computed tomography
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Brady, David J.
2009-08-01
Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most X-ray-transform-based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. This multiplexing of coded measurements allows for compression of measurements from a compressed sensing perspective. Compressed sensing (CS) shows that if the object has a sparse representation in some basis, then a number of random projections, typically far fewer than the Shannon sampling rate prescribes, captures enough information for a highly accurate reconstruction of the object. This paper investigates the role of coded apertures in X-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. To this end, we construct a unified analysis using the RST and CASSI measurement models. We also propose a novel compressive X-ray tomography measurement scheme which likewise exploits coding and multiplexing, and hence shares the analysis of the other two XTMs. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical random projections by "regularizing" the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.
Nelson, R.N.
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute standard Z39.23-1983, Standard Technical Report Number (STRN): Format and Creation. The STRN provides one of the primary methods of identifying a specific technical report and consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report-issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report code, followed by the issuing installations. Part II lists the issuing organizations followed by their assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
NASA Astrophysics Data System (ADS)
Thy, Peter; Lesher, Charles E.; Nielsen, Troels F. D.; Brooks, C. Kent
2008-10-01
We reject Morse's [Morse, S.A., 2008. Principles of applied experimental igneous petrology: a comment on "Experimental Constraints on the Skaergaard liquid line of descent" by Thy, Lesher, Nielsen, and Brooks, 2006, Lithos 92: 154-180. Lithos 105, pp. 395-399.] contention that our original study violated established principles of applied experimental igneous petrology. Such principles dictate that experimental and forward models are carefully tested against field observations before petrologic processes can be verified.
Weaver, H.J.
1981-11-01
The TRANSF code is a semi-interactive FORTRAN IV program designed to calculate the modal parameters of a (structural) system by performing a least-squares parameter fit to measured transfer function data. The code is available at LLNL on both the 7600 and the Cray machines. The transfer function data to be fit are read into the code via a disk file. The primary mode of output is FR80 graphics, although it is also possible to have results written either to the TTY or to a disk file.
RBMK-LOCA-Analyses with the ATHLET-Code
Petry, A.; Domoradov, A.; Finjakin, A.
1995-09-01
The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes for the Russian-designed RBMK reactor. One point of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) for RBMK-specific safety problems. This paper contains a short description of the RBMK-1000 reactor circuit. Furthermore, the main features of the thermal-hydraulic code ATHLET are presented. The main assumptions for the ATHLET-RBMK model are discussed. As an example of the application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. This paper gives an overview of some of the problems posed, and the experience gained, in applying Western best-estimate codes to RBMK calculations.
Adaptive Image Denoising by Mixture Adaptation.
Luo, Enming; Chan, Stanley H; Nguyen, Truong Q
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593
FORTRAN code-evaluation system
NASA Technical Reports Server (NTRS)
Capps, J. D.; Kleir, R.
1977-01-01
Automated code evaluation system can be used to detect coding errors and unsound coding practices in any ANSI FORTRAN IV source code before they can cause execution-time malfunctions. System concentrates on acceptable FORTRAN code features which are likely to produce undesirable results.
Recent Developments in the Community Code ASPECT
NASA Astrophysics Data System (ADS)
Heister, T.; Bangerth, W.; Dannberg, J.; Gassmoeller, R.
2015-12-01
The Computational Geosciences have long used community codes to provide simulation capabilities to large numbers of users. We here report on the mantle convection code ASPECT (the Advanced Solver for Problems in Earth ConvecTion), which is developed as a community tool with a focus on modern numerical methods, such as adaptive meshes, large parallel computations, and algebraic multigrid solvers, and on modern software design. We will comment in particular on two aspects. First, the more recent additions to its numerical capabilities, such as compressible models, averaging of material parameters, melt transport, free surfaces, and plasticity; we will demonstrate these capabilities using examples from computations by members of the ASPECT user community. Second, lessons learned in writing a code specifically for community use. This includes our experience with a software design that is fundamentally based on a plugin system for practically all areas that a user may want to describe for the particular geophysical setup they want to simulate. It also includes our experience with leading and organizing a community of users and developers, for example by organizing annual "hackathons", by encouraging code submission via github over keeping modifications private, and by designing a code for which extensions can easily be written as separate plugins rather than requiring knowledge of the computational core.
Spatially-varying IIR filter banks for image coding
NASA Technical Reports Server (NTRS)
Chung, Wilson C.; Smith, Mark J. T.
1992-01-01
This paper reports on the application of spatially variant infinite impulse response (IIR) filter banks to subband image coding. The new filter bank is based on computationally efficient recursive polyphase decompositions that dynamically change in response to the input signal. In the absence of quantization, reconstruction can be made exact. However, by proper choice of an adaptation scheme, we show that subband image coding based on time varying filter banks can yield improvement over the use of conventional filter banks.
FORTRAN Automated Code Evaluation System (FACES) user's manual, version 2
NASA Technical Reports Server (NTRS)
1975-01-01
A system which provides analysis services for FORTRAN based software systems not normally available from system software is presented. The system is not a compiler, and compiler syntax diagnostics are not duplicated. For maximum adaptation to FORTRAN dialects, the code presented to the system is assumed to be compiler acceptable. The system concentrates on acceptable FORTRAN code features which are likely to produce undesirable results and identifies potential trouble areas before they become execution time malfunctions.
Wilson, R.E.; Freeman, L.N.; Walker, S.N.
1995-09-01
The FAST2 Code which is capable of determining structural loads of a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data at two wind speeds for the ESI-80 are given. The FAST2 Code models a two-bladed HAWT with degrees of freedom for blade flap, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffness, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms and azimuth averaged bin plots. It is concluded that agreement between the FAST2 Code and test results is good.
NASA Technical Reports Server (NTRS)
1991-01-01
In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Ma, Yong-Tao; Li, Hui; Zeng, Tao
2014-06-07
Four-dimensional ab initio intermolecular potential energy surfaces (PESs) for CH3F–He that explicitly incorporate dependence on the Q3 stretching normal mode of the CH3F molecule and are parametrically dependent on the other averaged intramolecular coordinates have been calculated. Analytical three-dimensional PESs for v3(CH3F) = 0 and 1 are obtained by least-squares fitting the vibrationally averaged potentials to the Morse/Long-Range potential function form. With the 3D PESs, we employ the Lanczos algorithm to calculate rovibrational levels of the dimer system. Following some re-assignments, the predicted transition frequencies are in good agreement with experimental microwave data for ortho-CH3F, with a root-mean-square deviation of 0.042 cm^-1. We then provide the first prediction of the infrared and microwave spectra for the para-CH3F–He dimer. The calculated infrared band origin shifts associated with the ν3 fundamental of CH3F are 0.039 and 0.069 cm^-1 for para-CH3F–He and ortho-CH3F–He, respectively.
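The Morse/Long-Range (MLR) form used in the fit above generalizes the classic Morse potential. As a minimal illustration of the underlying idea (the plain Morse function only, not the MLR form; parameter values below are arbitrary, not from the paper):

```python
import math

def morse(r, De, a, re):
    """Classic Morse potential: V(r) = De * (1 - exp(-a*(r - re)))**2.

    V(re) = 0 at the equilibrium separation re, and V -> De (the
    dissociation energy) as r -> infinity. The curvature at the minimum
    is V''(re) = 2 * De * a**2, which sets the harmonic frequency.
    """
    return De * (1.0 - math.exp(-a * (r - re))) ** 2
```

A least-squares fit of (De, a, re) to computed potential points, as done in the paper for the vibrationally averaged surfaces, would minimize the sum of squared residuals between `morse(r, ...)` (or its MLR generalization) and the ab initio energies.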
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
Temporal Coding of Volumetric Imagery
NASA Astrophysics Data System (ADS)
Llull, Patrick Ryan
of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x, y, z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging
NASA Astrophysics Data System (ADS)
Kellogg, Robert L.; Escuti, Michael J.
2007-09-01
New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
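A full arithmetic coder is beyond a short sketch, but the rate achievable by the bit-wise scheme described above, coding each bit of the fixed-length codewords independently at its empirical probability, is simply the sum of the binary entropies of the bit planes. A sketch of that rate calculation (our own illustration, not the article's coder):

```python
import math

def bitwise_rate(codewords, width):
    """Ideal bits-per-codeword for bit-wise (bit-plane-independent) coding.

    Each of the `width` bit positions is modeled as an independent binary
    source; an arithmetic coder driven by the empirical probability of a
    '1' in that position approaches the plane's binary entropy.
    """
    n = len(codewords)
    total = 0.0
    for i in range(width):
        p = sum((c >> i) & 1 for c in codewords) / n  # empirical P(bit i = 1)
        if 0 < p < 1:  # a constant bit plane costs zero bits
            total += -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return total
```

Comparing this quantity to `width` (the raw fixed-length cost) shows the compression available when the quantizer output bits are biased, at the cost of ignoring any dependence between bits.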
Habituation of visual adaptation
Dong, Xue; Gao, Yi; Lv, Lili; Bao, Min
2016-01-01
Our sensory system adjusts its function driven by both shorter-term (e.g. adaptation) and longer-term (e.g. learning) experiences. Most past adaptation literature focuses on short-term adaptation. Only recently researchers have begun to investigate how adaptation changes over a span of days. This question is important, since in real life many environmental changes stretch over multiple days or longer. However, the answer to the question remains largely unclear. Here we addressed this issue by tracking perceptual bias (also known as aftereffect) induced by motion or contrast adaptation across multiple daily adaptation sessions. Aftereffects were measured every day after adaptation, which corresponded to the degree of adaptation on each day. For passively viewed adapters, repeated adaptation attenuated aftereffects. Once adapters were presented with an attentional task, aftereffects could either reduce for easy tasks, or initially show an increase followed by a later decrease for demanding tasks. Quantitative analysis of the decay rates in contrast adaptation showed that repeated exposure of the adapter appeared to be equivalent to adaptation to a weaker stimulus. These results suggest that both attention and a non-attentional habituation-like mechanism jointly determine how adaptation develops across multiple daily sessions. PMID:26739917
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding-latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode
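The abstract notes that the decoder is unchanged from ordinary LT codes. The standard LT decoder is a "peeling" process: repeatedly resolve any code symbol with exactly one unknown neighbor, then subtract the newly known value from the remaining equations. A minimal sketch (degree distribution and encoder omitted; the representation of coded symbols as (index-set, XOR-value) pairs is our own):

```python
def lt_decode(n, coded):
    """Peeling decoder for LT codes over integer symbols.

    `n` is the number of source symbols; `coded` is a list of
    (set_of_source_indices, xor_of_those_sources) pairs. Returns the
    recovered source list, or None if decoding stalls (no degree-1
    symbol remains while some sources are still unknown).
    """
    coded = [(set(idx), val) for idx, val in coded]
    out = [None] * n
    progress = True
    while progress and any(v is None for v in out):
        progress = False
        for idx, val in coded:
            live = {i for i in idx if out[i] is None}
            if len(live) == 1:  # exactly one unknown: it is determined
                i = live.pop()
                for j in idx:  # peel off the already-known neighbors
                    if j != i and out[j] is not None:
                        val ^= out[j]
                out[i] = val
                progress = True
    return out if all(v is not None for v in out) else None
```

Under the prioritized construction, low-degree symbols preferentially cover high-priority sources, so those sources tend to be resolved in the earliest peeling rounds.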
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
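The 16-bit CRC recommended for error detection is commonly the CCITT polynomial; a minimal bitwise sketch is below. The polynomial 0x1021 and initial value 0xFFFF are assumed parameters (check the applicable CCSDS standard for the normative configuration):

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise, MSB-first CRC-16 over the CCITT polynomial x^16+x^12+x^5+1.
    No bit reflection and no final XOR are assumed here."""
    crc = init
    for byte in data:
        crc ^= byte << 8                 # bring the next byte into the top
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and reduce
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

With these parameters the standard check value for the ASCII string "123456789" is 0x29B1, which is a quick way to validate an implementation against published CRC catalogs.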
Molecular Adaptation during Adaptive Radiation in the Hawaiian Endemic Genus Schiedea
Kapralov, Maxim V.; Filatov, Dmitry A.
2006-01-01
Background: “Explosive” adaptive radiations on islands remain one of the most puzzling evolutionary phenomena. The rate of phenotypic and ecological adaptations is extremely fast during such events, suggesting that many genes may be under fairly strong selection. However, no evidence for adaptation at the level of protein coding genes was found, so it has been suggested that selection may work mainly on regulatory elements. Here we report the first evidence that positive selection does operate at the level of protein coding genes during rapid adaptive radiations. We studied molecular adaptation in Hawaiian endemic plant genus Schiedea (Caryophyllaceae), which includes closely related species with a striking range of morphological and ecological forms, varying from rainforest vines to woody shrubs growing in desert-like conditions on cliffs. Given the remarkable difference in photosynthetic performance between Schiedea species from different habitats, we focused on the “photosynthetic” Rubisco enzyme, the efficiency of which is known to be a limiting step in plant photosynthesis. Results: We demonstrate that the chloroplast rbcL gene, encoding the large subunit of Rubisco enzyme, evolved under strong positive selection in Schiedea. Adaptive amino acid changes occurred in functionally important regions of Rubisco that interact with Rubisco activase, a chaperone which promotes and maintains the catalytic activity of Rubisco. Interestingly, positive selection acting on the rbcL might have caused favorable cytotypes to spread across several Schiedea species. Significance: We report the first evidence for adaptive changes at the DNA and protein sequence level that may have been associated with the evolution of photosynthetic performance and colonization of new habitats during a recent adaptive radiation in an island plant genus. This illustrates how small changes at the molecular level may change ecological species performance and helps us to understand the
Adaptive changes in visual cortex following prolonged contrast reduction
Kwon, MiYoung; Legge, Gordon E.; Fang, Fang; Cheong, Allen M. Y.; He, Sheng
2009-01-01
How does prolonged reduction in retinal-image contrast affect visual-contrast coding? Recent evidence indicates that some forms of long-term visual deprivation result in compensatory perceptual and neural changes in the adult visual pathway. It has not been established whether changes due to contrast adaptation are best characterized as “contrast gain” or “response gain.” We present a theoretical rationale for predicting that adaptation to long-term contrast reduction should result in response gain. To test this hypothesis, normally sighted subjects adapted for four hours by viewing their environment through contrast-reducing goggles. During the adaptation period, the subjects went about their usual daily activities. Subjects' contrast-discrimination thresholds and fMRI BOLD responses in cortical areas V1 and V2 were obtained before and after adaptation. Following adaptation, we observed a significant decrease in contrast-discrimination thresholds and a significant increase in BOLD responses in V1 and V2. The observed interocular transfer of the adaptation effect suggests that the adaptation has a cortical origin. These results reveal a new kind of adaptability of the adult visual cortex, an adjustment in the gain of the contrast-response in the presence of a reduced range of stimulus contrasts, which is consistent with a response-gain mechanism. The adaptation appears to be compensatory, such that the precision of contrast coding is improved for low retinal-image contrasts. PMID:19271930
Adaptive predictive multiplicative autoregressive model for medical image compression.
Chen, Z D; Chang, R F; Kuo, W J
1999-02-01
In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression. PMID:10232675
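The adaptive-predictor selection step can be sketched as below. The seven predictors are the JPEG lossless-mode set; the local-mean form and the per-block selection cost (sum of absolute residuals) are assumptions, and the MAR residual modeling and Huffman coding stages are omitted:

```python
def jpeg_predictors(a, b, c):
    """The seven JPEG lossless-mode predictors for a pixel with left
    neighbour a, upper neighbour b, and upper-left neighbour c."""
    return [a, b, c, a + b - c, a + (b - c) // 2, b + (a - c) // 2, (a + b) // 2]

def best_predictor(block):
    """Return the index (0-7) of the predictor with the smallest total
    absolute residual over the block: seven JPEG predictors plus an
    assumed local-mean predictor (a+b+c)//3."""
    h, w = len(block), len(block[0])
    costs = [0] * 8
    for y in range(1, h):              # skip the first row/column, which
        for x in range(1, w):          # lack causal neighbours
            a, b, c = block[y][x - 1], block[y - 1][x], block[y - 1][x - 1]
            preds = jpeg_predictors(a, b, c) + [(a + b + c) // 3]
            for i, p in enumerate(preds):
                costs[i] += abs(block[y][x] - p)
    return min(range(8), key=costs.__getitem__)
```

For a block whose columns are constant, the "upper neighbour" predictor wins; for constant rows, the "left neighbour" predictor wins, which is the adaptivity the abstract credits for improved prediction accuracy.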
Phase-coded pulse aperiodic transmitter coding
NASA Astrophysics Data System (ADS)
Virtanen, I. I.; Vierinen, J.; Lehtinen, M. S.
2009-07-01
Both ionospheric and weather radar communities have already adopted the method of transmitting radar pulses in an aperiodic manner when measuring moderately overspread targets. Among the users of the ionospheric radars, this method is called Aperiodic Transmitter Coding (ATC), whereas the weather radar users have adopted the term Simultaneous Multiple Pulse-Repetition Frequency (SMPRF). When probing the ionosphere at the carrier frequencies of the EISCAT Incoherent Scatter Radar facilities, the range extent of the detectable target is typically of the order of one thousand kilometers - about seven milliseconds - whereas the characteristic correlation time of the scattered signal varies from a few milliseconds in the D-region to only tens of microseconds in the F-region. If one is interested in estimating the scattering autocorrelation function (ACF) at time lags shorter than the F-region correlation time, the D-region must be considered as a moderately overspread target, whereas the F-region is a severely overspread one. Given the technical restrictions of the radar hardware, a combination of ATC and phase-coded long pulses is advantageous for this kind of target. We evaluate such an experiment under infinitely low signal-to-noise ratio (SNR) conditions using lag profile inversion. In addition, a qualitative evaluation under high-SNR conditions is performed by analysing simulated data. The results show that an acceptable estimation accuracy and a very good lag resolution in the D-region can be achieved with a pulse length long enough for simultaneous E- and F-region measurements with a reasonable lag extent. The new experiment design is tested with the EISCAT Tromsø VHF (224 MHz) radar. An example of a full D/E/F-region ACF from the test run is shown at the end of the paper.
RAMSES-MHD: an AMR Godunov code for astrophysical applications
NASA Astrophysics Data System (ADS)
Fromang, S.; Hennebelle, P.; Teyssier, R.
2005-12-01
Godunov methods have proved in recent years to be very efficient numerical schemes for solving the hydrodynamic equations. Here we present an extension of the 3D Adaptive Mesh Refinement (AMR) code RAMSES (Teyssier 2002) to the equations of magnetohydrodynamics (MHD). The code uses the constrained transport scheme, which guarantees that the divergence of the magnetic field is kept at zero to machine accuracy at all times. Different MHD Riemann solvers can be used, and the MUSCL-Hancock approach combines good accuracy with fast execution of the code. A variety of tests illustrate the performance of the code and the possibilities offered by the AMR scheme. Future applications of the code are discussed.
Robust adaptive transient damping in power systems
Pierre, D.A.; Sadighi, I.; Trudnowski, D.J.; Smith, J.R.; Nehrir, M.H. . Dept. of Electrical Engineering)
1992-09-01
This Volume 1 of the final report on RP2665-1 contains two parts. Part 1 consists of the following: (1) a literature review of real-time parameter identification algorithms which may be used in self-tuning adaptive control; (2) a description of mathematical discrete-time models that are linear in the parameters and that are useful for self-tuning adaptive control; (3) detailed descriptions of several variations of recursive-least-squares (RLS) algorithms and a unified representation of some of these algorithms; (4) a new variation of RLS called Corrector Least Squares (CLS); (5) a set of practical issues that need to be addressed in the implementation of RLS-based algorithms; (6) a set of simulation examples that illustrate properties of the identification methods; and (7) appendices with FORTRAN listings of several identification codes. Part 2 of this volume addresses the problem of damping electromechanical oscillations in power systems using advanced control theory. Two control strategies are developed. Controllers are then applied to a power system as power system stabilizer (PSS) units. The primary strategy is a decentralized indirect adaptive control scheme where multiple self-tuning adaptive controllers are coordinated. This adaptive scheme is presented in a general format and the stabilizing properties are demonstrated using examples. Both the adaptive and the conventional strategies are applied to a 17-machine computer-simulated power system. PSS units are applied to four generators in the system. Detailed simulation results are presented that show the feasibility and properties of both control schemes. FORTRAN codes for the control simulations are given in appendices of Part 2, as are FORTRAN codes for the Prony identification method.
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
Seals Code Development Workshop
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)
1996-01-01
The 1995 Seals Workshop industrial code (INDSEAL) release includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and results compared. Current computational and experimental emphasis includes multiple connected cavity flows with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean-sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.
NASA Astrophysics Data System (ADS)
Vaucouleur, Sebastien
2011-02-01
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis to automatically detect potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.
Code inspection instructional validation
NASA Technical Reports Server (NTRS)
Orr, Kay; Stancil, Shirley
1992-01-01
The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720, AoI 13, as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor
Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik
2004-10-01
If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in depth analysis of a technique called 'white-boxing'. We put forth some new attacks and improvements
Research on compression and improvement of vertex chain code
NASA Astrophysics Data System (ADS)
Yu, Guofang; Zhang, Yujie
2009-10-01
Combined with Huffman encoding theory, the code 2, which has the highest emergence probability and continuation frequency, is indicated by the binary number 0; the combinations of 1 and 3 with higher emergence probability and continuation frequency are indicated by the two binary numbers 10, with the corresponding frequency code attached to these two kinds of code (the length of the frequency code can be assigned beforehand or adapted automatically); and the codes 1 and 3 with the lowest emergence probability and continuation frequency are indicated by the binary numbers 110 and 111, respectively. Relative encoding efficiency and decoding efficiency are added to the current performance evaluation system for chain codes. The new chain code is compared with a current chain code through a test system programmed in VC++; the results show that the basic performance of the new chain code is significantly improved, and the performance advantages grow with the size of the graphics.
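The prefix assignment described above can be sketched as an encoder over the vertex-chain-code alphabet {1, 2, 3}. The paper fixes only the prefixes 0/10/110/111; the layout of the attached frequency field (one pattern bit plus a fixed-width run count, width `run_bits`) is an assumption for illustration:

```python
CODEWORDS = {'2': '0', '1': '110', '3': '111'}   # base prefix table

def encode_vcc(chain, run_bits=3):
    """Encode a vertex chain code string over {1,2,3}.  Alternating 1/3
    pairs get the shared prefix '10', one bit naming the pair ('13' vs
    '31'), and a fixed-width run count; isolated symbols use CODEWORDS."""
    out, i = [], 0
    max_run = (1 << run_bits) - 1
    while i < len(chain):
        if chain[i:i + 2] in ('13', '31'):
            pat, run = chain[i:i + 2], 0
            while chain[i:i + 2] == pat and run < max_run:
                run, i = run + 1, i + 2          # consume one pair per step
            out.append('10' + ('0' if pat == '13' else '1')
                       + format(run, f'0{run_bits}b'))
        else:
            out.append(CODEWORDS[chain[i]])
            i += 1
    return ''.join(out)
```

For example, a long run of 2s costs one bit per symbol, while a run of alternating 1/3 pairs collapses into a single prefix-plus-count field, which is where the compression gain comes from.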
Expressing Adaptation Strategies Using Adaptation Patterns
ERIC Educational Resources Information Center
Zemirline, N.; Bourda, Y.; Reynaud, C.
2012-01-01
Today, there is a real challenge to enable personalized access to information. Several systems have been proposed to address this challenge including Adaptive Hypermedia Systems (AHSs). However, the specification of adaptation strategies remains a difficult task for creators of such systems. In this paper, we consider the problem of the definition…
Point-Kernel Shielding Code System.
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
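The point-kernel method evaluates, per source point, exponential attenuation scaled by a buildup factor. A minimal sketch, assuming isotropic point sources, a detector at the origin, and a user-supplied buildup function (the library data and geometry routines of the real code are far richer):

```python
import math

def point_kernel_dose(points, mu, response, buildup=lambda mux: 1.0):
    """Point-kernel sum: phi = sum_i S_i * B(mu*r_i) * exp(-mu*r_i) / (4*pi*r_i^2),
    dose = response * phi.

    points:   list of (strength, (x, y, z)) isotropic point sources
    mu:       total attenuation coefficient of the (single) medium
    response: flux-to-dose conversion factor
    buildup:  infinite-medium buildup factor as a function of mu*r
    """
    phi = 0.0
    for s, (x, y, z) in points:
        r = math.sqrt(x * x + y * y + z * z)
        mur = mu * r
        phi += s * buildup(mur) * math.exp(-mur) / (4.0 * math.pi * r * r)
    return response * phi
```

Modeling a distributed source as many such points is exactly the "multiple sources" use case the abstract highlights: the detector-point totals are just the sum over kernels.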
The new Italian code of medical ethics.
Fineschi, V; Turillazzi, E; Cateni, C
1997-01-01
In June 1995, the Italian code of medical ethics was revised in order that its principles should reflect the ever-changing relationship between the medical profession and society and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often shows up a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline for the study of norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is, therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates. PMID:9279746
ACDOS2: an improved neutron-induced dose rate code
Lagache, J.C.
1981-06-01
To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown, a computer code, ACDOS2, was written that utilizes up-to-date libraries of cross sections and radioisotope decay data. ACDOS2 is written in ANSI FORTRAN IV to make it readily adaptable elsewhere.
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.
West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.
Multiple trellis coded modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)
1990-01-01
A technique for designing trellis codes to minimize bit error performance for a fading channel. The invention provides a criterion which may be used in the design of such codes which is significantly different from that used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s_i intermediate outputs each, where the summation of all s_i is equal to s and k is equal to at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
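Steps (b) and (c) above can be sketched as a grouping/mapping helper. The per-group constellation mappers here are placeholders (the patent leaves the modulation schemes generic), and the bits-to-index rule is an assumed natural binary mapping:

```python
def multiple_tcm_map(bits, group_sizes, mappers):
    """Split the s encoder output bits into k groups of s_i bits and map
    group j with its own modulation scheme, yielding k output symbols
    per trellis branch (steps (b)-(d) of the claimed method)."""
    assert sum(group_sizes) == len(bits) and len(group_sizes) == len(mappers)
    symbols, pos = [], 0
    for size, mapper in zip(group_sizes, mappers):
        # natural binary bits -> constellation index for this group
        index = int(''.join(str(b) for b in bits[pos:pos + size]), 2)
        symbols.append(mapper(index))
        pos += size
    return symbols
```

With k = 2, five encoder bits, and (assumed) QPSK and 8PSK mappers, each trellis branch emits one QPSK symbol and one 8PSK symbol, matching the "k output symbols for each b bits" claim.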
ERIC Educational Resources Information Center
American Sociological Association, Washington, DC.
The American Sociological Association's code of ethics for sociologists is presented. For sociological research and practice, 10 requirements for ethical behavior are identified, including: maintaining objectivity and integrity; fully reporting findings and research methods, without omission of significant data; reporting fully all sources of…
ERIC Educational Resources Information Center
Olsen, Florence
2003-01-01
Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)
Electrical Circuit Simulation Code
2001-08-09
Massively-parallel electrical circuit simulation code. CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. Large-scale electronic circuit simulation. Shared memory, parallel processing, enhanced convergence. Sandia-specific device models.
Environmental Fluid Dynamics Code
The Environmental Fluid Dynamics Code (EFDC)is a state-of-the-art hydrodynamic model that can be used to simulate aquatic systems in one, two, and three dimensions. It has evolved over the past two decades to become one of the most widely used and technically defensible hydrodyn...
ERIC Educational Resources Information Center
Association of College Unions-International, Bloomington, IN.
The code of ethics for the college union and student activities professional is presented by the Association of College Unions-International. The preamble identifies the objectives of the college union as providing campus community centers and social programs that enhance the quality of life for members of the academic community. Ethics for…
ERIC Educational Resources Information Center
Burton, John K.; Wildman, Terry M.
The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…
NASA Astrophysics Data System (ADS)
Ninio, Jacques
1990-03-01
Recent findings on the genetic code are reviewed, including selenocysteine usage, deviations in the assignments of sense and nonsense codons, RNA editing, natural ribosomal frameshifts and non-orthodox codon-anticodon pairings. A multi-stage codon reading process is presented.
ERIC Educational Resources Information Center
Lumsden, Linda; Miller, Gabriel
2002-01-01
Students do not always make choices that adults agree with in their choice of school dress. Dress-code issues are explored in this Research Roundup, and guidance is offered to principals seeking to maintain a positive school climate. In "Do School Uniforms Fit?" Kerry White discusses arguments for and against school uniforms and summarizes the…
MAGEE,GLEN I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
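The multi-block framing with a possibly shortened final block can be sketched as a partition plan. The RS(255,223) parameters below are an assumption for illustration (the AURA configuration is not given in the abstract); shortening keeps the 32 parity bytes but encodes fewer data bytes in the last block:

```python
RS_N, RS_K, RS_PARITY = 255, 223, 32   # assumed RS(255,223) over GF(2^8)

def rs_block_plan(nbytes, k=RS_K, parity=RS_PARITY):
    """Partition a message of nbytes into full k-byte RS blocks plus a
    possibly shortened final block; returns (data_len, coded_len) pairs."""
    blocks = []
    full, rem = divmod(nbytes, k)
    blocks += [(k, k + parity)] * full
    if rem:
        blocks.append((rem, rem + parity))   # shortened final block
    return blocks
```

Precomputing this plan lets the encoder hoist per-block setup out of the hot loop, the kind of restructuring the optimization effort described above relies on.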
Visual Coding in Locust Photoreceptors
Faivre, Olivier; Juusola, Mikko
2008-01-01
Information capture by photoreceptors ultimately limits the quality of visual processing in the brain. Using conventional sharp microelectrodes, we studied how locust photoreceptors encode random (white-noise, WN) and naturalistic (1/f stimuli, NS) light patterns in vivo and how this coding changes with mean illumination and ambient temperature. We also examined the role of their plasma membrane in shaping voltage responses. We found that brightening or warming increase and accelerate voltage responses, but reduce noise, enabling photoreceptors to encode more information. For WN stimuli, this was accompanied by broadening of the linear frequency range. On the contrary, with NS the signaling took place within a constant bandwidth, possibly revealing a ‘preference’ for inputs with 1/f statistics. The faster signaling was caused by acceleration of the elementary phototransduction current - leading to bumps - and their distribution. The membrane linearly translated phototransduction currents into voltage responses without limiting the throughput of these messages. As the bumps reflected fast changes in membrane resistance, the data suggest that their shape is predominantly driven by fast changes in the light-gated conductance. On the other hand, the slower bump latency distribution is likely to represent slower enzymatic intracellular reactions. Furthermore, the Q10s of bump duration and latency distribution depended on light intensity. Altogether, this study suggests that biochemical constraints imposed upon signaling change continuously as locust photoreceptors adapt to environmental light and temperature conditions. PMID:18478123
Beer, M; Nohria, N
2000-01-01
Today's fast-paced economy demands that businesses change or die. But few companies manage corporate transformations as well as they would like. The brutal fact is that about 70% of all change initiatives fail. In this article, authors Michael Beer and Nitin Nohria describe two archetypes--or theories--of corporate transformation that may help executives crack the code of change. Theory E is change based on economic value: shareholder value is the only legitimate measure of success, and change often involves heavy use of economic incentives, layoffs, downsizing, and restructuring. Theory O is change based on organizational capability: the goal is to build and strengthen corporate culture. Most companies focus purely on one theory or the other, or haphazardly use a mix of both, the authors say. Combining E and O is directionally correct, they contend, but it requires a careful, conscious integration plan. Beer and Nohria present the examples of two companies, Scott Paper and Champion International, that used a purely E or purely O strategy to create change--and met with limited levels of success. They contrast those corporate transformations with that of UK-based retailer ASDA, which has successfully embraced the paradox between the opposing theories of change and integrated E and O. The lesson from ASDA? To thrive and adapt in the new economy, companies must make sure the E and O theories of business change are in sync at their own organizations. PMID:11183975
Color demosaicking via robust adaptive sparse representation
NASA Astrophysics Data System (ADS)
Huang, Lili; Xiao, Liang; Chen, Qinghua; Wang, Kai
2015-09-01
A single sensor camera can capture scenes by means of a color filter array. Each pixel samples only one of the three primary colors. We use a color demosaicking (CDM) technique to produce full color images and propose a robust adaptive sparse representation model for high quality CDM. The data fidelity term is characterized by the l1 norm to suppress heavy-tailed visual artifacts under an adaptively learned dictionary, while the regularization term encourages sparsity by forcing the sparse coding close to its nonlocal means to reduce coding errors. Based on the classical quadratic penalty function technique in optimization and an operator splitting method in convex analysis, we further present an effective iterative algorithm to solve the variational problem. The efficiency of the proposed method is demonstrated by experimental results with simulated and real camera data.
Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique
2008-06-01
The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes. PMID:18289830
Eye gaze adaptation under interocular suppression.
Stein, Timo; Peelen, Marius V; Sterzer, Philipp
2012-01-01
The perception of eye gaze is central to social interaction in that it provides information about another person's goals, intentions, and focus of attention. Direction of gaze has been found to reflexively shift the observer's attention in the corresponding direction, and prolonged exposure to averted eye gaze adapts the visual system, biasing perception of subsequent gaze in the direction opposite to the adapting face. Here, we tested the role of conscious awareness in coding eye gaze directions. To this end, we measured aftereffects induced by adapting faces with different eye gaze directions that were presented during continuous flash suppression, a potent interocular suppression technique. In some trials the adapting face was rendered fully invisible, whereas in others it became partially visible. In Experiment 1, the adapting and test faces were presented in identical sizes and to the same eye. Even fully invisible faces were capable of inducing significant eye gaze aftereffects, although these were smaller than aftereffects from partially visible faces. When the adapting and test faces were shown to different eyes in Experiment 2, significant eye gaze aftereffects were still observed for the fully invisible faces, thus showing interocular transfer. Experiment 3 disrupted the spatial correspondence between adapting and test faces by introducing a size change. Under these conditions, aftereffects were restricted to partially visible adapting faces. These results were replicated in Experiment 4 using a blocked adaptation design. Together, these findings indicate that size-dependent low-level components of eye gaze can be represented without awareness, whereas object-centered higher-level representations of eye gaze directions depend on visual awareness. PMID:22753441
Binary coding for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Wang, Jing; Chang, Chein-I.; Chang, Chein-Chi; Lin, Chinsu
2004-10-01
Binary coding is one of the simplest ways to characterize spectral features. One commonly used method is a binary coding-based image software system, called Spectral Analysis Manager (SPAM), developed for remotely sensed imagery by Mazer et al. For a given spectral signature, SPAM calculates its spectral mean and inter-band spectral difference and uses them as thresholds to generate a binary code word for that particular spectral signature. Such a coding scheme is generally effective and also very simple to implement. This paper revisits SPAM and further develops three new SPAM-based binary coding methods, called equal probability partition (EPP) binary coding, halfway partition (HP) binary coding, and median partition (MP) binary coding. These three binary coding methods, along with SPAM, are evaluated for spectral discrimination and identification. In doing so, a new criterion, called a posteriori discrimination probability (APDP), is also introduced as a performance measure.
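The mean-threshold part of the SPAM scheme can be sketched in a few lines of Python (a simplified illustration; the actual SPAM system also uses inter-band spectral differences and its own code-word layout):

```python
import numpy as np

def spam_binary_code(signature):
    """Binary-code a spectral signature by thresholding each band at
    the signature's spectral mean, in the spirit of SPAM."""
    m = signature.mean()
    return (signature >= m).astype(int)

def hamming(a, b):
    """Compare two binary code words by Hamming distance, the usual
    measure for spectral discrimination with such codes."""
    return int(np.sum(a != b))
```

Two spectral signatures can then be discriminated by the Hamming distance between their code words, which is what makes the scheme so cheap to evaluate.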
NASA Technical Reports Server (NTRS)
Mcaulay, Robert J.; Quatieri, Thomas F.
1988-01-01
It has been shown that an analysis/synthesis system based on a sinusoidal representation of speech leads to synthetic speech that is essentially perceptually indistinguishable from the original. Strategies for coding the amplitudes, frequencies and phases of the sine waves have been developed that have led to a multirate coder operating at rates from 2400 to 9600 bps. The encoded speech is highly intelligible at all rates with a uniformly improving quality as the data rate is increased. A real-time fixed-point implementation has been developed using two ADSP2100 DSP chips. The methods used for coding and quantizing the sine-wave parameters for operation at the various frame rates are described.
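The synthesis side of such a sinusoidal analysis/synthesis system amounts to summing the coded sine waves frame by frame; a minimal sketch (the function name, frame length, and sample rate are illustrative, not the coder's actual parameters):

```python
import numpy as np

def synthesize_frame(amps, freqs, phases, n, fs=8000):
    """Sum-of-sinusoids synthesis of one speech frame: each coded
    sine wave is regenerated from its amplitude, frequency (Hz), and
    phase, and the components are summed."""
    t = np.arange(n) / fs
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))
```

In a real coder the per-frame amplitudes, frequencies, and phases are the quantities that get quantized and transmitted, and frames are overlap-added or phase-interpolated at the boundaries.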
2006-03-08
MAPVAR-KD is designed to transfer solution results from one finite element mesh to another. MAPVAR-KD draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR-KD are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options. MAPVAR-KD is a modification of MAPVAR in which the search algorithm was replaced by a kd-tree-based search for better performance on large problems.
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and reconstructing the shadow image into a 3-dimensional image of every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
NASA Astrophysics Data System (ADS)
Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar
2013-07-01
Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective on Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
A generic efficient adaptive grid scheme for rocket propulsion modeling
NASA Technical Reports Server (NTRS)
Mo, J. D.; Chow, Alan S.
1993-01-01
The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for predictions of internal one- and two-dimensional and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of the ALUNS computer code. Calculations of one-dimensional shock tube wave propagation and of two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the developed scheme is stable, accurate, and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes, and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will be presented at the conference as well. This research work is supported by NASA/MSFC.
Fifty years of progress in speech waveform coding
NASA Astrophysics Data System (ADS)
Atal, Bishnu S.
2004-10-01
Over the past 50 years, sustained research in speech coding has made it possible to encode speech with high quality at rates as low as 4 kb/s. The technology is now used in many applications, such as digital cellular phones, personal computers, and packet telephony. Early research in speech coding was aimed at reproducing speech spectra using a small number of slowly varying parameters. The focus of research later shifted to accurate reproduction of speech waveforms at low bit rates. The introduction of linear predictive coding (LPC) led to the development of new algorithms, such as adaptive predictive coding, multipulse LPC, and code-excited LPC. Code-excited LPC has become the method of choice for low bit rate speech coding and is used in most voice transmission standards. Digital speech communication is rapidly moving away from traditional circuit-switched networks to packet-switched networks based on IP protocols (VoIP). The focus of speech coding research is now on providing low-cost, reliable, and secure transmission of high-quality speech on IP networks.
N.V. Mokhov
2003-04-09
Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator, and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, and histogramming, as well as the MAD-MARS Beam Line Builder and the graphical user interface.
NASA Astrophysics Data System (ADS)
Tóth, Gábor; Keppens, Rony
2012-07-01
The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
GRChombo: Numerical relativity with adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Bar coded retroreflective target
Vann, Charles S.
2000-01-01
This small, inexpensive, non-contact laser sensor can detect the location of a retroreflective target in a relatively large volume and up to six degrees of position. The tracker's laser beam is formed into a plane of light which is swept across the space of interest. When the beam illuminates the retroreflector, some of the light returns to the tracker. The intensity, angle, and time of the return beam is measured to calculate the three dimensional location of the target. With three retroreflectors on the target, the locations of three points on the target are measured, enabling the calculation of all six degrees of target position. Until now, devices for three-dimensional tracking of objects in a large volume have been heavy, large, and very expensive. Because of the simplicity and unique characteristics of this tracker, it is capable of three-dimensional tracking of one to several objects in a large volume, yet it is compact, light-weight, and relatively inexpensive. Alternatively, a tracker produces a diverging laser beam which is directed towards a fixed position, and senses when a retroreflective target enters the fixed field of view. An optically bar coded target can be read by the tracker to provide information about the target. The target can be formed of a ball lens with a bar code on one end. As the target moves through the field, the ball lens causes the laser beam to scan across the bar code.
Suboptimum decoding of block codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
This paper investigates a class of decomposable codes and their distance and structural properties. It is shown that this class includes several classes of well-known and efficient codes as subclasses. Several methods for constructing decomposable codes or decomposing codes are presented. A two-stage soft-decision decoding scheme for decomposable codes, their translates, or unions of translates is devised. This two-stage soft-decision decoding is suboptimum and provides an excellent trade-off between error performance and decoding complexity for codes of moderate and long block length.
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The following codes are considered: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; however, the codes have since been further developed to extend their capabilities.
Structural coding versus free-energy predictive coding.
van der Helm, Peter A
2016-06-01
Focusing on visual perceptual organization, this article contrasts the free-energy (FE) version of predictive coding (a recent Bayesian approach) to structural coding (a long-standing representational approach). Both use free-energy minimization as metaphor for processing in the brain, but their formal elaborations of this metaphor are fundamentally different. FE predictive coding formalizes it by minimization of prediction errors, whereas structural coding formalizes it by minimization of the descriptive complexity of predictions. Here, both sides are evaluated. A conclusion regarding competence is that FE predictive coding uses a powerful modeling technique, but that structural coding has more explanatory power. A conclusion regarding performance is that FE predictive coding-though more detailed in its account of neurophysiological data-provides a less compelling cognitive architecture than that of structural coding, which, for instance, supplies formal support for the computationally powerful role it attributes to neuronal synchronization. PMID:26407895
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
Adaptive MPEG-2 video data hiding scheme
NASA Astrophysics Data System (ADS)
Sarkar, Anindya; Madhow, Upamanyu; Chandrasekaran, Shivkumar; Manjunath, Bangalore S.
2007-02-01
We have investigated adaptive mechanisms for high-volume transform-domain data hiding in MPEG-2 video which can be tuned to sustain varying levels of compression attacks. The data is hidden in the uncompressed domain by scalar quantization index modulation (QIM) on a selected set of low-frequency discrete cosine transform (DCT) coefficients. We propose an adaptive hiding scheme where the embedding rate is varied according to the type of frame and the reference quantization parameter (decided according to the MPEG-2 rate control scheme) for that frame. For a 1.5 Mbps video and a frame rate of 25 frames/sec, we are able to embed almost 7500 bits/sec. Also, the adaptive scheme hides 20% more data and incurs significantly fewer frame errors (frames for which the embedded data is not fully recovered) than the non-adaptive scheme. Our embedding scheme incurs insertions and deletions at the decoder, which may cause de-synchronization and decoding failure. This problem is solved by the use of powerful turbo-like codes and erasures at the encoder. The channel capacity estimate gives an idea of the minimum code redundancy factor required for reliable decoding of the hidden data transmitted through the channel. To that end, we have modeled the MPEG-2 video channel using the transition probability matrices given by the data hiding procedure, from which we compute the (hiding scheme dependent) channel capacity.
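Scalar QIM, the embedding primitive named above, hides a bit by quantizing a DCT coefficient onto one of two interleaved lattices; the decoder recovers the bit from whichever lattice the coefficient sits nearer to. A minimal sketch (the step size delta is an illustrative parameter, not the paper's tuned value):

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one bit in a coefficient by quantizing onto the lattice
    offset by bit * delta/2 (scalar quantization index modulation)."""
    offset = bit * delta / 2.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Recover the bit as the index of the nearer shifted lattice."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```

The step size trades off robustness against distortion: a larger delta survives coarser requantization (compression attacks) at the cost of more visible modification, which is why the paper adapts the rate to the frame's quantization parameter.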
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical partial differential equations at the production level. In Chombo, the library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
Probability estimation in arithmetic and adaptive-Huffman entropy coders.
Duttweiler, D L; Chamzas, C
1995-01-01
Entropy coders, such as Huffman and arithmetic coders, achieve compression by exploiting nonuniformity in the probabilities under which a random variable to be coded takes on its possible values. Practical realizations generally require running adaptive estimates of these probabilities. An analysis of the relationship between estimation quality and the resulting coding efficiency suggests a particular scheme, dubbed scaled-count, for obtaining such estimates. It can optimally balance estimation accuracy against a need for rapid response to changing underlying statistics. When the symbols being coded are from a binary alphabet, simple hardware and software implementations requiring almost no computation are possible. A scaled-count adaptive probability estimator of the type described in this paper is used in the arithmetic coder of the JBIG and JPEG image coding standards. PMID:18289975
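The scaled-count idea can be illustrated for a binary source: per-symbol counts are kept and periodically halved, so the probability estimate responds to changing underlying statistics rather than averaging over the entire past. A sketch (the rescaling limit and initial counts are illustrative assumptions; the JBIG/JPEG arithmetic coders use their own table-driven state-machine realization):

```python
class ScaledCountEstimator:
    """Scaled-count adaptive probability estimate for a binary source:
    counts are halved whenever their sum reaches a limit, balancing
    estimation accuracy against responsiveness to changing statistics."""
    def __init__(self, limit=64):
        self.c = [1, 1]        # Laplace-style initial counts avoid zero probabilities
        self.limit = limit

    def p_one(self):
        return self.c[1] / (self.c[0] + self.c[1])

    def update(self, bit):
        self.c[bit] += 1
        if self.c[0] + self.c[1] >= self.limit:
            # Rescale: recent symbols now dominate the estimate.
            self.c = [max(1, x // 2) for x in self.c]
```

The limit plays the role described in the abstract: a small limit tracks nonstationary sources quickly but estimates coarsely, while a large limit estimates accurately but adapts slowly.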
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli. PMID:23724797
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted to other channel conditions without extensive re-optimization. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, a modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization.
Organizational Adaptation and Higher Education.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Organizational adaptation and types of adaptation needed in academe in the future are reviewed and major conceptual approaches to organizational adaptation are presented. The probable environment that institutions will face in the future that will require adaptation is discussed. (MLW)
How Can Reed-Solomon Codes Improve Steganographic Schemes?
NASA Astrophysics Data System (ADS)
Fontaine, Caroline; Galand, Fabien
The use of syndrome coding in steganographic schemes tends to reduce distortion during embedding. The most complete model comes from wet paper codes [FGLS05], which allow positions that cannot be modified to be locked. Recently, BCH codes have been investigated and seem to be good candidates in this context [SW06]. Here, we show that Reed-Solomon codes are twice as good with respect to the number of locked positions and that, in fact, they are optimal. We propose two methods for managing these codes in this context: the first is based on a naive decoding process through Lagrange interpolation; the second, more efficient, is based on list decoding techniques and provides an adaptive trade-off between the number of locked positions and the embedding efficiency.
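Syndrome coding as a distortion-reducing embedding primitive can be illustrated with a toy binary example using the [7,4] Hamming parity-check matrix: the embedder changes at most one of seven cover bits so that the syndrome equals the 3-bit message. (The paper's Reed-Solomon construction works over larger alphabets and handles wet/locked positions; this binary sketch conveys only the principle.)

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j encodes
# the integer j+1 in binary, which makes locating the bit to flip trivial.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def embed(cover, msg):
    """Flip at most one cover bit so that H @ stego (mod 2) == msg."""
    syn = (H @ cover) % 2
    diff = syn ^ msg
    if not diff.any():
        return cover.copy()          # syndrome already matches: zero distortion
    col = diff[0] + 2 * diff[1] + 4 * diff[2] - 1   # column equal to diff
    stego = cover.copy()
    stego[col] ^= 1
    return stego

def extract(stego):
    """The receiver reads the message as the syndrome of the stego bits."""
    return (H @ stego) % 2
```

Three message bits are carried by at most one changed bit out of seven, which is exactly the embedding-efficiency gain syndrome coding is used for.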
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
Modeling anomalous radial transport in kinetic transport codes
NASA Astrophysics Data System (ADS)
Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.
2009-11-01
Anomalous transport is typically the dominant component of radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO, and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.
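A diffusion/convection model of the kind described can be written schematically as a radial flux acting on the distribution function (the notation here is generic, not the authors' exact formulation):

```latex
\Gamma(r, v) \;=\; -\,D(v)\,\frac{\partial f(r, v)}{\partial r} \;+\; V(v)\, f(r, v)
```

where $f(r,v)$ is the kinetic distribution function and $D(v)$, $V(v)$ are the anomalous diffusion and convection coefficients. Taking velocity moments of this flux with density and energy weights yields the gradient-driven transport matrix of fluid codes; the velocity dependence of $D$ and $V$ is what supplies the off-diagonal entries.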
NASA Astrophysics Data System (ADS)
Gao, Wen; Jiang, Minqiang; Yu, Haoping
2013-02-01
In this paper, we first review the lossless coding mode in version 1 of the recently finalized HEVC standard. We then provide a performance comparison between the lossless coding modes in the HEVC and MPEG-AVC/H.264 standards and show that HEVC lossless coding has limited coding efficiency. To improve its performance, several new coding tools that were contributed to JCT-VC but not adopted in version 1 of the HEVC standard are introduced. In particular, we discuss sample-based intra prediction and coding of residual coefficients in more detail. At the end, we briefly address a new class of coding tools, i.e., a dictionary-based coder, that is efficient in encoding screen content including graphics and text.
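Sample-based intra prediction can be illustrated in its simplest horizontal, DPCM-like form: each sample is predicted from its immediate left neighbour and only the residual is entropy-coded, so reconstruction is exact, which is the point of a lossless mode. A sketch (not the actual JCT-VC proposal, which covers multiple angular prediction directions):

```python
import numpy as np

def dpcm_residuals(row):
    """Sample-wise horizontal prediction: predict each sample by its
    left neighbour (0 for the first sample) and return the residuals."""
    pred = np.concatenate(([0], row[:-1]))
    return row - pred

def dpcm_reconstruct(res):
    """Invert the prediction: a running sum of residuals restores the
    original samples exactly (losslessly)."""
    return np.cumsum(res)
```

Because prediction operates sample by sample rather than from block boundaries, residual magnitudes shrink for smooth content, and the inverse is an exact integer operation, preserving losslessness.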