47 CFR 80.100 - Morse code requirement.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 47 (Telecommunication), Vol. 5, 2010-10-01. MARITIME SERVICES, Operating Requirements and Procedures, Operating Procedures-General, § 80.100 Morse code requirement. The code employed for telegraphy must be the Morse code specified in the Telegraph...
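The regulation above references the International Morse code; as an illustration, the letter table and a minimal encoder can be sketched in a few lines of Python (letters only; digits, punctuation, and prosigns are omitted):

```python
# Minimal International Morse encoder (letters only; illustrative sketch).
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def encode(text):
    """Encode a string of letters as Morse, separating letters by spaces."""
    return ' '.join(MORSE[c] for c in text.upper() if c in MORSE)
```

For example, `encode("SOS")` returns `"... --- ..."`.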
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 47 (Telecommunication), Vol. 5, 2010-10-01. § 80.355 Distress, urgency, safety, call and reply Morse code frequencies. This section describes the distress, urgency, safety, call and reply carrier frequencies assignable to stations for Morse code...
Distinct patterns of functional and structural neuroplasticity associated with learning Morse code.
Schmidt-Wilcke, T; Rosengarth, K; Luerding, R; Bogdahn, U; Greenlee, M W
2010-07-01
Learning is based on neuroplasticity, i.e. on the capability of the brain to adapt to new experiences. Different mechanisms of neuroplasticity have been described, ranging from synaptic remodeling to changes in complex neural circuitry. To further study the relationship between changes in neural activity and changes in gray matter density associated with learning, we performed a combined longitudinal functional and morphometric magnetic resonance imaging (MRI) study on healthy volunteers who learned to decipher Morse code. We investigated 16 healthy subjects using functional MR imaging (fMRI) and voxel-based morphometry (VBM) before and after they had learned to decipher Morse code. The same set of Morse-code signals was presented to participants pre- and post-training. We found an increase in task-specific neural activity in brain regions known to be critically involved in language perception and memory, such as the inferior parietal cortex bilaterally and the medial parietal cortex during Morse code deciphering. Furthermore we found an increase in gray matter density in the left occipitotemporal region, extending into the fusiform gyrus. Anatomically neighboring sites of functional and structural neuroplasticity were revealed in the left occipitotemporal/inferior temporal cortex, but these regions only marginally overlapped. Implications of this morpho-functional dissociation for learning concepts are discussed.
An Evaluation of Modality Preference Using a "Morse Code" Recall Task
ERIC Educational Resources Information Center
Hansen, Louise; Cottrell, David
2013-01-01
Advocates of modality preference posit that individuals have a dominant sense and that when new material is presented in this preferred modality, learning is enhanced. Despite the widespread belief in this position, there is little supporting evidence. In the present study, the authors implemented a Morse code-like recall task to examine whether…
47 CFR 80.357 - Working frequencies for Morse code and data transmission.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 47 (Telecommunication), Vol. 5, 2012-10-01. § 80.357 Working frequencies for Morse code and data... maritime stations for A1A, J2A, J2B (2000-27500 kHz band only), or J2D (2000-27500 kHz band only)... transmissions of meteorological and navigational warnings to ships. (3) Frequencies in the 2000-27500 kHz...
Calculations of the giant-dipole-resonance photoneutrons using a coupled EGS4-MORSE code
Liu, J.C.; Nelson, W.R.; Kase, K.R.; Mao, X.S.
1995-10-01
The production and transport of the photoneutrons from the giant-dipole-resonance reaction have been implemented in a coupled EGS4-MORSE code. The total neutron yield (including both the direct neutron and evaporation neutron components) is calculated by folding the photoneutron yield cross sections with the photon track length distribution in the target. Empirical algorithms based on the measurements have been developed to estimate the fraction and energy of the direct neutron component for each photon. The statistical theory in the EVAP4 code, incorporated as a MORSE subroutine, is used to determine the energies of the evaporation neutrons. These represent major improvements over other calculations that assumed no direct neutrons, a constant fraction of direct neutrons, monoenergetic direct neutrons, or a constant nuclear temperature for the evaporation neutrons. It was also assumed that the slow neutrons (< 2.5 MeV) are emitted isotropically and the fast neutrons are emitted anisotropically in the form 1 + C·sin²θ, which has a peak emission at 90°. Comparisons between the calculated and the measured photoneutron results (spectra of the direct, evaporation and total neutrons; nuclear temperatures; direct neutron fractions) for materials of lead, tungsten, tantalum and copper have been made. The results show that the empirical algorithms, albeit simple, can produce reasonable results over the photon energy range of interest.
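One ingredient of such a calculation, sampling evaporation-neutron energies from a Maxwellian spectrum f(E) ∝ E·exp(−E/T), can be sketched as follows; the nuclear temperature value is an illustrative assumption, and the actual EVAP4 treatment is more elaborate:

```python
import math, random

def sample_evaporation_energy(T, rng=random):
    """Sample E from the Maxwellian evaporation spectrum f(E) ~ E*exp(-E/T).

    The product of two uniform deviates gives a Gamma(2, T) variate,
    which has exactly this density; its mean is 2T.
    """
    return -T * math.log(rng.random() * rng.random())

random.seed(0)
T = 0.8  # assumed nuclear temperature in MeV (illustrative value)
samples = [sample_evaporation_energy(T) for _ in range(200_000)]
mean = sum(samples) / len(samples)  # expect ~2*T
```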
Telescope Adaptive Optics Code
Phillion, D.
2005-07-28
The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
Driver Code for Adaptive Optics
NASA Technical Reports Server (NTRS)
Rao, Shanti
2007-01-01
A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.
AEST: Adaptive Eigenvalue Stability Code
NASA Astrophysics Data System (ADS)
Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.
2002-11-01
An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include the non-ideal MHD effects in the global MHD stability calculation for both low and high n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of the independent solutions is employed, the rank of the matrices involved is just a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation effects, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem as in the GS2 code will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, the transport barrier physics in tokamak discharges.
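The shooting procedure for an eigenvalue problem can be illustrated on a toy Sturm-Liouville problem, y'' + λy = 0 with y(0) = y(1) = 0, whose smallest eigenvalue is π²; this sketch bisects on λ using the boundary residual y(1) and is not the AEST algorithm itself:

```python
import math

def shoot(lam, n=2000):
    """Integrate y'' = -lam*y with y(0)=0, y'(0)=1 by RK4; return y(1)."""
    h = 1.0 / n
    y, v = 0.0, 1.0
    f = lambda y, v: (v, -lam * y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def first_eigenvalue(lo=1.0, hi=20.0, tol=1e-8):
    """Bisect on lam until the boundary residual y(1) changes sign."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

`first_eigenvalue()` converges to approximately 9.8696, i.e. π².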
Bolle, Caroline; Gustin, Marie-Paule; Fau, Didier; Boivin, Georges; Exbrayat, Patrick; Grosgogeat, Brigitte
2016-01-01
The purpose of this study was to investigate peri-implant tissue adaptation on platform-switched implants with a Morse cone-type connection, after 3 and 12 weeks of healing in dogs. Ten weeks after mandibular premolar extractions, eight beagle dogs received three implants each. At each biopsy interval, four animals were sacrificed and biopsies were processed for histologic analysis. After 3 and 12 weeks of healing, the height of the peri-implant mucosa was 2.32 mm and 2.88 mm, respectively, whereas the bone level in relation to the implant platform was -0.39 mm and -0.67 mm, respectively. Within the limits of the present study, platform-switched implants exhibited reduced values of biologic width and marginal bone loss when compared with previous data.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
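The multiplier idea can be sketched as follows, using an assumed one-parameter error model (perceptual error in block b proportional to the multiplier divided by a masking factor), not the paper's actual metric; flattening the modeled error then fixes each multiplier directly:

```python
# Sketch: per-block quantization multipliers chosen to flatten a modeled
# perceptual error. Assumes error in block b scales as m_b / mask_b, where
# mask_b is a contrast-masking factor; both models are illustrative
# assumptions, not the paper's DCT-domain metric.

def flat_error_multipliers(masks, target_error=1.0):
    """Pick m_b = target_error * mask_b, so the modeled per-block error
    m_b / mask_b equals target_error in every block."""
    return [target_error * m for m in masks]

masks = [0.5, 1.0, 2.0, 4.0]   # heavier masking tolerates coarser quantization
mults = flat_error_multipliers(masks, target_error=1.5)
errors = [m / k for m, k in zip(mults, masks)]   # modeled errors, all equal
```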
ERIC Educational Resources Information Center
Bruce, Guy V.
1985-01-01
Mechanically-minded middle school students who have been studying electromagnetism can construct inexpensive telegraphs resembling Samuel Morse's 1844 invention. Instructions (with diagrams), list of materials needed, and suggestions are given for a simple telegraph and for a two-way system. (DH)
Gerber, Samuel; Rübel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-01
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression approaches typically introduce a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse-Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this paper introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to over-fitting. The Morse-Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse-Smale regression. Supplementary materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse-Smale complex approximation and additional tables for the climate-simulation study. PMID:23687424
Adaptable recursive binary entropy coding technique
NASA Astrophysics Data System (ADS)
Kiely, Aaron B.; Klimesh, Matthew A.
2002-07-01
We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
Adaptive down-sampling video coding
NASA Astrophysics Data System (ADS)
Wang, Ren-Jie; Chien, Ming-Chen; Chang, Pao-Chi
2010-01-01
Down-sampling coding, which sub-samples the image and encodes the smaller sized images, is one of the solutions to raise the image quality at insufficiently high rates. In this work, we propose an Adaptive Down-Sampling (ADS) coding for H.264/AVC. The overall system distortion can be analyzed as the sum of the down-sampling distortion and the coding distortion. The down-sampling distortion is mainly the loss of the high frequency components that is highly dependent on the spatial difference. The coding distortion can be derived from the classical Rate-Distortion theory. For a given rate and a video sequence, the optimum down-sampling resolution-ratio can be derived by utilizing optimization theory to minimize the system distortion based on the models of the two distortions. This optimal resolution-ratio is used in both the down-sampling and up-sampling processes in the ADS coding scheme. As a result, the rate-distortion performance of ADS coding is always higher than that of fixed-ratio coding or H.264/AVC by 2 to 4 dB at low to medium rates.
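A sketch of the ratio selection, with assumed (not the paper's fitted) component-distortion models:

```python
# Sketch: pick the down-sampling ratio r minimizing a modeled total
# distortion D(r) = D_down(r) + D_code(r, R). Both component models below
# are illustrative assumptions for the selection mechanism only.

def total_distortion(r, rate, hf_energy=4.0, k=1.0):
    d_down = hf_energy * (1.0 - r)   # high-frequency loss grows as r shrinks
    d_code = k * r / rate            # more pixels per bit hurts at low rates
    return d_down + d_code

def best_ratio(rate, candidates=(0.25, 0.5, 0.75, 1.0)):
    """Exhaustive search over candidate resolution ratios."""
    return min(candidates, key=lambda r: total_distortion(r, rate))
```

At a low rate the model favors aggressive down-sampling; at a high rate it keeps full resolution, matching the behavior the abstract describes.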
Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding
2008-04-01
adaptive binary arithmetic coding (CABAC) [7], and context-based adaptive variable length coding (CAVLC) [3], should be adaptively adopted for advancing...Sep. 2006. [7] H. Schwarz, D. Marpe and T. Wiegand, Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard, IEEE
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
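The one-dimensional redistribution step can be illustrated with a simple equidistribution sketch, clustering points where a gradient-based weight is large; SAGE's actual spring-force formulation with tridiagonal solves is more elaborate:

```python
# Sketch of the 1-D step of solution-adaptive gridding: redistribute points
# along a line so that spacing shrinks where the flow gradient is large
# (equidistribution of the weight w = 1 + |grad|).

def adapt_line(x, grad, npts):
    """Place npts points on [x[0], x[-1]] equidistributing w = 1 + |grad|."""
    w = [1.0 + abs(g) for g in grad]
    # cumulative weight along the original grid (trapezoidal rule)
    cum = [0.0]
    for i in range(1, len(x)):
        cum.append(cum[-1] + 0.5 * (w[i] + w[i - 1]) * (x[i] - x[i - 1]))
    total = cum[-1]
    # invert the cumulative weight at equally spaced levels
    new_x, j = [], 0
    for k in range(npts):
        level = total * k / (npts - 1)
        while j < len(cum) - 2 and cum[j + 1] < level:
            j += 1
        t = (level - cum[j]) / (cum[j + 1] - cum[j])
        new_x.append(x[j] + t * (x[j + 1] - x[j]))
    return new_x

# Uniform grid with a sharp "shock" of high gradient near x = 0.5:
x = [i / 100 for i in range(101)]
grad = [50.0 if abs(xi - 0.5) < 0.05 else 0.0 for xi in x]
new_x = adapt_line(x, grad, 21)   # points cluster around the gradient region
```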
Adaptive discrete cosine transform based image coding
NASA Astrophysics Data System (ADS)
Hu, Neng-Chung; Luoh, Shyan-Wen
1996-04-01
In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one- half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.
Rate-distortion optimized adaptive transform coding
NASA Astrophysics Data System (ADS)
Lim, Sung-Chang; Kim, Dae-Yeon; Jeong, Seyoon; Choi, Jin Soo; Choi, Haechul; Lee, Yung-Lyul
2009-08-01
We propose a rate-distortion optimized transform coding method that adaptively employs either integer cosine transform that is an integer-approximated version of discrete cosine transform (DCT) or integer sine transform (IST) in a rate-distortion sense. The DCT that has been adopted in most video-coding standards is known as a suboptimal substitute for the Karhunen-Loève transform. However, according to the correlation of a signal, an alternative transform can achieve higher coding efficiency. We introduce a discrete sine transform (DST) that achieves the high-energy compactness in a correlation coefficient range of -0.5 to 0.5 and is applied to the current design of H.264/AVC (advanced video coding). Moreover, to avoid the encoder and decoder mismatch and make the implementation simple, an IST that is an integer-approximated version of the DST is developed. The experimental results show that the proposed method achieves a Bjøntegaard Delta-RATE gain up to 5.49% compared to Joint model 11.0.
MORSE: current status of the two Oak Ridge versions
Emmett, M. B.; West, J. T.
1980-01-01
There are two versions of the MORSE Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CG is the most well-known and has undergone extensive use for many years. Development of MORSE-SGC was originally begun in order to restructure the cross section handling and thereby save storage, but the more recent goal has been to incorporate some of the KENO ability to handle multiple arrays in the geometry and to improve on 3-D plotting capabilities. New capabilities recently added to MORSE-CG include a generalized form for a Klein Nishina estimator, a new version of BREESE, the albedo package, which now allows multiple albedo materials and a revised DOMINO which handles DOT-IV tapes.
Adaptive Dynamic Event Tree in RAVEN code
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur
2014-11-01
RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a high modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hyper Cube, Grid, Adaptive and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET me
ICAN Computer Code Adapted for Building Materials
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
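The kind of micromechanics relation such a code builds on can be illustrated with the classical Voigt and Reuss bounds on the effective modulus of a particle-reinforced composite (values below are illustrative, not from ICAN/PART):

```python
# Sketch: elementary bounds on the effective elastic modulus of a
# two-phase particle-reinforced composite. Em, Ep are the matrix and
# particle moduli; vp is the particle volume fraction.

def voigt(Em, Ep, vp):
    """Rule-of-mixtures (iso-strain) upper bound."""
    return (1 - vp) * Em + vp * Ep

def reuss(Em, Ep, vp):
    """Inverse rule-of-mixtures (iso-stress) lower bound."""
    return 1.0 / ((1 - vp) / Em + vp / Ep)
```

The true effective modulus of any isotropic mixture lies between the two bounds, which is why they serve as quick sanity checks on more refined micromechanics models.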
GAMER: GPU-accelerated Adaptive MEsh Refinement code
NASA Astrophysics Data System (ADS)
Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong
2016-12-01
GAMER (GPU-accelerated Adaptive MEsh Refinement) serves as a general-purpose adaptive mesh refinement + GPU framework and solves hydrodynamics with self-gravity. The code supports adaptive mesh refinement (AMR), hydrodynamics with self-gravity, and a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can be easily modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Adaptive face coding and discrimination around the average face.
Rhodes, Gillian; Maloney, Laurence T; Turner, Jenny; Ewing, Louise
2007-03-01
Adaptation paradigms highlight the dynamic nature of face coding and suggest that identity is coded relative to an average face that is tuned by experience. In low-level vision, adaptive coding can enhance sensitivity to differences around the adapted level. We investigated whether sensitivity to differences around the average face is similarly enhanced. Converging evidence from three paradigms showed no enhancement. Discrimination of small interocular spacing differences was not better for faces close to the average (Study 1). Nor was perceived similarity reduced for face pairs close to (spanning) the average (Study 2). On the contrary, these pairs were judged most similar. Maximum likelihood perceptual difference scaling (Studies 3 and 4) confirmed that sensitivity to differences was reduced, not enhanced, around the average. We conclude that adaptive face coding does not enhance discrimination around the average face.
Adaptive Quantization Parameter Cascading in HEVC Hierarchical Coding.
Zhao, Tiesong; Wang, Zhou; Chen, Chang Wen
2016-04-20
The state-of-the-art High Efficiency Video Coding (HEVC) standard adopts a hierarchical coding structure to improve its coding efficiency. This allows for the Quantization Parameter Cascading (QPC) scheme that assigns Quantization Parameters (Qps) to different hierarchical layers in order to further improve the Rate-Distortion (RD) performance. However, only static QPC schemes have been suggested in HEVC test model (HM), which are unable to fully explore the potentials of QPC. In this paper, we propose an adaptive QPC scheme for HEVC hierarchical structure to code natural video sequences characterized by diversified textures, motions and encoder configurations. We formulate the adaptive QPC scheme as a non-linear programming problem and solve it in a scientifically sound way with a manageable low computational overhead. The proposed model addresses a generic Qp assignment problem of video coding. Therefore, it also applies to Group-Of-Picture (GOP)- level, frame-level and Coding Unit (CU)-level Qp assignments. Comprehensive experiments have demonstrated the proposed QPC scheme is able to adapt quickly to different video contents and coding configurations while achieving noticeable RD performance enhancement over all static and adaptive QPC schemes under comparison as well as HEVC default frame-level rate control. We have also made valuable observations on the distributions of adaptive QPC sets in videos of different types of contents, which provide useful insights on how to further improve static QPC schemes.
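The cascading idea itself is simple to sketch: a static scheme adds a fixed Qp offset per hierarchical layer (the offsets below are illustrative assumptions), whereas the paper's adaptive scheme solves for the offsets per sequence:

```python
# Sketch: static quantization-parameter cascading over hierarchical layers.
# Offsets are illustrative; HEVC Qps are clipped to the range [0, 51].

def cascade_qps(base_qp, offsets=(0, 1, 2, 3)):
    """Assign a Qp to each hierarchical layer (layer 0 = lowest)."""
    return [min(51, max(0, base_qp + off)) for off in offsets]
```

Higher layers, which are referenced less, get coarser quantization; an adaptive scheme would tune the offsets to content and configuration instead of fixing them.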
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
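The adaptability described, a per-bit probability-of-zero estimate that depends on previously encoded bits, can be sketched with a count-based estimator; this sketch reports the ideal code length Σ −log₂ p rather than implementing the coder itself:

```python
import math

def adaptive_cost(bits):
    """Ideal code length (in bits) under a Krichevsky-Trofimov-style
    count-based estimate of P(next bit = b) from the bits seen so far."""
    zeros = ones = 0.5   # KT initial counts
    cost = 0.0
    for b in bits:
        p = (ones if b else zeros) / (zeros + ones)
        cost += -math.log2(p)
        if b:
            ones += 1
        else:
            zeros += 1
    return cost

biased = [0] * 90 + [1] * 10
cost = adaptive_cost(biased)   # well under the 100 bits of the raw sequence
```

Because the estimate adapts, a 90/10-biased 100-bit sequence costs roughly its entropy (about 47 bits) plus a small learning overhead, with no prior knowledge of the bias.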
Weighted adaptively grouped multilevel space time trellis codes
NASA Astrophysics Data System (ADS)
Jain, Dharmvir; Sharma, Sanjay
2015-05-01
In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance by optimally grouping the transmit antennas and properly weighting the transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmit antennas and provide a beamforming scheme by allocating different powers to the transmit antennas. Simulation results show that WAGMLSTTCs provide an improvement in error performance of 2.6 dB over GMLSTTCs.
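The weighting component can be illustrated with a maximum-ratio-transmission-style sketch that phase-aligns and power-weights the antennas under a total power budget; the paper's grouping and weighting rule for trellis-coded layers is more involved:

```python
import math

def mrt_weights(h, total_power=1.0):
    """Maximum-ratio-transmission-style weights: conjugate (phase-align)
    each antenna's channel and scale so the transmit power sums to the
    budget. Illustrative of CSIT-based weighting, not the paper's scheme."""
    norm = math.sqrt(sum(abs(x) ** 2 for x in h))
    scale = math.sqrt(total_power)
    return [scale * x.conjugate() / norm for x in h]

h = [1 + 1j, 0.5 - 0.2j, 2 + 0j, 0 - 1j]   # assumed channel coefficients
w = mrt_weights(h)
power = sum(abs(x) ** 2 for x in w)         # equals the 1.0 budget
```

Conjugating the channel makes every antenna's contribution add coherently at the receiver, which is the sense in which stronger antennas are allocated more of the power budget.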
MORSE Monte Carlo shielding calculations for the zirconium hydride reference reactor
NASA Technical Reports Server (NTRS)
Burgart, C. E.
1972-01-01
Verification of DOT-SPACETRAN transport calculations of a lithium hydride and tungsten shield for a SNAP reactor was performed using the MORSE (Monte Carlo) code. Transport of both neutrons and gamma rays was considered. Importance sampling was utilized in the MORSE calculations. Several quantities internal to the shield, as well as the dose at several points outside of the configuration, were in satisfactory agreement with the corresponding DOT calculations.
Adaptive Modulation and Coding for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Hadi, S. S.; Tiong, T. C.
2015-04-01
Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeting to become the first global mobile phone standard despite the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the condition of the mobile wireless channel, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.
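The AMC decision itself reduces to a threshold lookup from reported channel quality to a modulation/coding pair. The sketch below uses invented SNR thresholds; real LTE maps CQI indices to MCS via 3GPP tables (TS 36.213), so these numbers are illustrative only:

```python
# Hypothetical SNR thresholds (dB) -> (modulation, code rate).
MCS_TABLE = [
    (2.0,  ("QPSK",  1 / 3)),
    (8.0,  ("QPSK",  2 / 3)),
    (12.0, ("16QAM", 1 / 2)),
    (16.0, ("16QAM", 3 / 4)),
    (20.0, ("64QAM", 2 / 3)),
]

def select_mcs(snr_db):
    # Pick the most aggressive modulation/coding pair whose threshold the
    # reported SNR still clears; fall back to the most robust pair.
    chosen = MCS_TABLE[0][1]
    for threshold, mcs in MCS_TABLE:
        if snr_db >= threshold:
            chosen = mcs
    return chosen
```

A poor channel thus gets low-order modulation with heavy FEC, while a clean channel gets 64QAM at a high code rate — trading robustness for spectral efficiency exactly as the abstract describes.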
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, from CSI-based modified JPEG and standard JPEG, under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
The multidimensional Self-Adaptive Grid code, SAGE, version 2
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1995-01-01
This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.
A trellis-searched APC (adaptive predictive coding) speech coder
Malone, K. T.; Fischer, T. R. (Dept. of Electrical and Computer Engineering)
1990-01-01
In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.
Adaptive feature extraction using sparse coding for machinery fault diagnosis
NASA Astrophysics Data System (ADS)
Liu, Haining; Liu, Chengliang; Huang, Yixiang
2011-02-01
In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
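Of the two core problems named above, coefficient solving is the easier to illustrate: given a fixed dictionary, a sparse representation can be found greedily. Below is a minimal matching-pursuit sketch over a toy unit-norm dictionary; it stands in for the paper's solver and assumes nothing about the learned bearing-fault atoms:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_atoms):
    # Greedy sparse coding: pick the unit-norm atom most correlated with the
    # residual, subtract its contribution, and repeat.
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        best, best_c = None, 0.0
        for k, atom in enumerate(dictionary):
            c = dot(residual, atom)
            if abs(c) > abs(best_c):
                best, best_c = k, c
        if best is None:          # residual orthogonal to every atom
            break
        coeffs[best] = coeffs.get(best, 0.0) + best_c
        residual = [r - best_c * a for r, a in zip(residual, dictionary[best])]
    return coeffs, residual

# Toy orthonormal dictionary; the "signal" is 3*atom0 + 1*atom1.
atoms = [[1.0, 0.0], [0.0, 1.0]]
coeffs, res = matching_pursuit([3.0, 1.0], atoms, 2)
```

In the fault-diagnosis scheme, the activations (`coeffs`) over a redundant dictionary of class-specific atoms would then serve as the sparse features fed to the LDA classifier.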
The multidimensional self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1992-01-01
This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.
Peripheral adaptation codes for high odor concentration in glomeruli.
Lecoq, Jérôme; Tiret, Pascale; Charpak, Serge
2009-03-11
Adaptation is a general property of sensory receptor neurons and has been extensively studied in isolated cell preparation of olfactory receptor neurons. In contrast, little is known about the conditions under which peripheral adaptation occurs in the CNS during odorant stimulation. Here, we used two-photon laser-scanning microscopy and targeted extracellular recording in freely breathing anesthetized rats to investigate the correlate of peripheral adaptation at the first synapse of the olfactory pathway in olfactory bulb glomeruli. We find that during sustained stimulation at high concentration, odorants can evoke local field potential (LFP) postsynaptic responses that rapidly adapt with time, some within two inhalations. Simultaneous measurements of LFP and calcium influx at olfactory receptor neuron terminals reveal that postsynaptic adaptation is associated with a decrease in odorant-evoked calcium response, suggesting that it results from a decrease in glutamate release. This glomerular adaptation was concentration-dependent and did not change the glomerular input-output curve. In addition, in situ application of antagonists of either ionotropic glutamate receptors or metabotropic GABA(B) receptors did not affect this adaptation, thus discarding the involvement of local presynaptic inhibition. Glomerular adaptation, therefore, reflects the response decline of olfactory receptor neurons to sustained odorant. We postulate that peripheral fast adaptation is a means by which glomerular output codes for high concentration of odor.
The Rotating Morse-Pekeris Oscillator Revisited
ERIC Educational Resources Information Center
Zuniga, Jose; Bastida, Adolfo; Requena, Alberto
2008-01-01
The Morse-Pekeris oscillator model for the calculation of the vibration-rotation energy levels of diatomic molecules is revisited. This model is based on the realization of a second-order exponential expansion of the centrifugal term about the minimum of the vibrational Morse oscillator and the subsequent analytical resolution of the resulting…
Adaptive EZW coding using a rate-distortion criterion
NASA Astrophysics Data System (ADS)
Yin, Che-Yi
2001-07-01
This work presents a new method that improves on the EZW image coding algorithm. The standard EZW image coder uses a uniform quantizer with a threshold (deadzone) that is identical in all subbands. The quantization step sizes are not optimized under the rate-distortion sense. We modify the EZW by applying the Lagrange multiplier to search for the best step size for each subband and allocate the bit rate for each subband accordingly. Then we implement the adaptive EZW codec to code the wavelet coefficients. Two coding environments, independent and dependent, are considered for the optimization process. The proposed image coder retains all the good features of the EZW, namely, embedded coding, progressive transmission, order of the important bits, and enhances it through the rate-distortion optimization with respect to the step sizes.
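The per-subband Lagrangian step-size search can be sketched as a direct minimization of D + λR over a few candidate steps. The distortion/rate model below (squared error plus a crude entropy estimate under uniform quantization) is an illustrative assumption, not the paper's codec:

```python
import math

def dist_rate(coeffs, step):
    # Distortion and an entropy-based rate estimate for one subband under
    # uniform quantization with the given step size (toy model).
    q = [round(c / step) for c in coeffs]
    dist = sum((c - qi * step) ** 2 for c, qi in zip(coeffs, q))
    counts = {}
    for qi in q:
        counts[qi] = counts.get(qi, 0) + 1
    n = len(q)
    rate = -sum(k * math.log2(k / n) for k in counts.values())
    return dist, rate

def best_step(coeffs, lam, steps):
    # Lagrangian search: minimize D + lam * R over candidate step sizes.
    def cost(s):
        d, r = dist_rate(coeffs, s)
        return d + lam * r
    return min(steps, key=cost)

band = [0.1, -2.3, 0.05, 4.2, -0.4, 1.1, 0.0, -3.3]
step = best_step(band, lam=2.0, steps=[0.25, 0.5, 1.0, 2.0])
```

Running this independently for each subband mirrors the "independent" optimization environment the abstract mentions; the "dependent" case would couple the costs across subbands.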
Link-Adaptive Distributed Coding for Multisource Cooperation
NASA Astrophysics Data System (ADS)
Cano, Alfonso; Wang, Tairan; Ribeiro, Alejandro; Giannakis, Georgios B.
2007-12-01
Combining multisource cooperation and link-adaptive regenerative techniques, a novel protocol is developed capable of achieving diversity order up to the number of cooperating users and large coding gains. The approach relies on a two-phase protocol. In Phase 1, cooperating sources exchange information-bearing blocks, while in Phase 2, they transmit reencoded versions of the original blocks. Different from existing approaches, participation in the second phase does not require correct decoding of Phase 1 packets. This allows relaying of soft information to the destination, thus increasing coding gains while retaining diversity properties. For any reencoding function the diversity order is expressed as a function of the rank properties of the distributed coding strategy employed. This result is analogous to the diversity properties of colocated multi-antenna systems. Particular cases include repetition coding, distributed complex field coding (DCFC), distributed space-time coding, and distributed error-control coding. Rate, diversity, complexity and synchronization issues are elaborated. DCFC emerges as an attractive choice because it offers high-rate, full spatial diversity, and relaxed synchronization requirements. Simulations confirm analytically established assessments.
Adaptive directional lifting-based wavelet transform for image coding.
Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang
2007-02-01
We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.
Motion-compensated wavelet video coding using adaptive mode selection
NASA Astrophysics Data System (ADS)
Zhai, Fan; Pappas, Thrasyvoulos N.
2004-01-01
A motion-compensated wavelet video coder is presented that uses adaptive mode selection (AMS) for each macroblock (MB). The block-based motion estimation is performed in the spatial domain, and an embedded zerotree wavelet coder (EZW) is employed to encode the residue frame. In contrast to other motion-compensated wavelet video coders, where all the MBs are forced to be in INTER mode, we construct the residue frame by combining the prediction residual of the INTER MBs with the coding residual of the INTRA and INTER_ENCODE MBs. Different from INTER MBs, which are not coded, the INTRA and INTER_ENCODE MBs are encoded separately by a DCT coder. By adaptively selecting the quantizers of the INTRA and INTER_ENCODE coded MBs, our goal is to equalize the characteristics of the residue frame in order to improve the overall coding efficiency of the wavelet coder. The mode selection is based on the variance of the MB, the variance of the prediction error, and the variance of the neighboring MBs' residual. Simulations show that the proposed motion-compensated wavelet video coder achieves a gain of around 0.7-0.8 dB in PSNR over MPEG-2 TM5, and a PSNR comparable to other 2D motion-compensated wavelet-based video codecs. It also provides potential visual quality improvement.
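The variance-driven mode decision can be caricatured as a small rule comparing the three quantities the abstract names: the MB variance, the prediction-error variance, and the neighbors' residual variance. The thresholds and rules below are invented for illustration, not the paper's actual decision logic:

```python
def select_mode(mb_var, pred_err_var, neighbor_res_var, t_skip=25.0):
    # Hypothetical mode decision from the three variances; `t_skip` is an
    # invented threshold, not a value from the paper.
    if pred_err_var >= mb_var:
        return "INTRA"            # prediction does not help: code the MB itself
    if pred_err_var < t_skip and pred_err_var < neighbor_res_var:
        return "INTER"            # cheap residual: leave it to the wavelet coder
    return "INTER_ENCODE"         # code the residual separately (DCT)
```

The point of such a rule is the one made in the abstract: routing atypical MBs out of the wavelet residue frame keeps that frame's statistics homogeneous, which is what zerotree coding exploits.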
Adaptive coded aperture imaging: progress and potential future applications
NASA Astrophysics Data System (ADS)
Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.
2011-09-01
Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade-space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results will be presented, based on last year's work, of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).
Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways
Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.
2013-01-01
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101
Application of MORSE to radiation analysis of nuclear flight propulsion modules
NASA Technical Reports Server (NTRS)
Woolson, W. A.
1972-01-01
Several modifications and additions were made to the multigroup Monte Carlo code (MORSE) to implement its use in a computational procedure for performing radiation analyses of NERVA nuclear flight propulsion modules. These changes include the incorporation of a new general geometry module; the inclusion of an expectation tracklength estimator; and the option to obtain source information from two-dimensional discrete ordinates calculations. Computations comparing MORSE and a point cross section Monte Carlo code, COHORT, were made in which a coupled discrete ordinates/Monte Carlo procedure was used to calculate the gamma dose rate at tank top locations of a typical propulsion module. The dose rates obtained from the MORSE computation agreed with the dose rates obtained from the COHORT computation to within the limits of the statistical accuracy of the calculations.
Adaptive shape coding for perceptual decisions in the human brain
Kourtzi, Zoe; Welchman, Andrew E.
2015-01-01
In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511
Adaptive neural coding: from biological to behavioral decision-making
Louie, Kenway; Glimcher, Paul W.; Webb, Ryan
2015-01-01
Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
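The equal-error-distribution idea — move mesh points so each interval carries a similar share of the estimated solution error — can be sketched in one dimension. Here |Δf| serves as a crude error proxy, and the smoothness and continuity controls of the real SAGE algorithm are omitted:

```python
def adapt_grid(x, f, n_new):
    # Redistribute 1-D mesh points so each new interval carries roughly equal
    # "error" weight, with a small floor so flat regions keep some points.
    w = [abs(f[i + 1] - f[i]) + 1e-6 for i in range(len(x) - 1)]
    cum = [0.0]
    for wi in w:
        cum.append(cum[-1] + wi)
    total = cum[-1]
    targets = [total * k / (n_new - 1) for k in range(n_new)]
    out, j = [], 0
    for t in targets:
        # Locate the interval containing cumulative weight t, interpolate x.
        while j < len(cum) - 2 and cum[j + 1] < t:
            j += 1
        seg = cum[j + 1] - cum[j]
        frac = (t - cum[j]) / seg
        out.append(x[j] + frac * (x[j + 1] - x[j]))
    return out

# A step in f between x=1 and x=2 pulls the interior points into that region.
xs = adapt_grid([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 1.0], 4)
```

The endpoints stay fixed while interior points cluster where the gradient (and hence the presumed error) is largest — the qualitative behavior the abstract describes.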
Conforming Morse-Smale Complexes
Gyulassy, Attila; Gunther, David; Levine, Joshua A.; Tierny, Julien; Pascucci, Valerio
2014-08-11
Morse-Smale (MS) complexes have been gaining popularity as a tool for feature-driven data analysis and visualization. However, the quality of their geometric embedding and the sole dependence on the input scalar field data can limit their applicability when expressing application-dependent features. In this paper we introduce a new combinatorial technique to compute an MS complex that conforms to both an input scalar field and an additional, prior segmentation of the domain. The segmentation constrains the MS complex computation guaranteeing that boundaries in the segmentation are captured as separatrices of the MS complex. We demonstrate the utility and versatility of our approach with two applications. First, we use streamline integration to determine numerically computed basins/mountains and use the resulting segmentation as an input to our algorithm. This strategy enables the incorporation of prior flow path knowledge, effectively resulting in an MS complex that is as geometrically accurate as the employed numerical integration. Our second use case is motivated by the observation that often the data itself does not explicitly contain features known to be present by a domain expert. We introduce edit operations for MS complexes so that a user can directly modify their features while maintaining all the advantages of a robust topology-based representation.
Adaptive zero-tree structure for curved wavelet image coding
NASA Astrophysics Data System (ADS)
Zhang, Liang; Wang, Demin; Vincent, André
2006-02-01
We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and the set partitioning in hierarchical trees (SPIHT) coder the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images show the effectiveness of the proposed adaptive zero-tree structure for encoding the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
An Adaptive Motion Estimation Scheme for Video Coding
Gao, Yuan; Jia, Kebin
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves an excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
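For reference, the baseline that pattern-based searches such as UMHexagonS approximate cheaply is exhaustive block matching: score every candidate displacement in a window by a distortion measure such as SAD. A minimal full search on toy integer frames:

```python
def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, y, x, n):
    return [row[x:x + n] for row in frame[y:y + n]]

def full_search(cur, ref, y, x, n=4, r=2):
    # Exhaustive block matching over a +/- r search window around (y, x).
    target = block(cur, y, x, n)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(ref) - n and 0 <= xx <= len(ref[0]) - n:
                cost = sad(target, block(ref, yy, xx, n))
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best, best_cost

# Toy frames: `cur` is `ref` shifted by one pixel down and right.
ref = [[8 * y + x for x in range(8)] for y in range(8)]
cur = [[8 * (y + 1) + (x + 1) for x in range(8)] for y in range(8)]
mv, cost = full_search(cur, ref, 2, 2)
```

Fast ME algorithms win by visiting only a small, adaptively chosen subset of these candidate points, which is exactly the search-point reduction the paper quantifies.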
Cooperative solutions coupling a geometry engine and adaptive solver codes
NASA Technical Reports Server (NTRS)
Dickens, Thomas P.
1995-01-01
Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
MORSE/STORM: A generalized albedo option for Monte Carlo calculations
Gomes, I.C.; Stevens, P.N.
1991-09-01
The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs.
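The albedo idea — replace detailed transport inside a wall with a single reflection event — can be caricatured with a toy Monte Carlo in which each wall interaction reflects the particle with probability equal to the albedo. This grossly simplifies MORSE/BREESE (no angles, energy groups, spatial dependence, or statistical weights):

```python
import random

def transmitted_fraction(albedo, n_bounces, trials=20000, seed=1):
    # Analog toy model: a particle traversing a duct undergoes `n_bounces`
    # wall interactions; each "albedo event" reflects it with probability
    # `albedo`, otherwise it is absorbed. The exact answer is albedo**n_bounces.
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        alive = True
        for _ in range(n_bounces):
            if rng.random() >= albedo:
                alive = False
                break
        survived += alive
    return survived / trials
```

Production codes instead multiply the particle's statistical weight by the albedo at each event rather than killing particles, removing this source of variance — one reason the albedo option improves computational efficiency so markedly.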
Long non-coding RNAs in innate and adaptive immunity
Aune, Thomas M.; Spurlock, Charles F.
2015-01-01
Long noncoding RNAs (lncRNAs) represent a newly discovered class of regulatory molecules that impact a variety of biological processes in cells and organ systems. In humans, it is estimated that there may be more than twice as many lncRNA genes than protein-coding genes. However, only a handful of lncRNAs have been analyzed in detail. In this review, we describe expression and functions of lncRNAs that have been demonstrated to impact innate and adaptive immunity. These emerging paradigms illustrate remarkably diverse mechanisms that lncRNAs utilize to impact the transcriptional programs of immune cells required to fight against pathogens and maintain normal health and homeostasis. PMID:26166759
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun; MacFadyen, Andrew I. (Princeton, Institute for Advanced Study)
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
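Of the ingredients named here, the third-order TVD Runge-Kutta integrator is easy to show in isolation. Below is the standard Shu-Osher form applied to a scalar ODE as a sanity check; the real code applies the same stepping to the spatially discretized SRHD equations:

```python
import math

def tvd_rk3(u, rhs, dt):
    # Shu-Osher third-order TVD Runge-Kutta step for du/dt = rhs(u):
    # a convex combination of forward-Euler stages, which preserves the
    # TVD property of the underlying spatial scheme.
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3 + 2 / 3 * (u2 + dt * rhs(u2))

# Sanity check on du/dt = -u, whose exact solution is exp(-t).
u, dt = 1.0, 0.1
for _ in range(10):
    u = tvd_rk3(u, lambda v: -v, dt)
err = abs(u - math.exp(-1.0))   # third order: error on the order of 1e-5
```

Because each stage is a convex combination of Euler steps, the scheme introduces no new extrema, which is why it pairs safely with WENO and PPM reconstructions.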
Morse oscillator propagator in the high temperature limit I: Theory
NASA Astrophysics Data System (ADS)
Toutounji, Mohamad
2017-02-01
In an earlier work of the author, the time evolution of the Morse oscillator was studied analytically and exactly at low temperatures, whereupon optical correlation functions were calculated using Morse oscillator coherent states. Here, the Morse oscillator propagator in the high temperature limit is derived and a closed form of its corresponding canonical partition function is obtained. Both diagonal and off-diagonal forms of the Morse oscillator propagator are derived in the high temperature limit. Partition functions of diatomic molecules are calculated.
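For context, the Morse bound-state spectrum E_n = ω_e(n + 1/2) − ω_e·x_e(n + 1/2)² contains only finitely many levels, so the canonical partition function over bound states is a finite sum. A sketch using illustrative spectroscopic constants (approximately those of H2, in cm⁻¹); this is a direct level sum, not the closed form derived in the paper:

```python
import math

def morse_levels(we, wexe):
    """Bound-state energies E_n = we*(n + 1/2) - wexe*(n + 1/2)**2 (cm^-1);
    the last bound level is n_max = floor(we/(2*wexe) - 1/2)."""
    n_max = int(we / (2.0 * wexe) - 0.5)
    return [we * (n + 0.5) - wexe * (n + 0.5) ** 2 for n in range(n_max + 1)]

def morse_partition_function(we, wexe, T):
    """Canonical partition function as a finite sum over bound states,
    with energies measured from the ground state."""
    kB = 0.6950348  # Boltzmann constant in cm^-1 per kelvin
    E = morse_levels(we, wexe)
    return sum(math.exp(-(En - E[0]) / (kB * T)) for En in E)
```

At low temperature the sum tends to 1 (only the ground state is populated), and it grows with T as higher levels become thermally accessible.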
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity compared with sampling methods. PMID:23750314
Seaborg, David M
2010-08-01
The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of "adaptive bridges," neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept.
Nonoverlap Property of the Thue-Morse Sequence
2010-04-20
Fibonacci Numbers & Applic., 2010. Cusick, T.W. (Department of Mathematics, The State University of New York, Buffalo, NY); Stănică, P. Abstract: In this note, we provide a new proof for the nonoverlap property of the Thue-Morse sequence using a Boolean functions approach.
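The nonoverlap (overlap-freeness) property states that the Thue-Morse sequence contains no factor of the form axaxa, equivalently no block of length 2p + 1 with period p. A brute-force finite check of this property, independent of the Boolean-functions proof referenced above:

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence: t_k is the parity of the
    number of 1 bits in the binary expansion of k."""
    return [bin(k).count("1") & 1 for k in range(n)]

def has_overlap(s):
    """True if s contains an overlap, i.e. a block of length 2p + 1 that
    has period p for some p >= 1 (the pattern axaxa)."""
    n = len(s)
    for p in range(1, n // 2 + 1):
        for i in range(n - 2 * p):
            if all(s[i + j] == s[i + j + p] for j in range(p + 1)):
                return True
    return False
```

For example, [0, 0, 0] is an overlap (a = 0 with x empty), while every prefix of the Thue-Morse sequence is overlap-free since the infinite sequence is.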
2012-03-01
Assessment of COMSCAN, a Compton Backscatter Imaging Camera, for the One-Sided Non-Destructive Inspection of Aerospace Components. Technical report. A Programmable Liquid Collimator for Both Coded Aperture Adaptive Imaging and Multiplexed Compton Scatter Tomography. Thesis by Jack G. M. FitzGerald, 2d Lt., presented to the Faculty.
The topological particle and Morse theory
NASA Astrophysics Data System (ADS)
Rogers, Alice
2000-09-01
Canonical BRST quantization of the topological particle defined by a Morse function h is described. Stochastic calculus, using Brownian paths which implement the WKB method in a new way providing rigorous tunnelling results even in curved space, is used to give an explicit and simple expression for the matrix elements of the evolution operator for the BRST Hamiltonian. These matrix elements lead to a representation of the manifold cohomology in terms of critical points of h along lines developed by Witten (Witten E 1982 J. Diff. Geom. 17 661-92).
NASA Astrophysics Data System (ADS)
Su, Yan; Jun, Xie Cheng
2006-08-01
An algorithm combining LZC and arithmetic coding for image compression is presented, and both theoretical deduction and simulation results prove the correctness and feasibility of the algorithm. According to the characteristics of context-based adaptive binary arithmetic coding and entropy, LZC was modified to cooperate with the optimized piecewise arithmetic coding; this algorithm improved the compression ratio without any additional time consumption compared to the traditional method.
Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes
Parsons, I D; Solberg, J M
2006-02-03
This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.
A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids
NASA Technical Reports Server (NTRS)
deFainchtein, Rosalinda
1996-01-01
This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary in a two-dimensional AMR code (loosely following the algorithm described by Lohner).
Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda
2011-12-01
People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects but only when the task directly taps the use of face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition.
Deficits in context-dependent adaptive coding of reward in schizophrenia
Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan
2016-01-01
Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM
Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
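The Moving Least Squares reconstruction step can be illustrated in one dimension: fit a local polynomial to neighbouring particle values and read off the field value and spatial derivative at the evaluation point. A plain (unweighted) least-squares sketch of the idea, not the Phurbas implementation:

```python
import numpy as np

def mls_value_and_derivative(x_nbrs, f_nbrs, x0, degree=3):
    """Least-squares polynomial fit to neighbour samples in local
    coordinates centred on x0; returns (value, first derivative) at x0."""
    coeffs = np.polyfit(np.asarray(x_nbrs) - x0, np.asarray(f_nbrs), degree)
    p = np.poly1d(coeffs)
    return p(0.0), p.deriv()(0.0)
```

On smooth data the fit reproduces the field and its slope; e.g. sampling f(x) = x² around x0 = 0.5 returns approximately (0.25, 1.0).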
Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding
NASA Astrophysics Data System (ADS)
Chen, Lulin; Garbacea, Ilie
2006-01-01
In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. It yields that λ is a function of rate, distortion and coding input statistics and can be written as λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k₀, with β, δ and k₀ as coding constants and σ² the variance of the prediction-error input. λ(R, D, σ²) describes its ubiquitous relationship with coding statistics and coding input in hybrid video coding such as H.263, MPEG-2/4 and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables fine encoder design and encoder control.
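The quoted expression evaluates directly; a sketch with placeholder coding constants β, δ and k₀ (the paper fits these to the codec, so the defaults below are purely illustrative):

```python
import math

def adaptive_lambda(R, D, sigma2, beta=1.0, delta=0.0, k0=0.0):
    """lambda(R, D, sigma^2) = beta * (ln(sigma^2 / D) + delta) * D / R + k0,
    where R is the rate, D the distortion, and sigma2 the variance of the
    prediction-error input. beta, delta, k0 are illustrative defaults."""
    return beta * (math.log(sigma2 / D) + delta) * D / R + k0
```

Because λ depends only on rate, distortion and input statistics, the rate-distortion trade-off can be tuned without touching the quantization parameters, which is the decoupling the abstract highlights.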
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to a good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must consider saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) by up to 10 dB.
Quantum revivals of Morse oscillators and Farey-Ford geometry
NASA Astrophysics Data System (ADS)
Li, Alvason Zhenhua; Harter, William G.
2015-07-01
Analytical eigensolutions for Morse oscillators are used to investigate quantum resonance and revivals and show how Morse anharmonicity affects revival times. A minimum semi-classical Morse revival time Tmin-rev found by Heller is related to a complete quantum revival time Trev using a quantum deviation δN parameter that in turn relates Trev to the maximum quantum beat period Tmax-beat. Also, number theory of Farey and Thales-circle geometry of Ford is shown to elegantly analyze and display fractional revivals. Such quantum dynamical analysis may have applications for spectroscopy or quantum information processing and computing.
Photon-added coherent states for the Morse oscillator
NASA Astrophysics Data System (ADS)
Popov, Dusan; Zaharie, Ioan; Dong, Shi-Hai
2006-02-01
In the paper we have constructed and investigated some properties of the Perelomov's “generalized coherent states” and photon-added coherent states for the Morse one-dimensional Hamiltonian (MO-PACSs), using the SU(2) group generators. We have found the integration measure in the resolution of unity and we have calculated some expectation values in the MO-PACSs representation. Using these states, the diagonal P-representation of the density operator is constructed as a new result for Morse potential. In addition, we have calculated some thermal expectation values for the quantum canonical diatomic gas of the Morse oscillators.
Shape-adaptive discrete wavelet transform for coding arbitrarily shaped texture
NASA Astrophysics Data System (ADS)
Li, Shipeng; Li, Weiping
1997-01-01
This paper presents a shape adaptive discrete wavelet transform (SA-DWT) scheme for coding arbitrarily shaped texture. The proposed SA-DWT can be used for object-oriented image coding. The number of coefficients after SA-DWT is identical to the number of pels contained in the arbitrarily shaped image objects. The locality property of wavelet transform and self-similarity among subbands are well preserved throughout this process. For a rectangular region, the SA-DWT is identical to a standard wavelet transform. With SA-DWT, conventional wavelet based coding schemes can be readily extended to the coding of arbitrarily shaped objects. The proposed shape adaptive wavelet transform is not unitary, but the small energy increase is restricted to the boundary of objects in subbands. Two approaches of using the SA-DWT algorithm for object-oriented image and video coding are presented. One is to combine scalar SA-DWT with the embedded zerotree wavelet (EZW) coding technique; the other is an extension of the normal vector wavelet coding (VWC) technique to arbitrarily shaped objects. Results of applying SA-VWC to real arbitrarily shaped texture coding are also given at the end of this paper.
Unsupervised learning approach to adaptive differential pulse code modulation.
Griswold, N C; Sayood, K
1982-04-01
This research is concerned with investigating the problem of data compression utilizing an unsupervised estimation algorithm. This extends previous work utilizing a hybrid source coder which combines an orthogonal transformation with differential pulse code modulation (DPCM). The data compression is achieved in the DPCM loop, and it is the quantizer of this scheme which is approached from an unsupervised learning procedure. The distribution defining the quantizer is represented as a set of separable Laplacian mixture densities for two-dimensional images. The condition of identifiability is shown for the Laplacian case, and decision-directed estimates of both the active distribution parameters and the mixing parameters are discussed in view of a Bayesian structure. The decision-directed estimators, although not optimum, provide a realizable structure for estimating the parameters which define a distribution which has become active. These parameters are then used to scale the optimum (in the mean square error sense) Laplacian quantizer. The decision criterion is modified to prevent convergence to a single distribution, which in effect is the default condition for a variance estimator. This investigation was applied to a test image, and the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
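The DPCM loop at the heart of this scheme can be sketched with a previous-reconstruction predictor and a fixed uniform quantizer; the paper instead scales an optimum Laplacian quantizer with decision-directed parameter estimates, so this shows only the surrounding loop structure:

```python
def dpcm_encode(samples, step):
    """Quantize prediction errors against the decoder-matched reconstruction."""
    pred, codes = 0.0, []
    for s in samples:
        q = int(round((s - pred) / step))  # transmitted quantizer index
        codes.append(q)
        pred += q * step                   # track the decoder's state
    return codes

def dpcm_decode(codes, step):
    """Rebuild the signal by accumulating dequantized prediction errors."""
    pred, out = 0.0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out
```

Because the encoder predicts from its own reconstructed values rather than the original samples, quantization error does not accumulate: absent slope overload, the reconstruction error stays within half a quantizer step.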
NASA Astrophysics Data System (ADS)
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction of the decoding time while still maintaining high data compression efficiency.
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise
2014-06-01
Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.
Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.
Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C
2017-02-15
Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that
QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding
Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah
2015-01-01
Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and changing user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
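The simplest instance of network-coding-based error recovery is a single XOR parity packet, from which a receiver can rebuild one lost data packet; the adaptive scheme described above generalizes this idea by tuning the redundancy to network- and application-level QoS context. A minimal sketch:

```python
def xor_parity(packets):
    """Bytewise XOR of equal-length packets; doubles as encoder and decoder."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

# Sender transmits the data packets plus one parity packet.
data = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = xor_parity(data)

# If exactly one data packet is lost, the XOR of the survivors and the
# parity reconstructs it (here, data[1] is assumed lost).
recovered = xor_parity([data[0], data[2], parity])
```

This recovers losses without retransmission, which is why network coding suits the latency and energy constraints the abstract emphasizes.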
Object-adaptive depth compensated inter prediction for depth video coding in 3D video system
NASA Astrophysics Data System (ADS)
Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung
2011-01-01
Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block, to be coded, and its reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that the proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
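The core of mean-depth compensation can be sketched in a few lines (an illustration of the idea, not the JMVC implementation): shift the reference block by the mean-depth offset between the two blocks, then code the residual against the shifted prediction.

```python
def mean(block):
    return sum(block) / len(block)

def depth_compensated_residual(cur, ref):
    """Mean-depth difference compensation (illustrative):
    shift the reference block by the blocks' mean-depth offset,
    then return the offset and the residual that would be coded."""
    offset = mean(cur) - mean(ref)
    pred = [r + offset for r in ref]
    return offset, [c - p for c, p in zip(cur, pred)]
```

For a block that differs from its reference by a constant depth shift, the residual vanishes entirely, which is what makes the compensation attractive for smooth depth maps.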
Boulgouris, N V; Tzovaras, D; Strintzis, M G
2001-01-01
The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
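The predict/update structure of a lifting scheme in the row-column case can be illustrated with the integer 5/3 transform, a standard lossless lifting pair (not the optimal predictors derived in the paper); boundaries are handled here by simple clamping:

```python
def lift53_forward(x):
    """One level of the integer 5/3 lifting transform: a linear predict
    step on the odd samples, then an update step on the even samples."""
    even, odd = x[0::2], x[1::2]
    d = []
    for i, o in enumerate(odd):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[-1]
        d.append(o - (left + right) // 2)            # predict step
    s = []
    for i, e in enumerate(even):
        dl = d[i - 1] if i > 0 else (d[0] if d else 0)
        dr = d[i] if i < len(d) else (d[-1] if d else 0)
        s.append(e + (dl + dr + 2) // 4)             # update step
    return s, d

def lift53_inverse(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    even = []
    for i, sv in enumerate(s):
        dl = d[i - 1] if i > 0 else (d[0] if d else 0)
        dr = d[i] if i < len(d) else (d[-1] if d else 0)
        even.append(sv - (dl + dr + 2) // 4)
    odd = []
    for i, dv in enumerate(d):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[-1]
        odd.append(dv + (left + right) // 2)
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    if len(even) > len(odd):
        x.append(even[-1])
    return x
```

Because every lifting step is inverted exactly in integer arithmetic, the scheme is lossless by construction, which is the property the paper's nonlinear enhancements preserve.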
NASA Astrophysics Data System (ADS)
Kang, Je-Won; Ryu, Soo-Kyung
2017-02-01
In this paper, a sample-adaptive prediction technique is proposed to yield efficient coding performance in intra coding for screen content video. The sample-based prediction reduces spatial redundancy among neighboring samples. To this end, the proposed technique uses a weighted linear combination of neighboring samples and applies a robust optimization technique, namely ridge estimation, to derive the weights on the decoder side. Ridge estimation uses an L2-norm-based regularization term, so the solution is more robust to high-variance samples, such as the sharp edges and high color contrasts exhibited in screen content videos. Experimental results demonstrate that the proposed technique provides an improved coding gain compared with the HEVC screen content coding reference software.
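Ridge estimation has the standard closed form w = (AᵀA + λI)⁻¹Aᵀy; a small pure-Python sketch of that fit (generic, with function and variable names of our own choosing, not the paper's code):

```python
def ridge_weights(A, y, lam):
    """Closed-form ridge estimate w = (A^T A + lam*I)^-1 A^T y,
    solved via the normal equations and Gaussian elimination."""
    m = len(A[0])
    # build the regularized normal equations
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) + (lam if i == j else 0.0)
          for j in range(m)] for i in range(m)]
    b = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
            b[r] -= f * b[c]
    # back substitution
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        w[r] = (b[r] - sum(M[r][k] * w[k] for k in range(r + 1, m))) / M[r][r]
    return w
```

A larger λ shrinks the weights toward zero, which is what makes the fit stable around outlier samples such as sharp edges.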
A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding
Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan
2015-01-01
The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097
The development and application of the self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.
1993-01-01
The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2D and 3D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine- and flow-solver-independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution; this is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid: adaption will not then result in any improvement, and only grid refinement can yield an improved solution. These are complex issues that need to be explored within the context of each specific problem.
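A common core of self-adaptive gridding is equidistribution: redistribute points so each interval carries an equal share of a solution-weighted measure. A 1D sketch of that idea (illustrative only, not SAGE's algorithm; `strength` is an assumed tuning parameter):

```python
def adapt_grid_1d(x, f, strength=1.0):
    """Redistribute grid points so each interval carries an equal share
    of a gradient-based weight (the equidistribution principle)."""
    # weight per interval: 1 + strength * |df/dx|
    w = [1.0 + strength * abs((f[i+1] - f[i]) / (x[i+1] - x[i]))
         for i in range(len(x) - 1)]
    # cumulative weighted measure along the grid
    cum = [0.0]
    for i, wi in enumerate(w):
        cum.append(cum[-1] + wi * (x[i+1] - x[i]))
    total, n = cum[-1], len(x)
    new_x, j = [x[0]], 0
    for k in range(1, n - 1):
        target = total * k / (n - 1)        # equal share per new interval
        while cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        new_x.append(x[j] + frac * (x[j+1] - x[j]))
    new_x.append(x[-1])
    return new_x
```

Points migrate toward regions of steep gradient, which is why such adaption sharpens shocks but cannot help when a feature is absent from the original solution.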
GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS
Schive, H.-Y.; Tsai, Y.-C.; Chiueh, Tzihong
2010-02-01
We present the newly developed GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach to improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor through use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented on the GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with data transfer between the CPU and GPU is carefully reduced by utilizing the GPU's capability for asynchronous memory copies, and the computing time for the ghost-zone values of each patch is hidden by overlapping it with the GPU computations. We demonstrate the accuracy of the code on several standard astrophysical test problems. GAMER is a parallel code that can be run on a multi-GPU cluster system. We measure its performance with purely baryonic cosmological simulations on different hardware implementations, in which detailed timing analyses compare the computations with and without GPU acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.
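The recursive patch hierarchy behind AMR can be illustrated with a toy criterion-driven tree (a 1D binary-tree stand-in for the oct-tree, with an assumed range-based refinement criterion rather than GAMER's actual ones):

```python
class Patch:
    """Toy AMR patch: refine (split in two) whenever the data range on the
    patch exceeds a threshold, down to a maximum refinement level."""
    def __init__(self, data, level=0, max_level=3, threshold=1.0):
        self.level, self.children = level, []
        if level < max_level and max(data) - min(data) > threshold:
            half = len(data) // 2
            for sub in (data[:half], data[half:]):
                self.children.append(Patch(sub, level + 1, max_level, threshold))

    def depth(self):
        """Deepest refinement level reached anywhere under this patch."""
        return self.level if not self.children else max(c.depth() for c in self.children)
```

Smooth regions stay on the coarse grid while steep features trigger deeper levels, which is what concentrates the computation (and, in GAMER, the GPU work) where it is needed.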
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted-residual finite-element method but use different explicit time-marching schemes to reach steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of destination, we adjust the forward error correction (FEC) strength: depending on information obtained from the monitoring channels, we select the code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed regardless of the FEC code employed; the common denominator is the use of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of 4D signaling over polarization-division multiplexed (PDM) QAM, using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme which, in addition to amplitude, phase, and polarization state, employs the spatial modes as additional basis functions for multidimensional coded modulation.
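The rate-adaptation step described above amounts to picking the (bits per symbol, code rate) pair whose spectral efficiency comes closest to the estimated capacity without exceeding it. A sketch with illustrative parameter values (the modulation and rate sets here are assumptions, not the paper's):

```python
def select_mode(capacity_bits, modulations=(2, 3, 4, 6), rates=(0.8, 0.85, 0.9)):
    """Pick (bits/symbol, code rate) whose product m*r is the largest value
    not exceeding the estimated channel capacity in bits per symbol."""
    best = None
    for m in modulations:
        for r in rates:
            eff = m * r                      # spectral efficiency of this mode
            if eff <= capacity_bits and (best is None or eff > best[0]):
                best = (eff, m, r)
    return best
```

In an OTS this selection would be driven by the OSNR reported on the monitoring channels, with the codeword length held fixed across rates as the abstract notes.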
Volumetric data analysis using Morse-Smale complexes
Natarajan, V; Pascucci, V
2005-10-13
The 3D Morse-Smale complex is a fundamental topological construct that partitions the domain of a real-valued function into regions having uniform gradient flow behavior. In this paper, we consider the construction and selective presentation of cells of the Morse-Smale complex and their use in the analysis and visualization of scientific datasets. We take advantage of the fact that cells of different dimension often characterize different types of features present in the data. For example, critical points pinpoint changes in topology by showing where components of the level sets are created, destroyed or modified in genus. Edges of the Morse-Smale complex extract filament-like features that are not explicitly modeled in the original data. Interactive selection and rendering of portions of the Morse-Smale complex introduces fundamental data management challenges due to the unstructured nature of the complex even for structured inputs. We describe a data structure that stores the Morse-Smale complex and allows efficient selective traversal of regions of interest. Finally, we illustrate the practical use of this approach by applying it to cryo-electron microscopy data of protein molecules.
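Construction of a Morse-Smale complex starts from the critical points of the sampled function. A minimal sketch of locating the extrema on a 2D grid (saddles require checking the cyclic sign pattern around a vertex and are omitted here; names are our own):

```python
def classify_vertex(grid, i, j):
    """Classify a grid vertex by its 4-neighborhood: local minima and maxima
    are the critical points from which Morse-Smale cells are grown."""
    v = grid[i][j]
    nbrs = []
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < len(grid) and 0 <= nj < len(grid[0]):
            nbrs.append(grid[ni][nj])
    if all(v < n for n in nbrs):
        return "minimum"
    if all(v > n for n in nbrs):
        return "maximum"
    return "regular"
```

Tracing steepest ascent/descent paths between such critical points yields the edges of the complex, i.e. the filament-like features the paper extracts.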
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Astrophysics Data System (ADS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-11-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids
Burton, D.E.; Miller, D.S.; Palmer, T.
1995-07-01
The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements were 3D polygons, which move with the flow, and are refined or reconnected as necessary to achieve uniform accuracy. The authors stressed that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.
Adapting a Navier-Stokes code to the ICL-DAP
NASA Technical Reports Server (NTRS)
Grosch, C. E.
1985-01-01
The results of an experiment to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP) are reported. The algorithm used in solving the Navier-Stokes equations is briefly described, as are the architecture of the DAP and DAP FORTRAN. The modifications of the algorithm to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.
Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz
1997-10-01
An efficient image compression technique, aimed especially at medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless coding of the data. The presented method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method matches SPIHT's efficiency on MR image compression, is slightly better for CT images and is significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best algorithms published in the literature across diverse classes of medical images.
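The threshold-plus-uniform-quantization step described above is, in essence, a dead-zone quantizer: coefficients below the threshold are zeroed (becoming candidates for zerotrees), the rest are uniformly quantized. A generic sketch (not the paper's adaptive threshold selection):

```python
def quantize(coeffs, threshold, step):
    """Dead-zone uniform quantizer: zero out coefficients under the
    threshold, uniformly quantize the rest."""
    out = []
    for c in coeffs:
        if abs(c) < threshold:
            out.append(0)
        else:
            out.append(int(round(c / step)))
    return out

def dequantize(q, step):
    """Reconstruct coefficient values from quantizer indices."""
    return [qi * step for qi in q]
```

The runs of zeros this produces across subbands are exactly what zerotree coding then represents compactly.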
Diagonal ordering operation technique applied to Morse oscillator
Popov, Dušan; Dong, Shi-Hai; Popov, Miodrag
2015-11-15
We generalize the technique known as integration within a normally ordered product (IWOP) of operators, originally formulated for the creation and annihilation operators of harmonic-oscillator coherent states, to a new operatorial approach, the diagonal ordering operation technique (DOOT), for calculations involving the normally ordered product of the generalized creation and annihilation operators that generate the generalized hypergeometric coherent states. We apply this technique to the coherent states of the Morse oscillator, including the mixed (thermal) state case, and recover, in the corresponding coherent state representation, the well-known results obtained by other methods. In the last section we construct coherent states for the continuous dynamics of the Morse oscillator by two new methods: the discrete-continuous limit and the solution of a finite-difference equation. Finally, we construct the coherent states corresponding to the whole Morse spectrum (discrete plus continuous) and demonstrate their properties according to Klauder's prescriptions.
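For reference, the Morse potential and its bound-state spectrum (standard results, which ground the "discrete plus continuous" spectrum mentioned above):

```latex
V(r) = D_e\left(1 - e^{-a(r-r_e)}\right)^{2}, \qquad
E_n = \hbar\omega_0\left(n+\tfrac12\right)
      - \frac{\left[\hbar\omega_0\left(n+\tfrac12\right)\right]^{2}}{4D_e},
\qquad \omega_0 = a\sqrt{\frac{2D_e}{m}},
```

with a finite number of bound states, \(n = 0, 1, \dots\) with \(n < \sqrt{2mD_e}/(a\hbar) - \tfrac12\); above the dissociation energy \(D_e\) the spectrum is continuous.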
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting their complexity to match the local coding difficulty of the image, using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques: the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which these coders can be studied. An algorithm for selecting the optimal bit allocation for quantizing the transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Upper and lower bounds for the bit-allocation distortion-rate function are developed, along with an obtainable distortion-rate function for a particular scalar-quantizer mixing method that can be used to code transform coefficients at any rate.
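A classic baseline for transform-coefficient bit allocation is greedy marginal analysis under the high-rate model D(b) = σ²·4⁻ᵇ: each bit goes to the band whose current distortion is largest. A generic illustration (not the dissertation's algorithm):

```python
def allocate_bits(variances, total_bits):
    """Greedy marginal-analysis bit allocation: repeatedly give one bit to
    the band with the largest current distortion, under the high-rate
    model distortion = variance * 4**(-bits)."""
    bits = [0] * len(variances)
    dist = list(variances)
    for _ in range(total_bits):
        i = max(range(len(dist)), key=lambda k: dist[k])
        bits[i] += 1
        dist[i] /= 4.0          # one more bit quarters the MSE
    return bits
```

This greedy rule reproduces the water-filling tendency of optimal allocation: high-variance bands receive most of the bits, and bands with negligible variance may receive none.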
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In medical data transmission it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
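The abstract only states that parity length grows with RoI proximity and channel noise; one plausible rule in that spirit (all names and constants here are assumptions for illustration, not the paper's design) might look like:

```python
def parity_length(dist_to_roi, noise_est, base=4, max_extra=12):
    """Illustrative unequal-error-protection rule: allocate more parity to
    subblocks near the RoI and when the estimated channel noise is high."""
    proximity = 1.0 / (1 + dist_to_roi)          # 1 at the RoI, decaying outward
    extra = round(max_extra * proximity * min(noise_est, 1.0))
    return base + extra
```

The feedback loop then re-evaluates `noise_est` from receiver reports, so protection tracks the channel as transmission progresses outward from the RoI.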
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.
2011-06-01
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
An optimized context-based adaptive binary arithmetic coding algorithm in a progressive H.264 encoder
NASA Astrophysics Data System (ADS)
Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei
2006-05-01
Context-based adaptive binary arithmetic coding (CABAC) is a new entropy coding method introduced in H.264/AVC that is highly efficient for video coding. In this method, the probability of the current symbol is estimated using carefully designed context models, which adapt to the statistical characteristics of the signal; an arithmetic coding mechanism then largely removes the inter-symbol redundancy. Compared with the UVLC method of the prior standard, CABAC is more complex but reduces the bit rate more effectively. Based on a thorough analysis of the CABAC encoding and decoding methods, this paper proposes two methods, a sub-table method and a stream-reuse method, to improve the encoding efficiency as implemented in the H.264 JM reference code. In JM, the CABAC function produces the bits of each syntax element one by one, and the repeated multiplications in the CABAC function make it inefficient; the proposed sub-table algorithm instead creates tables beforehand and then produces all bits of a syntax element at once. Also in JM, intra- and inter-prediction mode selection is based on an RDO (rate-distortion optimization) model, one parameter of which is the bit count produced by the CABAC operator; after mode selection, that CABAC stream is discarded and recalculated for the output stream. The proposed stream-reuse algorithm keeps the stream generated during mode selection in memory and reuses it in the encoding function. Experimental results show that the proposed algorithms achieve average speed-ups of 17 to 78 MSEL for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory. The CABAC was realized in our progressive H.264 encoder.
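The heart of CABAC's context modeling is a per-context adaptive probability estimate. A much-simplified count-based sketch (real CABAC uses a pretrained 64-state finite-state table and renormalized range coding, not raw counts):

```python
class ContextModel:
    """Simplified adaptive context model: a Laplace-smoothed running
    estimate of the probability of bin value 1, updated per coded bin."""
    def __init__(self):
        self.ones = 1
        self.total = 2      # start from the uniform prior p(1) = 1/2

    def p_one(self):
        return self.ones / self.total

    def update(self, bin_val):
        """Feed one observed bin (0 or 1) into the model."""
        self.ones += bin_val
        self.total += 1
```

The sub-table optimization described above attacks the arithmetic-coding stage that consumes these probabilities, replacing per-bit multiplications with precomputed table lookups.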
Adaptive three-dimensional motion-compensated wavelet transform for image sequence coding
NASA Astrophysics Data System (ADS)
Leduc, Jean-Pierre
1994-09-01
This paper describes a 3D spatio-temporal coding algorithm for the bit-rate compression of digital image sequences. The coding scheme rests on several specific elements: a motion representation based on a four-parameter affine model, a motion-adapted temporal wavelet decomposition along the motion trajectories, and a signal-adapted spatial wavelet transform. The motion estimation is performed with four-parameter affine transformation models, also called similitudes, which account for translations, rotations and scalings. The temporal wavelet filter bank exploits bi-orthogonal linear-phase dyadic decompositions. The 2D spatial decomposition is based on dyadic signal-adaptive filter banks with either para-unitary or bi-orthogonal bases. The adaptive filtering is carried out according to a performance criterion optimized under constraints, so as to maximize the compression ratio at the expense of graceful degradations of the subjective image quality. The major principles of the present technique are, in the analysis process, to extract and separate the motion contained in the sequences from the spatio-temporal redundancy and, in the compression process, to take into account the rate-distortion function on the basis of spatio-temporal psycho-visual properties so as to achieve the most graceful degradations. To complete the description of the coding scheme: the compression procedure is composed of scalar quantizers, which exploit the spatio-temporal 3D psycho-visual properties of the human visual system, and entropy coders, which finalize the bit-rate compression.
Effects of selective adaptation on coding sugar and salt tastes in mixtures.
Frank, Marion E; Goyert, Holly F; Formaker, Bradley K; Hettinger, Thomas P
2012-10-01
Little is known about the coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it does component odors. Seventeen human subjects identified the taste components of "salt + sugar" mixtures. In 4 sessions, 16 adapt-test stimulus pairs were presented as atomized, 150-μL "taste puffs" to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, "NaCl + sucrose," and water. The sugar was identified 98% of the time, but the suppressed salt only 65% of the time, in unadapted mixtures of two concentrations of NaCl, 0.1 or 0.05 M, and sucrose at three times those concentrations, 0.3 or 0.15 M. Rapid selective adaptation decreased identification of sugar and salt preadapted ambient components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). The 96% identification of sugar and salt extra mixture components was as certain as identification of single compounds. The results revealed that salt-sugar mixture suppression, which depends on relative mixture-component concentration, was mutual. Furthermore, as with odors, stronger and more recent tastes are emphasized in dynamic experimental conditions replicating natural situations.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Morse theory and Seiberg-Witten monopoles on 3-Manifolds
NASA Astrophysics Data System (ADS)
Lee, Yi-Jen
1997-11-01
This thesis explores Seiberg-Witten theory on 3-manifolds and its fascinating interplay with Morse theory, surrounding the conjectured equivalence of the Seiberg-Witten invariant of 3-manifolds and a counting invariant of gradient flows of a Morse function (Conjecture 3.1.1 below). It comes in two main parts, which appear in Chapter 1 and Chapter 3 respectively. Chapter 1 constitutes part of the analytical component and technical details of the main theme; here we lay down the foundations of the perturbed and unperturbed Seiberg-Witten theory on asymptotically flat 3-manifolds. Chapter 2 serves as an annex between Chapters 1 and 3: it formulates the conjecture on the equivalence of the Seiberg-Witten invariant and a counting invariant I of gradient flows, and outlines a rough sketch of the proof of this conjecture via application of the results in Chapter 1. Chapter 3 comprises the topological component. It calculates the counting invariant I for closed 3-manifolds via circle-valued Morse theory and equates it (up to a sign) with the Reidemeister torsion of the manifold. Assuming Conjecture 3.1.1, this yields a refinement of the Meng-Taubes theorem. Furthermore, the calculation generalizes to higher-dimensional cases (Theorem 3.1.1, Theorem 3.1.2), by way of which a homotopy invariant of gradient flows is equated with the Reidemeister torsion of the manifold, supposing the Morse complex of the flow is Q-acyclic.
Continuous Morse-Smale flows with three equilibrium positions
NASA Astrophysics Data System (ADS)
Zhuzhoma, E. V.; Medvedev, V. S.
2016-05-01
Continuous Morse-Smale flows on closed manifolds whose nonwandering set consists of three equilibrium positions are considered. Necessary and sufficient conditions for topological equivalence of such flows are obtained and the topological structure of the underlying manifolds is described. Bibliography: 36 titles.
Morse theory for vector fields and the Witten Laplacian
Enciso, Alberto; Peralta-Salas, Daniel
2009-05-06
In this paper we informally review some recent developments on the analytical approach to Morse-type inequalities for vector fields. Throughout this work we focus on the main ideas of this approach and emphasize the application of the theory to concrete examples.
Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation
NASA Astrophysics Data System (ADS)
Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki
One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.
Tang, Yazhe; Li, Youfu
2012-09-20
In this paper, we introduce a novel surveillance system based on thermal catadioptric omnidirectional (TCO) vision. Conventional contour-based methods are difficult to apply to the TCO sensor for detection or tracking purposes due to the distortion of TCO vision. To solve this problem, we propose a contour-coding-based rotating adaptive model (RAM) that can extract the contour feature from TCO vision directly, as it takes advantage of the relative angle, based on the characteristics of TCO vision, to change the sequence of sampling automatically. A series of experiments and quantitative analyses verify that the performance of the proposed RAM-based contour-coding feature for human detection and tracking is satisfactory in TCO vision.
Long-range accelerated BOTDA sensor using adaptive linear prediction and cyclic coding.
Muanenda, Yonas; Taki, Mohammad; Pasquale, Fabrizio Di
2014-09-15
We propose and experimentally demonstrate a long-range accelerated Brillouin optical time domain analysis (BOTDA) sensor that exploits the complementary noise reduction benefits of adaptive linear prediction and optical pulse coding. The combined technique requires orders of magnitude fewer averages of the backscattered BOTDA traces than a standard single-pulse BOTDA, enabling distributed strain measurement over 10 km of a standard single-mode fiber with meter-scale spatial resolution and 1.8 MHz Brillouin frequency shift resolution. By optimizing the system parameters, the measurement is achieved with only 20 averages for each scanned frequency of the Brillouin gain spectrum, allowing for a strain measurement eight times faster than with cyclic pulse coding alone.
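The adaptive-linear-prediction idea can be illustrated with a one-step-ahead LMS predictor; this is a generic sketch of the technique, not the authors' implementation, and the filter order and step size are arbitrary illustrative values:

```python
def lms_predict(signal, order=4, mu=0.05):
    """One-step-ahead LMS adaptive linear predictor (illustrative):
    predict each sample from the previous `order` samples and adapt
    the weights by the prediction error. Returns the predicted
    (denoised) trace; the first `order` samples are passed through."""
    w = [0.0] * order
    out = list(signal[:order])
    for n in range(order, len(signal)):
        x = signal[n - order:n]
        y = sum(wi * xi for wi, xi in zip(w, x))        # prediction
        e = signal[n] - y                               # innovation
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
        out.append(y)
    return out
```

On a slowly varying trace the predictor tracks the underlying signal while averaging out uncorrelated noise, which is why it can substitute for part of the trace averaging.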
Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes
NASA Astrophysics Data System (ADS)
Calvo, M.; González-Pinto, S.; Montijano, J. I.
2008-09-01
Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1=h(tn;δ) so that h(t;δ) is a continuous function of t. In this paper we study the tolerance proportionality property under a discontinuous step-size policy that does not allow the step size to change when the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with
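The discontinuous ("dead-zone") step-size policy described above can be sketched as an elementary error controller; the function name, safety factor, clamp limits, and dead-zone bounds below are illustrative choices, not taken from any particular code:

```python
def next_step_size(h, err, tol, order, dead_zone=(0.9, 1.1)):
    """Propose the next step size from the local error estimate `err`.

    Standard controller: h_new = h * (tol/err)^(1/(order+1)), plus a
    dead zone: if the proposed ratio is close to unity, keep h
    unchanged, producing the discontinuous policy h(t; tol)."""
    safety = 0.9
    ratio = safety * (tol / err) ** (1.0 / (order + 1))
    ratio = min(5.0, max(0.2, ratio))        # clamp growth/shrink
    if dead_zone[0] <= ratio <= dead_zone[1]:
        return h                             # dead zone: step unchanged
    return h * ratio
```

With the dead zone active, the accepted step-size sequence is piecewise constant in the tolerance, which is precisely the discontinuity whose effect on tolerance proportionality the paper analyzes.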
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS
McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L.
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^{-6} amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.
Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging
NASA Astrophysics Data System (ADS)
Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong
2015-02-01
Conformal imaging systems are confronted with dynamic aberration during optical design. In classical optical design, meeting the combined requirements of field of view, optical speed, environmental adaptation, and imaging quality can be achieved only by introducing an increasingly complex aberration corrector. In recent years, the computational-imaging technique of adaptive coded apertures, which has several potential advantages over more traditional optical systems, has proved particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume, and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example conformal imaging system design in which the elements of a set of binary coded aperture masks are optimized is presented in this paper; simulation results show that the optical performance is closely related to the mask design and the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations when the field of regard changes, while allowing sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in the conformal optical system.
Woolgar, Alexandra; Afshar, Soheil; Williams, Mark A; Rich, Anina N
2015-10-01
How do our brains achieve the cognitive control that is required for flexible behavior? Several models of cognitive control propose a role for frontoparietal cortex in the structure and representation of task sets or rules. For behavior to be flexible, however, the system must also rapidly reorganize as mental focus changes. Here we used multivoxel pattern analysis of fMRI data to demonstrate adaptive reorganization of frontoparietal activity patterns following a change in the complexity of the task rules. When task rules were relatively simple, frontoparietal cortex did not hold detectable information about these rules. In contrast, when the rules were more complex, frontoparietal cortex showed clear and decodable rule discrimination. Our data demonstrate that frontoparietal activity adjusts to task complexity, with better discrimination of rules that are behaviorally more confusable. The change in coding was specific to the rule element of the task and was not mirrored in more specialized cortex (early visual cortex) where coding was independent of difficulty. In line with an adaptive view of frontoparietal function, the data suggest a system that rapidly reconfigures in accordance with the difficulty of a behavioral task. This system may provide a neural basis for the flexible control of human behavior.
Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder
NASA Astrophysics Data System (ADS)
Lee, Szu-Wei; Kuo, C.-C. Jay
2007-09-01
One way to save power consumption in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes will have an impact on the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff between rate, distortion, and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of QTCs and associated MVs, which is verified experimentally. The proposed model provides good estimation results for a variety of bit streams. Practical applications of this complexity model are also discussed.
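A toy version of such a rate-distortion-complexity mode decision might look as follows; the linear complexity model, its coefficients, and the candidate-mode dictionary layout are hypothetical stand-ins for quantities the paper fits experimentally:

```python
def decoding_complexity(n_qtc, n_mv, k_qtc=1.0, k_mv=1.0, k0=0.0):
    """Hypothetical MB-level decoding-complexity estimate: complexity
    grows with the number of quantized transformed coefficients
    (QTCs) and motion vectors (MVs). Coefficients are placeholders
    to be fitted against decoder measurements."""
    return k0 + k_qtc * n_qtc + k_mv * n_mv

def best_mode(candidates, lam=0.1, mu=0.05):
    """Pick the coding mode minimizing rate + lam*distortion +
    mu*estimated decoding complexity (illustrative R-D-C tradeoff)."""
    return min(candidates,
               key=lambda m: m["rate"] + lam * m["dist"]
                             + mu * decoding_complexity(m["n_qtc"], m["n_mv"]))
```

The point of the sketch is only the shape of the decision rule: the encoder trades a small rate increase for a mode that produces fewer QTCs/MVs and hence a cheaper entropy-decoding pass.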
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio
2014-02-01
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This is one of the earliest studies in this area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality and to offer empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
Zou, Ding; Djordjevic, Ivan B
2016-09-05
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^{-15} for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate.
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.
Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre
2008-12-01
Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes
2016-01-01
Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program implementing the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), with respect to item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering innovative QR code access. PMID:26935793
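The core CAT step, administering the unasked item with maximum Fisher information at the current ability estimate, can be sketched for the dichotomous Rasch model (the study itself used the partial-credit model, which generalizes this; function names are illustrative):

```python
import math

def rasch_info(theta, b):
    """Fisher information of a dichotomous Rasch item with difficulty
    b at ability theta: I = p(1-p), with p the success probability."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, remaining):
    """CAT item selection: pick the remaining item (given by its
    difficulty) that is most informative at the current estimate."""
    return max(remaining, key=lambda b: rasch_info(theta, b))
```

Because information peaks when item difficulty matches ability, the adaptive test converges on a precise estimate with far fewer items than administering all 70, which is the efficiency gain reported above.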
Takahasi Nearest-Neighbour Gas Revisited II: Morse Gases
NASA Astrophysics Data System (ADS)
Matsumoto, Akira
2011-12-01
Some thermodynamic quantities for the Morse potential are analytically evaluated for an isobaric process. The parameters of Morse gases for 21 substances are obtained from second virial coefficient data and the spectroscopic data of diatomic molecules. Some thermodynamic quantities for water are also calculated numerically and plotted graphically. The inflexion point of the length L, which depends on temperature T and pressure P, corresponds physically to a boiling point: L indicates the liquid phase from lower temperature up to the inflexion point and the gaseous phase from the inflexion point to higher temperature. The computed boiling temperatures are reasonable compared with experimental data. The behaviour of L suggests the possibility of a first-order phase transition in one dimension.
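For reference, the Morse potential underlying these calculations is the standard diatomic form,

```latex
V(r) = D_e \left( 1 - e^{-a (r - r_e)} \right)^2 ,
```

where $D_e$ is the well depth, $a$ the width parameter, and $r_e$ the equilibrium bond length; these are the parameters fitted for the 21 substances from virial-coefficient and spectroscopic data.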
Exotic electronic properties in Thue-Morse graphene superlattices.
Xu, Yafang; Zou, Jianfei; Jin, Guojun
2013-06-19
To show the specific behavior of Dirac fermions in a quasi-periodic structure, we investigate the electronic properties in a deterministic Thue-Morse graphene superlattice. Our main findings include the following. (i) Unlike conventional Schrödinger electrons, quasi-periodic features such as the striking self-similarity and trifurcation in the transmission spectrum can be manifested only at oblique incidence. (ii) In the vicinity of the usual Dirac point, extra Dirac points emerge; their locations are dependent merely on the second generation of the Thue-Morse structure and the number is double that in the periodic graphene superlattice. (iii) A classification is given about the wavefunctions in the Thue-Morse structure which are transformed from the critical states into extended ones at the Dirac points. (iv) The electrons can transmit perfectly at the extra Dirac points, and such a collimation supplies a convenient way to experimentally detect the numbers and the locations of the extra Dirac points. These exotic electronic properties in the aperiodic graphene superlattices may facilitate some applications in graphene-based electronics.
Naud, Richard; Gerstner, Wulfram
2012-01-01
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
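A minimal simulation in the spirit of the spike response model discussed above, with the most recent spike exerting a strong but decaying self-inhibition (refractoriness); all parameter values here are made up for illustration and only the last spike is tracked, matching the paper's observation that earlier spikes matter mainly on average:

```python
import math
import random

def srm_spike_train(inputs, dt=0.001, eta0=-5.0, tau_ref=0.02):
    """Illustrative spike-response-model simulator: instantaneous
    firing intensity is exp(input + eta(t - t_last)), where eta is a
    decaying self-inhibition triggered by the last spike."""
    random.seed(1)                       # reproducible toy run
    t_last, spikes = -1e9, []
    for i, u in enumerate(inputs):
        t = i * dt
        eta = eta0 * math.exp(-(t - t_last) / tau_ref)  # last-spike effect
        rate = math.exp(u + eta)                        # intensity in Hz
        if random.random() < rate * dt:                 # Bernoulli approx.
            spikes.append(t)
            t_last = t
    return spikes
```

Right after a spike the intensity is suppressed by a factor exp(eta0) and recovers with time constant tau_ref, which is the strong last-spike dependence that the quasi-renewal description keeps exact while averaging over the older history.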
Adaptive coded spreading OFDM signal for dynamic-λ optical access network
NASA Astrophysics Data System (ADS)
Liu, Bo; Zhang, Lijia; Xin, Xiangjun
2015-12-01
This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network unit (ONU). The ACS can provide dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method may be viewed as a promising one for future optical metro/access networks.
Image sensor system with bio-inspired efficient coding and adaptation.
Okuno, Hirotsugu; Yagi, Tetsuya
2012-08-01
We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
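The three coding strategies named above can be sketched in a few lines of NumPy; this is a software caricature of the hardware, in which a 3x3 box filter stands in for the resistive network's local average and a simple contrast-normalizing gain stands in for the feedback gain control:

```python
import numpy as np

def encode(img, k=0.5):
    """Sketch of the sensor's three strategies: logarithmic transform,
    local-average subtraction, and a global feedback-style gain.
    The constant k and the 3x3 neighborhood are illustrative."""
    log_img = np.log1p(img.astype(float))      # compressive nonlinearity
    pad = np.pad(log_img, 1, mode="edge")      # replicate borders
    h, w = log_img.shape
    # local average over a 3x3 window (resistive-grid stand-in)
    local = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    centered = log_img - local                 # local average subtraction
    gain = k / (centered.std() + 1e-8)         # feedback gain control proxy
    return gain * centered
```

The log transform compresses the huge illumination range, subtraction of the local mean removes the DC component so only local contrast is coded, and the gain keeps the output contrast in a fixed operating range as illumination changes.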
MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo
2010-11-01
A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to keep this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This algorithm can group equal numbers of particle calculation loops, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are exhibited as a validity check, together with benchmarks to test the performance and scalability.
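The standard 2-D Morton (Z-order) key underlying the ordering mentioned above (the paper uses a modified variant) interleaves the bits of the cell coordinates; sorting cells by this key yields a space-filling curve that can be cut into contiguous, load-balanced chunks:

```python
def morton2d(x, y, bits=16):
    """Interleave the low `bits` bits of (x, y) into a Z-order key:
    bit i of x goes to position 2i, bit i of y to position 2i+1."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key
```

Because nearby cells get nearby keys, splitting the sorted key range among processes keeps each process's cells spatially compact, which reduces communication in the PIC field solve.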
2013-01-01
Background Evidence-based interventions are frequently modified or adapted during the implementation process. Changes may be made to protocols to meet the needs of the target population or address differences between the context in which the intervention was originally designed and the one into which it is implemented [Addict Behav 2011, 36(6):630–635]. However, whether modification compromises or enhances the desired benefits of the intervention is not well understood. A challenge to understanding the impact of specific types of modifications is a lack of attention to characterizing the different types of changes that may occur. A system for classifying the types of modifications that are made when interventions and programs are implemented can facilitate efforts to understand the nature of modifications that are made in particular contexts as well as the impact of these modifications on outcomes of interest. Methods We developed a system for classifying modifications made to interventions and programs across a variety of fields and settings. We then coded 258 modifications identified in 32 published articles that described interventions implemented in routine care or community settings. Results We identified modifications made to the content of interventions, as well as to the context in which interventions are delivered. We identified 12 different types of content modifications, and our coding scheme also included ratings for the level at which these modifications were made (ranging from the individual patient level up to a hospital network or community). We identified five types of contextual modifications (changes to the format, setting, or patient population that do not in and of themselves alter the actual content of the intervention). We also developed codes to indicate who made the modifications and identified a smaller subset of modifications made to the ways that training or evaluations occur when evidence-based interventions are implemented. Rater
WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION
Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun
2016-03-10
The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.
Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed
NASA Technical Reports Server (NTRS)
Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Briones, Janette C.; Tollis, Nicholas
2016-01-01
National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation (SCaN) Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants (no smaller than the threshold) with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
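The classical threshold layer of such a scheme — shares drawn from a random polynomial, recovery by Lagrange interpolation — can be sketched in Shamir's style. This sketch omits the paper's quantum ingredients (m-bonacci OAM states, reverse Huffman-Fibonacci-tree coding), and the field modulus is an assumption:

```python
# Shamir-style (t, n) threshold sharing over a prime field: any t shares
# reconstruct the secret via Lagrange interpolation at x = 0.
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus (assumption)

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them suffice to reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The modular inverse is computed with Fermat's little theorem (`pow(den, P - 2, P)`), which is valid because P is prime.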
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme for hiding data directly in a partially encrypted version of H.264/AVC video is proposed; it includes three parts: selective encryption, data embedding, and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of the CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. A data-hider then embeds the additional data into the partially encrypted H.264/AVC video using a CABAC bin-string substitution technique, without accessing the plaintext of the video content. Since bin-string substitution is carried out on residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. The video file size is strictly preserved even after data embedding. To adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results demonstrate the feasibility and efficiency of the proposed scheme.
Coding and adaptation during mechanical stimulation in the leech nervous system.
Pinato, G; Torre, V
2000-12-15
The experiments described here were designed to characterise sensory coding and adaptation during mechanical stimulation in the leech (Hirudo medicinalis). A chain of three ganglia and a segment of the body wall connected to the central ganglion were used. Eight extracellular suction pipettes and one or two intracellular electrodes were used to record action potentials from all mechanosensory neurones of the three ganglia. When the skin of the body wall was briefly touched with a filament exerting a force of about 2 mN, touch (T) cells in the central ganglion, but also those in adjacent ganglia (i.e. anterior and posterior), fired one or two action potentials. However, the threshold for action potential initiation was lower for T cells in the central ganglion than for those in adjacent ganglia. The timing of the first evoked action potential in a T cell was very reproducible with a jitter often lower than 100 µs. Action potentials in T cells were not significantly correlated. When the force exerted by the filament was increased above 20 mN, pressure (P) cells in the central and neighbouring ganglia fired action potentials. Action potentials in P cells usually followed those evoked in T cells with a delay of about 20 ms and had a larger jitter of 0.5-10 ms. With stronger stimulations exceeding 50 mN, noxious (N) cells also fired action potentials. With such stimulations the majority of mechanosensory neurones in the three ganglia fired action potentials. The spatial properties of the whole receptive field of the mechanosensory neurones were explored by touching different parts of the skin. When the mechanical stimulation was applied for a longer time, i.e. 1 s, only P cells in the central ganglion continued to fire action potentials. P cells in neighbouring ganglia fully adapted after firing two or three action potentials. P cells in adjacent ganglia, having fully adapted to a steady mechanical stimulation of one part of the skin, fired action potentials following
Unbounded Trace Orbits of Thue-Morse Hamiltonian
NASA Astrophysics Data System (ADS)
Liu, Qinghui; Qu, Yanhui; Yao, Xiao
2017-03-01
It is well known that an energy is in the spectrum of the Fibonacci Hamiltonian if and only if the corresponding trace orbit is bounded. However, it is not known whether the same result holds for the Thue-Morse Hamiltonian. In this paper, we give a negative answer to this question. More precisely, we construct two subsets Σ_II and Σ_III of the spectrum of the Thue-Morse Hamiltonian, both of which are dense and uncountable, such that each energy in Σ_II ∪ Σ_III corresponds to an unbounded trace orbit. Exact estimates on the norms of the transfer matrices are also obtained for these energies: for E ∈ Σ_II ∪ Σ_III, the norms of the transfer matrices behave like e^{c_1 γ √n} ≤ ‖T_n(E)‖ ≤ e^{c_2 γ √n}. However, the two types of energies are quite different in the sense that each energy in Σ_II is associated with a two-sided pseudo-localized state, while each energy in Σ_III is associated with a one-sided pseudo-localized state. The difference is also reflected by the local dimensions of the spectral measure: the local dimension is 0 for energies in Σ_II and is larger than 1 for energies in Σ_III. As a comparison, we mention another known countable dense subset Σ_I. Each energy in Σ_I corresponds to an eventually constant trace map and the associated eigenvector is an extended state. In summary, the Thue-Morse Hamiltonian exhibits a "mixed spectral nature".
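A minimal sketch of the objects involved: the Thue-Morse sequence as an on-site potential, and the 2x2 transfer-matrix products T_n(E) whose norms the paper estimates. The energy E and coupling V used below are illustrative, not values from the paper:

```python
# Thue-Morse potential and transfer-matrix products for the discrete
# Schrodinger operator; each one-step matrix has determinant 1.

def thue_morse(n):
    """First n Thue-Morse terms: t(k) is the bit-parity of k."""
    return [bin(k).count("1") % 2 for k in range(n)]

def transfer_product(E, V, n):
    """T_n = M(n) ... M(1) with M(k) = [[E - v(k), -1], [1, 0]],
    where v(k) = V * t(k) is the Thue-Morse on-site potential."""
    T = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for t in thue_morse(n):
        M = [[E - V * t, -1.0], [1.0, 0.0]]
        T = [[M[0][0]*T[0][0] + M[0][1]*T[1][0],
              M[0][0]*T[0][1] + M[0][1]*T[1][1]],
             [M[1][0]*T[0][0] + M[1][1]*T[1][0],
              M[1][0]*T[0][1] + M[1][1]*T[1][1]]]  # T <- M @ T
    return T
```

Whether the entries of T_n(E) stay bounded as n grows is exactly the trace-orbit boundedness question the paper answers in the negative for Σ_II ∪ Σ_III.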
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang
2005-12-01
A novel space-time adaptive near-far-robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array, the rank of the MWF, the system processing gain, and the number of samples in a chip interval, whereas the complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis grows faster in these parameters. Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector as a function of the size of the antenna array, the amount of sample support, and the rank of the multistage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.
Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM
NASA Astrophysics Data System (ADS)
Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.
2004-01-01
In this paper we consider the problem of robust image coding and packetization for communications over slow-fading frequency-selective channels and channels with a shaped spectrum, such as digital subscriber lines (DSL). Towards this end, a novel, analytically based joint source-channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) so as to minimize the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. To minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. To minimize the probability of catastrophic error, which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than to the VBC stream. Besides being analytical and numerically solvable, the algorithm is based on a new formula developed to estimate the distortion-rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst-case instantaneous minimum peak signal-to-noise ratio (PSNR) does not differ greatly from the average, whereas this is not the case for the optimal single-stream UEP system. Although the average PSNR of our method and of the optimal single-stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), a well-known and increasingly popular modulation scheme used to combat ISI (intersymbol interference) and impulsive noise. Using dual adaptive-energy OFDM, we use the minimum energy necessary to send each bit stream at a
NASA Astrophysics Data System (ADS)
Farahvash, Shayan; Akhavan, Koorosh; Kavehrad, Mohsen
1999-12-01
This paper presents a solution to the problem of providing bit-error-rate performance guarantees in a fixed millimeter-wave wireless system, such as a local multipoint distribution system, in line-of-sight or nearly line-of-sight applications. The basic concept is to take advantage of the slow-fading behavior of the fixed wireless channel by changing the transmission code rate. Rate-compatible punctured convolutional codes are used to implement adaptive coding. Cochannel interference analysis is carried out for the downlink direction, from base station to subscriber premises. Cochannel interference is treated as a noise-like random process with a power equal to the sum of the powers from a finite number of interfering base stations. Two different cellular architectures, based on using single or dual polarizations, are investigated. The average spectral efficiency of the proposed adaptive-rate system is found to be at least 3 times that of a fixed-rate system with similar outage requirements.
NASA Astrophysics Data System (ADS)
Yakopcic, Chris; Taha, Tarek M.; Shin, Eunsung; Subramanyam, Guru; Murray, P. Terrence; Rogers, Stanley
2010-08-01
The memristor, experimentally verified for the first time in 2008, is one of four fundamental passive circuit elements (the others being resistors, capacitors, and inductors). Development and characterization of memristor devices and the design of novel computing architectures based on these devices can potentially provide significant advances in intelligence processing systems for a variety of applications including image processing, robotics, and machine learning. In particular, adaptive coded aperture (diffraction) sensing, an emerging technology enabling real-time, wide-area IR/visible sensing and imaging, could benefit from new high-performance biologically inspired image processing architectures based on memristors. In this paper, we present results from the fabrication and characterization of memristor devices utilizing titanium oxide dielectric layers in a parallel-plate configuration. Two versions of memristor devices have been fabricated at the University of Dayton and the Air Force Research Laboratory utilizing varying thicknesses of the TiO2 dielectric layers. Our results show that the devices do exhibit the characteristic hysteresis loop in their I-V plots.
Statistical Properties of Klauder-Perelomov Coherent States for the Morse Potential
NASA Astrophysics Data System (ADS)
Daoud, M.; Popov, D.
We present in this letter a realistic construction of the coherent states for the Morse potential using the Klauder-Perelomov approach. We discuss the statistical properties of these states, by deducing the Q- and P-distribution functions. The thermal expectations for the quantum canonical ideal gas of the Morse oscillators are also calculated.
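As background for these coherent states, the Morse oscillator's potential and its finite ladder of bound states take the standard textbook form (these are general results, not expressions taken from the letter):

```latex
% Morse potential with depth D, range parameter a, equilibrium position r_0
V(r) = D\left(1 - e^{-a(r - r_0)}\right)^2 - D,
\qquad \omega = a\sqrt{2D/m},
% finite bound-state ladder, n = 0, 1, \dots, \lfloor \lambda - 1/2 \rfloor
E_n = -D + \hbar\omega\left(n + \tfrac{1}{2}\right)
      - \frac{(\hbar\omega)^2}{4D}\left(n + \tfrac{1}{2}\right)^2,
\qquad \lambda = \frac{\sqrt{2mD}}{a\hbar}.
```

The finiteness of the spectrum (roughly λ bound states) is what makes the Klauder-Perelomov construction nontrivial compared to the harmonic-oscillator case.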
Morse Type Index Theory for Flows and Periodic Solutions for Hamiltonian Equations.
1983-09-01
Lagrange spaces we refer to J. J. Duistermaat's paper [18], "On the Morse Index in Variational Calculus", in which also the relation to the Maslov index of a ... (2) 8 (1958), 143-181. [18] J.J. Duistermaat: On the Morse Index in Variational Calculus. Advances in Math. 21 (1976), 173-195. [19] V.I. Arnold: On
A User’s Manual for MASH 1.0 - A Monte Carlo Adjoint Shielding Code System
1992-03-01
INTRODUCTION TO MORSE The Multigroup Oak Ridge Stochastic Experiment code (MORSE) is a multipurpose neutron and gamma-ray transport Monte Carlo code ... in the energy transfer process. Thus, these multigroup cross sections have the same format for both neutrons and gamma rays. In addition, the ... multigroup cross sections in a Monte Carlo code means that the effort required to produce cross-section libraries is reduced. Coupled neutron gamma-ray cross
Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby
2014-09-01
Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience.
Robust Computation of Morse-Smale Complexes of Bilinear Functions
Norgard, G; Bremer, P T
2010-11-30
The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
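The per-cell reasoning can be illustrated with a small floating-point sketch (the paper's actual algorithm is combinatorial and robust to numerical instabilities; this only shows the underlying formula). Writing the bilinear interpolant on the unit cell as f(x,y) = f00 + bx + cy + dxy, its Hessian determinant is -d², so any interior critical point is a saddle:

```python
# Locate the unique interior critical point (always a saddle) of a bilinear
# interpolant on [0,1]^2, given the four corner values.

def bilinear_saddle(f00, f10, f01, f11):
    """Return (x, y, value) of the interior saddle, or None if the
    critical point does not exist or lies outside the open cell."""
    b = f10 - f00               # coefficient of x
    c = f01 - f00               # coefficient of y
    d = f00 - f10 - f01 + f11   # coefficient of xy
    if d == 0:
        return None             # no interior critical point: grad f never vanishes
    x, y = -c / d, -b / d       # solve grad f = (b + d*y, c + d*x) = 0
    if not (0 < x < 1 and 0 < y < 1):
        return None
    return (x, y, f00 + b * x + c * y + d * x * y)
```

The asymptotes of the saddle mentioned in the abstract are the lines x = -c/d and y = -b/d, along which f is constant in one variable.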
Comparison between the Morse eigenfunctions and deformed oscillator wavefunctions
Recamier, J.; Mochan, W. L.; Gorayeb, M.; Paz, J. L.
2008-04-15
In this work we introduce deformed creation and annihilation operators which differ from the usual harmonic-oscillator operators a, a† by a number-operator function: Â = â f(n̂), Â† = f(n̂) â†. We construct the deformed coordinate and momentum in terms of the deformed operators, retaining only terms of first order in the deformed operators. By applying the deformed annihilation operator to the vacuum state we obtain the ground-state wavefunction in configuration space, and the wavefunctions for excited states are obtained by repeated application of the deformed creation operator. Finally, we compare the wavefunctions obtained with the deformed operators with the corresponding Morse eigenfunctions.
Zheng, Dongliang; Da, Feipeng; Kemao, Qian; Seah, Hock Soon
2017-03-06
Phase-shifting profilometry combined with Gray-code pattern projection has been widely used for 3D measurement. In this technique, a phase-shifting algorithm is used to calculate the wrapped phase, and a set of Gray-code binary patterns is used to determine the unwrapped phase. In a real measurement, the captured Gray-code patterns are no longer binary, resulting in phase unwrapping errors at a large number of pixels. Although this problem has been addressed and largely resolved by several methods, it remains challenging when a measured object has step-heights and the captured patterns contain invalid pixels. To effectively remove unwrapping errors while preserving step-heights, this paper proposes an effective method using an adaptive median filter. Both simulations and experiments demonstrate its effectiveness.
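The pipeline being repaired can be sketched as follows: Gray-code bits give each pixel a fringe order k, the absolute phase is the wrapped phase plus 2πk, and median-filtering k suppresses isolated unwrapping errors. The fixed 3-tap 1-D median below stands in for the paper's adaptive median filter, so the window handling is an assumption:

```python
import math

def gray_to_order(bits):
    """Decode a tuple of Gray-code bits (MSB first) into the fringe order k."""
    b, order = 0, 0
    for g in bits:
        b ^= g                      # binary bit = running XOR of Gray bits
        order = (order << 1) | b
    return order

def unwrap(wrapped, orders):
    """Absolute phase per pixel after median-filtering the fringe orders."""
    out = []
    for i, w in enumerate(wrapped):
        window = sorted(orders[max(0, i - 1):i + 2])  # fixed 3-tap median
        k = window[len(window) // 2]
        out.append(w + 2.0 * math.pi * k)
    return out
```

An isolated wrong order (e.g. a 9 amid 0s and 1s) is replaced by a neighbor's value, while a genuine step-height, which spans many pixels, survives the small window.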
Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems
NASA Technical Reports Server (NTRS)
Liang, Pak-Yan
1993-01-01
A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.
2012-06-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution.
Jedidiah Morse and the Bavarian Illuminati: An Essay in the Rhetoric of Conspiracy.
ERIC Educational Resources Information Center
Griffin, Charles J. G.
1989-01-01
Focuses on three widely publicized sermons given by the Reverend Jedidiah Morse to examine the role of the jeremiad (or political sermon) in shaping public attitudes toward political dissent during the Franco-American Crisis of 1798-1799. (MM)
Development of an Adaptive Boundary-Fitted Coordinate Code for Use in Coastal and Estuarine Areas.
1985-09-01
" Miscellaneous Paper HL-80-3, US Army Engineer Waterways Experiment Station, Vicksburg, Miss. Johnson, B. H., Thompson, J. F., and Baker, A. J. 1984. "A ..." prepared for CERC, US Army Engineer Waterways Experiment Station, Vicksburg, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for ... Vol 1. Thompson, J. F., Thames, F. C., and Mastin, C. W. 1977. "TOMCAT - A Code for Numerical Generation Systems on Fields Containing Any Number of
Morse taper dental implants and platform switching: The new paradigm in oral implantology
Macedo, José Paulo; Pereira, Jorge; Vahey, Brendan R.; Henriques, Bruno; Benfatti, Cesar A. M.; Magini, Ricardo S.; López-López, José; Souza, Júlio C. M.
2016-01-01
The aim of this study was to conduct a literature review on the potential benefits of Morse taper dental implant connections associated with small-diameter platform-switching abutments. A Medline bibliographic search (from 1961 to 2014) was carried out. The following search terms were explored: "bone loss and platform switching," "bone loss and implant-abutment joint," "bone resorption and platform switching," "bone resorption and implant-abutment joint," "Morse taper and platform switching," "Morse taper and implant-abutment joint," "Morse taper and bone resorption," "crestal bone remodeling and implant-abutment joint," and "crestal bone remodeling and platform switching." The selection criteria were: meta-analyses; randomized controlled trials; prospective cohort studies; and reviews written in English, Portuguese, or Spanish. Of the 287 studies identified, 81 relevant and recent studies were selected. Results indicated a reduced occurrence of peri-implantitis and bone loss at the abutment/implant level associated with Morse taper implants and a reduced-diameter platform-switching abutment. Extrapolation of data from previous studies indicates that Morse taper connections associated with platform switching have shown less inflammation and possibly less bone loss in the peri-implant soft tissues. However, more long-term studies are needed to confirm these trends. PMID:27011755
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise
2013-11-01
Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.
Graphene mechanics: I. Efficient first principles based Morse potential.
Costescu, Bogdan I; Baldus, Ilona B; Gräter, Frauke
2014-06-28
We present a computationally efficient pairwise potential for use in molecular dynamics simulations of large graphene or carbon nanotube systems, in particular, for those under mechanical deformation, and also for mixed systems including biomolecules. Based on the Morse potential, it is only slightly more complex and computationally expensive than a harmonic bond potential, allowing such large or mixed simulations to reach experimentally relevant time scales. By fitting to data obtained from quantum mechanics (QM) calculations to represent bond breaking in graphene patches, we obtain a dissociation energy of 805 kJ mol(-1) which reflects the steepness of the QM potential up to the inflection point. A distinctive feature of our potential is its truncation at the inflection point, allowing a realistic treatment of ruptured C-C bonds without relying on a bond order model. The results obtained from equilibrium MD simulations using our potential compare favorably with results obtained from experiments and from similar simulations with more complex and computationally expensive potentials.
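The truncated Morse bond described above can be sketched as follows. The dissociation energy D = 805 kJ/mol is the paper's fitted value, while the width parameter beta and equilibrium length r0 below are illustrative placeholders, not the paper's fitted parameters:

```python
import math

D = 805.0      # kJ/mol, dissociation energy from the paper's QM fit
BETA = 20.0    # 1/nm, placeholder width parameter (assumption)
R0 = 0.142     # nm, roughly a graphene C-C bond length (assumption)
R_INFL = R0 + math.log(2) / BETA   # inflection point, where V''(r) = 0

def morse_energy(r):
    """Morse potential energy, measured from the well minimum."""
    x = math.exp(-BETA * (r - R0))
    return D * (1.0 - x) ** 2

def morse_force(r):
    """-dV/dr, set to zero past the inflection point: the bond is treated
    as ruptured rather than exerting an ever-weaker restoring force."""
    if r > R_INFL:
        return 0.0
    x = math.exp(-BETA * (r - R0))
    return -2.0 * D * BETA * x * (1.0 - x)
```

Truncating at the inflection point means the restoring force is cut at its maximum magnitude, which is what lets the potential model bond rupture without a bond-order formalism.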
NASA Technical Reports Server (NTRS)
Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)
2001-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
NASA Astrophysics Data System (ADS)
Shahid, Z.; Chaumont, M.; Puech, W.
2009-01-01
This paper develops a new adaptive scanning methodology for an intra-frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It takes advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. We propose a dyadic intra-frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, merely by modifying the scan order of the intra-frame scalable coding framework of H.264, we obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and against DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This yields extra compression for intra frames, along with spatial scalability. Thus image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.
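The role of the scan order can be sketched as follows; the column-major "adaptive" scan here is a hypothetical stand-in for DWTSB-AS, chosen only to show why a zigzag scan wastes symbols on a subband whose energy clusters along one direction:

```python
def zigzag_order(n):
    """Classic JPEG-style zigzag visiting order for an n x n block."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    # sort by anti-diagonal, alternating direction along each diagonal
    return sorted(cells, key=lambda rc: (rc[0] + rc[1],
                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def column_scan(n):
    """Hypothetical adaptive scan: column-major order, suited to a
    subband whose nonzero coefficients sit in the first columns."""
    return [(r, c) for c in range(n) for r in range(n)]

def last_nonzero(block, order):
    """Index of the final nonzero coefficient under a scan order;
    an earlier last-nonzero means cheaper run-length/entropy coding."""
    return max(i for i, (r, c) in enumerate(order) if block[r][c])

# A 4x4 "subband" whose nonzero coefficients occupy the first column.
block = [[1 if c == 0 else 0 for c in range(4)] for r in range(4)]
```

Under the column scan the nonzeros are exhausted after 4 symbols, while the zigzag scan drags the last nonzero out to position 10, which is the intuition behind per-subband adaptive scans.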
Application of a Morse filter in the processing of brain angiograms
NASA Astrophysics Data System (ADS)
Venegas Bayona, Santiago
2014-06-01
Angiograms are frequently used to find anomalies in blood vessels. Hence, to improve angiographic image quality, a Morse filter (based on the model of the Morse potential) is implemented on a brain-vessel angiogram using the software packages Maple® and ImageJ®. The results of applying a Morse filter to an angiogram of the brain vessels are shown. First, the image was processed in ImageJ with the Anisotropic Diffusion 2D plug-in, and then the filter was applied. As the results illustrate, the edges of the stringy elements are emphasized. This is particularly useful in the medical image processing of blood vessels, such as angiograms, where narrowing or obstruction may be caused by aneurysms, thrombosis, or other diseases.
The calculation of vapor-liquid coexistence curve of Morse fluid: application to iron.
Apfelbaum, E M
2011-05-21
The vapor-liquid coexistence curve of the Morse fluid was calculated within the integral-equations approach, and the critical point coordinates were estimated. The parameters of the Morse potential, fitted to elastic constants in the solid phase, were used here to apply the present calculations to the determination of the iron binodal. The properties of copper and sodium were considered in an analogous way. Calculations of pair correlation functions and isobars in the liquid phase have shown that only for sodium do these potential parameters yield agreement with the measurement data. For iron, other parameters are necessary to obtain this agreement in the liquid phase. However, they give rise to very low critical temperature and pressure with respect to the estimates of other authors. Consequently, one can suppose that the Morse potential may be inapplicable to the calculation of high-temperature properties of non-alkali metals in disordered phases.
Global Asymptotical Stabilization of Morse-Smale Systems Using Weak Control-Lyapunov Functions
NASA Astrophysics Data System (ADS)
Nishida, Gou; Tsuzuki, Takayuki; Nakamura, Hisakazu; Yamashita, Yuh
This paper proposes a method of constructing weak control-Lyapunov functions for nonlinear systems by introducing a topological-geometric assumption, namely that the system is Morse-Smale. A Lyapunov function is one of the most important tools for studying the stability and stabilization of nonlinear systems; however, no general way of finding Lyapunov functions is known. First, we confirm that a weak Lyapunov function exists for Morse-Smale systems. Next, we define escapability for the singular structures of the weak Lyapunov function. If all singular structures are escapable, then the Morse-Smale system is globally asymptotically stabilizable. Finally, we present a method of constructing a set of weak control-Lyapunov functions that achieves global stabilization. The method is described in terms of a recursive sequence of singular structures, which we call a weak Lyapunov filtration.
N-body simulations for f(R) gravity using a self-adaptive particle-mesh code
Zhao Gongbo; Koyama, Kazuya; Li Baojiu
2011-02-15
We perform high-resolution N-body simulations for f(R) gravity based on the self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on fully nonlinear scales.
Design of signal-adapted multidimensional lifting scheme for lossy coding.
Gouze, Annabelle; Antonini, Marc; Barlaud, Michel; Macq, Benoît
2004-12-01
This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case and is illustrated for 2-D separable and quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics, enhancing compression efficiency in a more general setting. It is based on a two-step lifting scheme and joins lifting theory with Wiener optimization. The prediction step is designed to minimize the variance of the signal, and the update step is designed to minimize a reconstruction error. Application to lossy compression shows the performance of the method.
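A minimal sketch of the two-step (predict/update) lifting idea, using simple fixed linear filters rather than the paper's Wiener-optimized, signal-adapted ones:

```python
def lifting_forward(x):
    """One level of a two-step (predict/update) lifting scheme on an
    even-length 1-D signal: linear prediction of odd samples from even
    neighbours, then a smoothing update of the evens (boundaries clamped)."""
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus average of its even neighbours.
    detail = [o - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
              for i, o in enumerate(odd)]
    # Update: adjust evens with the details to smooth the coarse signal.
    coarse = [e + (detail[max(i - 1, 0)] + detail[i]) / 4
              for i, e in enumerate(even)]
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Invert by undoing the update, then the prediction, in reverse order."""
    even = [c - (detail[max(i - 1, 0)] + detail[i]) / 4
            for i, c in enumerate(coarse)]
    odd = [d + (even[i] + even[min(i + 1, len(even) - 1)]) / 2
           for i, d in enumerate(detail)]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step only adds or subtracts a quantity computed from the other channel, perfect reconstruction is structural: inversion just replays the steps backwards, whatever filters are used, which is what makes lifting a convenient vehicle for signal-adapted (e.g. Wiener-designed) filters.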
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, over which a finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
3D profile measurements of objects by using zero order Generalized Morse Wavelet
NASA Astrophysics Data System (ADS)
Kocahan, Özlem; Durmuş, Çağla; Elmas, Merve Naz; Coşkun, Emre; Tiryaki, Erhan; Özder, Serhat
2017-02-01
Generalized Morse wavelets are proposed to evaluate the phase information from a projected fringe pattern with the spatial carrier frequency in the x direction. The height profile of the object is determined through the phase-change distribution by using the phase of the continuous wavelet transform. The phase distribution is extracted from the optical fringe pattern by choosing the zero-order generalized Morse wavelet (GMW) as the mother wavelet. In this study, the standard fringe projection technique is used to obtain images. Experimental results for the GMW phase method are compared with the results of the Morlet and Paul wavelet transforms.
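A rough sketch of fringe-phase extraction with a frequency-domain zero-order generalized Morse wavelet. The parameters β = γ = 3, the synthetic fringe, and the single fixed scale (a plain analytic bandpass at the carrier rather than a full continuous wavelet transform) are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Synthetic fringe pattern: a spatial carrier plus a smooth phase term
# standing in for the height-induced phase.
N = 1024
x = np.arange(N)
f0 = 52.0 / N                            # carrier: integer cycles per frame
phi = 1.5 * np.sin(2 * np.pi * x / N)    # smooth "height" phase
fringe = np.cos(2 * np.pi * f0 * x + phi)

# Zero-order generalized Morse wavelet in the frequency domain:
# Psi(w) ~ w**beta * exp(-w**gamma) for w > 0 (analytic: zero otherwise).
beta, gamma = 3.0, 3.0
w = np.fft.fftfreq(N)
wpos = np.where(w > 0, w, 0.0)
scale = (beta / gamma) ** (1.0 / gamma) / f0   # put the wavelet peak at f0
psi = (wpos * scale) ** beta * np.exp(-(wpos * scale) ** gamma)

# One filtering pass; the phase of the complex output carries the carrier
# plus the sought phase term, which is recovered after unwrapping.
analytic = np.fft.ifft(np.fft.fft(fringe) * psi)
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
```

Because the wavelet is zero on negative frequencies, the filtered signal is analytic and its angle tracks the total fringe phase, the same quantity the GMW phase method reads off the CWT ridge.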
NASA Astrophysics Data System (ADS)
Caulier, Yannick; Bernhard, Luc; Spinnler, Klaus
2011-05-01
This paper proposes a new type of color-coded light structure for the inspection of complex moving objects. The novelty of the method lies in the generation of free-form color patterns, permitting the projection of color structures adapted to the geometry of the surfaces to be characterized. The point-correspondence determination algorithm is a stepwise procedure involving simple and computationally fast methods. The algorithm is therefore robust against the varying recording conditions typically arising in real-time quality-control environments and can be integrated for industrial inspection purposes. The proposed approach is validated and compared on the basis of several experiments concerning 3D surface reconstruction by projecting adapted spatial color-coded patterns. It is demonstrated that, for certain inspection requirements, the method permits coding more reference points than similar color-coded matrix methods.
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-01-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering - CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before starting each data gathering epoch, thus ignoring the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes - MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature measurements and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
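The compressed-sensing side of a CDG scheme can be sketched with a toy sparse-recovery run. Orthogonal matching pursuit is a standard CS decoder used here purely for illustration (it is not claimed to be this paper's recovery algorithm), and all dimensions are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 64, 4                    # ambient dim, measurements, sparsity

# k-sparse "sensor field" signal with well-separated nonzero magnitudes.
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                   # m << n compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit on the
    accumulated support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    xhat = np.zeros(Phi.shape[1])
    xhat[support] = coef
    return xhat

xhat = omp(Phi, y, k)
```

The number of measurements m needed for recovery grows with the sparsity k, which is exactly why a feedback scheme that adapts the measurement count to the observed (varying) sparsity is attractive.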
NASA Astrophysics Data System (ADS)
McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley
2007-09-01
Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. More recent applications have emerged in the visible and infrared bands for low-cost lensless imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology, employing interference effects to modulate incident light in the mid-IR band (3-5 μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities, applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100 kHz), allowing simpler imaging-system configurations. It is also realised using a modified standard polysilicon surface-micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large-area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures
NASA Astrophysics Data System (ADS)
Vijayakumaran, Vineeth
Massive levels of integration following Moore's law ushered in a paradigm shift in the way on-chip interconnections are designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which need no physical interconnect layout as data travel over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, less area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
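The core CDMA idea, orthogonal spreading codes letting several transmitter-receiver pairs share one channel simultaneously, can be sketched as follows (Walsh-Hadamard codes and a noiseless shared channel are illustrative assumptions, not the thesis's exact protocol):

```python
import numpy as np

def walsh(n):
    """n x n Walsh-Hadamard matrix (n a power of two); its rows are
    mutually orthogonal spreading codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(8)                       # 8 orthogonal codes of length 8
bits_a = np.array([1, -1, -1, 1])      # transmitter A's symbols (+/-1)
bits_b = np.array([-1, -1, 1, 1])      # transmitter B's symbols

# Spread each bit over its transmitter's code; the shared channel simply
# sums the simultaneous chip streams.
chip_a = np.kron(bits_a, codes[1])
chip_b = np.kron(bits_b, codes[2])
channel = chip_a + chip_b

# Each receiver despreads by correlating the channel with its own code;
# orthogonality cancels the other transmitter's contribution exactly.
rx_a = channel.reshape(-1, 8) @ codes[1] / 8
rx_b = channel.reshape(-1, 8) @ codes[2] / 8
```

Both bit streams come back intact despite fully overlapping transmissions, which is the bandwidth-sharing property the MAC protocol exploits.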
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2014 CFR
2014-10-01
....0 16734.0 22279.5 A 25171.5 Alternate 2 4182.5 6277.5 8366.5 12550.5 16734.5 22280.0 A 25171.5 Gulf-Mexico: Initial 5 4183.0 6278.0 8367.0 12551.0 16735.0 22281.5 A 25171.5 Alternate 6 4183.5 6278.5...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2011 CFR
2011-10-01
....0 16734.0 22279.5 A 25171.5 Alternate 2 4182.5 6277.5 8366.5 12550.5 16734.5 22280.0 A 25171.5 Gulf-Mexico: Initial 5 4183.0 6278.0 8367.0 12551.0 16735.0 22281.5 A 25171.5 Alternate 6 4183.5 6278.5...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2012 CFR
2012-10-01
....0 16734.0 22279.5 A 25171.5 Alternate 2 4182.5 6277.5 8366.5 12550.5 16734.5 22280.0 A 25171.5 Gulf-Mexico: Initial 5 4183.0 6278.0 8367.0 12551.0 16735.0 22281.5 A 25171.5 Alternate 6 4183.5 6278.5...
47 CFR 80.355 - Distress, urgency, safety, call and reply Morse code frequencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
....0 16734.0 22279.5 A 25171.5 Alternate 2 4182.5 6277.5 8366.5 12550.5 16734.5 22280.0 A 25171.5 Gulf-Mexico: Initial 5 4183.0 6278.0 8367.0 12551.0 16735.0 22281.5 A 25171.5 Alternate 6 4183.5 6278.5...
Automatic Extension of an Augmented Transition Network Grammar for Morse Code Conversations,
1980-04-01
original CATNIP ; Janet Schoof, for helping me finally get this report in print; and David Dill, for many useful suggestions for the implementations...of CATNIP and MAGE, and for the acronym ’MAGE’. This report is an expansion of a thesis of the same name that was submitted to the Department of...parser, called CATNIP (Comco-1 Augmented Transition Network Interfaced Parser) [16], uses an augmented transition network (ATN) grammar to evaluate the
A Mechanical Apparatus for Hands-On Experience with the Morse Potential
ERIC Educational Resources Information Center
Everest, Michael A.
2010-01-01
A simple pulley apparatus is described that gives the student hands-on experience with the Morse potential. Students develop an internalized sense of what a covalent bond would feel like if atoms in a molecule could be manipulated by hand. This kinesthetic learning enhances the student's understanding and intuition of several chemical phenomena.…
A Coarse-Grained Model Based on Morse Potential for Water and n-Alkanes.
Chiu, See-Wing; Scott, H Larry; Jakobsson, Eric
2010-03-09
In order to extend the time and distance scales of molecular dynamics simulations, it is essential to create accurate coarse-grained force fields, in which each particle contains several atoms. Coarse-grained force fields that use the Lennard-Jones potential form for pairwise nonbonded interactions have been shown to suffer from serious inaccuracy, notably in describing the behavior of water. In this paper, we describe a coarse-grained force field for water based on the Morse potential form, in which each particle contains four water molecules. By molecular dynamics simulations, we show that our force field closely replicates important water properties. We also describe a Morse-potential force field for alkanes and a simulation method for alkanes in which individual particles may have variable size, providing flexibility in constructing complex molecules composed partly or solely of alkane groups. We find that, in addition to being more accurate, the Morse potential also allows larger time steps than the Lennard-Jones potential, because its short-distance repulsion profile is less steep. We suggest that the Morse potential form should be considered as an alternative to the Lennard-Jones form for coarse-grained molecular dynamics simulations.
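The steepness argument for larger time steps can be checked numerically: matching well depth and minimum position, the Morse force inside the repulsive core stays smaller than the Lennard-Jones force (α = 6 here is an illustrative width parameter, not the paper's fitted value):

```python
import math

def lj(r, eps=1.0, rmin=1.0):
    """Lennard-Jones 12-6 potential written in terms of its minimum:
    V(rmin) = -eps, V'(rmin) = 0."""
    s = rmin / r
    return eps * (s ** 12 - 2 * s ** 6)

def morse(r, eps=1.0, rmin=1.0, alpha=6.0):
    """Morse potential with the same depth and minimum position."""
    u = math.exp(-alpha * (r - rmin))
    return eps * (u * u - 2 * u)

def slope(f, r, h=1e-6):
    """Central-difference estimate of dV/dr (the negative of the force)."""
    return (f(r + h) - f(r - h)) / (2 * h)
```

At r = 0.8 rmin the LJ slope magnitude is roughly 160 eps/rmin versus roughly 90 for this Morse curve, so a Morse-based integrator tolerates a larger time step before the core force destabilizes it, the point made in the abstract.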
Gazeau-Klauder coherent states for trigonometric Rosen-Morse potential
Chenaghlou, A.; Faizy, O.
2008-02-15
The Gazeau-Klauder coherent states for the trigonometric Rosen-Morse potential are constructed. It is shown that the resolution of unity, temporal stability, and action identity conditions are satisfied for the coherent states. The Mandel parameter is also calculated for the weighting distribution function corresponding to the coherent states.
Continuous Spectrum of Trigonometric Rosen-Morse and Eckart Potentials from Free Particle Spectrum
NASA Astrophysics Data System (ADS)
Panahi, H.; Pouraram, H.
2011-06-01
The shape-invariance symmetry of the trigonometric Rosen-Morse and Eckart potentials has been studied through realizations of the so(3) and so(2,1) Lie algebras, respectively. In this work, using the free-particle eigenfunctions, we obtain the continuous spectrum of these potentials by means of their shape-invariance symmetry in an algebraic method.
Application of DOT-MORSE coupling to the analysis of three-dimensional SNAP shielding problems
NASA Technical Reports Server (NTRS)
Straker, E. A.; Childs, R. L.; Emmett, M. B.
1972-01-01
The use of discrete ordinates and Monte Carlo techniques to solve radiation transport problems is discussed. A general discussion of two possible coupling schemes is given for the two methods. The calculation of the reactor radiation scattered from a docked service and command module is used as an example of coupling discrete ordinates (DOT) and Monte Carlo (MORSE) calculations.
Mian, Ajmal; Hu, Yiqun; Hartley, Richard; Owens, Robyn
2013-12-01
Simple nearest neighbor classification fails to exploit the additional information in image sets. We propose self-regularized nonnegative coding to define between set distance for robust face recognition. Set distance is measured between the nearest set points (samples) that can be approximated from their orthogonal basis vectors as well as from the set samples under the respective constraints of self-regularization and nonnegativity. Self-regularization constrains the orthogonal basis vectors to be similar to the approximated nearest point. The nonnegativity constraint ensures that each nearest point is approximated from a positive linear combination of the set samples. Both constraints are formulated as a single convex optimization problem and the accelerated proximal gradient method with linear-time Euclidean projection is adapted to efficiently find the optimal nearest points between two image sets. Using the nearest points between a query set and all the gallery sets as well as the active samples used to approximate them, we learn a more discriminative Mahalanobis distance for robust face recognition. The proposed algorithm works independently of the chosen features and has been tested on gray pixel values and local binary patterns. Experiments on three standard data sets show that the proposed method consistently outperforms existing state-of-the-art methods.
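A much-simplified stand-in for a between-set distance: the distance between the convex hulls of two sample sets, computed by projected gradient with simplex constraints. The paper's self-regularized nonnegative coding and accelerated proximal gradient method are more elaborate; this sketch only illustrates "nearest points approximated from set samples under a constrained optimization":

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(v.size) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def set_distance(X, Y, iters=4000):
    """Distance between the convex hulls of two sample sets (columns of
    X and Y): projected gradient on the jointly convex objective
    ||X a - Y b||^2 with a, b constrained to the simplex."""
    a = np.full(X.shape[1], 1.0 / X.shape[1])
    b = np.full(Y.shape[1], 1.0 / Y.shape[1])
    ta = 1.0 / np.linalg.norm(X, 2) ** 2     # safe step sizes (1/Lipschitz)
    tb = 1.0 / np.linalg.norm(Y, 2) ** 2
    for _ in range(iters):
        r = X @ a - Y @ b
        a = proj_simplex(a - ta * (X.T @ r))
        r = X @ a - Y @ b
        b = proj_simplex(b + tb * (Y.T @ r))
    return np.linalg.norm(X @ a - Y @ b)

# Two tiny "image sets" as 2-D points (columns). The nearest hull points
# are (1, 0) and (3, 0), so the set distance is 2.
X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[3.0, 4.0], [0.0, 1.0]])
dist = set_distance(X, Y)
```

Replacing the simplex constraint with plain nonnegativity plus a regularizer on the basis, as the paper does, changes the geometry of the feasible set but not the alternating projected-gradient machinery.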
The neural code for taste in the nucleus of the solitary tract of the rat: effects of adaptation.
Di Lorenzo, P M; Lemon, C H
2000-01-10
Adaptation of the tongue to NaCl, HCl, quinine or sucrose was used as a tool to study the stability and organization of response profiles in the nucleus of the solitary tract (NTS). Taste responses in the NTS were recorded in anesthetized rats before and after adaptation of the tongue to NaCl, HCl, sucrose or quinine. Results showed that the magnitude of response to test stimuli following adaptation was a function of the context, i.e., adaptation condition, in which the stimuli were presented. Over half of all taste responses were either attenuated or enhanced following the adaptation procedure: NaCl adaptation produced the most widespread, non-stimulus-selective cross-adaptation and sucrose adaptation produced the least frequent cross-adaptation and the most frequent enhancement of taste responses. Adaptation to quinine cross-adapted to sucrose and adaptation to HCl cross-adapted to quinine in over half of the units tested. The adaptation procedure sometimes unmasked taste responses where none were present beforehand and sometimes altered taste responses to test stimuli even though the adapting stimulus did not itself produce a response. These effects demonstrated a form of context-dependency of taste responsiveness in the NTS and further suggest a broad potentiality in the sensitivity of NTS units across taste stimuli. Across unit patterns of response remained distinct from each other under all adaptation conditions. Discriminability of these patterns may provide a neurophysiological basis for residual psychophysical abilities following adaptation.
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Kirk, B.L.; Sartori, E.
1997-06-01
Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.
Infinite dimensional Morse theory and Fermat's principle in general relativity. I
NASA Astrophysics Data System (ADS)
Perlick, Volker
1995-12-01
The following theorem may be viewed as the general relativistic version of Fermat's principle. Among all lightlike curves connecting a given point p to a given timelike curve γ in a Lorentzian manifold, the geodesics are characterized by stationary arrival time. Here "arrival time" refers to a smooth parametrization of γ. In this article the first steps are taken to make infinite dimensional Morse theory applicable to this variational problem. The space of trial curves is made into a Hilbert manifold by imposing an H² Sobolev condition, and Fermat's principle is reformulated in this infinite dimensional setting. Moreover, a Morse index theorem is presented. The mathematical formalism developed here aims at applications to the gravitational lens effect.
Oscillator-Morse-Coulomb mappings and algebras for constant or position-dependent mass
Quesne, C.
2008-02-15
The bound-state solutions and the su(1,1) description of the d-dimensional radial harmonic oscillator, the Morse, and the D-dimensional radial Coulomb Schrödinger equations are reviewed in a unified way using the point canonical transformation method. It is established that the spectrum generating su(1,1) algebra for the first problem is converted into a potential algebra for the remaining two. This analysis is then extended to Schrödinger equations containing some position-dependent mass. The deformed su(1,1) construction recently achieved for a d-dimensional radial harmonic oscillator is easily extended to the Morse and Coulomb potentials. In the last two cases, the equivalence between the resulting deformed su(1,1) potential algebra approach and a previous deformed shape invariance one generalizes to a position-dependent mass background a well-known relationship in the context of constant mass.
A Multi-Resolution Data Structure for Two-Dimensional Morse Functions
Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V
2003-07-30
The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.
Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential
Inci, I.; Bonatsos, D.; Boztosun, I.
2011-08-15
Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.
Path integral solution for a deformed radial Rosen-Morse potential
NASA Astrophysics Data System (ADS)
Kadja, A.; Benamira, F.; Guechi, L.
2017-03-01
An exact path integral treatment of a particle in a deformed radial Rosen-Morse potential is presented. For this problem with Dirichlet boundary conditions, the Green's function is constructed in closed form by adding to V_q(r) a δ-function perturbation and making its strength infinitely repulsive. A transcendental equation for the energy levels E_{n_r} and the wave functions of the bound states can then be deduced.
Position-Dependent Mass Schrödinger Equation for the Morse Potential
NASA Astrophysics Data System (ADS)
Ovando, G.; Peña, J. J.; Morales, J.; López-Bonilla, J.
2017-01-01
The position-dependent mass Schrödinger equation (PDMSE) has a wide range of quantum applications, such as the study of semiconductors, quantum wells, quantum dots, and impurities in crystals, among many others. On the other hand, the Morse potential is one of the most important potential models used to study the electronic properties of diatomic molecules. In this work, the solution of the effective-mass one-dimensional Schrödinger equation for the Morse potential is presented. This is done by means of the canonical transformation method in algebraic form. The PDMSE is solved for any model of the proposed kinetic-energy operators, such as BenDaniel-Duke, Gora-Williams, Zhu-Kroemer, or Li-Kuhn. Also, in order to solve the PDMSE with the Morse potential, we consider a superpotential leading to a special form of the exactly solvable constant-mass Schrödinger equation for a class of multiparameter exponential-type potentials, along with a proper mass distribution. The proposed approach is general and can be applied in the search for new potentials suitable for materials science by looking into the viable choices of the mass function.
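In the constant-mass limit, the Morse spectrum that any such solution must reproduce is available in closed form; a small sketch (units with ħ = 1 and toy parameters, purely for illustration):

```python
import math

def morse_levels(D_e, a, m, hbar=1.0):
    """Bound-state energies, measured from the well bottom, of the
    constant-mass Morse oscillator V(r) = D_e (1 - exp(-a(r - r_e)))^2:
    E_n = hw (n + 1/2) - [hw (n + 1/2)]^2 / (4 D_e), a finite ladder."""
    lam = math.sqrt(2.0 * m * D_e) / (a * hbar)   # controls level count
    hw = hbar * a * math.sqrt(2.0 * D_e / m)      # harmonic quantum
    n_max = int(lam - 0.5)                        # last bound level
    return [hw * (n + 0.5) - (hw * (n + 0.5)) ** 2 / (4.0 * D_e)
            for n in range(n_max + 1)]
```

The anharmonic term shrinks the level spacing linearly with n and cuts the ladder off below the dissociation energy D_e, the fingerprints any position-dependent-mass generalization is checked against.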
NASA Astrophysics Data System (ADS)
Dedecker, Fabian; Cundall, Peter; Billaux, Daniel; Groeger, Torsten
Digging a shaft or drift inside a rock mass is common practice in civil engineering when a transport route, such as a motorway or railway tunnel, or a storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. The micro-cracks thus formed modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. Itasca has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model can frequently be separated into a strongly strained “core” area represented by a discontinuum and a peripheral area for which continuum zones are adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
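The Granule structure described above (a resource identifier, a parent resource, and an access URL for each file) can be sketched as follows. This is an illustrative sketch only: the element names, identifier strings, and URL are assumptions chosen for demonstration, not the normative SPASE schema.

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, url):
    """Build a minimal SPASE-like Granule description for one data file.

    Element names here are illustrative assumptions, not the normative
    SPASE schema; the point is the structure the abstract describes:
    a Granule ties a file URL to a parent resource and gives it an ID.
    """
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(spase, encoding="unicode")

xml_text = make_granule(
    "spase://EXAMPLE/Granule/AC_H0_MFI/20150101",    # hypothetical ID
    "spase://EXAMPLE/NumericalData/AC_H0_MFI",       # hypothetical parent
    "https://cdaweb.gsfc.nasa.gov/example/ac_h0_mfi_20150101.cdf",
)
print(xml_text)
```

In a pipeline like the one described, such a routine would be driven by the nightly file listings, emitting one Granule per new or modified CDF file.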
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable-length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in heavy table memory access, which in turn leads to high table power consumption. To reduce the heavy table memory access of current methods, and hence their high power consumption, a memory-efficient optimized table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce the memory access required for table look-up, and thereby its power consumption. Specifically, in our scheme, we use index search technology to reduce memory access by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed table look-up algorithm based on index search can reduce memory access consumption by about 60% compared with table look-up by sequential search, and thus save considerable power for CAVLD in H.264/AVC.
2012-01-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210
Kumar, Ravi; Saxena, Rajiv
2014-01-01
Semiblind channel estimation method provides the best trade-off in terms of bandwidth overhead, computational complexity and latency. The result after using multiple input multiple output (MIMO) systems shows higher data rate and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents the detailed analysis of diversity coding techniques using MIMO antenna systems. Different space time block codes (STBCs) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes using MATLAB environment and the simulated results have been compared in the semiblind environment which shows the improvement even in highly correlated antenna arrays and is found very close to the condition when channel state information (CSI) is known to the channel. PMID:24688379
Optical phase distribution evaluation by using zero order Generalized Morse Wavelet
NASA Astrophysics Data System (ADS)
Kocahan, Özlem; Elmas, Merve Naz; Durmuş, Çağla; Coşkun, Emre; Tiryaki, Erhan; Özder, Serhat
2017-02-01
When determining the phase from projected fringes by using the continuous wavelet transform (CWT), selection of the mother wavelet is an important step. A new wavelet for phase retrieval from a fringe pattern with spatial carrier frequency in the x direction is presented. As the mother wavelet, the zero-order generalized Morse wavelet (GMW) is chosen because of its flexible spatial and frequency localization and because it is exactly analytic. In this study, the GMW method is explained and numerical simulations are carried out to show the validity of this technique for finding phase distributions. Results for the Morlet and Paul wavelets are compared with the results of the GMW analysis.
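As an illustration of the wavelet's defining property, the zero-order GMW can be written in the frequency domain in the commonly used parameterization ψ(ω) = a ω^β e^(−ω^γ) for ω > 0, vanishing for ω ≤ 0, which is what makes it exactly analytic. The sketch below assumes this parameterization and the peak normalization a = 2(eγ/β)^(β/γ), under which the peak value is 2 at ω = (β/γ)^(1/γ); the parameter values are arbitrary.

```python
import math

def gmw_freq(w, beta=3.0, gamma=3.0):
    """Zero-order generalized Morse wavelet in the frequency domain.

    Assumed parameterization: psi(w) = a * w**beta * exp(-w**gamma) for
    w > 0, and 0 for w <= 0 (which makes the wavelet analytic), with
    a = 2*(e*gamma/beta)**(beta/gamma) so that the peak value is 2.
    """
    if w <= 0.0:
        return 0.0
    a = 2.0 * (math.e * gamma / beta) ** (beta / gamma)
    return a * w ** beta * math.exp(-(w ** gamma))

# The peak frequency is (beta/gamma)**(1/gamma); scan a grid to confirm.
beta, gamma = 3.0, 3.0
w_peak = (beta / gamma) ** (1.0 / gamma)       # = 1.0 for these values
grid = [i * 0.001 for i in range(1, 5000)]
values = [gmw_freq(w, beta, gamma) for w in grid]
w_max = grid[values.index(max(values))]
print(w_max, max(values))
```

Varying β and γ independently is what gives the GMW the two-parameter flexibility in time-frequency localization that the abstract refers to.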
Morse Theory for Symmetric Functionals on the Sphere and an Application to a Bifurcation Problem.
1984-04-01
If the problem exhibits some symmetry, the eigenvalues of A'(0) are generally degenerate. Under suitable assumptions, we prove that the number of...extend the Böhme-Marino result to more general situations (Th. 3.1). Since in our situation we cannot use the L. S. category, the choice of the Morse...Hessian H_f at the point x. If f ∈ C² and is given by an isolated, degenerate critical point x, then, in general, p(t,x) is not equal to t^d (see
Information theoretic spreading measures of the symmetric trigonometric Rosen-Morse potential
NASA Astrophysics Data System (ADS)
Nath, D.
2014-06-01
We calculate information theoretic spreading measures of the position and momentum wave functions of the symmetric trigonometric Rosen-Morse potential for the states n=0,1,2,3,4,5. The position-space Rényi entropy, Fisher entropy and their corresponding entropic lengths are presented analytically for n=0,1,2,3,4,5. The momentum-space Rényi entropy and Rényi lengths are calculated analytically for n=0,1 and numerically for n=2,3,4,5. We also calculate the Fisher entropy and Fisher lengths in momentum space numerically for n=0,1,2,3,4,5.
Exact solution to laser rate equations: three-level laser as a Morse-like oscillator
NASA Astrophysics Data System (ADS)
León-Montiel, R. de J.; Moya-Cessa, Héctor M.
2016-08-01
It is shown how the rate equations that model a three-level laser can be cast into a single second-order differential equation, whose form describes a time-dependent harmonic oscillator. Using this result, we demonstrate that the resulting equation can be identified as a Schrödinger equation for a Morse-like potential, thus allowing us to derive exact closed-form expressions for the dynamics of the number of photons inside the laser cavity, as well as the atomic population inversion.
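The paper's reduction is exact and analytical; as a purely illustrative numerical counterpart, a textbook single-mode laser rate-equation model (not the specific three-level system of the paper) relaxes to its steady state when pumped above threshold. All parameter values below are hypothetical and chosen so the fixed point is (n, N) = (1, 1).

```python
def laser_rates(n, N, G=1.0, tau_c=1.0, tau=1.0, R=2.0):
    """Textbook single-mode laser rate equations (illustrative model,
    not the paper's specific three-level system):
      dn/dt = G*n*N - n/tau_c    (photon number n)
      dN/dt = R - N/tau - G*n*N  (population inversion N)
    """
    dn = G * n * N - n / tau_c
    dN = R - N / tau - G * n * N
    return dn, dN

def rk4_step(n, N, dt):
    """One classical Runge-Kutta step for the two coupled equations."""
    k1 = laser_rates(n, N)
    k2 = laser_rates(n + 0.5 * dt * k1[0], N + 0.5 * dt * k1[1])
    k3 = laser_rates(n + 0.5 * dt * k2[0], N + 0.5 * dt * k2[1])
    k4 = laser_rates(n + dt * k3[0], N + dt * k3[1])
    n += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    N += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return n, N

# Seed photon number, pump above threshold (N_th = 1/(G*tau_c) = 1):
n, N = 0.01, 0.0
dt = 0.001
for _ in range(50000):           # integrate to t = 50
    n, N = rk4_step(n, N, dt)
print(n, N)                      # the analytical steady state is (1, 1)
```

With these parameters the Jacobian at the fixed point has a double eigenvalue of -1, so the relaxation is strongly damped and the integration settles quickly.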
The generalized Morse wavelet method to determine refractive index dispersion of dielectric films
NASA Astrophysics Data System (ADS)
Kocahan, Özlem; Özcan, Seçkin; Coşkun, Emre; Özder, Serhat
2017-04-01
The continuous wavelet transform (CWT) method is a useful tool for determining the refractive index dispersion of dielectric films. Mother wavelet selection is an important factor for the accuracy of the results when using the CWT. In this study, the generalized Morse wavelet (GMW) was proposed as the mother wavelet because it has two degrees of freedom. Simulation studies based on error calculations and Cauchy coefficient comparisons are presented, and the CWT method with the GMW is also tested on a noisy signal. The experimental validity of the method was checked on a 100 μm thick D263 T Schott glass, and the results were compared with the catalog values.
Electronic dynamics under the effect of a nonlinear Morse interaction and a static electric field
NASA Astrophysics Data System (ADS)
Ranciaro Neto, A.; de Moura, F. A. B. F.
2016-11-01
Considering non-interacting electrons in a one-dimensional alloy in which atoms are coupled by a Morse potential, we study the system dynamics in the presence of a static electric field. Calculations are performed assuming a quantum mechanical treatment for the electronic transport and a classical Hamiltonian model for the lattice vibrations. We report numerical evidence of the existence of a soliton-electron pair, even when the electric field is turned on, and we offer a description of how the existence of such a phase depends on the magnitude of the electric field and the electron-phonon interaction.
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable-length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable-length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding methods, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
NASA Astrophysics Data System (ADS)
Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.
2016-10-01
We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high-power lasers in mind, which require large-size gain media, we have developed the open-source code HASEonGPU, which is capable of utilizing multiple graphics processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium-size GPU cluster of 64 NVIDIA Tesla K20m GPUs, and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reactions and of particle size changes due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
Alemgadmi, Khaled I. K.; Suparmi; Cari; Deta, U. A.
2015-09-30
The approximate analytical solution of the Schrödinger equation for the q-deformed Rosen-Morse potential was investigated using the supersymmetric quantum mechanics (SUSY QM) method. The approximate bound-state energy is given in closed form, and the corresponding approximate wave function for an arbitrary l-state is given for the ground state. The first excited state is obtained by applying the raising operator to the ground-state wave function. The special case of the ground state is given for various values of q. The q-deformation of the Rosen-Morse potential reduces the energy spectrum of the system: the larger the value of q, the smaller the energy spectrum.
Morse, Lennard-Jones, and Kratzer Potentials: A Canonical Perspective with Applications.
Walton, Jay R; Rivera-Rivera, Luis A; Lucchese, Robert R; Bevan, John W
2016-10-12
Canonical approaches are applied to classic Morse, Lennard-Jones, and Kratzer potentials. Using the canonical transformation generated for the Morse potential as a reference, inverse transformations allow the accurate generation of the Born-Oppenheimer potential for the H2(+) ion, neutral covalently bound H2, van der Waals bound Ar2, and the hydrogen bonded one-dimensional dissociative coordinate in a water dimer. Similar transformations are also generated using the Lennard-Jones and Kratzer potentials as references. Following application of inverse transformations, vibrational eigenvalues generated from the Born-Oppenheimer potentials give significantly improved quantitative comparison with values determined from the original accurately known potentials. In addition, an algorithmic strategy based upon a canonical transformation to the dimensionless form applied to the force distribution associated with a potential is presented. The resulting canonical force distribution is employed to construct an algorithm for deriving accurate estimates for the dissociation energy, the maximum attractive force, and the internuclear separations corresponding to the maximum attractive force and the potential well.
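For the Morse potential itself, two of the quantities named above have simple closed forms: with V(r) = De(1 − e^(−a(r−re)))², the attractive force is strongest at r = re + ln(2)/a, where its magnitude is aDe/2. A quick numerical check, with rough H2-like parameter values chosen purely for illustration:

```python
import math

def morse(r, De=4.75, a=1.94, re=0.741):
    """Morse potential V(r) = De*(1 - exp(-a*(r - re)))**2, measured from
    the well minimum. Parameter values are rough H2-like numbers (eV and
    angstroms) used purely for illustration."""
    return De * (1.0 - math.exp(-a * (r - re))) ** 2

def force(r, h=1e-6, **kw):
    """Numerical force F = -dV/dr by central difference."""
    return -(morse(r + h, **kw) - morse(r - h, **kw)) / (2.0 * h)

De, a, re = 4.75, 1.94, 0.741
# Closed forms: the attractive force is strongest at r = re + ln(2)/a,
# where its magnitude is a*De/2.
r_star = re + math.log(2.0) / a
print(r_star, force(r_star), -a * De / 2.0)
```

The closed form follows from maximizing (1 − u)u with u = e^(−a(r−re)), which peaks at u = 1/2, i.e. a(r − re) = ln 2.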
Parameterizing the Morse potential for coarse-grained modeling of blood plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-15
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing the viscous flow properties of blood, including the density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
NASA Astrophysics Data System (ADS)
Lazo, Edmundo; Saavedra, Eduardo; Humire, Fernando; Castro, Cristobal; Cortés-Cortés, Francisco
2015-09-01
We study the localization properties of direct transmission lines when we distribute two values of inductance, LA and LB, according to a generalized Thue-Morse aperiodic sequence generated by the inflation rule A → AB^(m-1), B → BA^(m-1), with m ≥ 2 an integer. We regain the usual Thue-Morse sequence for m = 2. We numerically study the changes produced in the localization properties of the electric current function I(ω) with increasing m. We demonstrate that the m = 2 case does not belong to the family m ≥ 3, because when m changes from m = 2 to m = 3, the number of extended states decreases significantly. However, for m ≫ 3, the localization properties become similar to those of the m = 2 case. Also, the
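The inflation rule quoted above is easy to implement directly; a minimal sketch:

```python
def generalized_thue_morse(m, generations):
    """Generate a generalized Thue-Morse word by the inflation rule
    A -> A B^(m-1), B -> B A^(m-1), with m >= 2, as in the abstract."""
    word = "A"
    for _ in range(generations):
        word = "".join(
            "A" + "B" * (m - 1) if c == "A" else "B" + "A" * (m - 1)
            for c in word
        )
    return word

# m = 2 recovers the usual Thue-Morse sequence, whose n-th letter is
# fixed by the parity of ones in the binary expansion of n.
print(generalized_thue_morse(2, 4))   # ABBABAABBAABABBA
print(generalized_thue_morse(3, 2))
```

In the transmission-line setting, each letter of such a word selects which inductance (LA or LB) occupies the corresponding cell of the line.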
Ganapol, Barry; Maldonado, Ivan
2014-01-23
The generation of multigroup cross sections lies at the heart of very high temperature reactor (VHTR) core design, whether of the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved, and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will provide a detailed outline of the entire processing procedure for the application of CENTRM in a final report, complete with a demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: thoroughly test the panel algorithm for neutron slowing down; develop the panel algorithm for multi-materials; establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; verify CENTRM in 1D plane geometry; create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing the effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.
NASA Astrophysics Data System (ADS)
Cai, Li; Pénéliau, Yannick; Diop, Cheikh M.; Malvagi, Fausto
2014-06-01
In this paper, we discuss some improvements we recently implemented in the Monte-Carlo code TRIPOLI-4® for the homogenization and collapsing of subassembly cross sections. The improvement offers another approach to obtaining critical multigroup cross sections with the Monte-Carlo method. The new calculation method in TRIPOLI-4® aims to preserve the neutron balances, the multiplication factors and the critical flux spectra for some realistic geometries. We do so by first improving the treatment of the energy transfer probability, the neutron excess weight and the neutron fission spectrum. This step is necessary for infinite geometries. The second step, which is elaborated in this paper, is aimed at better treatment of the multigroup anisotropy distribution law for finite geometries. Usually, Monte-Carlo homogenized multigroup cross sections are validated within a core calculation by a deterministic code. Here, the validation of the multigroup constants is also carried out by a Monte-Carlo core calculation code. Different subassemblies are tested with the new collapsing method, especially fast neutron reactor subassemblies.
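The collapsing step at the heart of such a procedure is the flux-weighted condensation σ_G = Σ_g σ_g φ_g / Σ_g φ_g over the fine groups g falling in each coarse group G, which preserves reaction rates. The sketch below shows only this scalar version with made-up numbers; the paper's actual treatment of scattering anisotropy and neutron balance is far more involved.

```python
def collapse(sigma_fine, flux_fine, coarse_bounds):
    """Flux-weighted collapse of fine-group cross sections into coarse
    groups: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g), summed over
    the fine groups g falling in coarse group G. This is the schematic
    scalar version of the reaction-rate-preserving collapse; a real code
    also treats scattering matrices and anisotropy."""
    collapsed = []
    for lo, hi in coarse_bounds:
        num = sum(s * p for s, p in zip(sigma_fine[lo:hi], flux_fine[lo:hi]))
        den = sum(flux_fine[lo:hi])
        collapsed.append(num / den)
    return collapsed

# Hypothetical 6-fine-group data collapsed into 2 coarse groups:
sigma = [1.2, 1.0, 0.8, 2.0, 3.5, 5.0]   # barns, illustrative values
flux = [0.5, 1.0, 1.5, 1.0, 0.6, 0.2]    # arbitrary units
print(collapse(sigma, flux, [(0, 3), (3, 6)]))
```

Because the collapse is a flux-weighted average, each coarse-group value necessarily lies between the minimum and maximum of its fine-group constituents.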
Filomatori, Claudia V; Carballeda, Juan M; Villordo, Sergio M; Aguirre, Sebastian; Pallarés, Horacio M; Maestre, Ana M; Sánchez-Vargas, Irma; Blair, Carol D; Fabri, Cintia; Morales, Maria A; Fernandez-Sesma, Ana; Gamarnik, Andrea V
2017-03-01
The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitoes and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery, but was found to depend on the selection of specific mutations in the viral 3'UTR. By dissecting the viral population and studying the phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as a regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of flavivirus 3'UTRs with
Inci, I.; Boztosun, I.; Bonatsos, D.
2008-11-11
Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the asymptotic iteration method (AIM). The resulting energy eigenvalue equations have been used to reproduce the experimental excitation spectra of Xe and Yb isotopes. The results are in good agreement with experimental data.
NASA Astrophysics Data System (ADS)
Ikhdair, Sameer M.
2012-07-01
We solve the parametric generalized effective Schrödinger equation with a specific choice of position-dependent mass function and Morse oscillator potential by means of the Nikiforov-Uvarov method combined with the Pekeris approximation scheme. All bound-state energies are found explicitly and all corresponding radial wave functions are built analytically. We choose the Weyl or Li and Kuhn ordering for the ambiguity parameters in our numerical work to calculate the energy spectrum for a few (H2, LiH, HCl and CO) diatomic molecules with arbitrary vibration n and rotation l quantum numbers and different position-dependent mass functions. Two special cases including the constant mass and the vibration s-wave (l = 0) are also investigated.
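For reference, in the constant-mass, l = 0 limit the Morse oscillator spectrum has the well-known closed form G(n) = ωe(n + 1/2) − ωexe(n + 1/2)², with ωexe = ωe²/(4De). The sketch below uses rough H2-like constants (assumed values, for illustration only) to show the finite, anharmonically compressed ladder of vibrational levels.

```python
def morse_levels(we=4401.0, De=38290.0):
    """Vibrational term values of a Morse oscillator (constant mass, l = 0):
        G(n) = we*(n + 1/2) - we_xe*(n + 1/2)**2,  we_xe = we**2 / (4*De).
    Numbers are rough H2-like constants in cm^-1, used for illustration.
    The spectrum terminates: only finitely many bound levels exist."""
    we_xe = we ** 2 / (4.0 * De)
    n_max = int(we / (2.0 * we_xe) - 0.5)   # last level before G(n) turns over
    return [we * (n + 0.5) - we_xe * (n + 0.5) ** 2 for n in range(n_max + 1)]

levels = morse_levels()
spacings = [b - a for a, b in zip(levels, levels[1:])]
print(len(levels), spacings[:3])
```

The level spacing G(n+1) − G(n) = ωe − 2ωexe(n + 1) decreases linearly with n, which is the anharmonic compression that makes the Morse model realistic for diatomic molecules.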
A Global “Natural” Grid Model Based on the Morse Complex
NASA Astrophysics Data System (ADS)
Wang, Hongbin; Zhao, Xuesheng; Zhu, Xinying; Li, Jiebiao
2016-11-01
In the exploration and interpretation of extensive or global natural phenomena, such as environmental monitoring, climatic analysis, hydrological analysis, meteorological services, simulation of sea-level rise, etc., knowledge about the shape properties of the earth's surface and terrain features is urgently needed. However, traditional discrete global grids (DGG) cannot directly provide it, and they are confronted with the challenge of rapid data-volume growth as modern earth-surveying technology develops. In this paper, a global "natural" grid (GNG) model based on the Morse complex is proposed, and a relatively comprehensive theoretical comparison with traditional DGG models is presented in detail, along with some issues to be resolved in the future. Finally, the experimental and analysis results indicate that this distinct GNG model built from DGG is significant for the advance of geospatial data acquisition technology and for the interpretation of extensive or global natural phenomena.
On the exact solubility in momentum space of the trigonometric Rosen-Morse potential
NASA Astrophysics Data System (ADS)
Compean, C. B.; Kirchbach, M.
2011-01-01
The Schrödinger equation with the trigonometric Rosen-Morse potential in a flat three-dimensional Euclidean space, E3, and its exact solutions are shown to be exactly Fourier transformable to momentum space, though the resulting equation is purely algebraic and cannot be cast into the canonical form of an integral Lippmann-Schwinger equation. This is because the cotangent function does not allow for an exact Fourier transform in E3. In addition, we recall that the above potential can also be viewed as an angular function of the second polar angle parametrizing the three-dimensional spherical surface, S3, of a constant radius, in which case the cotangent function would allow for an exact integral transform to momentum space. On that basis, we obtain a momentum space Lippmann-Schwinger-type equation, though the corresponding wavefunctions have to be obtained numerically.
Trace map and eigenstates of a Thue-Morse chain in a general model
NASA Astrophysics Data System (ADS)
Cheng, Sheng-Feng; Jin, Guo-Jun
2002-04-01
By the standard method proposed by Kolar and Nori [Phys. Rev. B 42, 1062 (1990)], a rigorous eight-dimensional (8D) trace map for a general model of Thue-Morse (TM) sequences is obtained. Using this trace map, the characteristics of electronic eigenstates in TM lattices are explored in a very broad way. Simultaneously, a constraint condition for the energy parameters, under which the complex 8D trace map can be simplified into the ordinary form, is found. It is also proved analytically that all eigenstates of TM lattices are extended when this constraint condition is fulfilled. Furthermore, the properties of eigenstates beyond this constraint are investigated and some wave functions with critical features are discovered by multifractal analysis. Our results support the previous viewpoint that a TM lattice is an intermediate stage between periodic and Fibonacci structures.
NASA Astrophysics Data System (ADS)
Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun
2009-12-01
A fast coeff_token decoding method based on a new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and operational delays. Recently, a new coeff_token variable-length decoding method was suggested to reduce memory access. However, it still requires a large portion of the total memory access in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of the codewords in the variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.
NASA Astrophysics Data System (ADS)
Dönmez, Orhan
2004-09-01
In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive mesh refinement (AMR) is presented. To this end, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two, and three dimensions. Results from the uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting, in a treatment that gives second-order accurate solutions in space and time.
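The flux/source coupling via Strang splitting can be sketched on a scalar model equation. This toy (with made-up stand-ins for the flux and source operators, not the GRH system) shows the half-step/full-step/half-step pattern that yields second-order accuracy in time:

```python
# Strang splitting for du/dt = A(u) + B(u): advance B a half step, A a full
# step, then B another half step. Here A and B are simple illustrative
# operators integrated with forward Euler substeps.

def step_flux(u, dt):
    """Stand-in for the flux (advection) part: du/dt = -0.5*u."""
    return u + dt * (-0.5 * u)

def step_source(u, dt):
    """Stand-in for the source part: du/dt = 0.2."""
    return u + dt * 0.2

def strang_step(u, dt):
    u = step_source(u, dt / 2)   # source, half step
    u = step_flux(u, dt)         # flux, full step
    u = step_source(u, dt / 2)   # source, half step
    return u

u, dt = 1.0, 0.01
for _ in range(100):             # integrate to t = 1
    u = strang_step(u, dt)
print(round(u, 3))               # near the exact value 0.4 + 0.6*e**-0.5 ~ 0.764
```

The symmetric ordering is what makes the splitting error second order; applying A then B once per step would only be first-order accurate.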
Aguirre, Sebastian; Pallarés, Horacio M.; Blair, Carol D.; Fabri, Cintia; Morales, Maria A.; Fernandez-Sesma, Ana; Gamarnik, Andrea V.
2017-01-01
The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitoes and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery, but on the selection of specific mutations in the viral 3'UTR. By dissecting the viral population and studying the phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses, and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTRs of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as a regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of flavivirus 3'UTRs.
Andari, Elissar; Richard, Nathalie; Leboyer, Marion; Sirigu, Angela
2016-03-01
The neuropeptide oxytocin (OT) is one of the major targets of research in neuroscience, with respect to social functioning. Oxytocin promotes social skills and improves the quality of face processing in individuals with social dysfunctions such as autism spectrum disorder (ASD). Although one of OT's key functions is to promote social behavior during dynamic social interactions, the neural correlates of this function remain unknown. Here, we combined acute intranasal OT (IN-OT) administration (24 IU) and fMRI with an interactive ball game and a face-matching task in individuals with ASD (N = 20). We found that IN-OT selectively enhanced the brain activity of early visual areas in response to faces as compared to non-social stimuli. OT inhalation modulated the BOLD activity of amygdala and hippocampus in a context-dependent manner. Interestingly, IN-OT intake enhanced the activity of mid-orbitofrontal cortex in response to a fair partner, and insula region in response to an unfair partner. These OT-induced neural responses were accompanied by behavioral improvements in terms of allocating appropriate feelings of trust toward different partners' profiles. Our findings suggest that OT impacts the brain activity of key areas implicated in attention and emotion regulation in an adaptive manner, based on the value of social cues.
Gabriel, T.A.
1993-12-31
The purpose of this paper is to describe a program package, CALOR93, that has been developed to design and analyze different detector systems, in particular, calorimeters which are used in high energy physics experiments to determine the energy of particles. One's ability to design a calorimeter to perform a certain task can have a strong influence upon the validity of experimental results. The validity of the results obtained with CALOR93 has been verified many times by comparison with experimental data. The codes (HETC93, SPECT93, LIGHT, EGS4, MORSE, and MICAP) are quite generalized and detailed enough so that any experimental calorimeter setup can be studied. Due to this generalization, some software development is necessary because of the wide diversity of calorimeter designs.
ERIC Educational Resources Information Center
Pattavina, Paul
1980-01-01
Excerpts from an interview with William C. Morse on teacher burnout concern special educators' sense of failure and impotence, the issues connected with individualized educational programs, and the importance of the first year of teaching. (CL)
Cohen, Michael R.; Smetzer, Judy L.
2014-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers’ names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters’ wishes as to the level of detail included in publications. PMID:24958950
Suparmi, A.; Cari, C.; Angraini, L. M.
2014-09-30
The bound-state solutions of the Dirac equation for the Hulthén and trigonometric Rosen-Morse non-central potentials are obtained using finite Romanovski polynomials. The approximate relativistic energy spectrum and the radial wave functions, which are given in terms of Romanovski polynomials, are obtained from the solution of the radial Dirac equation. The angular wave functions and the orbital quantum number are found from the solution of the angular Dirac equation. In the non-relativistic limit, the relativistic energy spectrum reduces to the non-relativistic one.
Jeukens, Julie; Bernatchez, Louis
2012-01-01
While gene expression divergence is known to be involved in adaptive phenotypic divergence and speciation, the relative importance of regulatory and structural evolution of genes is poorly understood. A recent next-generation sequencing experiment allowed the identification of candidate genes potentially involved in the ongoing speciation of sympatric dwarf and normal lake whitefish (Coregonus clupeaformis), such as cytosolic malate dehydrogenase (MDH1), which showed both significant expression and sequence divergence. The main goal of this study was to investigate in more detail the signatures of natural selection in the regulatory and coding sequences of MDH1 in lake whitefish and test for parallelism of these signatures with other coregonine species. Sequencing of the two regions in 118 fish from four sympatric pairs of whitefish and two cisco species revealed a total of 35 single nucleotide polymorphisms (SNPs), with more genetic diversity in European compared to North American coregonine species. While the coding region was found to be under purifying selection, an SNP in the proximal promoter exhibited significant allele frequency divergence in a parallel manner among independent sympatric pairs of North American lake whitefish and European whitefish (C. lavaretus). According to transcription factor binding simulation for 22 regulatory haplotypes of MDH1, putative binding profiles were fairly conserved among species, except for the region around this SNP. Moreover, we found evidence for the role of this SNP in the regulation of MDH1 expression level. Overall, these results provide further evidence for the role of natural selection in gene regulation evolution among whitefish species pairs and suggest its possible link with patterns of phenotypic diversity observed in coregonine species.
Webster, Michael A.
2015-01-01
Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985
NASA Astrophysics Data System (ADS)
Dimitriou, K. I.; Mercouris, Th.; Constantoudis, V.; Komninos, Y.; Nicolaides, C. A.
2006-05-01
The multiphoton vibrational excitation and dissociation of Morse molecules have been computed nonperturbatively using Hamilton's and Schrödinger's time-dependent equations, for a range of laser pulse parameters. The time-dependent Schrödinger equation is solved by the state-specific expansion approach [e.g., 1]. For its solution, emphasis has been given to the inclusion of the continuous spectrum, whose contribution to the multiphoton probabilities for resonance excitation to a number of excited discrete states as well as to dissociation has been examined as a function of laser intensity, frequency and pulse duration. An analysis of possible quantal-classical correspondences for this system is being carried out. We note that distinct features exist relative to previous classical calculations [2]. For example, the dependence on the laser frequency gives rise to an asymmetry around the red-shifted frequency corresponding to the maximum probability. [1] Th. Mercouris, I. D. Petsalakis and C. A. Nicolaides, J. Phys. B 27, L519 (1994). [2] V. Constantoudis and C. A. Nicolaides, Phys. Rev. E 64, 562112 (2001). ^1This work was supported by the program 'Pythagoras' which is co-funded by the European Social Fund (75%) and National Resources (25%). ^2Physics Department, National Technical University, Athens, Greece. ^3Theoretical and Physical Chemistry Institute, Hellenic Research Foundation, Athens, Greece.
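For reference, the bound-state spectrum of a Morse oscillator follows the standard textbook formula E_n = hw(n + 1/2) - [hw(n + 1/2)]^2/(4D). A small sketch (with illustrative parameters, in units where hbar = m = 1; this is not the paper's laser-driven computation) shows the anharmonic compression of the level spacing toward dissociation:

```python
import math

def morse_levels(D, a, m=1.0, hbar=1.0):
    """Bound-state energies of V(x) = D*(1 - exp(-a*x))**2 (textbook formula)."""
    hw = hbar * a * math.sqrt(2.0 * D / m)   # harmonic spacing near the well minimum
    n_max = math.floor(2.0 * D / hw - 0.5)   # last level below the dissociation limit
    return [hw * (n + 0.5) - (hw * (n + 0.5)) ** 2 / (4.0 * D)
            for n in range(n_max + 1)]

levels = morse_levels(D=10.0, a=1.0)         # illustrative well depth and range
print(len(levels), [round(E, 3) for E in levels])
```

Unlike a harmonic oscillator, the gaps between successive levels shrink as n grows, which is why the continuum contribution emphasized in the abstract matters for multiphoton excitation near the dissociation threshold.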
Evaluation of seepage from Chester Morse Lake and Masonry Pool, King County, Washington
Hidaka, F.T.; Garrett, Arthur Angus
1967-01-01
Hydrologic data collected in the Cedar and Snoqualmie River basins on the west slope of the Cascade Range have been analyzed to determine the amount of water lost by seepage from Chester Morse Lake and Masonry Pool and the consequent gain by seepage to the Cedar and South Fork Snoqualmie Rivers. For water years 1957-64, average losses were about 220 cfs (cubic feet per second), while average gains were about 180 cfs in the Cedar River and 50 cfs in the South Fork Snoqualmie River. Streamflow and precipitation data for water years 1908-26 and 1930-F2 indicate that a change in runoff regimen occurred in the Cedar and South Fork Snoqualmie Rivers after the Boxley Creek washout in December 1918. For water years 1919-26 and 1930-32, the flow of the Cedar River near Landsburg averaged about 80 cfs less than it would have if the washout had not occurred. In contrast, the flow of the South Fork Snoqualmie River at North Bend averaged about 60 cfs more than it would have.
Stress on external hexagon and Morse taper implants submitted to immediate loading
Odo, Caroline H.; Pimentel, Marcele J.; Consani, Rafael L.X.; Mesquita, Marcelo F.; Nóbilo, Mauro A.A.
2015-01-01
Background/Aims This study aimed to evaluate the stress distribution around external hexagon (EH) and Morse taper (MT) implants with different prosthetic systems for immediate loading (distal bar (DB), casting technique (CT), and laser welding (LW)) by using the photoelastic method. Methods Three infrastructures were manufactured on a model simulating an edentulous lower jaw. All models were composed of five implants (4.1 mm × 13.0 mm) simulating a conventional lower protocol. The samples were divided into six groups. G1: EH implants with DB and acrylic resin; G2: EH implants with titanium infrastructure CT; G3: EH implants with titanium infrastructure attached using LW; G4: MT implants with DB and acrylic resin; G5: MT implants with titanium infrastructure CT; G6: MT implants with titanium infrastructure attached using LW. After the infrastructure construction, the photoelastic models were manufactured and a loading of 4.9 N was applied to the cantilever. Five pre-determined points were analyzed by Fringes software. Results Data showed significant differences between the connection types (p < 0.0001), and there was no significant difference among the techniques used for the infrastructure. Conclusion The reduction in stress levels was more influenced by the MT connection (except for CT). Different bar types submitted to immediate loading did not influence stress concentration. PMID:26605142
Quantum state engineering of spin-orbit-coupled ultracold atoms in a Morse potential
NASA Astrophysics Data System (ADS)
Ban, Yue; Chen, Xi; Muga, J. G.; Sherman, E. Ya
2015-02-01
Achieving full control of a Bose-Einstein condensate can have valuable applications in metrology, quantum information processing, and quantum condensed matter physics. We propose protocols to simultaneously control the internal (related to its pseudospin-1/2) and motional (position-related) states of a spin-orbit-coupled Bose-Einstein condensate confined in a Morse potential. In the presence of synthetic spin-orbit coupling, the state transition of a noninteracting condensate can be implemented by Raman coupling and detuning terms designed by invariant-based inverse engineering. The state transfer may also be driven by tuning the direction of the spin-orbit-coupling field and modulating the magnitude of the effective synthetic magnetic field. The results can be generalized for interacting condensates by changing the time-dependent detuning to compensate for the interaction. We find that a two-level algorithm for the inverse engineering remains numerically accurate even if the entire set of possible states is considered. The proposed approach is robust against the laser-field noise and systematic device-dependent errors.
Construction of the Barut-Girardello quasi coherent states for the Morse potential
NASA Astrophysics Data System (ADS)
Popov, Dušan; Dong, Shi-Hai; Pop, Nicolina; Sajfert, Vjekoslav; Şimon, Simona
2013-12-01
The Morse oscillator (MO) potential occupies a privileged place among the anharmonic oscillator potentials due to its applications in quantum mechanics to diatomic or polyatomic molecules, spectroscopy, and so on. For this potential, some kinds of coherent states (especially of the Klauder-Perelomov and Gazeau-Klauder kinds) have been constructed previously. In this paper we construct the coherent states of the Barut-Girardello kind (BG-CSs) for the MO potential, which have received less attention in the scientific literature. We obtain these CSs and demonstrate that they fulfil all the conditions required of coherent states. The Mandel parameter for the pure BG-CSs, as well as the Husimi and P quasi-distribution functions (for the mixed thermal states), are also presented. Finally, we show that all the results obtained for the BG-CSs of the MO tend, in the harmonic limit, to the corresponding results for the coherent states of the one-dimensional harmonic oscillator (CSs for the HO-1D).
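For orientation, the generic su(1,1) Barut-Girardello construction (textbook form, with Bargmann index k; the paper adapts this to the Morse spectrum) defines the states as eigenstates of the lowering operator K_-:

```latex
% Generic Barut-Girardello coherent states for an su(1,1) lowering operator
% (standard form; the Morse-specific construction is the paper's contribution).
K_- \lvert z;k\rangle = z\,\lvert z;k\rangle, \qquad
\lvert z;k\rangle
  = \sqrt{\frac{|z|^{2k-1}}{I_{2k-1}(2|z|)}}\;
    \sum_{n=0}^{\infty} \frac{z^{n}}{\sqrt{n!\,\Gamma(n+2k)}}\,\lvert n;k\rangle ,
```

where I_{2k-1} is the modified Bessel function of the first kind, which supplies the normalization, in contrast to Klauder-Perelomov states, which are built by acting with the displacement-like exponential of the raising operator.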
High-birefringence photonic crystal fiber structures based on the binary Morse-Thue fractal sequence
NASA Astrophysics Data System (ADS)
Al-Muraeb, Ahmed; Abdel-Aty-Zohdy, Hoda
2016-09-01
A novel index-guiding silica glass-core hexagonal High-Birefringence Photonic Crystal Fiber (HB-PCF) is proposed, with five rings of standard circular cladding air holes arranged in four formations inspired by the Binary Morse-Thue fractal Sequence (BMTS). The form birefringence, confinement loss, chromatic dispersion, effective mode area, and effective normalized frequency are evaluated for the four PCFs operating within the 1.8-2 μm eye-safe wavelength range. Modeling and analysis of the four PCF formations are performed using full-vector analysis in the Finite Element Method (FEM) with COMSOL Multiphysics. With fabrication constraints and commercial availability taken into account in designing the proposed PCF structures, a high birefringence of up to 6.549 × 10⁻³ at 2 μm is achieved with dispersion-free single-mode operation. Confinement loss as low as 3.2 × 10⁻⁵ to 6.5 × 10⁻⁴ dB/m over the 1.8-2 μm range is achieved as well. Comparison against previously reported PCF structures reveals the desirably higher birefringence of our BMTS HB-PCF. The proposed PCFs are of vital use in various optical systems (e.g., multi-wavelength fiber ring laser systems and tunable lasers), catering for applications such as optical sensing, LIDAR systems, material processing, optical signal processing, and optical communication.
NASA Astrophysics Data System (ADS)
Montemayor, R.; Salem, L. D.
1991-12-01
We extend the scope of the modified Riccati approach to partial solubility in quantum mechanics introduced in a previous work [L. D. Salem and R. Montemayor, Phys. Rev. A 43, 1169 (1991)]. With the use of adequate mappings u(x), we show the convenience of the modified Riccati approach to analyze potentials that can be written as rational functions of u. The necessary conditions for a Hamiltonian to be solvable are discussed in detail. By considering the exponential mapping u = e^(-x), we construct a family of potentials related to the exactly solvable Morse oscillator. Within this family, we have identified a three-parameter quasiexactly solvable potential, which, depending on the value of its coupling constants, leads to a symmetric or asymmetric confining potential, with a single-well or a double-well structure. Explicit expressions for the energies and eigenfunctions are given for particular cases. The analytic continuation of the symmetric subset gives rise to a quasiexactly solvable periodic potential.
Robinson, Andrew; Wu, Peter S-C; Harrop, Stephen J; Schaeffer, Patrick M; Dosztányi, Zsuzsanna; Gillings, Michael R; Holmes, Andrew J; Nevalainen, K M Helena; Stokes, H W; Otting, Gottfried; Dixon, Nicholas E; Curmi, Paul M G; Mabbutt, Bridget C
2005-03-11
The wide-ranging physiology and large genetic variability observed for prokaryotes is largely attributed, not to the prokaryotic genome itself, but rather to mechanisms of lateral gene transfer. Cassette PCR has been used to sample the integron/gene cassette metagenome from different natural environments without laboratory cultivation of the host organism, and without prior knowledge of any target protein sequence. Since over 90% of cassette genes are unrelated to any sequence in the current databases, it is not clear whether these genes code for folded functional proteins. We have selected a sample of eight cassette-encoded genes with no known homologs; five have been isolated as soluble protein products and shown by biophysical techniques to be folded. In solution, at least three of these proteins organise as stable oligomeric assemblies. The tertiary structure of one of these, Bal32a derived from a contaminated soil site, has been solved by X-ray crystallography to 1.8 Å resolution. From the three-dimensional structure, Bal32a is found to be a member of the highly adaptable alpha+beta barrel family of transport proteins and enzymes. In Bal32a, the barrel cavity is unusually deep and inaccessible to solvent. Polar side-chains in its interior are reminiscent of catalytic sites of limonene-1,2-epoxide hydrolase and nogalonic acid methyl ester cyclase. These studies demonstrate the viability of direct sampling of mobile DNA as a route for the discovery of novel proteins.
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
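The outer nonlinear iteration can be pictured with a one-dimensional toy: Newton's method, where each step solves a linearized problem. Here the inner solve is exact; in TranAir the preconditioned Krylov method solves it only approximately, which is what makes the Newton method "inexact". This is an illustration of the iteration pattern only, not TranAir's solver:

```python
# 1-D Newton iteration: repeatedly solve the linearized equation df(x)*step = f(x)
# and update x. In an inexact Newton method, the division below is replaced by
# an approximate (e.g. Krylov) linear solve on a large discretized system.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)      # the "inner linear solve" of each Newton step
        x -= step
        if abs(step) < tol:      # converged: the correction is negligible
            break
    return x

# Find sqrt(2) as the root of f(x) = x^2 - 2.
root = newton(lambda x: x ** 2 - 2.0, lambda x: 2.0 * x, 1.0)
print(round(root, 6))            # -> 1.414214
```

The quadratic convergence of the outer loop is retained as long as the inner solves are accurate enough, which is the balance an inexact Newton method tunes.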
Compressible Astrophysics Simulation Code
Howell, L.; Singer, M.
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Beauvais, Z S; Thompson, K H; Kearfott, K J
2009-07-01
Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM-concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y⁻¹. A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 μSv kg Bq⁻¹ y⁻¹ for soil and 0.00596 μSv m³ Bq⁻¹ y⁻¹ for water (assuming a 1:1 ²³⁴U:²³⁸U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 μSv kg Bq⁻¹ y⁻¹ in soil and 13.0 μSv m³ Bq⁻¹ y⁻¹ in water.
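Applying such dose factors is simple arithmetic: multiply each factor by the corresponding activity concentration and sum. A sketch using the two adult-industrial uranium factors quoted above (the concentrations are made-up inputs, not data from the study):

```python
# Dose factors for an adult in an industrial setting (values from the study).
FACTOR_SOIL = 0.00476    # microSv per (Bq/kg of soil) per year
FACTOR_WATER = 0.00596   # microSv per (Bq/m^3 of water) per year

def annual_uranium_dose(c_soil, c_water):
    """Annual dose (microSv) from uranium activity in soil (Bq/kg) and water (Bq/m^3)."""
    return FACTOR_SOIL * c_soil + FACTOR_WATER * c_water

dose = annual_uranium_dose(40.0, 10.0)   # hypothetical concentrations
print(round(dose, 3))                    # -> 0.25
```

A full estimate per the paper would sum such terms over the uranium, thorium, and actinium series and over the age groups and exposure scenarios tabulated.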
NASA Astrophysics Data System (ADS)
Pavlou, Andrew Theodore
The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
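The fit-then-evaluate idea can be sketched with an ordinary least-squares line: coefficients are fit once to data tabulated at a few temperatures, then evaluated at any problem temperature during the random walk. The data below are invented for illustration; the dissertation's regression is over thermal scattering distribution parameters, not a single scalar:

```python
# Pre-fit a linear model to temperature-tabulated data, then evaluate it
# on the fly instead of storing a full table at every temperature.

def linfit(xs, ys):
    """Ordinary least-squares fit of y ~ a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

temps = [300.0, 600.0, 900.0, 1200.0]   # tabulation temperatures (K)
vals = [1.00, 1.31, 1.58, 1.92]         # made-up temperature-dependent quantity

a, b = linfit(temps, vals)              # done once, before the simulation

def on_the_fly(T):
    """Evaluate the pre-fit model at an arbitrary temperature during the walk."""
    return a + b * T

print(round(on_the_fly(800.0), 3))
```

Storing two coefficients instead of a table per temperature is the source of the memory reduction described above; the real method fits many such coefficients per nuclide.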
2014-12-01
This report compares several forward error correction (FEC) methods: a turbo code, a low-density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. Many civilian systems use LDPC FEC codes, and the Navy is planning to use LDPC for some future systems.
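As a concrete instance of one FEC family named here, a minimal rate-1/2 convolutional encoder (the common constraint-length-3, generators (7,5)-octal textbook example; not necessarily one of the codes evaluated in the report):

```python
# Rate-1/2 convolutional encoder: each input bit produces two output bits,
# each the parity of the taps selected by a generator polynomial over the
# current bit and the two previous bits.

G1, G2 = 0b111, 0b101   # generator taps (7 and 5 in octal)

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111        # shift the new bit in
        out.append(bin(state & G1).count("1") % 2)  # parity under generator 1
        out.append(bin(state & G2).count("1") % 2)  # parity under generator 2
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

The memory in `state` is what a Viterbi decoder exploits on the receive side; LDPC and turbo codes achieve better performance at long block lengths, which is a typical reason for the preference noted above.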
NASA Astrophysics Data System (ADS)
Mojaveri, B.
2013-05-01
Based on the quantum states of an electron trapped on an infinite band along the x-axis in the presence of the Morse-like perpendicular magnetic field [H. Fakhri, B. Mojaveri, M.A. Gomshi Nobary, Rep. Math. Phys. 66, 299 (2010)], the Klauder-Perelomov and Gazeau-Klauder coherent states are constructed. To realize the resolution of identity, their corresponding positive definite measure on the complex plane are obtained in terms of the known functions. Also, some nonclassical properties such as sub-Poissonian statistics and squeezing effect of constructed coherent states are studied.
Bloch-like surface waves in Fibonacci quasi-crystals and Thue-Morse aperiodic dielectric multilayers
NASA Astrophysics Data System (ADS)
Koju, Vijay; Robertson, William M.
2016-09-01
Bloch surface waves (BSWs) in periodic dielectric multilayer structures with surface defect have been extensively studied. However, it has recently been recognized that quasi-crystals and aperiodic dielectric multilayers also support Bloch-like surface waves (BLSWs). In this work, we numerically show the existence of BLSWs in Fibonacci quasi-crystals and Thue-Morse aperiodic dielectric multilayers using the prism coupling technique. We compare the surface field enhancement and penetration depth of BLSWs in these structures with that of BSWs in their periodic counterparts.
Scattering States of l-Wave Schrödinger Equation with Modified Rosen-Morse Potential
NASA Astrophysics Data System (ADS)
Chen, Wen-Li; Shi, Yan-Wei; Wei, Gao-Feng
2016-08-01
Within a Pekeris-type approximation to the centrifugal term, we examine the approximately analytical scattering state solutions of the l-wave Schrödinger equation with the modified Rosen-Morse potential. The calculation formula for the phase shifts is derived, and the corresponding bound-state energy levels are also obtained from the poles of the scattering amplitude. Supported by the National Natural Science Foundation of China under Grant No. 11405128, and the Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 15JK2093
NASA Astrophysics Data System (ADS)
Wei, Gao-Feng; Dong, Shi-Hai
2010-11-01
By applying a Pekeris-type approximation to the pseudo-centrifugal term, we study the pseudospin symmetry of a Dirac nucleon subjected to scalar and vector modified Rosen-Morse (MRM) potentials. A complicated quartic energy equation and spinor wave functions with arbitrary spin-orbit coupling quantum number k are presented. The pseudospin degeneracy is checked numerically. Pseudospin symmetry is discussed theoretically and numerically in the limit case α → 0. It is found that the relativistic MRM potential cannot trap a Dirac nucleon in this limit.
Efficient entropy coding for scalable video coding
NASA Astrophysics Data System (ADS)
Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo
2005-10-01
The standardization of the scalable extension of H.264 has called for additional functionality on top of the H.264 standard to support combined spatio-temporal and SNR scalability. For the entropy coding of the H.264 scalable extension, the Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme has been considered so far. In this paper, we present a new context modeling scheme that exploits the inter-layer correlation between syntax elements, which improves the efficiency of entropy coding in the H.264 scalable extension. Simulation results for encoding the syntax element mb_type show that the proposed method improves coding efficiency by up to 16% in terms of bit savings, owing to the estimation of a more adequate probability model.
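The benefit of conditioning on a correlated context can be seen with a toy adaptive binary context model; this is a sketch of the idea behind CABAC-style context modeling, not the H.264 state machine, and the bit stream below is synthetic:

```python
from collections import defaultdict
import math

class ContextModel:
    """Per-context bit counts with a Laplace-smoothed probability estimate."""
    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])  # [zeros, ones]

    def p_one(self, ctx):
        zeros, ones = self.counts[ctx]
        return ones / (zeros + ones)

    def update(self, ctx, bit):
        self.counts[ctx][bit] += 1

    def code_cost(self, ctx, bit):
        """Ideal code length in bits an arithmetic coder would spend."""
        p = self.p_one(ctx) if bit else 1.0 - self.p_one(ctx)
        return -math.log2(p)

# Using the co-located base-layer bit as context (the inter-layer idea):
# perfectly correlated bits make each context's estimate sharpen quickly,
# so the total ideal code length collapses far below 1 bit per symbol.
model = ContextModel()
total = 0.0
for base_bit, enh_bit in [(0, 0), (1, 1), (0, 0), (1, 1), (0, 0)] * 20:
    total += model.code_cost(base_bit, enh_bit)
    model.update(base_bit, enh_bit)
```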
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case (e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
NASA Astrophysics Data System (ADS)
Trabelsi, Youssef; Benali, Naim; Bouazzi, Yassine; Kanzari, Mounir
2013-09-01
The transmission properties of hybrid quasi-periodic photonic systems (HQPS), made by combining one-dimensional periodic photonic crystals (PPCs) and quasi-periodic photonic crystals (QPCs), were theoretically studied. The hybrid quasi-periodic photonic lattice, based on hetero-structures, was built from the Fibonacci and Thue-Morse sequences. We addressed the microwave properties of waves through the one-dimensional symmetric Fibonacci and Thue-Morse systems, i.e., quasi-periodic structures made up of two different dielectric materials (Rogers and air) in the quarter-wavelength condition. We show that controlling the Fibonacci parameters permits obtaining selective optical filters with narrow passbands and polychromatic stop-band filters with varied properties that can be controlled as desired. We present the self-similar features of the spectra, and we illustrate the fractal process through a return map of the transmission coefficients. We extracted the band gaps of the hybrid quasi-periodic multilayered structures, called "pseudo band gaps", which often contain resonant states that can be considered a manifestation of the numerous defects distributed along the structure. The transmittance spectra showed that the cutoff frequency can be manipulated through the thicknesses of the defects and the type of dielectric layers in the system. Taken together, these two properties provide favorable conditions for the design of an all-microwave intermediate reflector.
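Transmission through such one-dimensional stacks is typically computed with the characteristic-matrix (transfer-matrix) method. A minimal normal-incidence sketch for a lossless stack; the refractive indices and layer counts below are illustrative placeholders, not the Rogers/air values of the paper:

```python
import numpy as np

def transmittance(indices, thicknesses, wavelength, n_in=1.0, n_out=1.0):
    """Thin-film characteristic-matrix method at normal incidence."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2 * np.pi * n * d / wavelength  # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    T = 4 * n_in * n_out / abs(n_in * B + C) ** 2
    return T, abs(r) ** 2

# Quarter-wave periodic stack (each optical thickness n*d = lambda/4):
# at the design wavelength the transmission is strongly suppressed.
lam = 1.0
layers = [2.3, 1.38] * 8
thick = [lam / (4 * n) for n in layers]
T, R = transmittance(layers, thick, lam)
```

Replacing the periodic ordering of `layers` with a Fibonacci or Thue-Morse ordering is the structural change the hybrid systems above exploit.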
Clinical coding. Code breakers.
Mathieson, Steve
2005-02-24
--The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software applies more generally to all threshold-based feature definitions.
NASA Astrophysics Data System (ADS)
Li, Yuanqiao; Zhang, Hongmei; Liu, De
2016-06-01
In this paper, we evaluate the transport properties of a Thue-Morse AB-stacked bilayer graphene superlattice with different interlayer potential biases. Based on the transfer matrix method, the transmission coefficient, the conductance, and the Fano factor are numerically calculated and discussed. We find that the symmetry of the transmission coefficient with respect to normal incidence depends on the structural symmetry of the system, and a new transmission peak appears in the region where the energy band gap opens. The conductance and the Fano factor can be greatly modulated not only by the Fermi energy and the interlayer potential bias but also by the generation number. Interestingly, for large interlayer potential bias, the conductance exhibits plateaus of almost zero conductance, and Fano-factor plateaus at the Poisson value occur in the band-gap-opening region.
NASA Astrophysics Data System (ADS)
Suzuki, Yoko; Tanimura, Yoshitaka
1999-02-01
We apply the optimized perturbation theory (OPT) to study the dynamics of a dimer molecule system in condensed phases described by a Morse potential coupled to a heat bath. The OPT combines techniques based on the variational principle and the perturbative expansion. The first-order approximation of the OPT agrees with Feynman's variational theory developed for the polaron problem [Statistical Mechanics: A Set of Lectures (Benjamin, London, 1972)]. The OPT makes it possible to deal with an anharmonic potential system in a nonperturbative way. Combined with the inversion method, a technique for carrying out the Legendre transformation, we take into account the asymmetry of the potential effectively. We then calculate the absorption spectrum of the molecular system, which is related to a two-time correlation function of a nuclear coordinate.
Three-dimensional effective mass Schrödinger equation: harmonic and Morse-type potential solutions.
Ovando, G; Morales, J; López-Bonilla, J L
2013-05-01
In this work, a scheme to generate exact wave functions and eigenvalues for the spherically symmetric three-dimensional position-dependent effective mass Schrödinger equation is presented. The methodology is implemented by means of separation of variables and point canonical transformations, which allow one to recognize a radial equation with important differences compared with the one-dimensional position-dependent mass problem, which has been widely studied. This situation requires careful consideration of the boundary conditions of the emergent problem. To obtain specific exact solutions, the methodology requires known solutions of ordinary one-dimensional Schrödinger equations. We have preferred applications that use the harmonic oscillator and the Morse oscillator solutions.
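The Morse oscillator solutions invoked above have a standard closed-form bound-state spectrum, E_n = ω_e(n + 1/2) - ω_e x_e(n + 1/2)², truncated at a highest bound level. A sketch with HCl-like spectroscopic constants as illustrative inputs (not parameters from the paper):

```python
def morse_levels(we, wexe):
    """Morse bound-state energies (same units as we, e.g. cm^-1)."""
    n_max = int(we / (2 * wexe) - 0.5)  # highest bound vibrational level
    return [we * (n + 0.5) - wexe * (n + 0.5) ** 2 for n in range(n_max + 1)]

levels = morse_levels(we=2990.9, wexe=52.8)  # roughly H35Cl constants
# Spacing shrinks with n (anharmonicity), unlike the harmonic oscillator.
```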
Carazo, Matthew; Sadarangani, Tina; Natarajan, Sundar; Katz, Stuart D; Blaum, Caroline; Dickson, Victoria Vaughan
2016-08-15
Geriatric syndromes are common in hospitalized elders with heart failure (HF), but association with clinical outcomes is not well characterized. The purpose of this study (N = 289) was to assess presence of geriatric syndromes using Joint Commission-mandated measures, the Braden Scale (BS) and Morse Fall Scale (MFS), and to explore prognostic utility in hospitalized HF patients. Data extracted from the electronic medical record included sociodemographics, medications, clinical data, comorbid conditions, and the BS and MFS. The primary outcome of mortality was assessed using Social Security Death Master File. Statistical analysis included Cox proportional hazards models to assess association between BS and MFS scores and all-cause mortality with adjustment for known clinical prognostic factors. Higher risk BS and MFS scores were common in hospitalized HF patients, but were not independent predictors of survival. Further study of the clinical utility of these scores and other measures of geriatric syndromes in HF is warranted.
NASA Astrophysics Data System (ADS)
Bermúdez-Montaña, M.; Lemus, R.
2017-01-01
The vibrational spectroscopic description of the ozone molecule 16O3 in its electronic ground state X1A1 is presented in the framework of a simple local model, where Morse potentials are associated with both stretching and bending modes. The Hamiltonian is written in terms of internal coordinates considering the local mode character of the ozone molecule. Later on, an algebraic representation in terms of Morse ladder operators is introduced through a linear approximation in the expansion of the coordinates and momenta. Three polyads are considered in our study: P11 = ν1 + ν3 + ν2, P21 = 2(ν1 + ν3) + ν2, and P32 = 3(ν1 + ν3) + 2ν2, as suggested by resonances derived from the fundamentals as well as from previous variational analysis. The best description is provided by the P11 polyad scheme, yielding an rms deviation of 1.85 cm-1 for a fit involving 121 energy levels. Considering the other two polyads the description is less accurate: rms = 2.78 cm-1 for polyad P21 and rms = 2.63 cm-1 for polyad P32, considering 99 and 100 energy levels, respectively. These fits represent the best descriptions in the framework of an algebraic approach. In addition, since our algebraic model keeps the connection with configuration space, the force constants derived from the three fits have been estimated. We have found that all the available experimental energies may be assigned at least to one of the three fits. As the energy increases the eigenstates obtained from different polyad schemes differ. This fact paves the way to establish a polyad breaking approach as a next step to improve the description.
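A polyad scheme is just a weighted sum of vibrational quantum numbers that groups near-resonant states. A small sketch of this bookkeeping (the enumeration is generic, not the paper's Hamiltonian):

```python
from itertools import product

def polyad_states(P, weights=(1, 1, 1), vmax=30):
    """All (v1, v2, v3) with w1*v1 + w2*v2 + w3*v3 equal to polyad number P."""
    w1, w2, w3 = weights
    return [(v1, v2, v3)
            for v1, v2, v3 in product(range(vmax + 1), repeat=3)
            if w1 * v1 + w2 * v2 + w3 * v3 == P]

# P11 scheme: P = v1 + v2 + v3 -> weights (1, 1, 1)
# P21 scheme: P = 2*(v1 + v3) + v2 -> weights (2, 1, 2)
```

States sharing a polyad number are allowed to mix in the fit, which is why different polyad choices (P11, P21, P32) yield different eigenstates at high energy.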
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees
Bremer, P; Pascucci, V; Hamann, B
2008-12-08
We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported. In particular, the resulting hierarchy is guaranteed to be of logarithmic height.
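A one-dimensional analogue conveys what "cancelling critical points" means: adjacent minimum/maximum pairs whose height difference (persistence) falls below a threshold are removed, coarsening the topology. This is an illustrative simplification, not the paper's two-manifold data structure:

```python
def extrema(values):
    """Indices of strict interior local minima and maxima."""
    out = []
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] > values[i + 1] or \
           values[i - 1] > values[i] < values[i + 1]:
            out.append(i)
    return out

def cancel(values, threshold):
    """Repeatedly cancel the adjacent extremum pair of least persistence."""
    crit = extrema(values)
    while len(crit) >= 2:
        pairs = [(abs(values[a] - values[b]), k)
                 for k, (a, b) in enumerate(zip(crit, crit[1:]))]
        p, k = min(pairs)
        if p >= threshold:
            break
        del crit[k:k + 2]   # a min/max pair annihilates
    return crit

signal = [0, 5, 4.8, 6, 0, 7, 6.9, 7.2, 1]
```

The cancellation trees above encode which such cancellations depend on which, so the simplification can be applied adaptively rather than in one global sweep.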
Tang, Guoping; Mayes, Melanie; Parker, Jack C; Jardine, Philip M
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
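The core of the CXTFIT-style workflow, an analytical CDE solution fitted by nonlinear least squares, can be sketched in plain Python. Here the equilibrium one-dimensional solution for a continuous input (the classical Ogata-Banks form) is fitted to synthetic breakthrough data; all parameter values are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def cde(t, v, D, x=50.0):
    """Equilibrium 1-D CDE, continuous injection: C/C0 at distance x, time t."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    # Clip the exponent to keep the (tiny-erfc * huge-exp) product finite.
    return 0.5 * (erfc(a) + np.exp(np.minimum(v * x / D, 700.0)) * erfc(b))

t = np.linspace(1.0, 200.0, 80)
rng = np.random.default_rng(0)
data = cde(t, 0.5, 2.0) + rng.normal(0.0, 0.005, t.size)   # synthetic observations

# Weighted/bounded nonlinear least squares recovers v and D.
(v_fit, D_fit), _ = curve_fit(cde, t, data, p0=[0.3, 1.0],
                              bounds=([1e-3, 1e-3], [5.0, 50.0]))
```

The spreadsheet implementation described above wraps exactly this kind of forward solution and optimizer behind Excel functions and macros.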
NASA Astrophysics Data System (ADS)
Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van
2013-12-01
The H.264/AVC video coding standard introduces improved tools to increase compression efficiency, and the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one that contributes to high coding efficiency. Furthermore, the standard defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques aimed at accelerating the inter prediction process have been proposed in the literature over the last few years, but no works focus on bidirectional or hierarchical prediction. In this article, with the emergence of many-core processors and accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate-distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
Onishi, Yasuo
2013-03-29
Four JAEA researchers visited PNNL for two weeks in February 2013 to learn the PNNL-developed, unsteady, one-dimensional river model TODAM and the PNNL-developed, time-dependent, three-dimensional coastal water model FLESCOT. These codes predict sediment and contaminant concentrations by accounting for sediment-radionuclide interactions, e.g., adsorption/desorption and the transport, deposition, and resuspension of sediment-sorbed radionuclides. The objective of the river and coastal water modeling is to simulate • 134Cs and 137Cs migration in Fukushima rivers and the coastal water, and • their accumulation in the river and ocean bed along the Fukushima coast. Forecasting future cesium behavior in the river and coastal water under various scenarios would enable JAEA to assess the effectiveness of various on-land remediation activities and, if required, possible river and coastal water clean-up operations to reduce the contamination of the river and coastal water, agricultural products, fish, and other aquatic biota. PNNL presented the following during the JAEA visit: • TODAM and FLESCOT theories and mathematical formulations • TODAM and FLESCOT model structures • past TODAM and FLESCOT applications • demonstrations of the two codes' capabilities, applying them to simple hypothetical river and coastal water cases • the initial application of TODAM to the Ukedo River in Fukushima, with JAEA researchers' participation in its modeling. PNNL also presented topics relevant to Fukushima environmental assessment and remediation, including • PNNL molecular modeling and EMSL computer facilities • cesium adsorption/desorption characteristics • experience connecting molecular science research results to macro-scale model applications in the environment • an EMSL tour • a Hanford Site road tour. PNNL and JAEA also developed a future course of action for joint research projects on the Fukushima environmental and remediation assessments.
Resnik, Barry I
2009-01-01
It is ethical, legal, and proper for a dermatologist to maximize income through proper coding of patient encounters and procedures. The overzealous physician can misinterpret reimbursement requirements or receive bad advice from other physicians and cross the line from aggressive coding to coding fraud. Several of the more common problem areas are discussed.
Application of three-dimensional transport code to the analysis of the neutron streaming experiment
Chatani, K.; Slater, C.O.
1990-01-01
This paper summarizes the calculational results of neutron streaming through a Clinch River Breeder Reactor (CRBR) prototype coolant pipe chaseway. Particular emphasis is placed on results at bends in the chaseway. Calculations were performed with three three-dimensional codes: the discrete ordinates radiation transport code TORT and the Monte Carlo radiation transport code MORSE, both developed by Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, developed in Japan. The purpose of the calculations is not only to compare the calculational results with the experimental results, but also to compare the results of TORT and MORSE with those of ENSEMBLE. The TORT calculations used two types of difference methods, the weighted-difference and nodal methods; only the weighted-difference method was applied in the ENSEMBLE calculation. Both TORT and ENSEMBLE produced nearly the same calculational results, but differed in the number of iterations required for convergence in each neutron group. The two difference methods in the TORT calculations likewise showed no appreciable variance in the number of iterations required; however, a noticeable disparity in the computer times and some variation in the calculational results did occur. Comparisons of the calculational results with the experimental results showed generally good agreement for the epithermal neutron flux in the first and second legs and at the first bend, where two-dimensional modeling might be difficult. Results were fair to poor along the centerline of the first leg near the opening to the second leg because of discrete ordinates ray effects. Additionally, the agreement was good throughout the first and second legs in the thermal neutron region. Calculations with MORSE were also made; those calculational results and comparisons are described as well. 8 refs., 4 figs.
Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.
2015-01-01
To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should "trigger" the network's response. Indeed, neurons in the auditory midbrain, thalamus and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974
NASA Technical Reports Server (NTRS)
Badinger, Michael A.; Drouant, George J.
1991-01-01
Proposed hand-held tool applies indelible bar code to small parts. Possible to identify parts for management of inventory without tags or labels. Microprocessor supplies bar-code data to impact-printer-like device. Device drives replaceable scribe, which cuts bar code on surface of part. Used to mark serially controlled parts for military and aerospace equipment. Also adapts for discrete marking of bulk items used in food and pharmaceutical processing.
Xantheas, Sotiris S; Werhahn, Jasper C
2014-08-14
Based on the formulation of the analytical expression of the potential V(r) describing intermolecular interactions in terms of the dimensionless variables r* = r/r(m) and ɛ* = V/ɛ, where r(m) is the separation at the minimum and ɛ the well depth, we propose more generalized scalable forms for the commonly used Mie, Lennard-Jones, Morse, and Buckingham exponential-6 potential energy functions. These new generalized forms have one additional parameter relative to the original forms and revert to the original ones for some choice of that parameter. In this respect, the original forms of those potentials can be considered as special cases of the more general forms that are introduced. We also propose a scalable 4-parameter extended Morse potential that does not revert to the original form.
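The reduced-variable idea can be made concrete with the ordinary Morse potential in dimensionless form, V*(r*) = exp(-2a(r* - 1)) - 2 exp(-a(r* - 1)): every choice of the shape parameter a puts the minimum at r* = 1 with depth -1. A sketch (the 4-parameter extended form proposed in the paper is not reproduced here):

```python
import math

def reduced_morse(r_star, a=6.0):
    """Dimensionless Morse potential: minimum of -1 at r* = 1 for any a > 0."""
    u = math.exp(-a * (r_star - 1.0))
    return u * u - 2.0 * u
```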
Bressan, Eriberto; Lops, Diego; Tomasi, Cristiano; Ricci, Sara; Stocchero, Michele; Carniel, Emanuele Luigi
2014-07-01
Nowadays, dental implantology is a reliable technique for the treatment of partially and completely edentulous patients. The achievement of stable dentition is ensured by implant-supported fixed dental prostheses. A Morse taper conometric system may provide fixed retention between implants and dental prostheses. The aim of this study was to investigate the retentive performance and mechanical strength of a Morse taper conometric system used for implant-supported fixed dental prosthesis retention. Experimental and finite element investigations were performed. Experimental tests were carried out on a specific abutment-coping system, accounting for both cemented and non-cemented situations. The results from the experimental activities were processed to identify the mechanical behavior of the coping-abutment interface. Finally, the information obtained was applied to develop reliable finite element models of different abutment-coping systems. The analyses accounted for different geometrical conformations of the abutment-coping system, such as different taper angles. The results showed that the activation process, achieved through a suitable insertion force, could provide retentive performance equal to that of a cemented system without compromising the mechanical functionality of the system. These findings suggest that a Morse taper conometric system can provide a fixed connection between implants and dental prostheses if a proper insertion force is applied. The activation process does not compromise the mechanical functionality of the system.
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
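A GLDPC construction replaces single parity checks with stronger local component codes. As a sketch of what such a local decoder does, here is hard-decision syndrome decoding of the Hamming(7,4) code; the actual MAP/Ashikhmin-Lytsin decoding in the paper operates on soft values:

```python
import numpy as np

# Parity-check matrix whose j-th column is the binary expansion of j+1
# (row 0 = least significant bit), so the syndrome directly names the
# position of a single bit error.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode(word):
    """Correct at most one flipped bit in a length-7 word."""
    syndrome = H @ word % 2
    pos = int("".join(map(str, syndrome[::-1])), 2)  # 0 means no error
    corrected = word.copy()
    if pos:
        corrected[pos - 1] ^= 1
    return corrected

codeword = np.array([1, 1, 1, 0, 0, 0, 0])   # satisfies H @ c = 0 (mod 2)
noisy = codeword.copy()
noisy[5] ^= 1                                # inject a single bit error
recovered = decode(noisy)
```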
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but is not tailored to researchers. In comparison, OSF offers a one-stop solution for researchers, but a lot of its functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.
Mangano, Francesco Guido; Zecca, Piero; Luongo, Fabrizia; Iezzi, Giovanna; Mangano, Carlo
2014-01-01
The aim of this study was to achieve aesthetically pleasing soft tissue contours in a severely compromised tooth in the anterior region of the maxilla. For a right-maxillary central incisor with localized advanced chronic periodontitis, tooth extraction followed by reconstructive procedures and delayed implant placement was proposed and accepted by the patient. The guided bone regeneration (GBR) technique was employed, with a biphasic calcium-phosphate (BCP) block graft placed in the extraction socket in conjunction with granules of the same material and a resorbable barrier membrane. After 6 months of healing, an implant was installed. The acrylic provisional restoration remained in situ for 3 months and then was substituted with the definitive crown. This ridge reconstruction technique enabled preserving both hard and soft tissues and counteracting vertical and horizontal bone resorption after tooth extraction, and allowed for an ideal three-dimensional implant placement. Localized severe alveolar bone resorption of the anterior maxilla associated with chronic periodontal disease can be successfully treated by means of ridge reconstruction with GBR and delayed implant insertion; the placement of an early-loaded, Morse taper connection implant in the grafted site was effective in creating an excellent clinical aesthetic result and maintaining it over time. PMID:25431687
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
FILGES, DETLEF
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and its interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random-number routines, a command interpreter, and auxiliary codes such as NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Response data can also be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
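The Monte Carlo principle underlying such transport suites can be shown in a toy form: sample exponential free paths through a slab and compare the transmitted fraction with the analytic Beer-Lambert result. This is an absorption-only sketch, nothing like the full physics of HETC or MORSE:

```python
import math
import random

def transmitted_fraction(mu, thickness, n_particles=200_000, seed=1):
    """Fraction of particles whose sampled free path exceeds the slab."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(mu) > thickness for _ in range(n_particles))
    return hits / n_particles

mu, x = 0.5, 3.0                       # illustrative attenuation and depth
estimate = transmitted_fraction(mu, x)
exact = math.exp(-mu * x)              # analytic transmission
```

The statistical error shrinks as 1/sqrt(N), which is why production codes add variance-reduction machinery on top of this basic sampling loop.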
Almeida, Pedro; Barbosa, Raquel; Bensasson, Douda; Gonçalves, Paula; Sampaio, José Paulo
2017-02-23
In Saccharomyces cerevisiae, the main yeast in wine fermentation, the opportunity to examine divergence at the molecular level between a domesticated lineage and its wild counterpart arose recently due to the identification of the closest relatives of wine strains, a wild population associated with Mediterranean oaks. Since genomic data is available for a considerable number of representatives belonging to both groups, we used population genomics to estimate the degree and distribution of nucleotide variation between wine yeasts and their closest wild relatives. We found widespread genome-wide divergence, particularly at non-coding sites, which, together with above-average divergence in trans-acting DNA binding proteins, may suggest an important role for divergence at the level of transcriptional regulation. Nine outlier regions putatively under strong divergent selection were highlighted by a genome-wide scan under stringent conditions. Several cases of introgression originating in the sibling species S. paradoxus were also identified in the Mediterranean oak population. FFZ1 and SSU1, mostly known for conferring sulphite resistance in wine yeasts, were among the introgressed genes, although not fixed. Because the introgressions detected in our study are not found in wine strains, we hypothesise that ongoing divergent ecological selection segregates the two forms between the different niches. Together, our results provide a first insight into the extent and kind of divergence between wine yeasts and their closest wild relatives. This article is protected by copyright. All rights reserved.
High Order Modulation Protograph Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
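As a rough illustration of the lifting idea, the sketch below (in Python, using a made-up base matrix and shift values, not the patented design) expands a small protograph in two stages by replacing each edge with a circulant permutation block:

```python
import numpy as np

def circulant_perm(size, shift):
    """Circulant permutation matrix: identity with rows cyclically shifted."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def lift(proto, Z, shifts):
    """Lift a base matrix by factor Z: each 1 becomes a ZxZ circulant
    permutation block, each 0 a ZxZ zero block."""
    m, n = proto.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if proto[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = circulant_perm(Z, shifts[i][j])
    return H

# Hypothetical small protograph (illustrative only)
proto = np.array([[1, 1, 1, 0],
                  [0, 1, 1, 1]])
# Stage 1: small lift to an intermediate protograph, stage 2: circulant
# lift toward the target codeword length.
H1 = lift(proto, 2, [[0, 1, 1, 0], [0, 0, 1, 1]])
H  = lift(H1, 8, [[k % 8 for k in range(H1.shape[1])] for _ in range(H1.shape[0])])
print(H.shape)  # (32, 64)
```

The lifts preserve the row and column weights of the base matrix, which is what makes the protograph a faithful blueprint for the final parity-check matrix.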
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
Chatani, K. )
1992-08-01
This report summarizes the calculational results from analyses of a Clinch River Breeder Reactor (CRBR) prototypic coolant pipe chaseway neutron streaming experiment. Comparisons of calculated and measured results are presented, with major emphasis placed on results at bends in the chaseway. Calculations were performed with three three-dimensional radiation transport codes: the discrete ordinates code TORT and the Monte Carlo code MORSE, both developed by Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, developed in Japan. The calculated results from the three codes are compared (1) with previously calculated DOT3.5 two-dimensional results, (2) among themselves, and (3) with measured results. Calculations with TORT used both the weighted-difference and nodal methods. Only the weighted-difference method was used in ENSEMBLE. When the calculated results were compared to measured results, it was found that calculation-to-experiment (C/E) ratios were good in the regions of the chaseway where two-dimensional modeling might be difficult and where there were no significant discrete ordinates ray effects. Excellent agreement was observed for responses dominated by thermal neutron contributions. MORSE-calculated results and comparisons are also described, and detailed results are presented in an appendix.
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks features geared toward researchers. In comparison, OSF offers a one-stop solution for researchers, but much of its functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing. PMID:25165519
NASA Astrophysics Data System (ADS)
Tritzant-Martinez, Yalina; Zeng, Tao; Broom, Aron; Meiering, Elizabeth; Le Roy, Robert J.; Roy, Pierre-Nicholas
2013-06-01
We investigate the analytical representation of potentials of mean force (pmf) using the Morse/long-range (MLR) potential approach. The MLR method had previously been used to represent potential energy surfaces, and we assess its validity for representing free-energies. The advantage of the approach is that the potential of mean force data only needs to be calculated in the short to medium range region of the reaction coordinate while the long range can be handled analytically. This can result in significant savings in terms of computational effort since one does not need to cover the whole range of the reaction coordinate during simulations. The water dimer with rigid monomers whose interactions are described by the commonly used TIP4P model [W. Jorgensen and J. Madura, Mol. Phys. 56, 1381 (1985)], 10.1080/00268978500103111 is used as a test case. We first calculate an "exact" pmf using direct Monte Carlo (MC) integration and term such a calculation as our gold standard (GS). Second, we compare this GS with several MLR fits to the GS to test the validity of the fitting procedure. We then obtain the water dimer pmf using metadynamics simulations in a limited range of the reaction coordinate and show how the MLR treatment allows the accurate generation of the full pmf. We finally calculate the transition state theory rate constant for the water dimer dissociation process using the GS, the GS MLR fits, and the metadynamics MLR fits. Our approach can yield a compact, smooth, and accurate analytical representation of pmf data with reduced computational cost.
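A minimal sketch of an MLR-style functional form is shown below, assuming a single inverse-power (C6/r^6) long-range term and a constant exponent coefficient; the full MLR of Le Roy and co-workers uses a polynomial exponent function and is considerably richer:

```python
import numpy as np

def u_lr(r, c6):
    """Leading long-range attraction, u_LR(r) = C6 / r^6 (an assumption
    for illustration; the physical leading power depends on the system)."""
    return c6 / r**6

def mlr(r, De, re, beta0, c6):
    """Simplified Morse/long-range form:
        V(r) = De * (1 - (u_LR(r)/u_LR(re)) * exp(-beta0 * y(r)))**2 - De,
    with the dimensionless radial variable y(r) = (r^6 - re^6)/(r^6 + re^6).
    By construction V(re) = -De (well minimum) and V -> -u_LR(r)*const
    at long range, so the tail is handled analytically."""
    y = (r**6 - re**6) / (r**6 + re**6)
    return De * (1.0 - u_lr(r, c6) / u_lr(re, c6) * np.exp(-beta0 * y))**2 - De

# Illustrative (dimensionless) parameters, not fitted to the water dimer
r = np.linspace(2.5, 20.0, 200)
V = mlr(r, De=0.01, re=2.9, beta0=1.0, c6=1.0)
```

The practical point of the paper is that only the short-to-medium range of such a curve needs to be sampled by simulation; the analytic tail supplies the rest.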
Transmission resonances in above-barrier reflection of ultra-cold atoms by the Rosen-Morse potential
NASA Astrophysics Data System (ADS)
Ishkhanyan, H. A.; Krainov, V. P.; Ishkhanyan, A. M.
2010-04-01
Quantum above-barrier reflection of ultra-cold atoms by the Rosen-Morse potential is analytically considered within the mean-field Gross-Pitaevskii approximation. Reformulating the problem of reflectionless transmission as a quasi-linear eigenvalue problem for the potential depth, an approximation for the specific height of the potential that supports reflectionless transmission of the incoming matter wave is derived via modification of the Rayleigh-Schrödinger time-independent perturbation theory. The approximation provides a highly accurate description of the resonance position for all the resonance orders if the nonlinearity parameter is small compared with the incoming particle's chemical potential. Notably, the result for the first transmission resonance turns out to be exact, i.e. the derived formula for the resonant potential height gives the exact value of the first nonlinear resonance's position for all the allowed variation range of the involved parameters, the nonlinearity parameter and chemical potential. This has been demonstrated by constructing the exact solution of the problem for the first resonance. Furthermore, the presented approximation reveals that, in contrast to the linear case, in the nonlinear case reflectionless transmission may occur not only for potential wells but also for potential barriers with positive potential height. It also shows that the nonlinear shift of the resonance position from the position of the corresponding linear resonance is approximately described as a linear function of the resonance order. Finally, a compact (yet, highly accurate) analytic formula for the nth-order resonance position is constructed via combination of analytical and numerical methods.
Wilson, J.T.; Morlock, S.E.; Baker, N.T.
1997-01-01
Acoustic Doppler current profiler, global positioning system, and geographic information system technology were used to map the bathymetry of Morse and Geist Reservoirs, two artificial lakes used for public water supply in central Indiana. The project was a pilot study to evaluate the use of the technologies for bathymetric surveys. Bathymetric surveys were last conducted in 1978 on Morse Reservoir and in 1980 on Geist Reservoir; those surveys were done with conventional methods using networks of fathometer transects. The 1996 bathymetric surveys produced updated estimates of reservoir volumes that will serve as base-line data for future estimates of storage capacity and sedimentation rates. An acoustic Doppler current profiler and global positioning system receiver were used to collect water-depth and position data from April 1996 through October 1996. All water-depth and position data were imported to a geographic information system to create a database. The geographic information system then was used to generate water-depth contour maps and to compute the volumes for each reservoir. The computed volume of Morse Reservoir was 22,820 acre-feet (7.44 billion gallons), with a surface area of 1,484 acres. The computed volume of Geist Reservoir was 19,280 acre-feet (6.29 billion gallons), with a surface area of 1,848 acres. The computed 1996 reservoir volumes are less than the design volumes and indicate that sedimentation has occurred in both reservoirs. Cross sections were constructed from the computer-generated surfaces for 1996 and compared to the fathometer profiles from the 1978 and 1980 surveys; analysis of these cross sections also indicates that some sedimentation has occurred in both reservoirs. The acoustic Doppler current profiler, global positioning system, and geographic information system technologies described in this report produced bathymetric maps and volume estimates more efficiently and with comparable or greater resolution than conventional methods.
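Once depths are gridded in a GIS, the volume computation itself reduces to summing depth times cell area; a toy sketch in Python (the depth grid and cell size below are hypothetical, not survey data):

```python
import numpy as np

# Hypothetical uniform depth grid (feet); a real survey interpolates
# ADCP depth soundings onto such a grid inside a GIS.
cell_side_ft = 100.0                              # grid spacing, an assumption
depths_ft = np.array([[10., 12., 8.],
                      [15., 20., 9.],
                      [ 5.,  7., 4.]])

cell_area_acres = cell_side_ft**2 / 43560.0       # 43,560 sq ft per acre
volume_acre_ft = depths_ft.sum() * cell_area_acres
surface_acres = depths_ft.size * cell_area_acres
gallons = volume_acre_ft * 325851.0               # ~325,851 gal per acre-foot
```

Comparing such grid-based volumes from successive surveys gives the sedimentation estimate described in the report.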
NASA Astrophysics Data System (ADS)
Ghoumaid, A.; Benamira, F.; Guechi, L.
2016-02-01
It is shown that the application of the Nikiforov-Uvarov method by Ikhdair for solving the Dirac equation with the radial Rosen-Morse potential plus the spin-orbit centrifugal term is inadequate because the required conditions are not satisfied. The energy spectra given are incorrect and the wave functions are not physically acceptable. We clarify the problem and prove that the spinor wave functions are expressed in terms of the generalized hypergeometric functions 2F1(a, b, c; z). The energy eigenvalues for the bound states are given by the solution of a transcendental equation involving the hypergeometric function.
Hess, Peter
2014-08-07
An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.
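In such cleavage models the maximum stress follows from the steepest slope (inflection point) of the interplanar binding-energy curve. For a Morse-type form this point is available in closed form, as the sketch below checks numerically (parameters are illustrative, not the material constants of the paper):

```python
import numpy as np

def morse_stress(x, G, a):
    """Stress sigma(x) = dU/dx for a Morse-type binding energy per unit
    area U(x) = G*(1 - exp(-a*x))**2, with x the interplanar separation
    increase. The maximum occurs where exp(-a*x) = 1/2, giving
    sigma_max = G*a/2."""
    e = np.exp(-a * x)
    return 2.0 * G * a * e * (1.0 - e)

G, a = 5.0, 2.0                        # illustrative, not fitted values
x = np.linspace(0.0, 5.0, 100001)
sigma = morse_stress(x, G, a)

numeric_max = sigma.max()
analytic_max = G * a / 2.0             # closed-form maximum cleavage stress
```

The same construction with a Lennard-Jones-type potential gives a different closed-form maximum, which is how the two interaction choices lead to different theoretical strengths.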
NASA Astrophysics Data System (ADS)
Montalbán, A.; Velasco, V. R.; Tutor, J.; Fernández-Velicia, F. J.
2007-06-01
We have studied the vibrational frequencies and atom displacements of one-dimensional systems formed by combinations of Thue-Morse and Rudin-Shapiro quasi-regular stackings with periodic ones. The materials are described by nearest-neighbor force constants and the corresponding atom masses. These systems exhibit differences in the frequency spectrum as compared to the original simple quasi-regular generations and periodic structures. The most important feature is the presence of separate confinement of the atom displacements in one of the parts forming the total composite structure for different frequency ranges, thus acting as a kind of phononic cavity.
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
Edge equilibrium code for tokamaks
Li, Xujing; Drozdov, Vladimir V.
2014-01-15
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor-of-three improvement in memory and performance over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
NASA Astrophysics Data System (ADS)
Ikhdair, Sameer M.
2009-06-01
The analytic solutions of the spatially-dependent mass Schrödinger equation of diatomic molecules with the centrifugal term l(l+1)/r^2 for the generalized q-deformed Morse potential are obtained approximately by means of a parametric generalization of the Nikiforov-Uvarov (NU) method combined with the Pekeris approximation scheme. The energy eigenvalues and the corresponding normalized radial wave functions are calculated in closed form with a physically motivated choice of a reciprocal Morse-like mass function, m(r) = m0/(1 - δe^(-a(r-r_e)))^2, 0 ⩽ δ < 1, where a and r_e are the range of the potential and the equilibrium position of the nuclei. The constant-mass case, δ → 0, is also studied. The energy states for the H2, LiH, HCl and CO diatomic molecules are calculated and compare favourably with those obtained by using other approximation methods for arbitrary vibrational n and rotational l quantum numbers.
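For orientation, the constant-mass (δ → 0) limit reduces to the ordinary Morse oscillator, whose bound-state energies have a familiar closed form; the quick numerical sketch below uses dimensionless illustrative parameters, not the molecular fits of the paper:

```python
import numpy as np

def morse_levels(De, a, mu, hbar=1.0):
    """Bound-state energies of the constant-mass Morse oscillator
    V(r) = De*(1 - exp(-a*(r - re)))**2:
        E_n = hbar*w*(n + 1/2) - x*(n + 1/2)**2,
    with harmonic frequency w = a*sqrt(2*De/mu) and anharmonicity
    x = (hbar*a)**2/(2*mu). Only finitely many levels are bound."""
    w = a * np.sqrt(2.0 * De / mu)
    x = (hbar * a) ** 2 / (2.0 * mu)
    n_max = int(np.sqrt(2.0 * mu * De) / (hbar * a) - 0.5)
    n = np.arange(n_max + 1)
    return hbar * w * (n + 0.5) - x * (n + 0.5) ** 2

# Illustrative dimensionless parameters
E = morse_levels(De=10.0, a=1.0, mu=1.0)
```

The hallmark of the Morse spectrum, visible in the sketch, is level spacing that shrinks linearly with n, in contrast to the evenly spaced harmonic oscillator.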
AEDS Property Classification Code Manual.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…
1995-01-01
then made the filter bank pitch-adaptive, thus ensuring roughly one sine wave per filter. The analysis in these systems does not explicitly model and estimate the sine-wave components, but rather views them as outputs of a bank of uniformly spaced bandpass filters. The synthesis waveform can be viewed as a sum of the modified outputs of this filter bank. Although speech of good quality has reportedly been synthesized using these techniques
Video coding with dynamic background
NASA Astrophysics Data System (ADS)
Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung
2013-12-01
Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, computational time in ME & MC, and memory buffer for coded frames limits the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as the reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational-time performance than the MRF techniques. It also has an inherent capability of scene change detection (SCD) for adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames and two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
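A much-simplified stand-in for generating a background reference frame is a per-pixel running average; the paper's McFIS uses a more sophisticated dynamic background model, so the sketch below (Python, synthetic frames) only illustrates the general idea:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model:
    bg <- (1 - alpha) * bg + alpha * frame.
    Transient foreground motion is averaged away; the stable scene remains."""
    return (1.0 - alpha) * bg + alpha * frame

rng = np.random.default_rng(0)
bg = np.zeros((4, 4))
for _ in range(200):
    # Synthetic static scene (intensity 100) plus sensor noise
    frame = 100.0 + rng.normal(0.0, 2.0, size=(4, 4))
    bg = update_background(bg, frame)

# bg now approximates the static scene and could serve as a long-term
# reference frame in place of several short-term reference frames.
```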
A secure and efficient entropy coding based on arithmetic coding
NASA Astrophysics Data System (ADS)
Li, Hengjian; Zhang, Jiashu
2009-12-01
A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed to build a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, and data compression with entropy optimality is achieved simultaneously. This modification of the arithmetic coding methodology, which also provides security, can easily be incorporated into most international image and video standards as the last entropy coding stage without changing the existing framework. Theoretical analysis and numerical simulations on both static and adaptive models show that the proposed encryption algorithm achieves high security without loss of compression efficiency or added computational burden with respect to a standard AC.
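For reference, plain (non-secured) static arithmetic coding narrows an interval symbol by symbol; the toy encoder/decoder below illustrates only this baseline, not the NDF-keyed randomization of the proposed scheme (the symbol set and probabilities are made up):

```python
def _cumulative(probs):
    """Build {symbol: (low, high)} cumulative intervals from probabilities."""
    cum, edge = {}, 0.0
    for s, p in probs.items():
        cum[s] = (edge, edge + p)
        edge += p
    return cum

def ac_encode(symbols, probs):
    """Minimal float-based static arithmetic encoder: narrow [low, high)
    once per symbol and return a number inside the final interval."""
    cum = _cumulative(probs)
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2.0

def ac_decode(code, n, probs):
    """Inverse of ac_encode for a message of known length n."""
    cum = _cumulative(probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(n):
        span = high - low
        target = (code - low) / span
        for s, (c_lo, c_hi) in cum.items():
            if c_lo <= target < c_hi:
                out.append(s)
                low, high = low + span * c_lo, low + span * c_hi
                break
    return "".join(out)

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
msg = "abacab"
code = ac_encode(msg, probs)
```

Float precision limits this toy to short messages; production coders use integer interval arithmetic with renormalization. The scheme in the abstract additionally perturbs the interval mapping at each iteration with the NDF-PRNG.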
Robust coding over noisy overcomplete channels.
Doi, Eizaburo; Balcan, Doru C; Lewicki, Michael S
2007-02-01
We address the problem of robust coding in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions in order to achieve robustness. We also present numerical solutions of robust coding for high-dimensional image data, demonstrating that these codes are substantially more robust than other linear image coding methods such as PCA, ICA, and wavelets.
Apostolova, E.S.; Tulub, A.V.
1995-04-01
Based on the six-parameter model energy operator containing Morse potentials, the spectrum of valence vibrations of d-chloroform and fluoroform is calculated up to the vibrational quantum number v = 2 with an accuracy of 5 and 6.3 cm^-1, respectively. Upon HCCl3 → DCCl3 isotopic substitution, the CH-bond dissociation energy increases by 25.2 kJ/mol, which agrees with experimental estimates. The CH-bond dissociation energy of 444.2 kJ/mol in the HCF3 molecule is quite close to its experimental value of 446.4 ± 4.2 kJ/mol. The influence of a high-frequency laser field on the vibrational spectrum of HCCl3 is considered using averaging over the vibration period.
NASA Astrophysics Data System (ADS)
Rahimi, H.
2016-07-01
The present paper attempts to determine the properties of the photonic spectra of Thue-Morse, double-period and Rudin-Shapiro one-dimensional quasiperiodic multilayers. The supposed structures are constituted by the high-temperature superconductors HgBa2Ca2Cu3O10 and YBa2Cu3O7. Our investigation is restricted to the visible wavelength domain. The results are demonstrated by the calculation of transmittance using the transfer matrix method together with the Gorter-Casimir two-fluid model. It is found that by manipulating parameters such as the incident angle, polarization, the thickness of each layer and the operation temperature of the superconductors, the transmission spectra exhibit some interesting features. This paper provides a pathway to designing tunable total reflectors, optical filters and optical switching based on superconductor quasiregular photonic crystals.
Khorshidi, Hooman; Raoofi, Saeed; Moattari, Afagh; Bagheri, Atoosa; Kalantari, Mohammad Hassan
2016-01-01
Background and Aim. The geometry of the implant-abutment interface (IAI) affects the risk of bacterial leakage and invasion into the internal parts of the implant. The aim of this study was to compare the bacterial leakage of an 11-degree Morse taper IAI with that of a butt joint connection. Materials and Methods. Two implant systems were tested (n = 10 per group): CSM (submerged) and TBR (connect). The deepest inner parts of the implants were inoculated with 2 μL of Streptococcus mutans suspension with a concentration of 10^8 CFU/mL. The abutments were tightened on the implants. The specimens were stored in an incubator at a temperature of 37°C for 14 days, and penetration of the bacterium into the surrounding area was determined by observation of solution turbidity and comparison with control specimens. A Kaplan-Meier survival curve was traced for the estimation of bacterial leakage, and the results between the two groups of implants were statistically analyzed by the chi-square test. Results. No case of the implant system with the internal conical connection design revealed bacterial leakage in 14 days, and no turbidity of the solution was reported for it. In the system with the butt joint implant-abutment connection, 1 case showed leakage on the third day, 1 case on the eighth day, and 5 cases on the 13th day. In total, 7 (70%) cases showed bacterial leakage in this system. Significant differences were found between the two groups of implants based on the incidence of bacterial leakage (p < 0.05). Conclusion. The 11-degree Morse taper demonstrated better resistance to microbial leakage than the butt joint connection.
Adaptation and perceptual norms
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
2007-02-01
We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well-defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Generalization of Prism Adaptation
ERIC Educational Resources Information Center
Redding, Gordon M.; Wallace, Benjamin
2006-01-01
Prism exposure produces 2 kinds of adaptive response. Recalibration is ordinary strategic remapping of spatially coded movement commands to rapidly reduce performance error. Realignment is the extraordinary process of transforming spatial maps to bring the origins of coordinate systems into correspondence. Realignment occurs when spatial…
A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
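The folding step amounts to an inner product of the coupling-surface fluence with the adjoint dose importance over energy (and, in the real codes, over angle and surface position as well); a schematic one-dimensional sketch with made-up numbers:

```python
import numpy as np

# Schematic coupling ("folding") of forward and adjoint results.
# All values below are invented for illustration only.
energy_bin_widths = np.array([0.5, 1.0, 2.0])      # MeV
fluence = np.array([4.0e6, 2.5e6, 8.0e5])          # forward fluence, n/cm^2/MeV
importance = np.array([1.2e-9, 3.4e-9, 9.1e-9])    # adjoint dose importance

# Dose response = sum over energy bins of fluence x importance x bin width
dose = np.sum(fluence * importance * energy_bin_widths)
```

Because the adjoint importance is computed once per shielding geometry, the same fold can be repeated cheaply for many source orientations and distances, which is the efficiency argument made in the abstract.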
Predictive coding of multisensory timing
Shi, Zhuanghua; Burr, David
2016-01-01
The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In this paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified in the framework of predictive coding, a framework rooted in Helmholtz's 'perception as inference'. PMID:27695705
ERIC Educational Resources Information Center
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969, when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code: MORSE-SGC for the SCALE system, HEATING 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
NASA Astrophysics Data System (ADS)
Aciksoz, Esra; Bayrak, Orhan; Soylu, Asim
2016-10-01
The behavior of a donor in a GaAs-Ga1-xAlxAs quantum well wire represented by the Morse potential is examined within the framework of the effective-mass approximation. The donor binding energies are numerically calculated with and without electric and magnetic fields in order to show their influence on the binding energies. Moreover, how the donor binding energies change for constant potential parameters (De, re, and a) as well as with different values of the electric and magnetic field strengths is determined. It is found that the donor binding energy is highly dependent on the external electric and magnetic fields as well as on the parameters of the Morse potential. Project supported by the Turkish Science Research Council (TÜBİTAK) and the Financial Supports from Akdeniz and Nigde Universities.
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
2016-02-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Phylogeny of genetic codes and punctuation codes within genetic codes.
Seligmann, Hervé
2015-03-01
Punctuation codons (starts, stops) delimit genes and reflect translation apparatus properties. Most codon reassignments involve punctuation. Here two complementary approaches classify natural genetic codes: (A) by properties of amino acids assigned to codons (classical phylogeny), coding stops as X (A1, antitermination/suppressor tRNAs insert unknown residues) or as gaps (A2, no translation, classical stop); and (B) by considering only punctuation status, with start, stop and other codons coded as -1, 0 and 1 (B1); 0, -1 and 1 (B2, reflecting ribosomal translational dynamics); or 1, -1 and 0 (B3, starts/stops as opposites). All methods separate most mitochondrial codes from most nuclear codes; Gracilibacteria consistently cluster with metazoan mitochondria; mitochondria co-hosted with chloroplasts cluster with nuclear codes. Method A1 clusters the euplotid nuclear code with metazoan mitochondria; A2 separates euplotids from mitochondria. The Firmicute bacteria Mycoplasma/Spiroplasma and protozoan (and lower metazoan) mitochondria share codon-amino acid assignments: A1 clusters them with mitochondria, while they cluster with the standard genetic code under A2; constraints on amino acid ambiguity versus punctuation-signaling produced the mitochondrial versus bacterial versions of this genetic code. Punctuation analysis B2 converges best with classical phylogenetic analyses, stressing the need for a unified theory of genetic code punctuation accounting for ribosomal constraints.
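The three punctuation recodings (B1-B3) are simple enough to state directly. The sketch below is illustrative only (not the authors' code); the start/stop sets shown are those of the standard genetic code, used here as an example:

```python
# Illustrative only: recode codons by punctuation status under the three
# schemes (B1-B3) described in the abstract. The start/stop sets are the
# standard genetic code's, assumed here for the example.
STARTS = {"ATG"}
STOPS = {"TAA", "TAG", "TGA"}

# (start, stop, other) -> numeric class, per scheme
SCHEMES = {
    "B1": {"start": -1, "stop": 0, "other": 1},
    "B2": {"start": 0, "stop": -1, "other": 1},
    "B3": {"start": 1, "stop": -1, "other": 0},
}

def punctuation_recode(codon, scheme="B1"):
    """Map a codon to its numeric punctuation class under one scheme."""
    status = "start" if codon in STARTS else "stop" if codon in STOPS else "other"
    return SCHEMES[scheme][status]
```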
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J
2013-04-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.
Ma, Yong-Tao; Zeng, Tao; Li, Hui
2014-06-07
Four-dimensional ab initio intermolecular potential energy surfaces (PESs) for CH3F-He that explicitly incorporate dependence on the Q3 stretching normal mode of the CH3F molecule and are parametrically dependent on the other averaged intramolecular coordinates have been calculated. Analytical three-dimensional PESs for v3(CH3F) = 0 and 1 are obtained by least-squares fitting the vibrationally averaged potentials to the Morse/Long-Range potential function form. With the 3D PESs, we employ the Lanczos algorithm to calculate rovibrational levels of the dimer system. Following some re-assignments, the predicted transition frequencies are in good agreement with experimental microwave data for ortho-CH3F, with a root-mean-square deviation of 0.042 cm(-1). We then provide the first prediction of the infrared and microwave spectra for the para-CH3F-He dimer. The calculated infrared band origin shifts associated with the ν3 fundamental of CH3F are 0.039 and 0.069 cm(-1) for para-CH3F-He and ortho-CH3F-He, respectively.
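For context, the Morse/Long-Range (MLR) function mentioned above can be written, in one common parameterization (with $D_e$ the well depth, $r_e$ the equilibrium distance, $u_{\mathrm{LR}}$ the long-range tail, and $\beta(r)$ a polynomial exponent coefficient):

```latex
V_{\mathrm{MLR}}(r) = D_e \left[ 1 - \frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\, e^{-\beta(r)\, y_p(r)} \right]^{2},
\qquad
y_p(r) = \frac{r^{p} - r_e^{p}}{r^{p} + r_e^{p}}
```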
The code of ethics for nurses.
Zahedi, F; Sanjari, M; Aala, M; Peymani, M; Aramesh, K; Parsapour, A; Maddah, Ss Bagher; Cheraghi, Ma; Mirzabeigi, Gh; Larijani, B; Dastgerdi, M Vahid
2013-01-01
Nurses are ever-increasingly confronted with complex concerns in their practice. Codes of ethics provide fundamental guidance for nursing, as for many other professions. Although authentic international codes of ethics exist for nurses, a national code offers additional assistance for clinical nurses in their complex roles in care of patients, education, research and management of parts of the health care system in the country. A national code can provide nurses with culturally adapted guidance and help them make ethical decisions more consistent with the Iranian-Islamic background. Given the general acknowledgement of the need, the National Code of Ethics for Nurses was compiled as a joint project (2009-2011). The Code was approved by the Health Policy Council of the Ministry of Health and Medical Education and communicated to all universities, healthcare centers, hospitals and research centers early in 2011. The focus of this article is on the course of action through which the Code was compiled, amended and approved. The main concepts of the Code are also presented here. No doubt, development of such codes should be considered an ongoing process: it is an overall responsibility to keep the codes current, updated with new progress in science and emerging challenges, and pertinent to nursing practice.
Code-excited linear predictive coding of multispectral MR images
NASA Astrophysics Data System (ADS)
Hu, Jian-Hong; Wang, Yao; Cahill, Patrick
1996-02-01
This paper reports a multispectral code-excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and adaptation schemes have been compared. The method which uses a forward adaptive autoregressive (AR) model has proven to achieve a good compromise among performance, complexity and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macroblock is further divided into several microblocks, and the best excitation signals for each microblock are determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones is further coded using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zero-tree wavelet (EZW) coding method, as well as the MSARMA method we developed before.
Edge Equilibrium Code (EEC) For Tokamaks
Li, Xujling
2014-02-24
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
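For context, the Grad-Shafranov equation solved by EEC reads, in standard cylindrical form (with $\psi$ the poloidal flux function, $p(\psi)$ the pressure, and $F(\psi) = R B_\phi$):

```latex
R \frac{\partial}{\partial R}\!\left( \frac{1}{R} \frac{\partial \psi}{\partial R} \right)
+ \frac{\partial^{2} \psi}{\partial Z^{2}}
= -\mu_{0} R^{2} \frac{dp}{d\psi} - F \frac{dF}{d\psi}
```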
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, through some examples for ARA codes, we show that for a maximum variable node degree of 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators any desired high rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high speed decoder implementation.
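To make the encoder structure concrete, here is a minimal sketch (not the paper's code) of a plain repeat-accumulate encoder, the building block that ARA codes extend with an accumulator precoder; the repetition factor q and the seeded random interleaver are arbitrary illustrative choices:

```python
import random

def ra_encode(bits, q=3, seed=0):
    """Plain repeat-accumulate encoding: repeat each bit q times,
    pass through a fixed pseudo-random interleaver, then through
    a mod-2 accumulator (running XOR)."""
    repeated = [b for b in bits for _ in range(q)]
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)       # fixed interleaver
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b            # accumulator: out[i] = XOR of first i+1 inputs
        out.append(acc)
    return out
```

An ARA encoder would additionally precode `bits` with the same mod-2 accumulator before the repeat stage.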
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
ERIC Educational Resources Information Center
Rahn, Erwin
1984-01-01
Discusses the evolution of standards for bar codes (series of printed lines and spaces that represent numbers, symbols, and/or letters of alphabet) and describes the two types most frequently adopted by libraries--Code-A-Bar and CODE 39. Format of the codes is illustrated. Six references and definitions of terminology are appended. (EJS)
Manually operated coded switch
Barnette, Jon H.
1978-01-01
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
Image compression with embedded multiwavelet coding
NASA Astrophysics Data System (ADS)
Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay
1996-03-01
An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.
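Of the listed building components, successive approximation quantization is easy to illustrate. The sketch below is illustrative only and simplified (a real embedded coder also emits refinement bits for already-significant coefficients): it emits one significance bit per coefficient per pass while halving the threshold each time:

```python
def saq_significance_passes(coeffs, passes=4):
    """Successive approximation quantization, significance bits only:
    for each pass, flag coefficients whose magnitude reaches the current
    threshold, then halve the threshold (bit-plane style)."""
    # Initial threshold: largest power of two not exceeding max |coefficient|.
    t = 1 << (max(abs(c) for c in coeffs).bit_length() - 1)
    planes = []
    for _ in range(passes):
        planes.append([1 if abs(c) >= t else 0 for c in coeffs])
        t //= 2
        if t == 0:
            break
    return planes
```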
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2008-01-01
An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA codes). Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional high-speed reading, small printing size, and efficient representation of Chinese characters. In order to obtain a cleaner binarized image from a complex background, and to improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR (Quick Response) codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
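Sauvola's adaptive threshold, which the paper modifies, sets a per-pixel threshold T = m·(1 + k·(s/R − 1)) from the local mean m and standard deviation s. Below is a straightforward, unoptimized sketch of the unmodified method with its usual default parameters; it is illustrative, not the paper's algorithm:

```python
import numpy as np

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Binarize a grayscale image with Sauvola's adaptive threshold:
    T(x, y) = m(x, y) * (1 + k * (s(x, y) / R - 1)),
    where m and s are the local mean and standard deviation over a
    window x window neighborhood. Returns True where the pixel is above
    its local threshold (light background), False for dark foreground
    such as QR modules."""
    img = np.asarray(img, dtype=np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            m, s = patch.mean(), patch.std()
            out[y, x] = img[y, x] > m * (1.0 + k * (s / R - 1.0))
    return out
```

A production version would compute the local mean and standard deviation with integral images instead of the per-pixel loop.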
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
A user`s manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Papamichos, Spyros I.; Margaritis, Dimitrios; Kotsianidis, Ioannis
2015-01-01
The incidence of cancer in humans is high compared to that in chimpanzees. However, previous analyses have documented that numerous human cancer-related genes are highly conserved in chimpanzee. To date, whether the human genome includes species-specific cancer-related genes that could potentially contribute to a higher cancer susceptibility has remained obscure. This study focuses on MYEOV, an oncogene encoding two protein isoforms, reported as causally involved in promoting cancer cell proliferation and metastasis in both haematological malignancies and solid tumours. First, we document, via stringent in silico analysis, that MYEOV arose de novo in Catarrhini. We show that the MYEOV short-isoform start codon was evolutionarily acquired after the Catarrhini/Platyrrhini divergence. Throughout the course of Catarrhini evolution MYEOV acquired a gradually elongated translatable open reading frame (ORF), a gradually shortened translation-regulatory upstream ORF, and alternatively spliced mRNA variants. A point mutation introduced in human allowed for the acquisition of the MYEOV long-isoform start codon. Second, we demonstrate the substantial impact of exonized transposable elements on the creation of the MYEOV gene structure. Third, we highlight that the initial part of the MYEOV long-isoform coding DNA sequence was under positive selection pressure during Catarrhini evolution. MYEOV represents a primate orphan gene that acquired, via ORF expansion, a human-protein-specific coding potential. PMID:26568894
The genetic code constrains yet facilitates Darwinian evolution.
Firnberg, Elad; Ostermeier, Marc
2013-08-01
An important goal of evolutionary biology is to understand the constraints that shape the dynamics and outcomes of evolution. Here, we address the extent to which the structure of the standard genetic code constrains evolution by analyzing adaptive mutations of the antibiotic resistance gene TEM-1 β-lactamase and the fitness distribution of codon substitutions in two influenza hemagglutinin inhibitor genes. We find that the architecture of the genetic code significantly constrains the adaptive exploration of sequence space. However, the constraints endow the code with two advantages: the ability to restrict access to amino acid mutations with a strong negative effect and, most remarkably, the ability to enrich for adaptive mutations. Our findings support the hypothesis that the standard genetic code was shaped by selective pressure to minimize the deleterious effects of mutation yet facilitate the evolution of proteins through imposing an adaptive mutation bias.
ERIC Educational Resources Information Center
McCabe, Donald; Trevino, Linda Klebe
2002-01-01
Explores the rise in student cheating and evidence that students cheat less often at schools with an honor code. Discusses effective use of such codes and creation of a peer culture that condemns dishonesty. (EV)
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-02-20
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
DIANE multiparticle transport code
NASA Astrophysics Data System (ADS)
Caillaud, M.; Lemaire, S.; Ménard, S.; Rathouit, P.; Ribes, J. C.; Riz, D.
2014-06-01
DIANE is the general Monte Carlo code developed at CEA-DAM. DIANE is a 3D multiparticle multigroup code. DIANE includes automated biasing techniques and is optimized for massive parallel calculations.
Ghoumaid, A.; Benamira, F.; Guechi, L.
2016-02-15
It is shown that the application of the Nikiforov-Uvarov method by Ikhdair for solving the Dirac equation with the radial Rosen-Morse potential plus the spin-orbit centrifugal term is inadequate because the required conditions are not satisfied. The energy spectra given are incorrect and the wave functions are not physically acceptable. We clarify the problem and prove that the spinor wave functions are expressed in terms of the generalized hypergeometric functions ₂F₁(a, b, c; z). The energy eigenvalues for the bound states are given by the solution of a transcendental equation involving the hypergeometric function.
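For context (not stated in the abstract), the Rosen-Morse potential discussed here is commonly written, in one standard convention, as:

```latex
V(r) = -V_{1}\,\mathrm{sech}^{2}(\alpha r) + V_{2}\,\tanh(\alpha r)
```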
Jones, T.
1993-11-01
This paper examines the results of previous wire code research to determine the relationship between wire codes, electromagnetic fields, and childhood cancer. The paper suggests that, in the original Savitz study, biases toward producing a false positive association between high wire codes and childhood cancer were created by the selection procedure.
Universal Noiseless Coding Subroutines
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A. P.; Rice, R. F.
1986-01-01
The software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.
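Rice's universal noiseless coding is built on Golomb-Rice codes. A minimal Python sketch of the Rice code with parameter k (illustrative only, not the FORTRAN package itself):

```python
def rice_encode(n, k):
    """Rice code of a nonnegative integer n: unary quotient q = n >> k
    (q ones then a zero), followed by the k low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    """Inverse: count leading ones for the quotient, then read the k
    remainder bits."""
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

In the full Rice scheme, k is chosen adaptively per block so that typical samples cost close to their entropy.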
Mapping Local Codes to Read Codes.
Bonney, Wilfred; Galloway, James; Hall, Christopher; Ghattas, Mikhail; Tramma, Leandro; Nind, Thomas; Donnelly, Louise; Jefferson, Emily; Doney, Alexander
2017-01-01
Background & Objectives: Legacy laboratory test codes make it difficult to use clinical datasets for meaningful translational research, where populations are followed for disease risk and outcomes over many years. The Health Informatics Centre (HIC) at the University of Dundee hosts continuous biochemistry data from the clinical laboratories in Tayside and Fife dating back as far as 1987. However, the HIC-managed biochemistry dataset is coupled with incoherent sample types and unstandardised legacy local test codes, which increases the complexity of using the dataset for meaningful population health outcomes research. The objective of this study was to map the legacy local test codes to the Scottish 5-byte Version 2 Read Codes using biochemistry data extracted from the repository of the Scottish Care Information (SCI) Store.
Sodhi, M; Mukesh, M; Kishore, A; Mishra, B P; Kataria, R S; Joshi, B K
2013-09-25
Due to evolutionary divergence, cattle (taurine and indicine) and buffalo are speculated to have different responses to heat stress conditions. Variation in candidate genes associated with a heat-shock response may provide an insight into this dissimilarity and suggest targets for intervention. The present work was undertaken to characterize the promoter and coding regions of one of the inducible heat shock protein genes in diverse breeds of Indian zebu cattle and buffaloes. Genomic DNA from a panel of 117 unrelated animals representing 14 diversified native cattle breeds and 6 buffalo breeds was utilized to determine the complete sequence and gene diversity of the HSP70.1 gene. The coding region of the HSP70.1 gene in Indian zebu cattle, Bos taurus and buffalo was similar in length (1,926 bp), encoding a HSP70 protein of 641 amino acids with a calculated molecular weight (Mw) of 70.26 kDa. However, buffalo had longer 5' and 3' untranslated regions (UTRs) of 204 and 293 nucleotides respectively, in comparison to Indian zebu cattle and Bos taurus, wherein the lengths of the 5'- and 3'-UTR were 172 and 286 nucleotides, respectively. The increased length of the buffalo HSP70.1 gene compared to the indicine and taurine gene was due to two insertions each in the 5'- and 3'-UTR. Comparative sequence analysis of cattle (taurine and indicine) and buffalo HSP70.1 genes revealed a total of 54 gene variations (50 SNPs and 4 INDELs) among the three species. The minor allele frequencies of these nucleotide variations varied from 0.03 to 0.5 with an average of 0.26. Among the 14 B. indicus cattle breeds studied, a total of 19 polymorphic sites were identified: 4 in the 5'-UTR and 15 in the coding region (of these, 2 were non-synonymous). Analysis among buffalo breeds revealed 15 SNPs throughout the gene: 6 in the 5' flanking region and 9 in the coding region. In the bubaline 5'-UTR, 2 additional putative transcription factor binding sites (Elk-1 and c-Rel) were identified, other than the three common sites
SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code
Hua, D; Fowler, T
2004-06-15
A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
The Clawpack Community of Codes
NASA Astrophysics Data System (ADS)
Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.
2014-12-01
Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws but over the years has extended well beyond this role. Today a community of open-source codes has been developed that addresses a multitude of different needs, including non-conservative balance laws, high-order accurate methods, and parallelism, while remaining extensible and easy to use, largely through the judicious use of Python and the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows, including storm surge and debris flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since become both a testing ground for new algorithmic advances in the Clawpack framework and an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python technologies, both packages have also benefited from the focus on reproducibility in the Python community, notably IPython notebooks.
Comparison of translation loads for standard and alternative genetic codes
2010-01-01
Background The (almost) universality of the genetic code is one of the most intriguing properties of cellular life. Nevertheless, several variants of the standard genetic code have been observed, which differ in one or several of the 64 codon assignments and occur mainly in mitochondrial genomes and in nuclear genomes of some bacterial and eukaryotic parasites. These variants are usually considered to be the result of non-adaptive evolution. It has been shown that the standard genetic code is preferable to randomly assembled codes for its ability to reduce the effects of errors in protein translation. Results Using a genotype-to-phenotype mapping based on a quantitative model of protein folding, we compare the standard genetic code to seven of its naturally occurring variants with respect to the fitness loss associated with mistranslation and mutation. These fitness losses are computed through computer simulations of protein evolution with mutations that are either neutral or lethal, and different mutation biases, which influence the balance between unfolding and misfolding stability. We show that the alternative codes may produce significantly different mutation and translation loads, particularly for genomes evolving with a rather large mutation bias. Most of the alternative genetic codes are found to be disadvantageous relative to the standard code, in agreement with the view that the change of genetic code is a mutationally driven event. Nevertheless, one of the studied alternative genetic codes is predicted to be preferable to the standard code for a broad range of mutation biases. Conclusions Our results show that, with one exception, the standard genetic code is generally better able to reduce the translation load than the naturally occurring variants studied here. Besides this exception, some of the other alternative genetic codes are predicted to be better adapted for extreme mutation biases. Hence, the fixation of alternative genetic codes might be a neutral or nearly
Neutron sources in the Varian Clinac 2100C/2300C medical accelerator calculated by the EGS4 code.
Mao, X S; Kase, K R; Liu, J C; Nelson, W R; Kleck, J H; Johnsen, S
1997-04-01
The photoneutron yields produced in different components of the medical accelerator heads evaluated in these studies (24-MV Clinac 2500 and a Clinac 2100C/2300C running in the 10-MV, 15-MV, 18-MV and 20-MV modes) were calculated by the EGS4 Monte Carlo code using a modified version of the Combinatorial Geometry of MORSE-CG. Actual component dimensions and materials (i.e., targets, collimators, flattening filters, jaws and shielding for specific accelerator heads) were used in the geometric simulations. Calculated relative neutron yields in different components of a 24-MV Clinac 2500 were compared with the published measured data, and were found to agree to within +/-30%. Total neutron yields produced in the Clinac 2100/2300, as a function of primary electron energy and field size, are presented. A simplified Clinac 2100/2300C geometry is presented to calculate neutron yields, which were compared with those calculated by using the fully-described geometry.
Pittsburgh Adapts to Changing Times.
ERIC Educational Resources Information Center
States, Deidre
1985-01-01
The Samuel F. B. Morse School, built in 1874 and closed in 1980, is a historic landmark in Pittsburgh, Pennsylvania. Now the building serves as low-income housing for 70 elderly tenants and is praised as being an imaginative and creative use of an old school structure. (MLF)
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
1993-11-01
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Predictive depth coding of wavelet transformed images
NASA Astrophysics Data System (ADS)
Lehtinen, Joonas
1999-10-01
In this paper, a new prediction based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid composition by predicting the number of significant bits in each wavelet coefficient quantized by the universal scalar quantization and then by coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and it does not require any extra memory, the compression results are relatively good.
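The central quantity in predictive depth coding, the number of significant bits per quantized coefficient, together with a causal-neighbor prediction, can be sketched as follows. This is a deliberately simplified illustration using only 2D spatial neighbors; the paper's context also spans the lower scale and other orientation pyramids, which this sketch omits:

```python
def depth(c):
    # number of significant bits in |c|: depth(0) = 0, depth(5) = 3
    return abs(int(c)).bit_length()

def depth_residuals(block):
    """Predict each coefficient's depth from its already-coded causal
    (left/up) neighbors and return the prediction errors, which are the
    quantities handed to the arithmetic coder."""
    residuals = []
    for r in range(len(block)):
        for c in range(len(block[0])):
            neighbors = []
            if c > 0:
                neighbors.append(depth(block[r][c - 1]))
            if r > 0:
                neighbors.append(depth(block[r - 1][c]))
            pred = round(sum(neighbors) / len(neighbors)) if neighbors else 0
            residuals.append(depth(block[r][c]) - pred)
    return residuals
```

Because neighboring wavelet coefficients have correlated magnitudes, the residuals cluster near zero and compress well.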
Verification of the Calore thermal analysis code.
Dowding, Kevin J.; Blackwell, Bennie Francis
2004-07-01
Calore is the ASC code developed to model steady and transient thermal diffusion with chemistry and dynamic enclosure radiation. An integral part of the software development process is code verification, which addresses the question 'Are we correctly solving the model equations?' This process aids the developers in that it identifies potential software bugs and gives the thermal analyst confidence that a properly prepared input will produce satisfactory output. Grid refinement studies have been performed on problems for which we have analytical solutions. In this talk, the code verification process is overviewed and recent results are presented. Recent verification studies have focused on transient nonlinear heat conduction and verifying algorithms associated with (tied) contact and adaptive mesh refinement. In addition, an approach to measure the coverage of the verification test suite relative to intended code applications is discussed.
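The grid refinement studies mentioned above typically reduce to computing an observed order of accuracy from errors measured against the analytical solution on successively refined grids; a generic sketch of that calculation (not Calore's actual verification harness):

```python
import math

def observed_order(errors, h):
    """Observed order of accuracy from a grid refinement study:
    p = log(e_i / e_{i+1}) / log(h_i / h_{i+1}) for successive grids.
    The observed p should approach the scheme's formal order."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(errors) - 1)]
```

For a second-order diffusion discretization, halving the mesh spacing should roughly quarter the error, giving an observed order near 2.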
Greg Flach, Frank Smith
2014-05-14
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
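The write-input / run / read-output pattern described can be sketched as follows. The file names (`model.in`, `model.out`) and the "name = value" format are illustrative assumptions, not the actual layout or API used by DLLExternalCode:

```python
import os
import subprocess

def write_inputs(path, inputs):
    # one "name = value" line per input (format chosen for illustration)
    with open(path, "w") as f:
        for name, value in inputs.items():
            f.write(f"{name} = {value}\n")

def read_outputs(path):
    # parse "name = value" lines written back by the external application
    out = {}
    with open(path) as f:
        for line in f:
            name, value = line.split("=")
            out[name.strip()] = float(value)
    return out

def run_external(exe, inputs, workdir):
    """The three-step pattern the DLL implements: write the input file,
    run the external code, then read its output file back."""
    in_path = os.path.join(workdir, "model.in")
    out_path = os.path.join(workdir, "model.out")
    write_inputs(in_path, inputs)
    subprocess.run([exe, in_path, out_path], check=True)
    return read_outputs(out_path)
```

In the real system an instructions file, not hard-coded logic, tells the DLL how to perform each of these three steps.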
Defeating the coding monsters.
Colt, Ross
2007-02-01
Accuracy in coding is rapidly becoming a required skill for military health care providers. Clinic staffing, equipment purchase decisions, and even reimbursement will soon be based on the coding data that we provide. Learning the complicated myriad of rules to code accurately can seem overwhelming. However, the majority of clinic visits in a typical outpatient clinic generally fall into two major evaluation and management codes, 99213 and 99214. If health care providers can learn the rules required to code a 99214 visit, then this will provide a 90% solution that can enable them to accurately code the majority of their clinic visits. This article demonstrates a step-by-step method to code a 99214 visit, by viewing each of the three requirements as a monster to be defeated.
Auditory adaptation in voice perception.
Schweinberger, Stefan R; Casper, Christoph; Hauthal, Nadine; Kaufmann, Jürgen M; Kawahara, Hideki; Kloth, Nadine; Robertson, David M C; Simpson, Adrian P; Zäske, Romi
2008-05-06
Perceptual aftereffects following adaptation to simple stimulus attributes (e.g., motion, color) have been studied for hundreds of years. A striking recent discovery was that adaptation also elicits contrastive aftereffects in visual perception of complex stimuli and faces [1-6]. Here, we show for the first time that adaptation to nonlinguistic information in voices elicits systematic auditory aftereffects. Prior adaptation to male voices causes a voice to be perceived as more female (and vice versa), and these auditory aftereffects were measurable even minutes after adaptation. By contrast, crossmodal adaptation effects were absent, both when male or female first names and when silently articulating male or female faces were used as adaptors. When sinusoidal tones (with frequencies matched to male and female voice fundamental frequencies) were used as adaptors, no aftereffects on voice perception were observed. This excludes explanations for the voice aftereffect in terms of both pitch adaptation and postperceptual adaptation to gender concepts and suggests that contrastive voice-coding mechanisms may routinely influence voice perception. The role of adaptation in calibrating properties of high-level voice representations indicates that adaptation is not confined to vision but is a ubiquitous mechanism in the perception of nonlinguistic social information from both faces and voices.
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
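The Hotelling-observer figure of merit that the paper derives expressions for has a compact textbook form; a minimal numeric sketch of that standard formula (not the paper's adaptive-SPECT-specific expressions):

```python
import numpy as np

def hotelling_snr2(delta_s, cov):
    """Detectability of the ideal linear (Hotelling) observer on a
    signal-present vs signal-absent task: SNR^2 = ds^T K^{-1} ds,
    where ds is the mean signal difference and K the data covariance."""
    return float(delta_s @ np.linalg.solve(cov, delta_s))
```

An adaptive system that reconfigures pinholes or magnifications after a scout scan is, in this framework, choosing the configuration that maximizes such a task-based figure of merit.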
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
A new investigation shows that, starting from the BCH (21,15;3) code represented as a 7 x 3 matrix and adding a row and column to add even parity, one obtains an 8 x 4 matrix (32,15;8) code. An additional dimension is obtained by specifying odd parity on the rows and even parity on the columns, i.e., adjoining to the 8 x 4 matrix, the matrix, which is zero except for the fourth column (of all ones). Furthermore, any seven rows and three columns will form the BCH (21,15;3) code. This box code has the same weight structure as the quadratic residue and BCH codes of the same dimensions. Whether there exists an algebraic isomorphism to either code is as yet unknown.
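The parity-extension step described (adjoining an even-parity column and row to the 7 x 3 array) is generic and easy to illustrate. The sketch below extends any binary matrix with even row and column parity; it does not construct the underlying BCH (21,15;3) code itself:

```python
def extend_with_even_parity(m):
    """Append to each row its even-parity bit, then append a parity row,
    so that every row and every column of the result has even parity
    (a 7 x 3 input becomes 8 x 4, as in the box code construction)."""
    rows = [r + [sum(r) % 2] for r in m]
    parity_row = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity_row]
```

The corner bit is consistent by construction: the parity of the row parities equals the parity of the column parities, since both equal the parity of the whole array.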
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1996-01-01
This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow groove theory. The KTK labyrinth seal code handles straight or stepped seals, and DYSEAL provides dynamics for the seal geometry.
Phonological coding during reading
Leinenger, Mallorie
2014-01-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679
FEMHD: An adaptive finite element method for MHD and edge modelling
Strauss, H.R.
1995-07-01
This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different manners to model MHD behavior and edge plasma phenomena on a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge shaped mesh elements in 3D. The code has been adapted to look at neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.
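A concrete member of the code family that Rice's work popularized is the Golomb-Rice code for nonnegative integers; a minimal sketch, with the split parameter k assumed to be chosen elsewhere (adaptively, in Rice's algorithms):

```python
def rice_encode(n, k):
    """Golomb-Rice codeword for n >= 0: the quotient n >> k in unary
    (q ones and a terminating zero), then the remainder in k binary bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    # count leading ones up to the terminating zero, then read k remainder bits
    q = bits.index("0")
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0
    return (q << k) | r
```

Small k suits sources dominated by small symbol values; larger k suits flatter distributions, which is why adapting k to the data keeps the rate near the source entropy.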
Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.
1985-03-01
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.
Bombin, H.
2010-03-15
We introduce a family of two-dimensional (2D) topological subsystem quantum error-correcting codes. The gauge group is generated by two-local Pauli operators, so that two-local measurements are enough to recover the error syndrome. We study the computational power of code deformation in these codes and show that boundaries cannot be introduced in the usual way. In addition, we give a general mapping connecting suitable classical statistical mechanical models to optimal error correction in subsystem stabilizer codes that suffer from depolarizing noise.
Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
NASA Technical Reports Server (NTRS)
Garabedian, P. R.
1979-01-01
Computer codes for the design and analysis of transonic airfoils are considered. The design code relies on the method of complex characteristics in the hodograph plane to construct shockless airfoils. The analysis code uses artificial viscosity to calculate flows with weak shock waves at off-design conditions. Comparisons with experiments show that an excellent simulation of two dimensional wind tunnel tests is obtained. The codes have been widely adopted by the aircraft industry as a tool for the development of supercritical wing technology.
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.
Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions.
ERIC Educational Resources Information Center
Million, June
2004-01-01
In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…
Lichenase and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2000-08-15
The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-.beta.-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach that simultaneously generates, from a high-level specification, both the code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Xie, Boyang; Tang, Kun; Cheng, Hua; Liu, Zhengyou; Chen, Shuqi; Tian, Jianguo
2017-02-01
Coding acoustic metasurfaces can combine simple logical bits to acquire sophisticated functions in wave control. The acoustic logical bits can achieve a phase difference of exactly π and a perfect match of the amplitudes of the transmitted waves. By programming the coding sequences, acoustic metasurfaces with various functions, including the creation of unusual antenna patterns and wave focusing, have been demonstrated.
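As a minimal sketch of how programming a coding sequence shapes the transmitted field, the far-field pattern of a 1-D row of logical bits (phase 0 or π, matched unit amplitudes) can be approximated by a Fourier transform of the aperture; the 8-bit sequence and pad length below are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical 8-element coding sequence: bit "0" -> phase 0, bit "1" -> phase pi.
bits = np.array([0, 1, 0, 1, 0, 1, 0, 1])
aperture = np.exp(1j * np.pi * bits)          # unit-amplitude logical bits

# Array-factor approximation: the far-field pattern is the Fourier
# transform of the aperture field, zero-padded for angular resolution.
pattern = np.fft.fftshift(np.fft.fft(aperture, 256))
intensity = np.abs(pattern) ** 2
```

For the alternating "01010101" sequence the on-axis (zero-frequency) lobe cancels exactly and the energy is steered toward large angles; other sequences redistribute the lobes accordingly.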
Computerized mega code recording.
Burt, T W; Bock, H C
1988-04-01
A system has been developed to facilitate recording of advanced cardiac life support mega code testing scenarios. By scanning a paper "keyboard" using a bar code wand attached to a portable microcomputer, the person assigned to record the scenario can easily generate an accurate, complete, timed, and typewritten record of the given situations and the obtained responses.
Pseudonoise code tracking loop
NASA Technical Reports Server (NTRS)
Laflame, D. T. (Inventor)
1980-01-01
A delay-locked loop is presented for tracking a pseudonoise (PN) reference code in an incoming communication signal. The loop is less sensitive to gain imbalances, which can otherwise introduce timing errors in the PN reference code formed by the loop.
Parallelized tree-code for clusters of personal computers
NASA Astrophysics Data System (ADS)
Viturro, H. R.; Carpintero, D. D.
2000-02-01
We present a tree-code for integrating the equations of motion of collisionless systems, which has been fully parallelized and adapted to run on several PC-based processors simultaneously, using the well-known PVM message-passing library. SPH algorithms, not yet included, may easily be incorporated into the code. The code is written in ANSI C; it can be freely downloaded from a public ftp site. Simulations of collisions of galaxies are presented, with which the performance of the code is tested.
OHAMA, Takeshi; INAGAKI, Yuji; BESSHO, Yoshitaka; OSAWA, Syozo
2008-01-01
In 1985, we reported that a bacterium, Mycoplasma capricolum, used a deviant genetic code, namely that UGA, a “universal” stop codon, was read as tryptophan. This finding, together with the deviant nuclear genetic codes found in a number of organisms and in many mitochondria, shows that the genetic code is not universal and is in a state of evolution. To account for the changes in codon meanings, we proposed the codon capture theory, which states that all code changes are non-disruptive, occurring without accompanying changes in the amino acid sequences of proteins. Supporting evidence for the theory is presented in this review. A possible evolutionary process from the ancient to the present-day genetic code is also discussed. PMID:18941287
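The reassignment can be illustrated with a toy translation table; the three-codon message and two-entry table fragments below are hypothetical, but the UGA readings match the finding described above (UGA = stop in the universal code, tryptophan in M. capricolum).

```python
# Toy translation tables: a fragment of the universal code versus the
# M. capricolum variant in which UGA is captured as tryptophan (Trp).
STANDARD   = {"UGG": "Trp", "UGA": "Stop"}
MYCOPLASMA = dict(STANDARD, UGA="Trp")

def translate(rna, table):
    """Translate codon by codon, halting at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = table[rna[i:i + 3]]
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

# The same message terminates early under the universal code but reads
# through under the Mycoplasma table:
# translate("UGGUGAUGG", STANDARD)   -> ["Trp"]
# translate("UGGUGAUGG", MYCOPLASMA) -> ["Trp", "Trp", "Trp"]
```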
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Physics and numerics of the tensor code (incomplete preliminary documentation)
Burton, D.E.; Lettis, L.A. Jr.; Bryan, J.B.; Frary, N.R.
1982-07-15
The present TENSOR code is a descendant of a code originally conceived by Maenchen and Sack and later adapted by Cherry. Originally, the code was a two-dimensional Lagrangian explicit finite difference code which solved the equations of continuum mechanics. Since then, implicit and arbitrary Lagrange-Euler (ALE) algorithms have been added. The code has been used principally to solve problems involving the propagation of stress waves through earth materials, and considerable development of rock and soil constitutive relations has been done. The code has been applied extensively to the containment of underground nuclear tests, nuclear and high explosive surface and subsurface cratering, and energy and resource recovery. TENSOR is supported by a substantial array of ancillary routines. The initial conditions are set up by a generator code, TENGEN. ZON is a multipurpose code which can be used for zoning, rezoning, overlaying, and linking from other codes. Linking from some codes is facilitated by another code, RADTEN. TENPLT is a fixed-time graphics code which provides a wide variety of plotting options and output devices, and which is capable of producing computer movies by postprocessing problem dumps. Time history graphics are provided by the TIMPLT code from temporal dumps produced during production runs. While TENSOR can be run as a stand-alone controllee, a special controller code, TCON, is available to better interface the code with the LLNL computer system during production jobs. In order to standardize compilation procedures and provide quality control, a special compiler code, BC, is used. A number of equation of state generators are available, among them ROC and PMUGEN.
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
Nelson, R.N.
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute standard "Standard Technical Report Number (STRN): Format and Creation" (ANSI Z39.23-1983). The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report-issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report code, followed by the issuing installation. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
Embedded foveation image coding.
Wang, Z; Bovik, A C
2001-01-01
The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.
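The eccentricity-dependent resolution falloff that foveated coding exploits can be sketched with a simple cortical-magnification-style map; the half-resolution constant `e2` and the pixels-per-degree figure below are illustrative placeholders, not the FWQI parameters.

```python
import numpy as np

def foveation_map(h, w, fix, e2=2.3, px_per_deg=32.0):
    """Relative spatial resolution at each pixel of an h-by-w image,
    falling off with eccentricity e (degrees from the fixation point
    `fix`) as e2 / (e + e2): 1.0 at fixation, decaying peripherally."""
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fix[0], xs - fix[1]) / px_per_deg
    return e2 / (ecc + e2)
```

A coder can use such a map to decide how aggressively to discard high-frequency wavelet coefficients away from the foveation point.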
CFD code evaluation for internal flow modeling
NASA Technical Reports Server (NTRS)
Chung, T. J.
1990-01-01
Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, researchers include applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants. Evaluations of codes developed by other organizations are not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward the ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.
Robust speech coding using microphone arrays
NASA Astrophysics Data System (ADS)
Li, Zhao
1998-09-01
To achieve robustness and efficiency for voice communication in noise, the noise suppression and bandwidth compression processes are combined to form a joint process using input from an array of microphones. An adaptive beamforming technique with a set of robust linear constraints and a single quadratic inequality constraint is used to preserve the desired signal and to cancel directional plus ambient noise in a small room environment. This robustly constrained array processor is found to be effective in limiting signal cancelation over a wide range of input SNRs (-10 dB to +10 dB). The resulting intelligibility gains (8-10 dB) provide significant improvement to subsequent CELP coding. In addition, the desired speech activity is detected by estimating Target-to-Jammer Ratios (TJR) using subband correlations between different microphone inputs or using signals within the Generalized Sidelobe Canceler directly. These two novel techniques of speech activity detection for coding are studied thoroughly in this dissertation. Each is subsequently incorporated with the adaptive array and a 4.8 kbps CELP coder to form a Variable Bit Rate (VBR) coder with noise-canceling and Spatial Voice Activity Detection (SVAD) capabilities. This joint noise suppression and bandwidth compression system demonstrates large improvements in desired speech quality after coding, accurate desired speech activity detection in various types of interference, and a reduction in the information bits required to code the speech.
New Codes for Ambient Seismic Noise Analysis
NASA Astrophysics Data System (ADS)
Duret, F.; Mooney, W. D.; Detweiler, S.
2007-12-01
In order to determine a velocity model of the crust, scientists generally use earthquakes recorded by seismic stations. However, earthquakes do not occur continuously, and most are too weak to be useful. When no event is recorded, a waveform is generally considered to be noise. This noise, however, is not useless and carries a wealth of information. Thus, ambient seismic noise analysis is an inverse method of investigating the Earth's interior. Until recently, this technique was quite difficult to apply, as it requires significant computing capacity. In early 2007, however, a team led by Gregory Benson and Mike Ritzwoller from UC Boulder published a paper describing a new method for extracting group and phase velocities from those waveforms. The analysis, which consists of recovering Green's functions between pairs of stations, is composed of four steps: 1) single-station data preparation, 2) cross-correlation and stacking, 3) quality control and data selection, and 4) dispersion measurements. At the USGS, we developed a set of ready-to-use computing codes for analyzing waveforms to run the ambient noise analysis of Benson et al. (2007). Our main contribution to the analysis technique was to fully automate the process. The computation codes were written in Fortran 90 and the automation scripts were written in Perl. Furthermore, some operations were run with SAC. Our choices of programming language offer an opportunity to adapt our codes to the major platforms. The codes were developed under Linux but are meant to be adapted to Mac OS X and Windows platforms. The codes have been tested on Southern California data and our results compare nicely with those from the UC Boulder team. Next, we plan to apply our codes to Indonesian data, so that we might take advantage of newly upgraded seismic stations in that region.
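Step 2 of that recipe, cross-correlation and stacking, can be sketched with synthetic data; the white-noise source, two-station geometry, and three-sample travel time below are illustrative stand-ins rather than part of the USGS codes.

```python
import numpy as np

rng = np.random.default_rng(0)
lag, n, n_windows = 3, 1024, 30          # hypothetical inter-station delay

ccf = np.zeros(2 * n - 1)
for _ in range(n_windows):               # step 2: correlate and stack windows
    src = rng.standard_normal(n + lag)   # common ambient wavefield
    a = src[lag:]                        # record at station A
    b = src[:n]                          # station B: same field, delayed by `lag`
    ccf += np.correlate(b, a, mode="full")

# The stacked cross-correlation peaks at the inter-station travel time,
# approximating the Green's function between the station pair.
travel_time = np.argmax(ccf) - (n - 1)
```

With real data the stack converges much more slowly, which is why long deployments and careful single-station preparation (step 1) matter.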
Code Disentanglement: Initial Plan
Wohlbier, John Greaton; Kelley, Timothy M.; Rockefeller, Gabriel M.; Calef, Matthew Thomas
2015-01-27
The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
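A levelization check of this kind is straightforward to automate; the sketch below groups a hypothetical package graph into levels with Python's graphlib (the package names are invented, not EAP's).

```python
from graphlib import CycleError, TopologicalSorter

# Hypothetical package -> "uses" map for a disentangled code base
# (each package may use only packages at lower levels).
uses = {
    "driver": {"hydro", "eos"},
    "hydro":  {"mesh", "eos"},
    "eos":    {"utils"},
    "mesh":   {"utils"},
    "utils":  set(),
}

def levelize(uses):
    """Group packages into levels, bottom level first, or raise
    CycleError if the relationships do not form a directed acyclic graph."""
    ts = TopologicalSorter(uses)
    ts.prepare()
    levels = []
    while ts.is_active():
        ready = sorted(ts.get_ready())   # everything whose uses are all done
        levels.append(ready)
        ts.done(*ready)
    return levels
```

For the map above, `levelize(uses)` places `utils` at the bottom, then `eos` and `mesh`, then `hydro`, then `driver`; a cyclic use relationship is rejected outright.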
NASA Technical Reports Server (NTRS)
1991-01-01
In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly a 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Rule-based frequency domain speech coding
NASA Astrophysics Data System (ADS)
McMillan, Vance M.
1990-12-01
A speech processing system is designed to simulate the transmission of speech signals using a speech coding scheme. The transmitter portion of the simulation extracts a minimized set of frequencies in Fourier space which represents the essence of each of the speech timeslices. These parameters are then adaptively quantized and transmitted to the receiver portion of the coding scheme. The receiver then generates an estimate of the original timeslice from the transmitted parameters using a sinusoidal speech model. After the initial design, how each of the design parameters affects the perceived quality of speech is studied by means of listening tests. The listening tests consist of having volunteers listen to a series of speech reconstructions. Each reconstruction is the result of the coding scheme acting on the same speech input file with the design parameters varied. The design parameters which are varied are: the number of frequencies used in the sinusoidal speech model for reconstruction, the number of bits used to encode amplitude information, and the number of bits used to encode phase information. The final design parameters for the coding scheme were selected based on the results of the listening tests. Post-design listening tests showed that the system was capable of 4800 bps speech transmission with a quality rating of five on a scale from zero (not understandable) to ten (sounds just like the original speech).
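The analysis/synthesis core of such a coder can be sketched in a few lines; the adaptive quantization stage is omitted, and the frame size and peak count used here are arbitrary choices, not the thesis's design parameters.

```python
import numpy as np

def analyze(frame, k):
    """Transmitter: keep the k largest-magnitude frequencies of one timeslice."""
    spec = np.fft.rfft(frame)
    idx = np.argsort(np.abs(spec))[-k:]          # dominant frequency bins
    return idx, np.abs(spec[idx]), np.angle(spec[idx])

def synthesize(idx, amp, phase, n):
    """Receiver: sinusoidal model built from the transmitted parameters.
    (The 2/n scaling is correct for interior bins; the DC and Nyquist
    bins would need a factor of 1/2.)"""
    t = np.arange(n)
    out = np.zeros(n)
    for i, a, p in zip(idx, amp, phase):
        out += (2.0 / n) * a * np.cos(2 * np.pi * i * t / n + p)
    return out
```

A pure tone that falls on a bin is recovered exactly from a single transmitted frequency; real speech frames need more peaks, which is exactly the quality-versus-bit-rate trade studied in the listening tests.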
Recent Developments in the Community Code ASPECT
NASA Astrophysics Data System (ADS)
Heister, T.; Bangerth, W.; Dannberg, J.; Gassmoeller, R.
2015-12-01
The Computational Geosciences have long used community codes to provide simulation capabilities to large numbers of users. We here report on the mantle convection code ASPECT (the Advanced Solver for Problems in Earth ConvecTion) that is developed to be a community tool with a focus on bringing modern numerical methods such as adaptive meshes, large parallel computations, algebraic multigrid solvers, and modern software design. We will comment in particular on two aspects: First, the more recent additions to its numerical capabilities, such as compressible models, averaging of material parameters, melt transport, free surfaces, and plasticity. We will demonstrate these capabilities using examples from computations by members of the ASPECT user community. Second, we will discuss lessons learned in writing a code specifically for community use. This includes our experience with a software design that is fundamentally based on a plugin system for practically all areas that a user may want to describe for the particular geophysical setup they want to simulate. It also includes our experience with leading and organizing a community of users and developers, for example by organizing annual "hackathons", by encouraging code submission via github over keeping modifications private, and by designing a code for which extensions can easily be written as separate plugins rather than requiring knowledge of the computational core.
Song, Chenchen; Wang, Lee-Ping; Martínez, Todd J
2016-01-12
We present an automated code engine (ACE) that automatically generates optimized kernels for computing integrals in electronic structure theory on a given graphical processing unit (GPU) computing platform. The code generator in ACE creates multiple code variants with different memory and floating point operation trade-offs. A graph representation is created as the foundation of the code generation, which allows the code generator to be extended to various types of integrals. The code optimizer in ACE determines the optimal code variant and GPU configuration for a given GPU computing platform by scanning over all possible code candidates and then choosing the best-performing candidate for each kernel. We apply ACE to the optimization of effective core potential integrals and gradients. We observe that the best code candidate varies with angular momentum, floating point precision, and the type of GPU being used, which shows that ACE may be a powerful tool for adapting to rapidly evolving GPU architectures.
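The scan-and-select strategy is easy to mimic at small scale; the sketch below times stand-in Python "kernels" rather than GPU code, and both the variant names and the workload are invented for illustration.

```python
import timeit

def pick_best(variants, arg, number=20, repeat=3):
    """ACE-style exhaustive scan: time every code candidate on the
    target input and return the best performer plus all timings."""
    timings = {}
    for name, fn in variants.items():
        timings[name] = min(timeit.repeat(lambda: fn(arg),
                                          number=number, repeat=repeat))
    return min(timings, key=timings.get), timings

# Two functionally equivalent stand-in "kernels" with different trade-offs:
# a generator expression (low memory) vs. a list comprehension (extra storage).
variants = {
    "genexpr":  lambda xs: sum(x * x for x in xs),
    "listcomp": lambda xs: sum([x * x for x in xs]),
}
best, timings = pick_best(variants, list(range(1000)))
```

As in ACE, the winner can differ between inputs and platforms, which is why the scan is repeated per kernel rather than decided once globally.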
RBMK-LOCA-Analyses with the ATHLET-Code
Petry, A.; Domoradov, A.; Finjakin, A.
1995-09-01
The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes for the Russian-designed RBMK reactor. One part of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) to RBMK-specific safety problems. This paper contains a short description of an RBMK-1000 reactor circuit. Furthermore, the main features of the thermal-hydraulic code ATHLET are presented. The main assumptions of the ATHLET RBMK model are discussed. As an example application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper closes with an overview of some of the problems posed by, and the experience gained from, the application of Western best-estimate codes to RBMK calculations.
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
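Of the codes mentioned, the CRC is the simplest to sketch in full; below is a bitwise implementation of the CCITT generator x^16 + x^12 + x^5 + 1 with an all-ones preset, one common parameterization of 16-bit CRCs of this family (a production implementation would typically be table-driven for speed).

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over `data` using the CCITT polynomial
    x^16 + x^12 + x^5 + 1 with an all-ones preset."""
    crc = init
    for byte in data:
        crc ^= byte << 8                    # fold the next byte into the register
        for _ in range(8):                  # shift out one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

A receiver recomputes the CRC over the received block and compares it with the transmitted check bits; any mismatch flags the block as corrupted, triggering retransmission or higher-layer recovery.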
NASA Astrophysics Data System (ADS)
Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken
2011-03-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole diameters show resolutions matching the hole diameters.
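The final step mentioned, recovering the MTF from a line spread function, is a one-line Fourier relation; the Gaussian test profile below is illustrative (a Gaussian LSF of width sigma has the analytic MTF exp(-2 pi^2 sigma^2 f^2)), not data from the CSI simulations.

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF = |Fourier transform of the line spread function|,
    normalized to unity at zero spatial frequency.
    Returns (spatial frequencies, MTF)."""
    m = np.abs(np.fft.rfft(lsf))
    return np.fft.rfftfreq(len(lsf), d=dx), m / m[0]

# Illustrative Gaussian LSF of width sigma, sampled on [-1, 1).
x = np.linspace(-1.0, 1.0, 512, endpoint=False)
sigma = 0.05
lsf = np.exp(-x**2 / (2 * sigma**2))
f, mtf = mtf_from_lsf(lsf, dx=x[1] - x[0])
```

In the edge-simulation workflow described above, the LSF itself would first be obtained by differentiating the measured edge spread function.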
Adaptive Search through Constraint Violations
1990-01-01
[Report documentation page residue. Recoverable citations: Smith, D. A., Greeno, J. G., & Vitolo, T. M. (in press). A model of competence for counting. Cognitive Science. VanLehn, K. (1990). Adaptive search through constraint violations (Technical Report No. KUL-90-01). Pittsburgh, PA: Learning Research and Development Center.]
Temporal Coding of Volumetric Imagery
NASA Astrophysics Data System (ADS)
Llull, Patrick Ryan
of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view, high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x, y, z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
ASPT software source code: ASPT signal excision software package
NASA Astrophysics Data System (ADS)
Parliament, Hugh
1992-08-01
The source code for the ASPT Signal Excision Software Package which is part of the Adaptive Signal Processing Testbed (ASPT) is presented. The source code covers the programs 'excision', 'ab.out', 'd0.out', 'bd1.out', 'develop', 'despread', 'sorting', and 'convert'. These programs are concerned with collecting data, filtering out interference from a spread spectrum signal, analyzing the results, and developing and testing new filtering algorithms.
FORTRAN Automated Code Evaluation System (FACES) user's manual, version 2
NASA Technical Reports Server (NTRS)
1975-01-01
A system which provides analysis services for FORTRAN based software systems not normally available from system software is presented. The system is not a compiler, and compiler syntax diagnostics are not duplicated. For maximum adaptation to FORTRAN dialects, the code presented to the system is assumed to be compiler acceptable. The system concentrates on acceptable FORTRAN code features which are likely to produce undesirable results and identifies potential trouble areas before they become execution time malfunctions.
1987-07-01
[List-of-figures residue. Recoverable captions: geometry configuration for the iron slab; comparison of MICAP and MORSE neutron leakage flux spectra out the back face of a 50 cm x 50 cm x 20 cm iron slab with a point isotropic 14.0 MeV neutron source; comparison of PHOTON and MORSE photon leakage flux.]
The Effects of Color-Coding Indicator on Dark Adaptation
1966-06-16
Adaptive bit truncation and compensation method for EZW image coding
NASA Astrophysics Data System (ADS)
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zero-tree wavelet algorithm (EZW) is widely adopted to compress the wavelet coefficients of images, with the property that the bit stream can be truncated at any point. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, so they can be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper which gives the EZW encoder a rough guide for deciding how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; it markedly enhances the quality of the reconstructed image at almost no additional cost.
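The truncation-and-compensation idea can be sketched directly on integer coefficient magnitudes. The example below is illustrative only (the paper's generalized guiding function is not reproduced); it drops k low bit planes at the encoder and has the decoder reconstruct nonzero coefficients at bin midpoints rather than bin edges:

```python
import numpy as np

def truncate_bitplanes(coeffs: np.ndarray, k: int) -> np.ndarray:
    """Encoder side: drop the k least-significant bit planes (magnitudes)."""
    step = 1 << k
    return np.sign(coeffs) * (np.abs(coeffs) // step) * step

def compensate(truncated: np.ndarray, k: int) -> np.ndarray:
    """Decoder side: shift surviving coefficients to the bin midpoint,
    halving the worst-case truncation error on average."""
    half = (1 << k) // 2
    return truncated + np.sign(truncated) * half

coeffs = np.array([37, -100, 5, 0, -3, 250])   # toy wavelet coefficients
k = 3                                          # low bit planes truncated
received = truncate_bitplanes(coeffs, k)
reconstructed = compensate(received, k)
```

The midpoint compensation reduces the total absolute error relative to decoding the truncated values directly.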
Adaptation of adaptive optics systems.
NASA Astrophysics Data System (ADS)
Xin, Yu; Zhao, Dazun; Li, Chen
1997-10-01
In this paper, the concept of an adaptation of adaptive optical systems (AAOS) is proposed. The AAOS has a certain real-time optimization ability against variations in the brightness m of detected objects, the atmospheric coherence length r0, and the atmospheric time constant τ, by changing the subaperture number and diameter, the dynamic range, and the system's temporal response. The necessity of an AAOS using a Hartmann-Shack wavefront sensor is discussed, together with some technical approaches. The scheme and simulation of an AAOS with variable subaperture capability, implemented in both hardware and software, are presented as an example of the system.
NASA Astrophysics Data System (ADS)
Vaucouleur, Sebastien
2011-02-01
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.
Seals Code Development Workshop
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)
1996-01-01
The 1995 Seals Workshop industrial code (INDSEAL) release includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain modeling with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and their results compared. Current computational and experimental emphasis includes multiply connected cavity flows, with the goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean-sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.
NASA Astrophysics Data System (ADS)
Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick
2013-06-01
The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.
Autocatalysis, information and coding.
Wills, P R
2001-01-01
Autocatalytic self-construction in macromolecular systems requires the existence of a reflexive relationship between structural components and the functional operations they perform to synthesise themselves. The possibility of reflexivity depends on formal, semiotic features of the catalytic structure-function relationship, that is, the embedding of catalytic functions in the space of polymeric structures. Reflexivity is a semiotic property of some genetic sequences. Such sequences may serve as the basis for the evolution of coding as a result of autocatalytic self-organisation in a population of assignment catalysts. Autocatalytic selection is a mechanism whereby matter becomes differentiated in primitive biochemical systems. In the case of coding self-organisation, it corresponds to the creation of symbolic information. Prions are present-day entities whose replication through autocatalysis reflects aspects of biological semiotics less obvious than genetic coding.
StarFinder: A code for stellar field analysis
NASA Astrophysics Data System (ADS)
Diolaiti, Emiliano; Bendinelli, Orazio; Bonaccini, Domenico; Close, Laird M.; Currie, Doug G.; Parmeggiani, Gianluigi
2000-11-01
StarFinder is an IDL code for the deep analysis of stellar fields, designed for Adaptive Optics well-sampled images with high and low Strehl ratio. The Point Spread Function is extracted directly from the frame, to take into account the actual structure of the instrumental response and the atmospheric effects. The code is written in IDL language and organized in the form of a self-contained widget-based application, provided with a series of tools for data visualization and analysis. A description of the method and some applications to Adaptive Optics data are presented.
Code inspection instructional validation
NASA Technical Reports Server (NTRS)
Orr, Kay; Stancil, Shirley
1992-01-01
The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option
1989-09-30
[Report documentation page residue. Recoverable contents: Approved for public release. 2. Summary of POLAR achievements. 3. POLAR code physical models. Structure of the bipolar plasma sheath generated by SPEAR I. The POLAR code wake model: comparison with in situ observations.]
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor
Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik
2004-10-01
If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in depth analysis of a technique called 'white-boxing'. We put forth some new attacks and improvements
NASA Astrophysics Data System (ADS)
Qureshi, S. U. H.
1985-09-01
Theoretical work which has been effective in improving data transmission by telephone and radio links using adaptive equalization (AE) techniques is reviewed. AE has been applied to reducing the temporal dispersion effects, such as intersymbol interference, caused by the channel accessed. Attention is given to the Nyquist telegraph transmission theory, least mean square error adaptive filtering and the theory and structure of linear receive and transmit filters for reducing error. Optimum nonlinear receiver structures are discussed in terms of optimality criteria as a function of error probability. A suboptimum receiver structure is explored in the form of a decision-feedback equalizer. Consideration is also given to quadrature amplitude modulation and transversal equalization for receivers.
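The least-mean-square adaptive filtering reviewed above has a standard form: the equalizer taps are updated in proportion to the error against known training symbols. A minimal sketch, with an assumed three-tap dispersive channel (not from the review) producing intersymbol interference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
symbols = rng.choice([-1.0, 1.0], size=n)          # transmitted BPSK-like data
channel = np.array([1.0, 0.4, 0.2])                # assumed dispersive channel (ISI)
received = np.convolve(symbols, channel)[:n] + 0.01 * rng.standard_normal(n)

taps, mu = 7, 0.01                                 # equalizer length, LMS step size
w = np.zeros(taps)
sq_err = []
for i in range(taps, n):
    x = received[i - taps:i][::-1]                 # newest sample first
    y = w @ x                                      # equalizer output
    e = symbols[i - 1] - y                         # error vs. the known training symbol
    w += mu * e * x                                # least-mean-square tap update
    sq_err.append(e * e)
final_mse = float(np.mean(sq_err[-500:]))
```

After convergence the residual mean squared error is small, i.e. the cascade of channel and equalizer approximates a delay; a decision-feedback structure, as discussed in the review, would reduce noise enhancement further.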
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having a varying frequency content is described. The device includes a plurality of digital filters for analyzing the frequency content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable-rate memory clock corresponding to the analyzed frequency content of the data in each frequency region and for clocking the data into the memory in response to the variable-rate memory clock.
Coding for urologic office procedures.
Dowling, Robert A; Painter, Mark
2013-11-01
This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff.
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliability metrics.
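The accumulate stage of such turbo-like codes is simply a running mod-2 sum of repeated, permuted information bits. A toy repeat-accumulate encoder (without the additional precoding and puncturing that distinguish ARA codes, and with an invented interleaver) can be sketched as:

```python
import numpy as np

def ra_encode(bits: np.ndarray, q: int, rng) -> np.ndarray:
    """Toy repeat-accumulate encoder: repeat each bit q times, permute
    (interleave), then accumulate with a running mod-2 sum (XOR)."""
    repeated = np.repeat(bits, q)
    interleaved = rng.permutation(repeated)
    return np.cumsum(interleaved) % 2      # accumulator output

rng = np.random.default_rng(1)
info = np.array([1, 0, 1, 1])
codeword = ra_encode(info, q=3, rng=rng)   # rate 1/3 in this toy setting
```

The last accumulator bit equals the parity of all repeated bits regardless of the interleaver, which is one of the simple structural properties belief-propagation decoding exploits on the code's graph.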
ERIC Educational Resources Information Center
Zirkel, Perry A.
2000-01-01
As illustrated by two recent decisions, the courts in the past decade have demarcated wide boundaries for school officials considering dress codes, whether in the form of selective prohibitions or required uniforms. Administrators must warn the community, provide legitimate justification and reasonable clarity, and comply with state law. (MLH)
ERIC Educational Resources Information Center
Lumsden, Linda; Miller, Gabriel
2002-01-01
Students do not always make choices that adults agree with in their choice of school dress. Dress-code issues are explored in this Research Roundup, and guidance is offered to principals seeking to maintain a positive school climate. In "Do School Uniforms Fit?" Kerry White discusses arguments for and against school uniforms and summarizes the…
Building Codes and Regulations.
ERIC Educational Resources Information Center
Fisher, John L.
The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)
ERIC Educational Resources Information Center
Uerling, Donald F.
School officials see a need for regulations that prohibit disruptive and inappropriate forms of expression and attire; students see these regulations as unwanted restrictions on their freedom. This paper reviews court litigation involving constitutional limitations on school authority, dress and hair codes, state law constraints, and school…
ERIC Educational Resources Information Center
King, Kevin
1992-01-01
Coding tasks, a valuable technique for teaching English as a Second Language, are presented that enable students to look at patterns and structures of marital communication as well as objectively evaluate the degree of happiness or distress in the marriage. (seven references) (JL)
Electrical Circuit Simulation Code
Wix, Steven D.; Waters, Arlon J.; Shirley, David
2001-08-09
Massively-parallel electrical circuit simulation code. CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It targets large-scale electronic circuit simulation, with shared-memory parallel processing and enhanced convergence, and includes Sandia-specific device models.
Multiple trellis coded modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)
1990-01-01
A technique for designing trellis codes to minimize bit error performance for a fading channel. The invention provides a criterion which may be used in the design of such codes which is significantly different from that used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s.sub.i intermediate outputs each, where the summation of all the s.sub.i's equals s and k is at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
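Steps (a) through (d) can be mimicked with a deliberately tiny example. The one-bit-memory "trellis" and the constellation assignments below are invented for illustration and are not the patented design:

```python
QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
BPSK = {(0,): 1.0 + 0j, (1,): -1.0 + 0j}

def mtcm_encode(bits):
    """Steps (a)-(d) in miniature: code b=2 input bits into s=3 intermediate
    outputs using one bit of memory (a stand-in for a real trellis code),
    group them into k=2 groups, and map each group with its own scheme."""
    state = 0
    symbols = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        out = (b1, b1 ^ state, b2)        # (a) s=3 intermediate outputs
        state = b2                        # trellis memory update
        symbols.append(QPSK[out[:2]])     # (b)+(c) group 1 (s_1=2) -> QPSK
        symbols.append(BPSK[out[2:]])     # (b)+(c) group 2 (s_2=1) -> BPSK
    return symbols                        # (d) k=2 output symbols per 2 input bits

tx = mtcm_encode([1, 0, 0, 1])
```

The design criterion in the patent then chooses the trellis and mappings so as to maximize diversity and product distance for the fading channel, rather than free Euclidean distance as in the AWGN case.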
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution, and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals from being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
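The direct-solution approach based on sample matrix inversion computes the weights from a sample covariance matrix of interference snapshots. A small sketch, with an assumed 8-element half-wavelength-spaced array and a single jammer (all parameters invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 8                                       # array elements, half-wavelength spacing
k = np.arange(m)
s = np.exp(1j * np.pi * k * np.sin(0.0))    # steering vector, wanted direction
j = np.exp(1j * np.pi * k * np.sin(0.35))   # steering vector, jammer direction

# Snapshots of jammer plus receiver noise; the sample covariance matrix
# stands in for the true (unknown) interference covariance R.
n_snap = 200
jam = 3.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
x = np.outer(j, jam) + noise
R = x @ x.conj().T / n_snap

# Sample-matrix-inversion weights: w proportional to R^{-1} s, scaled for
# unit gain toward the wanted direction; the jammer direction is nulled.
w = np.linalg.solve(R, s)
w /= s.conj() @ w

gain_wanted = abs(w.conj() @ s)
gain_jammer = abs(w.conj() @ j)
```

The achievable null depth here is set by the jammer-to-noise ratio and the number of snapshots, which is the convergence-versus-complexity tradeoff the review discusses.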
Coding Theory and Projective Spaces
NASA Astrophysics Data System (ADS)
Silberstein, Natalia
2008-05-01
The projective space of order n over a finite field F_q is a set of all subspaces of the vector space F_q^{n}. In this work, we consider error-correcting codes in the projective space, focusing mainly on constant dimension codes. We start with the different representations of subspaces in the projective space. These representations involve matrices in reduced row echelon form, associated binary vectors, and Ferrers diagrams. Based on these representations, we provide a new formula for the computation of the distance between any two subspaces in the projective space. We examine lifted maximum rank distance (MRD) codes, which are nearly optimal constant dimension codes. We prove that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design. The incidence matrix of the transversal design derived from a lifted MRD code can be viewed as a parity-check matrix of a linear code in the Hamming space. We find the properties of these codes, which can also be viewed as LDPC codes. We present new bounds and constructions for constant dimension codes. First, we present a multilevel construction for constant dimension codes, which can be viewed as a generalization of the lifted MRD code construction. This construction is based on a new type of rank-metric codes, called Ferrers diagram rank-metric codes. Then we derive upper bounds on the size of constant dimension codes which contain the lifted MRD code, and provide a construction for two families of codes that attain these upper bounds. We generalize the well-known concept of a punctured code for a code in the projective space to obtain large codes which are not constant dimension. We present efficient enumerative encoding and decoding techniques for the Grassmannian. Finally we describe a search method for constant dimension lexicodes.
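The subspace distance underlying constant dimension codes is d(U, V) = dim U + dim V - 2 dim(U ∩ V), and since dim(U ∩ V) = dim U + dim V - dim(U + V), it can be computed from ranks of generator matrices alone. A sketch over GF(2) (the generator matrices here are invented examples):

```python
import numpy as np

def gf2_rank(mat: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = mat.copy() % 2
    rank = 0
    rows, cols = m.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, c]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

def subspace_distance(U: np.ndarray, V: np.ndarray) -> int:
    """d(U, V) = dim U + dim V - 2 dim(U ∩ V), via dim(U + V) = rank([U; V])."""
    du, dv = gf2_rank(U), gf2_rank(V)
    d_sum = gf2_rank(np.vstack([U, V]))
    return 2 * d_sum - du - dv

U = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])   # span{e1, e2} in F_2^4
V = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])   # span{e1, e3} in F_2^4
```

Here U and V share the one-dimensional intersection span{e1}, so their distance is 2 + 2 - 2·1 = 2.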
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images, and extends the Vector Quantization technique to the Address Vector Quantization method. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; only this index is sent to the channel. Reconstruction of the image is done by a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in Chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design the codebook. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in Chapter 3. To overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in Chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In Chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In Chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing
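The basic VQ cycle described above (codebook training, nearest-codeword search, index transmission, table lookup) is compact to sketch. The training data here are synthetic 2-D vectors standing in for image blocks, and the clustering is plain K-means rather than the thesis's address-based schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training set: 2-D "image block" vectors around four centres.
centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
training = np.repeat(centres, 100, axis=0) + 0.05 * rng.standard_normal((400, 2))

# Iterative clustering (K-means / generalized Lloyd) to build the codebook,
# seeded with one training vector per region.
codebook = training[::100].copy()
for _ in range(20):
    labels = ((training[:, None, :] - codebook) ** 2).sum(-1).argmin(1)
    for c in range(len(codebook)):
        if np.any(labels == c):
            codebook[c] = training[labels == c].mean(0)

# Encoding: transmit only the index of the best-matching codeword.
vec = np.array([0.93, 0.07])
index = int(((codebook - vec) ** 2).sum(-1).argmin())
# Decoding: table lookup with the received index.
reconstructed = codebook[index]
```

Address VQ then goes further by exploiting the correlation between the indices (addresses) of neighbouring blocks, which is where its bit-rate savings come from.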
Transfert radiatif numerique pour un code SPH
NASA Astrophysics Data System (ADS)
Viau, Joseph Edmour Serge
2001-03-01
The need to reproduce star formation through numerical simulation has become increasingly pressing over the last 30 years. Since Larson (1968), simulation codes have improved continuously. In 1977, Lucy introduced another computational method to compete with grid-based methods. This new approach performs the calculation on particles instead of grids, which is much better suited to computing a gravitational collapse. The problem of adding radiative transfer to such a code remained, however. Despite the proposal of Brookshaw (1984), who gave a formula for adding radiative transfer in SPH form while avoiding the troublesome double summation it implies, no SPH code to date contains a satisfactory radiative transfer treatment. This thesis presents, for the first time, an SPH code equipped with an adequate radiative transfer scheme. All the difficulties could be overcome to finally obtain the "true" radiative transfer that occurs in the collapse of a molecular cloud. To verify the integrity of our results, a comparison with the non-isothermal test case of Boss & Myhill (1993) shows a very satisfactory result. Besides faithfully following the curve of the central temperature as a function of central density, our code is free of all the anomalies encountered by grid-based codes. The thermal conduction test case also served to verify the reliability of our code; there too, the results are very satisfactory. Following these results, the code was used in two real research settings, which allowed us to demonstrate the many possibilities our new code offers. First, we studied the behaviour of the temperature in an accretion disk during its evolution. Then we partially repeated an experiment of Bonnell
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
The new Italian code of medical ethics.
Fineschi, V; Turillazzi, E; Cateni, C
1997-01-01
In June 1995, the Italian code of medical ethics was revised in order that its principles should reflect the ever-changing relationship between the medical profession and society and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often reveals a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline for the study of norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is, therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates. PMID:9279746
The new Italian code of medical ethics.
Fineschi, V; Turillazzi, E; Cateni, C
1997-08-01
In June 1995, the Italian code of medical ethics was revised in order that its principles should reflect the ever-changing relationship between the medical profession and society and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often reveals a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline for the study of norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is, therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates.
Learning deep hierarchical visual feature coding.
Goh, Hanlin; Thome, Nicolas; Cord, Matthieu; Lim, Joo-Hwee
2014-12-01
In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferable between different image data sets. Further analysis reveals the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.
NASA Astrophysics Data System (ADS)
Thy, Peter; Lesher, Charles E.; Nielsen, Troels F. D.; Brooks, C. Kent
2008-10-01
reject Morse's [Morse, S.A., 2008. Principles of applied experimental igneous petrology: a comment on "Experimental Constraints on the Skaergaard liquid line of descent" by Thy, Lesher, Nielsen, and Brooks, 2006, Lithos 92: 154-180. Lithos 105, pp. 395-399.] contention that, in our original study, we violated established principles of applied experimental igneous petrology. Such principles dictate that experimental and forward models be carefully tested against field observations before petrologic processes can be verified.
Binary coding for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Wang, Jing; Chang, Chein-I.; Chang, Chein-Chi; Lin, Chinsu
2004-10-01
Binary coding is one of the simplest ways to characterize spectral features. One commonly used method is a binary coding-based image software system, called Spectral Analysis Manager (SPAM), developed for remotely sensed imagery by Mazer et al. For a given spectral signature, SPAM calculates its spectral mean and inter-band spectral difference and uses them as thresholds to generate a binary code word for that particular spectral signature. Such a coding scheme is generally effective and also very simple to implement. This paper revisits SPAM and further develops three new SPAM-based binary coding methods, called equal probability partition (EPP) binary coding, halfway partition (HP) binary coding, and median partition (MP) binary coding. These three binary coding methods, along with SPAM, will be evaluated for spectral discrimination and identification. In doing so, a new criterion, called a posteriori discrimination probability (APDP), is also introduced as a performance measure.
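The thresholding idea described in the abstract can be sketched in a few lines. This is an illustrative reconstruction only: the function name and the exact handling of the inter-band difference are assumptions, not the actual SPAM implementation.

```python
import numpy as np

def spam_binary_code(spectrum):
    """Sketch of SPAM-style binary coding for one spectral signature.

    Each band is coded 1 if its value is at or above the signature's
    spectral mean, else 0. The inter-band differences are thresholded
    the same way (an assumption about how SPAM uses them).
    """
    spectrum = np.asarray(spectrum, dtype=float)
    amplitude_bits = (spectrum >= spectrum.mean()).astype(int)
    diff = np.diff(spectrum)                       # inter-band spectral difference
    diff_bits = (diff >= diff.mean()).astype(int)  # thresholded at its own mean
    return amplitude_bits, diff_bits

# A toy 5-band signature; the code word concatenates both bit vectors.
amp, diff = spam_binary_code([0.2, 0.4, 0.9, 0.8, 0.1])
```

Matching two signatures then reduces to comparing their code words, e.g. by Hamming distance, which is what makes the scheme so cheap.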
NASA Technical Reports Server (NTRS)
Mcaulay, Robert J.; Quatieri, Thomas F.
1988-01-01
It has been shown that an analysis/synthesis system based on a sinusoidal representation of speech leads to synthetic speech that is essentially perceptually indistinguishable from the original. Strategies for coding the amplitudes, frequencies and phases of the sine waves have been developed that have led to a multirate coder operating at rates from 2400 to 9600 bps. The encoded speech is highly intelligible at all rates with a uniformly improving quality as the data rate is increased. A real-time fixed-point implementation has been developed using two ADSP2100 DSP chips. The methods used for coding and quantizing the sine-wave parameters for operation at the various frame rates are described.
NASA Astrophysics Data System (ADS)
The Wellhead Protection Area code is now available for distribution by the International Ground Water Modeling Center in Indianapolis, Ind. The WHPA code is a modular, semianalytical, groundwater flow model developed for the U.S. Environmental Protection Agency, Office of Ground Water Protection, designed to assist state and local technical staff with the task of Wellhead Protection Area (WHPA) delineation. A complete news item appeared in Eos, May 1, 1990, p. 690. The model consists of four independent, semianalytical modules that may be used to identify the areal extent of groundwater contribution to one or multiple pumping wells. One module is a general particle tracking program that may be used as a post-processor for two-dimensional, numerical models of groundwater flow. One module incorporates a Monte Carlo approach to investigate the effects of uncertain input parameters on capture zones. Multiple pumping and injection wells may be present and barrier or stream boundary conditions may be investigated.
NASA Astrophysics Data System (ADS)
The Wellhead Protection Area (WHPA) code is now available for distribution by the International Ground Water Modeling Center in Indianapolis, Ind. The WHPA code is a modular, semi-analytical, groundwater flow model developed for the U.S. Environmental Protection Agency, Office of Ground Water Protection. It is designed to assist state and local technical staff with the task of WHPA delineation.The model consists of four independent, semi-analytical modules that may be used to identify the areal extent of groundwater contribution to one or multiple pumping wells. One module is a general particle tracking program that may be used as a post-processor for two-dimensional, numerical models of groundwater flow. One module incorporates a Monte Carlo approach to investigate the effects of uncertain input parameters on capture zones. Multiple pumping and injection wells may be present and barrier or stream boundary conditions may be investigated.
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
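The reconstruction step (correlating the shadow image with a version of the coded aperture) can be illustrated in one dimension. The aperture pattern and helper below are hypothetical, chosen only to show how a point source is recovered as a correlation peak; the patent's actual optics and aperture design are not reproduced here.

```python
import numpy as np

def decode_shadow(shadow, aperture):
    # Circular correlation of the recorded shadow with the aperture
    # pattern; each point source shows up as a correlation peak at
    # its own location.
    n = len(aperture)
    return np.array([np.dot(shadow, np.roll(aperture, s)) for s in range(n)])

# Length-7 aperture built from the quadratic residues mod 7 -- a small
# uniformly-redundant-array-like pattern with flat off-peak autocorrelation.
aperture = np.array([0, 1, 1, 0, 1, 0, 0])
shadow = np.roll(aperture, 2)   # shadow cast by a point source at position 2
image = decode_shadow(shadow, aperture)
```

With this pattern the decoded image has value 3 at the source position and 1 everywhere else, so the peak unambiguously locates the source; superposed sources decode to superposed peaks.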
Fully scalable video coding with packed stream
NASA Astrophysics Data System (ADS)
Lopez, Manuel F.; Rodriguez, Sebastian G.; Ortiz, Juan Pablo; Dana, Jose Miguel; Ruiz, Vicente G.; Garcia, Inmaculada
2005-03-01
Scalable video coding is a technique which allows a compressed video stream to be decoded in several different ways. This ability allows a user to adaptively recover a specific version of a video depending on its own requirements. Video sequences have temporal, spatial and quality scalabilities. In this work we introduce a novel fully scalable video codec. It is based on a motion-compensated temporal filtering (MCTF) of the video sequences and it uses some of the basic elements of JPEG 2000. This paper describes several specific proposals for video on demand and video-conferencing applications over non-reliable packet-switching data networks.
Revised numerical wrapper for PIES code
NASA Astrophysics Data System (ADS)
Raburn, Daniel; Reiman, Allan; Monticello, Donald
2015-11-01
A revised external numerical wrapper has been developed for the Princeton Iterative Equilibrium Solver (PIES code), which is capable of calculating 3D MHD equilibria with islands. The numerical wrapper has been demonstrated to greatly improve the rate of convergence in numerous cases corresponding to equilibria in the TFTR device where magnetic islands are present. The numerical wrapper makes use of a Jacobian-free Newton-Krylov solver along with adaptive preconditioning and a sophisticated subspace-restricted Levenberg-Marquardt backtracking algorithm. The details of the numerical wrapper and several sample results are presented.
2003-02-10
HYCOM code development. Alan J. Wallcraft, Naval Research Laboratory. Presented at the 2003 Layered Ocean Model Users' Workshop (LOM 2003), Miami, FL, February 10, 2003. Topics: Kraus-Turner mixed layer; Energy-Loan (passive) ice model; high-frequency atmospheric forcing; new I/O scheme (.a and .b files); scalability via
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
Poukey, J.W.
1988-01-01
The trajectory code TRAJ has been used extensively to study nonimmersed foilless electron diodes. The basic goal of the research is to design low-emittance injectors for electron linacs and propagation experiments. Systems studied during 1987 include Delphi, Recirc, and Troll. We also discuss a partly successful attempt to extend the same techniques to high currents (tens of kA). 7 refs., 30 figs.
1981-11-24
(Keywords: visible radiation; sensors; infrared radiation; line and band transitions; isophots; high-altitude nuclear data.) ...radiation (watts/sr) in arbitrary wavelength intervals is determined. The results are a series of "isophot" plots for arbitrarily placed cameras or sensors... Section II. The output of the PHARO code consists of contour plots of radiative intensity (watts/cm² ster) or "isophot" plots for arbitrarily placed sensors
NASA Astrophysics Data System (ADS)
Price, Daniel; Wurster, James; Nixon, Chris
2016-05-01
I will present the capabilities of the Phantom SPH code for global simulations of dust and gas in protoplanetary discs. I will present our new algorithms for simulating both small and large grains in discs, as well as our progress towards simulating evolving grain populations and coupling with radiation. Finally, I will discuss our recent applications to HL Tau and the physics of dust gap opening.
N.V. Mokhov
2003-04-09
Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator, and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, and histogramming, as well as the MAD-MARS Beam Line Build and graphical user interface.
Orthopedics coding and funding.
Baron, S; Duclos, C; Thoreux, P
2014-02-01
The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic.
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Bar coded retroreflective target
Vann, Charles S.
2000-01-01
This small, inexpensive, non-contact laser sensor can detect the location of a retroreflective target in a relatively large volume and up to six degrees of position. The tracker's laser beam is formed into a plane of light which is swept across the space of interest. When the beam illuminates the retroreflector, some of the light returns to the tracker. The intensity, angle, and time of the return beam is measured to calculate the three dimensional location of the target. With three retroreflectors on the target, the locations of three points on the target are measured, enabling the calculation of all six degrees of target position. Until now, devices for three-dimensional tracking of objects in a large volume have been heavy, large, and very expensive. Because of the simplicity and unique characteristics of this tracker, it is capable of three-dimensional tracking of one to several objects in a large volume, yet it is compact, light-weight, and relatively inexpensive. Alternatively, a tracker produces a diverging laser beam which is directed towards a fixed position, and senses when a retroreflective target enters the fixed field of view. An optically bar coded target can be read by the tracker to provide information about the target. The target can be formed of a ball lens with a bar code on one end. As the target moves through the field, the ball lens causes the laser beam to scan across the bar code.
Suboptimum decoding of block codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
This paper investigates a class of decomposable codes and their distance and structural properties. It is shown that this class includes several classes of well-known and efficient codes as subclasses. Several methods for constructing decomposable codes or decomposing codes are presented. A two-stage soft-decision decoding scheme for decomposable codes, their translates, or unions of translates is devised. This two-stage soft-decision decoding is suboptimum and provides an excellent trade-off between error performance and decoding complexity for codes of moderate and long block length.
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section, with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; the codes have since been further developed to extend their capabilities.
ACDOS2: an improved neutron-induced dose rate code
Lagache, J.C.
1981-06-01
To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown a computer code, ACDOS2, was written, which utilizes up-to-date libraries of cross-sections and radioisotope decay data. ACDOS2 is in ANSI FORTRAN IV, in order to make it readily adaptable elsewhere.
Developing an ethical code for engineers: the discursive approach.
Lozano, J Félix
2006-04-01
From the Hippocratic Oath on, deontological codes and other professional self-regulation mechanisms have been used to legitimize and identify professional groups. New technological challenges and, above all, changes in the socioeconomic environment require adaptable codes which can respond to new demands. We assume that ethical codes for professionals should not simply focus on regulative functions, but must also consider ideological and educative functions. Any adaptations should take into account both contents (values, norms and recommendations) and the drafting process itself. In this article we propose a process for developing a professional ethical code for an official professional association (Colegio Oficial de Ingenieros Industriales de Valencia, COIIV), starting from the philosophical assumptions of discursive ethics but adapting them to critical hermeneutics. Our proposal is based on the Integrity Approach rather than the Compliance Approach. A process aiming to achieve an effective ethical document that fulfils regulative and ideological functions requires a participative, dialogical and reflexive methodology. This process must respond to moral exigencies and demands for efficiency and professional effectiveness. In addition to the methodological proposal we present our experience of producing an ethical code for the industrial engineers' association in Valencia (Spain) where this methodology was applied, and we evaluate the problems detected and future potential.
Construction of new quantum MDS codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran
Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path to undertake the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes are constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed quantum MDS codes have large minimum distance and had not been explored previously.
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
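As a small illustration of the convolutional coding fundamentals such a report covers, here is a textbook rate-1/2 encoder with the standard (7, 5) octal generators; this is a generic example, not a coder taken from the report itself.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3.

    Generators (7, 5) in octal: each input bit shifts into a 3-bit
    register, and two output bits are the parities of the register
    masked by g1 and g2.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # 3-bit shift register
        out.append(bin(state & g1).count("1") % 2)  # parity of taps g1
        out.append(bin(state & g2).count("1") % 2)  # parity of taps g2
    return out

encoded = conv_encode([1, 0, 1, 1])  # 4 input bits -> 8 coded bits
```

Every input bit influences three successive output pairs, which is the memory a Viterbi or sequential decoder exploits for error correction.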
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
New quantum MDS-convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Li, Fengwei; Yue, Qin
2015-12-01
In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.
A class of constacyclic BCH codes and new quantum codes
NASA Astrophysics Data System (ADS)
liu, Yang; Li, Ruihu; Lv, Liangdong; Ma, Yuena
2017-03-01
Constacyclic BCH codes have been widely studied in the literature and have been used to construct quantum codes in recent years. However, for the class of quantum codes of length n=q^{2m}+1 over F_{q^2} with q an odd prime power, only those of distance δ ≤ 2q^2 have been obtained in the literature. In this paper, through a detailed analysis of the properties of q^2-ary cyclotomic cosets, the maximum designed distance δ_{max} of a class of Hermitian dual-containing constacyclic BCH codes of length n=q^{2m}+1 is determined; this class of constacyclic codes has characteristics analogous to those of primitive BCH codes over F_{q^2}. We then obtain a sequence of dual-containing constacyclic codes of designed distances 2q^2 < δ ≤ δ_{max}. Consequently, new quantum codes with distance d > 2q^2 can be constructed from these dual-containing codes via the Hermitian construction. These newly obtained quantum codes have better code rates than those constructed from primitive BCH codes.
Summary of 1990 Code Conference
Cooper, R.K.; Chan, Kwok-Chi D.
1990-01-01
The Conference on Codes and the Linear Accelerator Community was held in Los Alamos in January 1990, and had approximately 100 participants. This conference was the second in a series which has as its goal the exchange of information about codes and code practices among those writing and actually using these codes for the design and analysis of linear accelerators and their components. The first conference was held in San Diego in January 1988, and concentrated on beam dynamics codes and Maxwell solvers. This most recent conference concentrated on 3-D codes and techniques to handle the large amounts of data required for three-dimensional problems. In addition to descriptions of codes, their algorithms and implementations, there were a number of papers describing the use of many of the codes. Proceedings of both these conferences are available. 3 refs., 2 tabs.
1993-07-26
flow (very low speed and subsonic flows, transonic flow, supersonic and hypersonic flows), 2) advances in unstructured adaptive gridding techniques ...realistically simulated by CFD techniques. SAIC has been involved in all aspects of these developments and is on the forefront of CFD technology... differential equations of gasdynamics. Presently, as a result of steady improvement in the various integration techniques, the advantages which could be gained
Adaptive Encoding for Numerical Data Compression.
ERIC Educational Resources Information Center
Yokoo, Hidetoshi
1994-01-01
Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective for any word length. Its application to the lossless compression of gray-scale images…
Chemical Laser Computer Code Survey,
1980-12-01
DOCUMENTATION: Resonator Geometry Synthesis Code Requirement (V. L. Gamiz); Incorporate General Resonator into Ray Trace Code (W. H. Southwell); ...Synthesis Code Development (L. R. Stidhm). CATEGORY: ...optics, kinetics, gasdynamics... Simple Fabry-Perot; simple saturated gain... Optimization Algorithms and Equations (W
Energy Codes and Standards: Facilities
Bartlett, Rosemarie; Halverson, Mark A.; Shankle, Diana L.
2007-01-01
Energy codes and standards play a vital role in the marketplace by setting minimum requirements for energy-efficient design and construction. They outline uniform requirements for new buildings as well as additions and renovations. This article covers basic knowledge of codes and standards; development processes of each; adoption, implementation, and enforcement of energy codes and standards; and voluntary energy efficiency programs.
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
2016-08-01
Telecommunications and Timing Group, IRIG Standard 200-16: IRIG Serial Time Code Formats. Distribution A: approved for public release. Arnold Engineering Development Complex; National Aeronautics and Space Administration. (IRIG Serial Time Code Formats, RCC 200-16, August 2016.)
ERIC Educational Resources Information Center
Bobbitt, L. G.; Carroll, C. D.
The National Center for Education Statistics conducts surveys which require the coding of the respondent's major field of study. This paper presents a new system for the coding of major field of study. It operates on-line in a Computer Assisted Telephone Interview (CATI) environment and allows conversational checks to verify coding directly from…
NASA Technical Reports Server (NTRS)
Laflame, D. T.
1980-01-01
Delay-locked loop tracks pseudonoise codes without introducing dc timing errors, because it is not sensitive to gain imbalance between signal processing arms. "Early" and "late" reference codes pass in combined form through both arms, and each arm acts on both codes. Circuit accommodates 1 dB weaker input signals with tracking ability equal to that of tau-dither loops.
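The early/late principle can be sketched with a toy discriminator. Note this sketch is the conventional two-arm early-late correlator, shown only for context; it is exactly the design whose gain-imbalance sensitivity the combined-reference circuit above avoids.

```python
import numpy as np

def early_late_error(code, received, offset, spacing=1):
    """Conventional early-late timing discriminator for PN tracking.

    Correlate the received samples against early and late replicas of
    the local code; the difference is the loop error signal, zero when
    the replica is aligned with the incoming code.
    """
    early = np.roll(code, offset - spacing)
    late = np.roll(code, offset + spacing)
    return float(np.dot(received, early) - np.dot(received, late))

# Toy length-8 bipolar PN sequence (an arbitrary example pattern).
pn = np.array([1, -1, 1, 1, -1, -1, 1, -1])
err_aligned = early_late_error(pn, pn, 0)  # balanced arms -> zero error
```

Because circular autocorrelation is symmetric, the early and late correlations cancel at alignment; a timing offset unbalances them, and the sign of the error steers the loop back. In a real two-arm loop, unequal arm gains bias this null, which is the dc timing error the combined-reference design eliminates.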
Validation of the BEPLATE code
Giles, G.E.; Bullock, J.S.
1997-11-01
The electroforming simulation code BEPLATE (Boundary Element-PLATE) has been developed and validated for specific applications at Oak Ridge. New areas of application are opening up and more validations are being performed. This paper reports the validation experience of the BEPLATE code on two types of electroforms and describes some recent applications of the code.
Authorship Attribution of Source Code
ERIC Educational Resources Information Center
Tennyson, Matthew F.
2013-01-01
Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…
2014-09-05
Ptolemy Coding Style. Period covered: 2014. ...a lisp module for GNU Emacs that has appropriate indenting rules. This file works well with Emacs under both Unix and Windows. • testsuite/ptspell is a...Unix. It is much more liberal than the commonly used "GPL" or "GNU Public License," which encumbers the software and derivative works with
Structured error recovery for code-word-stabilized quantum codes
Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.
2010-05-15
Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.
Structured error recovery for code-word-stabilized quantum codes
NASA Astrophysics Data System (ADS)
Li, Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.
2010-05-01
Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.
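The measurement counts in the abstract above can be made concrete with a little arithmetic: a Pauli error of weight at most t on n qubits chooses the affected positions and one of three non-identity Paulis (X, Y, or Z) per position. The sketch below (function names are my own) counts the brute-force tests and the roughly 3^t-fold reduction the abstract describes:

```python
from math import comb

def pauli_error_count(n, t):
    """Number of Pauli errors of weight at most t on n qubits (identity
    included as the k = 0 term): sum over k of C(n, k) * 3^k."""
    return sum(comb(n, k) * 3 ** k for k in range(t + 1))

def grouped_measurement_count(n, t):
    """Illustrative measurement count after grouping roughly 3^t errors
    per measurement, per the exponential reduction stated above."""
    return -(-pauli_error_count(n, t) // 3 ** t)  # ceiling division
```

For example, n = 5 and t = 1 gives 16 patterns to test one by one, versus about 6 grouped measurements under this rough accounting.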
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates R = 0.82 and 0.875 with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as much as dmin = 65, giving them powerful error-correcting capabilities and very low bit-error-rate (BER) error floors. This paper will present development of the LDPC flight encoder and decoder, its applications and status.
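Decoding in any parity-check code starts from the syndrome s = H·cᵀ mod 2, which is all-zero exactly when c is a codeword. A minimal sketch of that check, using the small (7,4) Hamming matrix as a stand-in (not the EG-based LDPC flight codes above, whose H matrices are far larger and sparser):

```python
# Parity-check matrix of the (7,4) Hamming code; a tiny stand-in used
# only to illustrate syndrome checking.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(codeword):
    """s = H * c^T over GF(2); all-zero iff c satisfies every check."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

valid = [0, 1, 1, 0, 0, 1, 1]   # a valid Hamming codeword
corrupted = valid[:]
corrupted[4] ^= 1               # flip bit 5 (1-indexed)
```

For this particular H the nonzero syndrome of a single-bit error reads off the flipped position in binary; LDPC decoders exploit the same sparse check structure iteratively, which is what makes their parallel hardware implementations natural.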
New quantum codes constructed from quaternary BCH codes
NASA Astrophysics Data System (ADS)
Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena
2016-10-01
In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger, for each code length, than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007). Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes and their primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
Quantum Codes From Cyclic Codes Over The Ring R2
NASA Astrophysics Data System (ADS)
Altinel, Alev; Güzeltepe, Murat
2016-10-01
Let R2 denote the ring F2 + μF2 + υF2 + μυF2 + wF2 + μwF2 + υwF2 + μυwF2. In this study, we construct quantum codes from cyclic codes over the ring R2, for arbitrary length n, with the restrictions μ² = 0, υ² = 0, w² = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for cyclic codes over R2 to contain their duals. Finally, we obtain the parameters of quantum error-correcting codes from cyclic codes over R2 and give an example of such codes.
Block adaptive rate controlled image data compression
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.
1979-01-01
A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
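The reconstruction condition stated above (exact reconstruction when the one-dimensional difference entropy falls below the selected rate) can be checked empirically on a line of samples. A minimal sketch, with a helper of my own rather than anything from the BARC implementation:

```python
from collections import Counter
from math import log2

def difference_entropy(samples):
    """Empirical entropy, in bits/sample, of the first differences along a
    line of samples. Per the abstract, exact (noiseless) reconstruction is
    expected when this value is below the selected compression rate."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    n = len(diffs)
    return -sum(c / n * log2(c / n) for c in Counter(diffs).values())
```

A constant ramp has zero difference entropy (one difference symbol), so it would compress losslessly at any positive rate, while noisier lines push the entropy, and hence the required rate, upward.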
Voice aftereffects of adaptation to speaker identity.
Zäske, Romi; Schweinberger, Stefan R; Kawahara, Hideki
2010-09-01
While adaptation to complex auditory stimuli has traditionally been reported for linguistic properties of speech, the present study demonstrates non-linguistic high-level aftereffects in the perception of voice identity, following adaptation to voices or faces of personally familiar speakers. In Exp. 1, prolonged exposure to speaker A's voice biased the perception of identity-ambiguous voice morphs between speakers A and B towards speaker B (and vice versa). Significantly biased voice identity perception was also observed in Exp. 2 when adaptors were videos of speakers' silently articulating faces, although effects were reduced in magnitude relative to those seen in Exp. 1. By contrast, adaptation to an unrelated speaker C elicited an intermediate proportion of speaker A identifications in both experiments. While crossmodal aftereffects on auditory identification (Exp. 2) dissipated rapidly, unimodal aftereffects (Exp. 1) were still measurable a few minutes after adaptation. These novel findings suggest contrastive coding of voice identity in long-term memory, with at least two perceptual mechanisms of voice identity adaptation: one related to auditory coding of voice characteristics, and another related to multimodal coding of familiar speaker identity.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization.
NASA Astrophysics Data System (ADS)
Abdullah, Alyasa Gan; Wah, Yap Bee
2015-02-01
The computation of approximate values of trigonometric sines was discovered by Bhaskara I (c. 600-c. 680), a seventh-century Indian mathematician, and is known as Bhaskara I's sine approximation formula. The formula is given in his treatise titled Mahabhaskariya. In the 14th century, Madhava of Sangamagrama, a Kerala mathematician-astronomer, constructed a table of trigonometric sines of various angles. Madhava's table gives the measure of angles in arcminutes, arcseconds and sixtieths of an arcsecond. The search for more accurate formulas led to the discovery of the power series expansion by Madhava of Sangamagrama (c. 1350-c. 1425), the founder of the Kerala school of astronomy and mathematics. In 1715, the Taylor series was introduced by Brook Taylor, an English mathematician. If the Taylor series is centered at zero, it is called a Maclaurin series, named after the Scottish mathematician Colin Maclaurin. Some of the important Maclaurin series expansions include the trigonometric functions. This paper introduces the genetic code of the sine of an angle without using power series expansion. The genetic code using a square root approach reveals the pattern in the signs (plus, minus) and sequence of numbers in the sine of an angle. The square root approach complements the Pythagoras method, provides a better understanding of calculating an angle and will be useful for teaching the concepts of angles in trigonometry.
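Bhaskara I's approximation mentioned above has a compact closed form, sin x ≈ 16x(π − x) / (5π² − 4x(π − x)) for 0 ≤ x ≤ π, with a maximum error of roughly 0.0016. A quick check against the (Maclaurin-series-based) math.sin:

```python
from math import pi, sin

def bhaskara_sin(x):
    """Bhaskara I's 7th-century rational approximation to sin(x),
    valid on the interval [0, pi]."""
    return 16 * x * (pi - x) / (5 * pi * pi - 4 * x * (pi - x))

# Worst-case error over a fine grid of [0, pi].
max_err = max(abs(bhaskara_sin(k * pi / 1000) - sin(k * pi / 1000))
              for k in range(1001))
```

The approximation is exact at 0, π/2, and π, and stays within about 1.7 × 10⁻³ of the true sine everywhere in between, which is remarkable for a rational formula with such small coefficients.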
Fleishman, Gregory D.; Kuznetsov, Alexey A.
2010-10-01
Radiation produced by charged particles gyrating in a magnetic field is highly significant in the astrophysics context. Persistently increasing resolution of astrophysical observations calls for corresponding three-dimensional modeling of the radiation. However, available exact equations are prohibitively slow in computing a comprehensive table of high-resolution models required for many practical applications. To remedy this situation, we develop approximate gyrosynchrotron (GS) codes capable of quickly calculating the GS emission (in non-quantum regime) from both isotropic and anisotropic electron distributions in non-relativistic, mildly relativistic, and ultrarelativistic energy domains applicable throughout a broad range of source parameters including dense or tenuous plasmas and weak or strong magnetic fields. The computation time is reduced by several orders of magnitude compared with the exact GS algorithm. The new algorithm performance can gradually be adjusted to the user's needs depending on whether precision or computation speed is to be optimized for a given model. The codes are made available for users as a supplement to this paper.
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Determinate-state convolutional codes
NASA Technical Reports Server (NTRS)
Collins, O.; Hizlan, M.
1991-01-01
A determinate-state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate-state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity and free distances of these new codes are analyzed, and extensive simulation results are provided on their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.
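For context, the conventional starting point being pruned is an ordinary convolutional encoder. The sketch below is an illustrative rate-1/2, constraint-length-3 encoder (generators 7 and 5 octal), a textbook baseline and not the paper's determinate-state construction itself:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder with constraint length 3
    (generator polynomials 7 and 5 octal). Each input bit produces two
    output bits from the current bit plus two bits of shift-register
    memory; the determinate-state codes above prune transitions from
    the trellis of encoders like this one."""
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state                 # current bit + 2 memory bits
        out.append(bin(reg & g1).count("1") % 2)  # parity under generator 1
        out.append(bin(reg & g2).count("1") % 2)  # parity under generator 2
        state = reg >> 1                       # shift the register
    return out
```

Feeding in a single 1 followed by zeros traces the encoder's impulse response, 11 10 11, which is exactly the generator sequence pair.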
Code-Switching in Miami Spanish: The Domain of Health Care Services.
ERIC Educational Resources Information Center
Staczek, John J.
1983-01-01
Spanish-English code switching in the context of Miami health care services is examined, focusing on the transactional role relationships that require Spanish language use. Examples are taken from printed sources and oral language. Semantic shift, vocabulary adaptation, syntactic code switching, and Spanish acquisition by non-Hispanics are…
West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.
Circular codes, symmetries and transformations.
Fimmel, Elena; Giannerini, Simone; Gonzalez, Diego Luis; Strüngmann, Lutz
2015-06-01
Circular codes, putative remnants of primeval comma-free codes, have gained considerable attention in recent years. In fact they represent a second kind of genetic code potentially involved in detecting and maintaining the normal reading frame in protein-coding sequences. The discovery of a universal code across species has suggested many theoretical and experimental questions. However, there is a key aspect relating circular codes to symmetries and transformations that remains to a large extent unexplored. In this article we aim to address this issue by studying the symmetries and transformations that connect different circular codes. The main result is that the class of 216 C3 maximal self-complementary codes can be partitioned into 27 equivalence classes defined by a particular set of transformations. We show that such transformations can be put in a group-theoretic framework with an intuitive geometric interpretation. More general mathematical results about symmetry transformations that are valid for any kind of circular code are also presented. Our results pave the way for the study of the biological consequences of the mathematical structure behind circular codes and help shed light on the evolutionary steps that led to the observed symmetries of present codes.
How Can Reed-Solomon Codes Improve Steganographic Schemes?
NASA Astrophysics Data System (ADS)
Fontaine, Caroline; Galand, Fabien
The use of syndrome coding in steganographic schemes tends to reduce distortion during embedding. The most complete model comes from wet paper codes [FGLS05], which allow locking positions that cannot be modified. Recently, BCH codes have been investigated and seem to be good candidates in this context [SW06]. Here, we show that Reed-Solomon codes are twice as good with respect to the number of locked positions and that, in fact, they are optimal. We propose two methods for managing these codes in this context: the first is based on a naive decoding process through Lagrange interpolation; the second, more efficient, is based on list decoding techniques and provides an adaptive trade-off between the number of locked positions and the embedding efficiency.
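The "naive decoding process through Lagrange interpolation" mentioned above rests on evaluating the unique low-degree polynomial through a set of points. A toy version over a prime field (the actual scheme uses Reed-Solomon codes over extension fields GF(2^m); the function name and test polynomial are my own):

```python
def lagrange_eval(points, x, p):
    """Evaluate at x the unique polynomial of degree < len(points) that
    passes through `points`, with all arithmetic modulo the prime p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p     # product of (x - xj)
                den = den * (xi - xj) % p    # product of (xi - xj)
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total
```

For example, the points (1, 2), (2, 5), (3, 3) lie on f(x) = x² + 1 mod 7, and interpolation recovers f at any other field element without ever writing down the coefficients.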
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
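The adaptive choice described above, syndrome coding when the source varies little from the reference and hash coding otherwise, can be sketched as a per-subsequence selector. Everything in this sketch (the function name, the Hamming-distance criterion, the threshold value) is an assumption for illustration, not the paper's actual protocol:

```python
def choose_mode(chunk, ref_chunk, threshold=2):
    """Toy per-subsequence mode selection: few mismatches against the
    reference favor syndrome coding (cheap correction of small variation);
    many mismatches favor hash coding. The distance criterion and the
    threshold are illustrative assumptions, not the published protocol."""
    dist = sum(a != b for a, b in zip(chunk, ref_chunk))
    return "syndrome" if dist <= threshold else "hash"
```

The appeal of such a split for a lightweight encoder is that the client only measures variation and picks a mode; the heavier reconstruction work lands on the decoder side, matching the distributed-source-coding motivation above.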
Making your code citable with the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.
2016-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feed for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
Statistical Coding and Decoding of Heartbeat Intervals
Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C.; Ohnishi, Noboru
2011-01-01
The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning proprieties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems. PMID:21694763
Spatiotopic coding during dynamic head tilt
Turi, Marco; Burr, David C.
2016-01-01
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding. NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation. PMID:27903636
Electromagnetic particle simulation codes
NASA Technical Reports Server (NTRS)
Pritchett, P. L.
1985-01-01
Electromagnetic particle simulations solve the full set of Maxwell's equations. They thus include the effects of self-consistent electric and magnetic fields, magnetic induction, and electromagnetic radiation. The algorithms for an electromagnetic code which works directly with the electric and magnetic fields are described. The fields and current are separated into transverse and longitudinal components. The transverse E and B fields are integrated in time using a leapfrog scheme applied to the Fourier components. The particle pushing is performed via the relativistic Lorentz force equation for the particle momentum. As an example, simulation results are presented for the electron cyclotron maser instability which illustrate the importance of relativistic effects on the wave-particle resonance condition and on wave dispersion.
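The relativistic momentum push described above is typically implemented with the Boris scheme: a half electric kick, a rotation about the magnetic field, and a second half kick. A minimal sketch in normalized units (c = 1); this is the generic textbook push, not the specific code of the article:

```python
def cross(a, b):
    """3-D cross product of two length-3 sequences."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(p, E, B, q=1.0, m=1.0, dt=0.1):
    """One relativistic Boris update of the momentum p under fields E, B
    (units with c = 1). The magnetic step is an exact rotation, so |p| is
    conserved exactly when E = 0."""
    # half electric kick
    pm = [pi + 0.5 * q * Ei * dt for pi, Ei in zip(p, E)]
    gamma = (1.0 + sum(x * x for x in pm) / m ** 2) ** 0.5
    # rotation about B, evaluated at the half-step gamma
    t = [0.5 * q * Bi * dt / (gamma * m) for Bi in B]
    s_fac = 2.0 / (1.0 + sum(x * x for x in t))
    pprime = [pm[i] + cross(pm, t)[i] for i in range(3)]
    pp = [pm[i] + s_fac * cross(pprime, t)[i] for i in range(3)]
    # second half electric kick
    return [pp[i] + 0.5 * q * E[i] * dt for i in range(3)]
```

Because gamma enters the rotation angle, the push naturally captures the relativistic shift of the cyclotron resonance that the abstract notes is essential for the electron cyclotron maser instability.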
NASA Astrophysics Data System (ADS)
Korobka, A.; May, I.
1985-01-01
A code lock with a microcircuit was invented which contains only a very few components. Two DD-triggers control the state of two identical transistors. When both transistors are turned on simultaneously the transistor VS1 is turned on so that the electromagnet YA1 pulls in the bolt and the door opens. This will happen only when a logic 1 appears at the inverted output of the first trigger and at the straight output of the second one. After the door is opened, a button on it resets the contacts to return both triggers to their original state. The electromagnet is designed to produce the necessary pull force and sufficient power when under rectified 127 V line voltage, with the neutral wire of the lock circuit always connected to the - terminal of the power supply.
Liman, Emily R.; Zhang, Yali V.; Montell, Craig
2014-01-01
Five canonical tastes, bitter, sweet, umami (amino acid), salty and sour (acid) are detected by animals as diverse as fruit flies and humans, consistent with a near universal drive to consume fundamental nutrients and to avoid toxins or other harmful compounds. Surprisingly, despite this strong conservation of basic taste qualities between vertebrates and invertebrates, the receptors and signaling mechanisms that mediate taste in each are highly divergent. The identification over the last two decades of receptors and other molecules that mediate taste has led to stunning advances in our understanding of the basic mechanisms of transduction and coding of information by the gustatory systems of vertebrates and invertebrates. In this review, we discuss recent advances in taste research, mainly from the fly and mammalian systems, and we highlight principles that are common across species, despite stark differences in receptor types. PMID:24607224